
Isaac Yang: NVIDIA FLARE: Federated Learning from simulation to production



Isaac Yang is a software engineer at NVIDIA focused on building tools that solve deep learning problems. He was a main contributor to DIGITS, the Clara Train SDK, and NVIDIA FLARE. Prior to NVIDIA, Isaac worked at Sharp Laboratories of America, designing and developing large-scale cloud-based information management systems; during that time, he authored several patents on signal processing for healthcare devices. He holds a PhD in Electrical Engineering from the University of Southern California and has published more than 20 papers in conferences and journals.


NVIDIA FLARE: Federated Learning from simulation to production


Federated Learning represents a paradigm shift from centralized, data-lake-focused machine learning. Instead of aggregating data in one location, federated learning trains models directly on the devices or servers where the data resides, such as smartphones, edge devices, and on-premises machines. This not only preserves data privacy and security but also significantly reduces data-transfer requirements, making it ideal for sensitive domains such as healthcare and finance. Federated Learning enables collaborative model training across a vast network of distributed machines or devices, yielding more personalized and efficient AI solutions while respecting user privacy.
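To make the idea concrete, here is a minimal, self-contained sketch of federated averaging (FedAvg), the canonical aggregation scheme behind this setup. This is not NVFLARE code; the two-parameter model, the toy "training" rule, and the client datasets are all made up for illustration. Each client updates the model on its private data, and only parameters (never raw data) travel to the server, which averages them weighted by local dataset size.

```python
# Minimal federated-averaging (FedAvg) sketch: each client trains locally,
# only model parameters travel to the server, raw data never leaves a client.

def local_update(params, data):
    """Toy 'training': nudge each parameter toward the client's data mean."""
    mean = sum(data) / len(data)
    lr = 0.5
    return [p + lr * (mean - p) for p in params]

def fed_avg(client_params, client_sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(n * p[i] for n, p in zip(client_sizes, client_params)) / total
        for i in range(len(client_params[0]))
    ]

# Two clients with private datasets of different sizes.
datasets = {"site-1": [1.0, 2.0, 3.0], "site-2": [10.0]}
global_model = [0.0, 0.0]

for round_num in range(3):
    updates = [local_update(global_model, d) for d in datasets.values()]
    sizes = [len(d) for d in datasets.values()]
    global_model = fed_avg(updates, sizes)

print(global_model)  # drifts toward the size-weighted overall mean (4.0)
```

Over successive rounds the global model approaches the statistic it would have learned on the pooled data, without the data ever being pooled.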

NVIDIA FLARE (NVFLARE), an open-source initiative by NVIDIA, is dedicated to bringing privacy-preserving compute and machine learning to the federated setting while maintaining simplicity and production readiness. In this presentation, we will demonstrate how NVFLARE seamlessly transforms deep learning training code into federated learning code.
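The conversion is typically small: the existing training loop is wrapped with NVFLARE's Client API calls (`flare.init()`, `flare.receive()`, `flare.send()`), while the local training code itself stays untouched. Below is a sketch of that pattern; to keep it self-contained and runnable, `flare` is a small stand-in stub rather than the real `import nvflare.client as flare`, and `train_one_round` is a hypothetical placeholder for existing training code. The call sequence mirrors the documented API shape.

```python
# Pattern sketch: converting a plain training loop into a federated client.
# "flare" is a stand-in for the NVFLARE Client API so this example runs on
# its own; real code would use `import nvflare.client as flare` instead.

class _StubFlare:
    """Stand-in server that feeds two federated rounds to the loop below."""
    def __init__(self):
        self.rounds = [{"w": 0.0}, {"w": 1.0}]
        self.sent = []

    def init(self):
        pass

    def is_running(self):
        return bool(self.rounds)

    def receive(self):
        return self.rounds.pop(0)  # current global model from the server

    def send(self, model):
        self.sent.append(model)    # locally updated model back to the server

flare = _StubFlare()

def train_one_round(weights):
    """Hypothetical stand-in for the existing local training code."""
    return {"w": weights["w"] + 0.1}

flare.init()                        # 1. initialize the FL client
while flare.is_running():           # 2. loop over federated rounds
    global_model = flare.receive()  # 3. receive the global model
    local_model = train_one_round(global_model)
    flare.send(local_model)         # 4. send the local update back
```

The point of the pattern is that steps 1-4 are the only additions; everything inside `train_one_round` is the original deep learning code.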

We will cover a few of the many examples and tutorials in the NVFLARE repository:

  • Federated Statistics

  • Federated XGBoost, linear and logistic regression, k-means, SVM, and random forest

  • LLM Prompt Tuning, Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) with NeMo

  • Training Protein Classifiers with Graph Neural Networks (GNN)

  • Vertical Federated Learning (Vertical XGBoost and Split Learning)

  • Enabling Cyclic and Swarm Learning Workflows

  • Experiment Tracking with MLflow and Weights & Biases

  • Production deployment of FL systems in the Azure and AWS clouds

  • Interactive notebook experience with the FLARE API

  • CLI commands and interactive admin console

We will also touch on:

  • NVFLARE’s component-based, layered architecture

  • Use cases in autonomous driving and healthcare

Furthermore, if time permits, we will conduct a live demonstration showing how to simulate multiple clients on a local host and run federated learning jobs effectively. Join us on this journey toward unlocking the potential of federated learning with NVFLARE!
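For reference, this kind of local multi-client simulation is driven by NVFLARE's FL Simulator CLI. A sketch of the invocation (the job folder path and workspace location are illustrative, not fixed names):

```shell
# Run a job folder with 2 simulated clients across 2 threads;
# logs and results land in the chosen workspace directory.
nvflare simulator ./jobs/hello-numpy -w /tmp/nvflare_workspace -n 2 -t 2
```

Because the simulator runs everything in local processes and threads, the same job folder can later be submitted unchanged to a provisioned production deployment.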

