Virtual Seminar: Optimization algorithms for heterogeneous clients in Federated Learning


Federated Learning has emerged as an important paradigm in modern large-scale machine learning, where the training data remains distributed over a large number of clients, which may be phones, network sensors, hospitals, etc. A major challenge in the design of optimization methods for Federated Learning is the heterogeneity (i.e., the non-i.i.d. nature) of client data. This heterogeneity affects Federated Averaging (FedAvg), the currently dominant algorithm deployed in practice: we provide results for FedAvg quantifying the degree to which heterogeneity causes unstable or slow convergence. We develop two optimization algorithms to handle this problem in two different settings of Federated Learning. In the cross-silo setting, we propose a new algorithm called SCAFFOLD, which uses control variates to correct for client heterogeneity, and prove that SCAFFOLD requires significantly fewer communication rounds and is unaffected by data heterogeneity or client sampling. In the cross-device setting, we propose a general framework called Mime, which mitigates client drift and adapts arbitrary centralized optimization algorithms (e.g., SGD, Adam) to Federated Learning via a combination of control variates and server-level statistics (e.g., momentum) applied at every client-update step. Our theoretical and empirical analyses establish the superiority of SCAFFOLD and Mime over other baselines in their respective settings.
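To make the control-variate idea concrete, the following is a minimal NumPy sketch of one SCAFFOLD round under simplifying assumptions: full client participation and the paper's "Option II" rule for updating the client control variate. All names here (client_update, server_round, grad_fn) are illustrative, not taken from a reference implementation.

import numpy as np

def client_update(x, c, c_i, grad_fn, lr=0.1, local_steps=10):
    """One SCAFFOLD client round: local SGD corrected by control variates.

    x       -- current server model
    c, c_i  -- server and client control variates
    grad_fn -- stochastic gradient oracle for this client's data
    """
    y = x.copy()
    for _ in range(local_steps):
        # The correction (c - c_i) counteracts client drift by steering
        # each local step toward an estimate of the global gradient.
        y = y - lr * (grad_fn(y) - c_i + c)
    # "Option II" variate update: reuse the distance travelled (x - y)
    # instead of computing a fresh gradient at x.
    c_i_new = c_i - c + (x - y) / (local_steps * lr)
    return y, c_i_new

def server_round(x, c, clients, lr_global=1.0, **kw):
    """Average the clients' model updates and control-variate deltas."""
    n = len(clients)
    dy, dc, new_variates = np.zeros_like(x), np.zeros_like(c), []
    for grad_fn, c_i in clients:
        y, c_i_new = client_update(x, c, c_i, grad_fn, **kw)
        dy += (y - x) / n
        dc += (c_i_new - c_i) / n  # exact only under full participation
        new_variates.append(c_i_new)
    return x + lr_global * dy, c + dc, new_variates

A toy run with two deliberately heterogeneous quadratic clients shows the drift correction at work:

# Client i minimizes ||w - a_i||^2; the global optimum is the mean of the a_i.
targets = [np.ones(5), -3 * np.ones(5)]
x, c = np.zeros(5), np.zeros(5)
clients = [(lambda w, a=a: 2 * (w - a), np.zeros(5)) for a in targets]
for _ in range(50):
    x, c, variates = server_round(x, c, clients, lr=0.05)
    clients = [(g, ci) for (g, _), ci in zip(clients, variates)]
print(x)  # approaches the mean of the targets, -1 in every coordinate

Mime follows a related recipe: the server-level optimizer statistics (e.g., the momentum buffer) are computed globally and kept fixed while each client takes its local steps, so the base optimizer behaves more like its centralized counterpart.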

Time: 1:30 PM – 2:30 PM Central (CDT, UTC−5)


Access
The seminar was delivered live via Zoom on October 2, 2020. You can watch a video recording of the talk here.

Date and Time
Oct. 2, 2020, 1:30 PM – 2:30 PM CDT