Virtual Seminar - Robust Distributed Training! But at What Cost?

Join us for a special virtual installment of the ML Seminar Series:

In this talk, we aim to quantify the robustness of distributed training against worst-case failures and adversarial nodes. We show that there is a gap in robustness guarantees depending on whether adversarial nodes have full control of the hardware, the training data, or both. Using ideas from robust statistics and coding theory, we establish robust and scalable training methods for centralized, parameter-server systems.
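As a concrete illustration of the robust-statistics flavor of such methods (not necessarily the speaker's own construction), a parameter server can replace the plain gradient average with a coordinate-wise trimmed mean, which suppresses a minority of Byzantine workers. A minimal sketch, assuming NumPy and toy gradient vectors:

```python
import numpy as np

def robust_aggregate(worker_grads, trim_frac=0.25):
    """Aggregate worker gradients with a coordinate-wise trimmed mean.

    worker_grads: list of 1-D gradient arrays, one per worker.
    trim_frac: fraction of extreme values discarded at each end; this
               bounds the number of Byzantine workers that can be tolerated.
    """
    grads = np.stack(worker_grads)           # shape: (num_workers, dim)
    k = int(trim_frac * grads.shape[0])      # values trimmed per side
    sorted_grads = np.sort(grads, axis=0)    # sort each coordinate independently
    trimmed = sorted_grads[k:grads.shape[0] - k]
    return trimmed.mean(axis=0)

# Toy example: 4 honest workers plus 1 adversary sending huge gradients.
honest = [np.array([1.0, -2.0]) + 0.1 * np.random.randn(2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]
print(robust_aggregate(honest + byzantine))  # close to [1, -2]; attack suppressed
```

With `trim_frac=0.25` and five workers, one extreme value is discarded per coordinate on each side, so the single adversarial gradient never enters the average; a plain mean would instead be dominated by it.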

Perhaps unsurprisingly, we prove that robustness is impossible when a central authority does not own the training data, e.g., in federated learning systems. We then present a set of attacks that force federated models to exhibit poor performance on the training, test, or out-of-distribution data sets. Our results and experiments cast doubt on the security presumed by federated learning system providers, and show that if you want robustness, you probably have to give up some of your data privacy.
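For intuition about this class of attacks (the talk's specific constructions may differ), even a single malicious client in a FedAvg-style protocol can bias the global model by training on flipped labels and scaling its update. A minimal sketch, assuming binary logistic regression and a generic server-side average:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression update (honest behavior)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)   # gradient step on local data
    return w

def malicious_update(weights, X, y, boost=10.0):
    """Label-flipping client: trains on inverted labels, then scales its
    update so it dominates the server's average (a simple model-poisoning
    strategy; illustrative only)."""
    poisoned = local_update(weights, X, 1.0 - y)  # flip binary labels
    return weights + boost * (poisoned - weights)

def fedavg(client_weights):
    """Server-side federated averaging of client weight vectors."""
    return np.mean(client_weights, axis=0)
```

Because the server only sees opaque weight vectors and never the clients' raw data, it cannot distinguish the boosted, label-flipped update from an honest one, which is the privacy-robustness tension the abstract highlights.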

This seminar was delivered live via Zoom on Friday, May 8, 2020. A recording of the talk is available on WNCG's YouTube channel.

Date and Time
May 8, 2020, All Day