Events

Upcoming Events

Date: Friday, September 25, 2020
Time: 9:00 AM
Location: Online

The application of supervised learning techniques to the design of the physical layer of a communication link is often impaired by the limited amount of pilot data available for each device, while the use of unsupervised learning is typically limited by the need to carry out a large number of training iterations. In this talk, meta-learning, or learning-to-learn, is introduced as a tool to alleviate these problems. The talk will consider an Internet-of-Things (IoT) scenario in which devices transmit sporadically, using short packets with few pilot symbols, over a fading channel.
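
As a rough illustration of how learning-to-learn can squeeze value out of a handful of pilots, below is a minimal first-order MAML sketch in Python. The BPSK link, the fading model, the logistic-regression demodulator, and all step sizes are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_device(n_pilots, n_query):
    """Simulate one IoT device: random fading, BPSK symbols, AWGN."""
    h = rng.normal() + 1j * rng.normal()        # per-device fading coefficient
    bits = rng.integers(0, 2, n_pilots + n_query)
    s = 2.0 * bits - 1.0                        # BPSK mapping {0,1} -> {-1,+1}
    noise = 0.3 * (rng.normal(size=s.size) + 1j * rng.normal(size=s.size))
    y = h * s + noise
    x = np.stack([y.real, y.imag], axis=1)      # features: received I/Q samples
    return (x[:n_pilots], bits[:n_pilots]), (x[n_pilots:], bits[n_pilots:])

def loss_grad(w, x, b):
    """Gradient of the cross-entropy loss of a logistic-regression demodulator."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return x.T @ (p - b) / len(b)

w = np.zeros(2)                                 # meta-learned initialization
alpha, beta = 0.5, 0.1                          # inner / outer step sizes (assumed)
for _ in range(2000):
    (xs, bs), (xq, bq) = make_device(n_pilots=4, n_query=32)
    w_adapted = w - alpha * loss_grad(w, xs, bs)   # inner loop: adapt on few pilots
    w -= beta * loss_grad(w_adapted, xq, bq)       # outer loop: first-order meta-update
```

The outer loop trains the shared initialization so that a single gradient step on as few as four pilot symbols yields a usable demodulator for a previously unseen device.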

Satyen Kale (Google Research) 

Date: Friday, October 2, 2020
Time: 1:30 PM – 2:30 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Rahul Jain (USC)

Date: Friday, October 9, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD 

Abstract: TBD

Rayadurgam Srikant (UIUC)

Date: Friday, October 16, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD 

Abstract: TBD

Date: Friday, October 23, 2020
Time: 11:00 AM
Location: Online

Maryam Fazel (University of Washington)

Date: Friday, November 13, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Stefanie Jegelka (MIT)

Date: Friday, November 20, 2020
Time: 11:00 AM – 12:00 PM (CDT; UTC -5)
Location: Online (Zoom link will be provided)

Title: TBD

Abstract: TBD

Recent Events

18 Sep 2020

Many supervised learning methods are naturally cast as optimization problems. For prediction models that are linear in their parameters, this often leads to convex problems for which many guarantees exist. Models that are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain.
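
The convex/non-convex contrast is easy to see numerically: a midpoint-convexity check passes for the least-squares loss of a linear model but typically fails for a small neural network. The tiny dataset and architectures in this sketch are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

def linear_loss(w):
    """Least-squares loss of a linear model: convex in w."""
    return np.mean((X @ w - y) ** 2)

def mlp_loss(params):
    """Same loss for a tiny one-hidden-layer network: non-convex in its parameters."""
    W1, w2 = params[:9].reshape(3, 3), params[9:]
    return np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)

def midpoint_convex(loss, dim, trials=1000):
    """Test f((a + b) / 2) <= (f(a) + f(b)) / 2 on random parameter pairs."""
    for _ in range(trials):
        a, b = rng.normal(size=dim), rng.normal(size=dim)
        if loss((a + b) / 2) > (loss(a) + loss(b)) / 2 + 1e-12:
            return False
    return True

print("linear model passes convexity check:", midpoint_convex(linear_loss, 3))   # True
print("neural network passes convexity check:", midpoint_convex(mlp_loss, 12))   # typically False
```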

11 Sep 2020

In this talk, we will focus on the recently emerged field of (adversarially) robust learning. This field began with the observation that modern learning models, despite their breakthrough performance, remain fragile to seemingly innocuous changes such as small, norm-bounded perturbations of the input data. In response, various training methodologies have been developed to enhance robustness.
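
For concreteness, here is a minimal sketch of such a norm-bounded perturbation: an FGSM-style l-infinity attack on a logistic-regression classifier. The model, data, and perturbation budget are illustrative assumptions, not the methodology discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=5)               # weights of an (assumed) trained linear classifier
x, label = rng.normal(size=5), 1     # one input with label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression, the gradient of the cross-entropy loss with
# respect to the input is (p - label) * w, so the worst-case l_inf step
# of size eps is eps times the sign of that gradient.
eps = 0.1
grad_x = (sigmoid(x @ w) - label) * w
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(x @ w))
print("adversarial score:", sigmoid(x_adv @ w))   # pushed toward the wrong class
```

Adversarial training, one of the methodologies alluded to above, fits the model on perturbed inputs like x_adv rather than on clean inputs alone.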

08 May 2020

Join us for a special virtual installment of the ML Seminar Series:

In this talk, we aim to quantify the robustness of distributed training against worst-case failures and adversarial nodes. We show that there is a gap between robustness guarantees, depending on whether adversarial nodes have full control of the hardware, the training data, or both. Using ideas from robust statistics and coding theory, we establish robust and scalable training methods for centralized, parameter-server systems.
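
A minimal sketch of the robust-statistics idea in a parameter-server setting: aggregating worker gradients with the coordinate-wise median instead of the mean. The worker simulation and adversary model below are illustrative assumptions, not the specific coded-computation scheme from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
n_workers, n_adversarial, dim = 10, 3, 4
true_grad = rng.normal(size=dim)

# Honest workers report noisy versions of the true gradient;
# adversarial workers report arbitrary values.
reports = [true_grad + 0.1 * rng.normal(size=dim) for _ in range(n_workers - n_adversarial)]
reports += [100.0 * rng.normal(size=dim) for _ in range(n_adversarial)]
reports = np.array(reports)

mean_agg = reports.mean(axis=0)          # fragile: a single bad worker can move it arbitrarily
median_agg = np.median(reports, axis=0)  # robust as long as adversaries are a minority

print("error of mean aggregation:  ", np.linalg.norm(mean_agg - true_grad))
print("error of median aggregation:", np.linalg.norm(median_agg - true_grad))
```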