Past Events
Event Status
Scheduled
Sept. 18, 2020, All Day
Many supervised learning methods are naturally cast as optimization problems. For prediction models that are linear in their parameters, this often leads to convex problems for which many guarantees exist. Models that are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this talk, I will consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
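The setting described above can be sketched in a few lines. This is a minimal illustration only, assuming a ReLU activation (positively homogeneous of degree 1) and a mean-field 1/m scaling so the network stays well-defined as the number of hidden neurons m grows; it is not the exact construction from the talk.

```python
import numpy as np

def relu(z):
    # Positively 1-homogeneous activation: relu(c * z) = c * relu(z) for c > 0
    return np.maximum(z, 0.0)

def two_layer(x, W, a):
    # Two-layer network with mean-field scaling: average over the m hidden
    # neurons, so the output remains well-behaved as m tends to infinity.
    m = len(a)
    return a @ relu(W @ x) / m

rng = np.random.default_rng(0)
d, m = 5, 1000
W = rng.normal(size=(m, d))   # hidden-layer weights
a = rng.normal(size=m)        # output-layer weights
x = rng.normal(size=d)

y = two_layer(x, W, a)
# Homogeneity of the activation makes the whole network 1-homogeneous in x:
print(np.isclose(two_layer(2.0 * x, W, a), 2.0 * y))  # True
```

With ReLU, scaling the input by any c > 0 scales the output by c, which is the homogeneity property the talk's analysis relies on.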
Sept. 11, 2020, All Day
In this talk, we will focus on the recently emerged field of (adversarially) robust learning. This field began with the observation that modern learning models, despite their breakthrough performance, remain fragile to seemingly innocuous changes in the data, such as small, norm-bounded perturbations of the input. In response, various training methodologies have been developed for enhancing robustness. However, it is fair to say that our understanding of this field is still in its infancy, and several key questions remain wide open. We will consider two such questions.
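To make "small, norm-bounded perturbations" concrete, here is a sketch of an l-infinity-bounded input perturbation in the style of the fast gradient sign method. The gradient vector here is a stand-in assumption (in practice it would be the loss gradient with respect to the input); this illustrates the threat model, not any method from the talk.

```python
import numpy as np

def linf_perturb(x, grad, eps):
    # Shift each coordinate by at most eps in the direction of the loss
    # gradient's sign, so the perturbation delta satisfies ||delta||_inf <= eps.
    return x + eps * np.sign(grad)

x = np.array([0.2, -0.5, 0.7])        # clean input (illustrative values)
grad = np.array([1.3, -0.4, 0.0])     # assumed loss gradient w.r.t. the input
x_adv = linf_perturb(x, grad, eps=0.1)

# The perturbation is norm-bounded: no coordinate moves by more than eps.
print(np.max(np.abs(x_adv - x)))  # 0.1
```

The point of the norm bound is that `x_adv` is visually indistinguishable from `x` for small eps, yet such inputs can flip a non-robust model's prediction.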
Sept. 4, 2020, All Day
A welcome back meeting for new and returning students who are part of the Wireless Networking & Communications Group. Connect with WNCG faculty, staff, and students as we gear up to start a new academic year. Get a refresher on who we are and what we do, and catch up on the latest developments in WNCG research through a series of casual lightning pitch updates from some of your fellow researchers.
Access: Zoom meeting details will be provided via email to the WNCG student list.
May 22, 2020, All Day
May 15, 2020, All Day
Seminar time shown in CDT (UTC -5)
May 8, 2020, All Day
Join us for a special virtual installment of the ML Seminar Series:
In this talk, we aim to quantify the robustness of distributed training against worst-case failures and adversarial nodes. We show that there is a gap between robustness guarantees depending on whether adversarial nodes have full control of the hardware, the training data, or both. Using ideas from robust statistics and coding theory, we establish robust and scalable training methods for centralized, parameter-server systems.
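One classic idea from robust statistics in this setting is for the parameter server to aggregate worker gradients with a coordinate-wise median instead of a mean, so a minority of adversarial workers cannot drag the update arbitrarily far. This is a minimal sketch of that general idea, not the specific method from the talk.

```python
import numpy as np

def robust_aggregate(grads):
    # Coordinate-wise median of worker gradients: a standard robust-statistics
    # aggregator that tolerates a minority of adversarial workers.
    return np.median(np.stack(grads), axis=0)

# Three honest workers report similar gradients; one adversary sends garbage.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
adversarial = [np.array([1e6, -1e6])]

agg = robust_aggregate(honest + adversarial)
print(agg)  # [1.05 1.95] -- stays close to the honest gradients
```

A plain mean over the same four vectors would be dominated by the adversary's vector, which is exactly the failure mode robust aggregation is designed to avoid.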
May 7, 2020, All Day
Few-shot classification, the task of adapting a classifier to unseen classes given a small labeled dataset, is an important step on the path toward human-like machine learning. I will present what I think are some of the key advances and open questions in this area. I will then focus on the fundamental issue of overfitting in the few-shot scenario. Bayesian methods are well-suited to tackling this issue because they allow practitioners to specify prior beliefs and update those beliefs in light of observed data.
March 6, 2020, All Day
Large-scale machine learning training, in particular, distributed stochastic gradient descent (SGD), needs to be robust to inherent system variability such as unpredictable computation and communication delays. This work considers a distributed SGD framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. Our goal is to analyze and improve the true speed of error convergence with respect to wall-clock time (instead of the number of iterations).
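The local-update-and-periodic-averaging scheme described above can be sketched on a toy least-squares problem. The data, learning rate, and step counts below are illustrative assumptions, not parameters from the work itself.

```python
import numpy as np

def local_sgd(workers_data, w0, lr=0.05, local_steps=5, rounds=30):
    # Each worker runs `local_steps` local gradient updates on its own data;
    # the local models are then averaged (one communication round).
    models = [w0.copy() for _ in workers_data]
    for _ in range(rounds):
        for w, (X, y) in zip(models, workers_data):
            for _ in range(local_steps):
                grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
                w -= lr * grad
        avg = np.mean(models, axis=0)   # periodic averaging step
        models = [avg.copy() for _ in models]
    return models[0]

# Two workers with noiseless linear data sharing the same true parameter.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    data.append((X, X @ w_true))

w_hat = local_sgd(data, np.zeros(3))
print(np.linalg.norm(w_hat - w_true) < 0.05)  # True
```

Averaging every `local_steps` iterations rather than after every iteration trades a little per-iteration progress for far less communication, which is the wall-clock speedup the talk analyzes.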
Feb. 21, 2020, All Day
Machine learning today bears resemblance to the field of aviation soon after the Wright Brothers’ pioneering flights in the early 1900s. It took half a century of aeronautical engineering advances for the ‘Jet Age’ (i.e., commercial aviation) to become a reality. Similarly, machine learning (ML) is currently experiencing a renaissance, yet fundamental barriers must be overcome to fully unlock the potential of ML-powered technology. In this talk, I describe our work to help democratize ML by tackling barriers related to scalability, privacy, and safety.
Feb. 20, 2020, All Day
Much of the prior work on scheduling algorithms for wireless networks focuses on maximizing throughput. However, for many real-time applications, delay and deadline guarantees on packet delivery can be more important than long-term throughput. In this talk, we consider the problem of scheduling deadline-constrained packets in wireless networks under a conflict-graph interference model. The objective is to guarantee that at least a certain fraction of each link's packets are delivered within their deadlines; this fraction is referred to as the delivery ratio.
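To illustrate the setup (not the algorithm from the talk), the sketch below runs a greedy earliest-deadline-first scheduler over a toy conflict graph and computes each link's delivery ratio. The links, deadlines, and conflict relation are made-up examples.

```python
def edf_schedule(packets, conflict, horizon):
    # packets: list of (link, deadline); each packet needs one time slot
    # and must be sent in some slot t < deadline.
    # conflict(a, b) -> True if links a and b cannot transmit together
    # (conflict-graph interference model).
    order = sorted(range(len(packets)), key=lambda i: packets[i][1])
    delivered = set()
    for t in range(horizon):
        active = set()  # links transmitting in this slot
        for i in order:
            link, deadline = packets[i]
            if i in delivered or t >= deadline or link in active:
                continue
            if all(not conflict(link, other) for other in active):
                active.add(link)
                delivered.add(i)
    return delivered

def delivery_ratio(packets, delivered, link):
    # Fraction of the link's packets delivered within their deadlines.
    ids = [i for i, (l, _) in enumerate(packets) if l == link]
    return sum(i in delivered for i in ids) / len(ids)

# Three links on a path: A-B and B-C interfere; A and C can transmit together.
packets = [("A", 2), ("B", 2), ("C", 2), ("B", 1)]
conflict = lambda u, v: {u, v} in ({"A", "B"}, {"B", "C"})
delivered = edf_schedule(packets, conflict, horizon=2)

print(delivery_ratio(packets, delivered, "B"))  # 0.5: one of B's packets misses its deadline
```

Here link B, which conflicts with both neighbors, only delivers half its packets within their deadlines, while A and C achieve a delivery ratio of 1; guaranteeing a target ratio per link is exactly the objective the talk formalizes.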