Past Events

Oct. 23, 2020
The focus of our work is to obtain finite-sample and/or finite-time convergence bounds for various model-free Reinforcement Learning (RL) algorithms. Many RL algorithms are special cases of Stochastic Approximation (SA), a popular approach for solving fixed-point equations when the information is corrupted by noise. We first obtain finite-sample bounds for general SA using a generalized Moreau envelope as a smooth potential/Lyapunov function.
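As a point of reference, the sketch below shows the generic SA iteration the abstract refers to, applied to a toy problem; the contraction map F, the 1/k step sizes, and the noise level are illustrative choices, not details from the talk.

```python
import numpy as np

# Generic stochastic approximation (SA) for the fixed-point equation
# x = F(x), observing only noise-corrupted evaluations of F.
# Illustrative assumptions: F is a simple contraction with fixed point
# [1, 1], steps decay as 1/k, and the noise is Gaussian.

def F(x):
    return 0.5 * x + 0.5 * np.ones_like(x)  # contraction, fixed point [1, 1]

rng = np.random.default_rng(0)
x = np.zeros(2)
for k in range(1, 10_001):
    alpha = 1.0 / k                                 # diminishing step size
    noisy_F = F(x) + 0.1 * rng.standard_normal(2)   # noisy evaluation of F
    x = x + alpha * (noisy_F - x)                   # SA update toward the fixed point

print(x)  # should end up close to the fixed point [1, 1]
```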
Oct. 16, 2020
Overparameterized neural networks have proved to be remarkably successful in many complex tasks such as image classification and deep reinforcement learning. In this talk, we will consider the role of explicit regularization in training overparameterized neural networks. Specifically, we consider ReLU networks and show that the landscapes of commonly used regularized loss functions have the property that every local minimum has good memorization and regularization performance. Joint work with Shiyu Liang and Ruoyu Sun. Time: 11:00 AM – 12:00 PM Central (CDT; UTC -5)
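For concreteness, here is a minimal sketch of the kind of setup the abstract describes: an overparameterized two-layer ReLU network trained on a loss with explicit l2 regularization (weight decay). The data, width, and hyperparameters are illustrative assumptions, not the speakers' construction.

```python
import numpy as np

# Two-layer ReLU network, overparameterized (m >> n), trained by gradient
# descent on a regularized loss: squared error plus lam * (||W||^2 + ||v||^2),
# i.e. plain weight decay. Data, width, and hyperparameters are illustrative.

rng = np.random.default_rng(0)
n, d, m = 20, 5, 200
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((m, d)) / np.sqrt(d)   # hidden-layer weights
v = rng.standard_normal(m) / np.sqrt(m)        # output-layer weights
lam, lr = 1e-3, 1e-2

for _ in range(2000):
    H = np.maximum(X @ W.T, 0.0)               # ReLU activations, shape (n, m)
    err = H @ v - y
    grad_v = H.T @ err / n + 2 * lam * v
    grad_W = (np.outer(err, v) * (H > 0)).T @ X / n + 2 * lam * W
    v -= lr * grad_v
    W -= lr * grad_W

H = np.maximum(X @ W.T, 0.0)
print(np.mean((H @ v - y) ** 2))  # training error stays small despite the regularizer
```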
Oct. 13, 2020
Abstract TBA. Event time: 11:00 AM – 12:00 PM Central (CDT; UTC -5). Access: the seminar will be delivered live on the date and time shown above via Zoom; access link TBA. The Zoom conferencing system is accessible to UT faculty, staff, and students with support from ITS. Otherwise, you can sign up for a free account on the Zoom website.
Oct. 9, 2020
Meeting Time: 11:00 AM – 12:00 PM Central (CDT; UTC -5)
Oct. 2, 2020
We revisit the fundamental problem of physical layer communications, namely reproducing at one point a message selected at another point, to finally arrive at a trainable system that inherently learns to communicate and adapts to any channel environment. As such, we realize a data-driven system design, based on deep learning algorithms, leading to a universal framework that allows end-to-end optimization of the whole data-link without the need for prior mathematical modeling and analysis.
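A minimal sketch of the end-to-end idea, assuming the standard autoencoder view of the physical layer: a transmitter maps a message to channel symbols, a channel corrupts them, and a receiver decodes. In a learned system both end maps are neural networks optimized jointly through a differentiable channel model; the random codebook, AWGN channel, and matched-filter decoder below are illustrative stand-ins for those trained components.

```python
import numpy as np

# End-to-end view of the physical layer: transmitter -> channel -> receiver.
# A fixed random codebook with matched-filter decoding stands in for the
# trainable neural components; the AWGN channel and SNR are illustrative.

rng = np.random.default_rng(0)
M, n_ch = 16, 7                                  # M messages, n_ch channel uses

codebook = rng.standard_normal((M, n_ch))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-energy codewords

def transmitter(msg):
    return codebook[msg]

def channel(x, snr_db=15.0):
    sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    return x + sigma * rng.standard_normal(x.shape)  # AWGN

def receiver(y):
    return int(np.argmax(codebook @ y))          # matched filter / nearest neighbor

msg = 3
print(receiver(channel(transmitter(msg))))       # likely recovers 3 at this SNR
```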
Oct. 2, 2020
Federated Learning has emerged as an important paradigm in modern large-scale machine learning, where the training data remains distributed over a large number of clients, which may be phones, network sensors, hospitals, etc. A major challenge in the design of optimization methods for Federated Learning is the heterogeneity (i.e., non-i.i.d. nature) of client data. This problem affects the currently dominant algorithm deployed in practice, known as Federated Averaging (FedAvg): we provide results for FedAvg quantifying the degree to which this problem causes unstable or slow convergence.
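The sketch below illustrates FedAvg itself, on a toy least-squares problem where each client has a different local optimum; the drift caused by multiple local steps under such heterogeneity is the effect the abstract refers to. All problem sizes and step sizes are illustrative.

```python
import numpy as np

# FedAvg on a toy least-squares problem with heterogeneous clients: each
# client k minimizes 0.5 * ||A_k w - b_k||^2 with its own optimum, so the
# local SGD steps drift apart ("client drift") and the averaged iterate
# need not settle at the global optimum. All constants are illustrative.

rng = np.random.default_rng(0)
K, d, n_local = 10, 3, 20
A = [rng.standard_normal((n_local, d)) for _ in range(K)]
b = [A[k] @ rng.standard_normal(d) for k in range(K)]    # distinct client optima

w = np.zeros(d)
for _ in range(50):                       # communication rounds
    client_models = []
    for k in range(K):
        w_k = w.copy()
        for _ in range(5):                # multiple local steps (source of drift)
            grad = A[k].T @ (A[k] @ w_k - b[k]) / n_local
            w_k -= 0.05 * grad
        client_models.append(w_k)
    w = np.mean(client_models, axis=0)    # server averages the client models

print(np.mean([0.5 * np.mean((A[k] @ w - b[k]) ** 2) for k in range(K)]))
```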
Sept. 25, 2020
The application of supervised learning techniques for the design of the physical layer of a communication link is often impaired by the limited amount of pilot data available for each device, while the use of unsupervised learning is typically limited by the need to carry out a large number of training iterations. In this talk, meta-learning, or learning-to-learn, is introduced as a tool to alleviate these problems. The talk will consider an Internet-of-Things (IoT) scenario in which devices transmit sporadically using short packets with few pilot symbols over a fading channel.
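As a rough illustration of the learning-to-learn idea (not the speaker's exact method), the sketch below applies a Reptile-style first-order meta-learning update to a toy scalar fading channel, meta-learning an initialization that can be adapted from only four pilot symbols.

```python
import numpy as np

# Reptile-style first-order meta-learning for few-pilot adaptation on a toy
# scalar fading channel y = h * x + noise. Each task is a device/channel
# realization; the outer loop learns an initialization theta that a handful
# of inner SGD steps can adapt to a new gain h. The channel model, pilot
# count, and step sizes are illustrative assumptions.

rng = np.random.default_rng(0)

def adapt(theta, x, y, steps=5, lr=0.1):
    # Inner loop: fit the channel estimate to the pilots by SGD on
    # the squared error 0.5 * (theta * x - y)^2.
    for _ in range(steps):
        theta -= lr * np.mean((theta * x - y) * x)
    return theta

theta = 0.0                                    # meta-learned initialization
for _ in range(1000):                          # outer loop over sampled devices
    h = 1.0 + 0.5 * rng.standard_normal()      # this device's fading gain
    x = rng.choice([-1.0, 1.0], size=4)        # 4 BPSK pilot symbols
    y = h * x + 0.1 * rng.standard_normal(4)
    theta += 0.05 * (adapt(theta, x, y) - theta)   # Reptile meta-update

print(theta)  # settles near the mean gain, so few pilots suffice to adapt
```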
Sept. 18, 2020
Many supervised learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many guarantees exist. Models which are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this talk, I will consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
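A sketch of the parameterization behind such results, under illustrative assumptions: a two-layer network f(x) = (1/m) * sum_j a_j * relu(w_j . x) with the positively homogeneous ReLU activation, trained at several widths m. In the mean-field analyses the abstract alludes to, the infinite-width limit of this dynamics becomes a gradient flow over a measure on neurons; the data, target, and hyperparameters below are not from the talk.

```python
import numpy as np

# Two-layer network in the mean-field parameterization,
#   f(x) = (1/m) * sum_j a_j * relu(w_j . x),
# with the positively homogeneous ReLU. Gradient descent is run with the
# learning rate scaled by the width m, as is standard in mean-field
# analyses; data, target, and hyperparameters are illustrative.

rng = np.random.default_rng(0)
d, n = 2, 50
X = rng.standard_normal((n, d))
y = np.maximum(X[:, 0], 0.0)                  # a target the class can represent

def train(m, steps=3000, lr=0.1):
    a = rng.standard_normal(m)
    W = rng.standard_normal((m, d))
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)          # activations, shape (n, m)
        err = H @ a / m - y
        a -= lr * H.T @ err / n               # width-scaled gradient steps
        W -= lr * (np.outer(err, a) * (H > 0)).T @ X / n
    H = np.maximum(X @ W.T, 0.0)
    return np.mean((H @ a / m - y) ** 2)

for m in (10, 100, 1000):
    print(m, train(m))                        # loss typically improves with width
```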
Sept. 11, 2020
In this talk, we will focus on the recently emerged field of (adversarially) robust learning. This field began with the observation that modern learning models, despite their breakthrough performance, remain fragile to seemingly innocuous changes in the data, such as small, norm-bounded perturbations of the input. In response, various training methodologies have been developed for enhancing robustness. However, it is fair to say that our understanding of this field is still in its infancy, and several key questions remain wide open. We will consider two such questions.
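To make the phenomenon and the standard response concrete, the sketch below mounts an l-infinity norm-bounded attack on a linear classifier and trains against it (adversarial training). The closed-form attack is exact only for linear models, and all data and constants are illustrative, not taken from the talk.

```python
import numpy as np

# Norm-bounded attacks and adversarial training on a linear classifier.
# For a linear model the worst-case l_inf perturbation of size eps has the
# closed form delta = -eps * y * sign(w); training on attacked inputs is
# the standard robust-training recipe. Data and constants are illustrative.

rng = np.random.default_rng(0)
n, d, eps = 200, 20, 0.1
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)                        # labels from a linear ground truth

def attack(w, X, y):
    return X - eps * y[:, None] * np.sign(w)[None, :]   # worst case for linear w

w = np.zeros(d)
for _ in range(500):
    X_adv = attack(w, X, y)                    # inner maximization (exact here)
    margin = y * (X_adv @ w)
    grad = -(X_adv * (y / (1.0 + np.exp(margin)))[:, None]).mean(axis=0)
    w -= 0.5 * grad                            # outer minimization (logistic loss)

print(np.mean(np.sign(X @ w) == y),            # clean accuracy
      np.mean(np.sign(attack(w, X, y) @ w) == y))  # robust accuracy
```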