Past Events
Oct. 30, 2020, All Day
Machine learning as a service (MLaaS) has emerged as a paradigm allowing clients to outsource machine learning computations to the cloud. However, MLaaS raises immediate security concerns, specifically relating to the integrity (or correctness) of computations performed by an untrusted cloud and the privacy of the client’s data. In this talk, I discuss frameworks we built, based on cryptographic tools, for secure deep-learning-based inference on an untrusted cloud: CryptoNAS (building models for private inference) and SafetyNets (addressing correctness).
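As a rough illustration of the integrity side only (this is a classic Freivalds-style randomized check for an outsourced matrix product, not the SafetyNets protocol itself; all names below are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def untrusted_matmul(A, B, cheat=False):
        # Stand-in for a cloud server: returns A @ B, optionally corrupted.
        C = A @ B
        if cheat:
            C = C.copy()
            C[0, 0] += 1.0        # a single wrong entry
        return C

    def freivalds_check(A, B, C, trials=20):
        # Randomized check that C == A @ B without recomputing the product:
        # each trial costs O(n^2) instead of O(n^3), and a wrong C slips
        # through with probability at most 2**(-trials).
        n = B.shape[1]
        for _ in range(trials):
            r = rng.integers(0, 2, size=(n, 1)).astype(float)
            if not np.allclose(A @ (B @ r), C @ r):
                return False
        return True

    A = rng.standard_normal((64, 64))
    B = rng.standard_normal((64, 64))
    print(freivalds_check(A, B, untrusted_matmul(A, B)))              # True
    print(freivalds_check(A, B, untrusted_matmul(A, B, cheat=True)))  # False (w.h.p.)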
Seminar will be delivered live via Zoom on Friday, October 30, 2020, 9:00 AM – 10:00 AM U.S. Central Time (CDT / UTC -5).
The Zoom conferencing system is accessible to UT faculty, staff, and students with support from ITS. Otherwise, you can sign up for a free account on the Zoom website.
Oct. 23, 2020, All Day
The focus of our work is to obtain finite-sample and/or finite-time convergence bounds for various model-free Reinforcement Learning (RL) algorithms. Many RL algorithms are special cases of Stochastic Approximation (SA), a popular approach for solving fixed-point equations when the information is corrupted by noise. We first obtain finite-sample bounds for general SA using a generalized Moreau envelope as a smooth potential/Lyapunov function.
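For readers unfamiliar with the SA setting, a minimal toy sketch (my own illustration, not the speaker's analysis): a noisy fixed-point iteration x_{k+1} = x_k + alpha_k (F(x_k) + noise - x_k) with a diminishing step size, which drives the iterate toward the fixed point of F.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy fixed-point problem: F(x) = 0.5*x + 1 is a contraction with fixed
    # point x* = 2, but we only observe F through noisy evaluations.
    def noisy_F(x):
        return 0.5 * x + 1.0 + rng.normal(scale=0.5)

    x = 0.0
    for k in range(1, 20001):
        alpha = 1.0 / k                    # diminishing step size alpha_k = 1/k
        x = x + alpha * (noisy_F(x) - x)   # stochastic approximation update

    print(x)   # close to the fixed point x* = 2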
Oct. 16, 2020, All Day
Overparameterized neural networks have proved to be remarkably successful in many complex tasks such as image classification and deep reinforcement learning. In this talk, we will consider the role of explicit regularization in training overparameterized neural networks. Specifically, we consider ReLU networks and show that the landscape of commonly used regularized loss functions has the property that every local minimum has good memorization and regularization performance. Joint work with Shiyu Liang and Ruoyu Sun.
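To make the training objective concrete, here is a minimal sketch (my own toy example, not the speaker's construction) of a one-hidden-layer ReLU network fit by gradient descent on an explicitly regularized squared loss, i.e. mean squared error plus lambda times the squared L2 norm of the weights:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data and an overparameterized one-hidden-layer ReLU network.
    n, d, m, lam = 32, 5, 64, 1e-3
    X = rng.standard_normal((n, d))
    y = np.sin(X @ rng.standard_normal(d))

    W1 = rng.standard_normal((d, m)) / np.sqrt(d)
    W2 = rng.standard_normal((m, 1)) / np.sqrt(m)

    lr = 1e-2
    for step in range(2000):
        H = np.maximum(X @ W1, 0.0)            # ReLU hidden layer
        err = H @ W2 - y[:, None]              # prediction error
        # Explicitly regularized loss: MSE + lam * squared L2 norm of the weights.
        loss = (err ** 2).mean() + lam * ((W1 ** 2).sum() + (W2 ** 2).sum())
        # Manual gradients of the regularized loss.
        g_pred = 2.0 * err / n
        g_W2 = H.T @ g_pred + 2.0 * lam * W2
        g_H = g_pred @ W2.T
        g_W1 = X.T @ (g_H * (H > 0)) + 2.0 * lam * W1
        W1 -= lr * g_W1
        W2 -= lr * g_W2

    print(loss)   # regularized training loss after gradient descent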
Time: 11:00 AM – 12:00 PM Central (CDT; UTC -5)
Oct. 13, 2020, All Day
Abstract TBA
Event time is 11:00 AM – 12:00 PM Central (CDT; UTC -5)
Access: Seminar will be delivered live via Zoom on the date and time shown above. Access link TBA.
The Zoom conferencing system is accessible to UT faculty, staff, and students with support from ITS. Otherwise, you can sign up for a free account on the Zoom website.
Oct. 9, 2020, All Day
Meeting Time: 11:00 AM – 12:00 PM Central (CDT; UTC -5)
Oct. 2, 2020, All Day
We revisit the fundamental problem of physical-layer communications, namely reproducing at one point a message selected at another point, to finally arrive at a trainable system that inherently learns to communicate and adapts to any channel environment. As such, we realize a data-driven system design, based on deep learning algorithms, leading to a universal framework that allows end-to-end optimization of the whole data link without the need for prior mathematical modeling and analysis.
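As a structural sketch of the end-to-end view (encoder, channel, decoder as one pipeline), the toy below maps messages to power-normalized codewords, passes them through an AWGN channel, and decodes by nearest neighbor; the joint training loop that would actually learn the encoder and decoder is omitted, and all names and parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    M, n_channel, snr_db = 16, 2, 10.0   # 16 messages, 2 real channel uses, AWGN SNR

    # "Encoder": one codeword per message, normalized to an average power
    # constraint. In an end-to-end learned system these would be produced by a
    # neural network trained jointly with the decoder.
    codebook = rng.standard_normal((M, n_channel))
    codebook /= np.sqrt((codebook ** 2).mean())

    def channel(x, snr_db):
        # AWGN channel: add Gaussian noise at the given signal-to-noise ratio.
        sigma = np.sqrt(10 ** (-snr_db / 10.0))
        return x + sigma * rng.standard_normal(x.shape)

    def decode(y):
        # Nearest-neighbor decoder over the codebook (stand-in for a learned decoder).
        dists = ((y[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    msgs = rng.integers(0, M, size=10000)
    received = channel(codebook[msgs], snr_db)
    print((decode(received) != msgs).mean())   # block error rate of the toy link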
Oct. 2, 2020, All Day
Federated Learning has emerged as an important paradigm in modern large-scale machine learning, where the training data remains distributed over a large number of clients, which may be phones, network sensors, hospitals, etc. A major challenge in the design of optimization methods for Federated Learning is the heterogeneity (i.e., non-i.i.d. nature) of client data. This problem affects the currently dominant algorithm deployed in practice, known as Federated Averaging (FedAvg): we provide results for FedAvg quantifying the degree to which this heterogeneity causes unstable or slow convergence.
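For reference, a minimal FedAvg sketch on a toy least-squares problem with heterogeneous (non-i.i.d.) clients; each round, clients run a few local gradient steps from the current global model and the server averages the resulting models. Hyperparameters and names are illustrative, not taken from the talk:

    import numpy as np

    rng = np.random.default_rng(0)

    d, num_clients, local_steps, rounds, lr = 10, 20, 5, 100, 0.05

    # Heterogeneous clients: each client's data comes from its own ground-truth
    # model, so local optima differ from the global one (the non-i.i.d. setting).
    client_data = []
    for _ in range(num_clients):
        w_i = 3.0 * rng.standard_normal(d)
        X = rng.standard_normal((50, d))
        client_data.append((X, X @ w_i + 0.1 * rng.standard_normal(50)))

    def local_update(w, X, y):
        # A few local (full-batch) gradient steps on one client's data.
        w = w.copy()
        for _ in range(local_steps):
            w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
        return w

    w_global = np.zeros(d)
    for _ in range(rounds):
        local_models = [local_update(w_global, X, y) for X, y in client_data]
        w_global = np.mean(local_models, axis=0)   # server averages client models

    print(np.mean([((X @ w_global - y) ** 2).mean() for X, y in client_data]))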
Sept. 25, 2020, All Day
The application of supervised learning techniques for the design of the physical layer of a communication link is often impaired by the limited amount of pilot data available for each device, while the use of unsupervised learning is typically limited by the need to carry out a large number of training iterations. In this talk, meta-learning, or learning-to-learn, is introduced as a tool to alleviate these problems. The talk will consider an Internet-of-Things (IoT) scenario in which devices transmit sporadically using short packets with few pilot symbols over a fading channel.
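As a rough illustration of the learning-to-learn idea in this setting (a Reptile-style first-order sketch of my own, not the speaker's algorithm): each "device" is a task with its own channel gain and only a handful of noisy pilots, and meta-training finds an initialization from which a few gradient steps adapt well to a new device.

    import numpy as np

    rng = np.random.default_rng(0)

    pilots, inner_steps, inner_lr, meta_lr = 4, 5, 0.1, 0.1

    def sample_device():
        # A "task": estimate this device's channel gain from a few noisy pilots.
        h = rng.normal(loc=2.0, scale=1.0)            # per-device channel gain
        x = rng.standard_normal(pilots)               # pilot symbols
        y = h * x + 0.1 * rng.standard_normal(pilots)
        return x, y

    def adapt(theta, x, y):
        # A few gradient steps on the squared error for one device.
        for _ in range(inner_steps):
            theta -= inner_lr * 2.0 * np.mean((theta * x - y) * x)
        return theta

    theta = 0.0                                       # shared meta-initialization
    for _ in range(2000):                             # Reptile-style meta-training
        x, y = sample_device()
        adapted = adapt(theta, x, y)
        theta += meta_lr * (adapted - theta)          # move the init toward the adapted model

    print(theta)   # roughly the average channel gain: a good start for new devices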