Past Events
Event Status
Scheduled
Feb. 21, 2020, All Day
Machine learning today bears resemblance to the field of aviation soon after the Wright Brothers’ pioneering flights in the early 1900s. It took half a century of aeronautical engineering advances for the ‘Jet Age’ (i.e., commercial aviation) to become a reality. Similarly, machine learning (ML) is currently experiencing a renaissance, yet fundamental barriers must be overcome to fully unlock the potential of ML-powered technology. In this talk, I describe our work to help democratize ML by tackling barriers related to scalability, privacy, and safety.
Feb. 20, 2020, All Day
Much of the prior work on scheduling algorithms for wireless networks focuses on maximizing throughput. However, for many real-time applications, delay and deadline guarantees on packet delivery can be more important than long-term throughput. In this talk, we consider the problem of scheduling deadline-constrained packets in wireless networks under a conflict-graph interference model. The objective is to guarantee that at least a certain fraction of each link's packets are delivered within their deadlines; this fraction is referred to as the delivery ratio.
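To make the delivery-ratio objective concrete, here is a minimal sketch (our own illustration, not the speaker's algorithm; the function and variable names are hypothetical) that computes per-link delivery ratios from packet counts and checks them against required fractions:

```python
# Hypothetical illustration: compute each link's delivery ratio from counts of
# (packets delivered within deadline, packets sent) and check requirements.

def delivery_ratio(delivered: int, sent: int) -> float:
    """Fraction of a link's packets delivered within their deadlines."""
    if sent == 0:
        return 1.0  # no packets sent -> requirement vacuously met
    return delivered / sent

def meets_requirements(stats, required):
    """True iff every link's delivery ratio meets its required fraction."""
    return all(
        delivery_ratio(d, s) >= required[link]
        for link, (d, s) in stats.items()
    )

stats = {"link_a": (90, 100), "link_b": (45, 50)}
required = {"link_a": 0.8, "link_b": 0.95}
print(meets_requirements(stats, required))  # link_b: 0.9 < 0.95, so False
```

The scheduling problem in the talk is then to choose which non-conflicting links transmit in each slot so that these per-link guarantees hold.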
Feb. 3, 2020, All Day
In this talk, I will discuss principled ways of solving a classical reinforcement learning (RL) problem and introduce its robust variant.
Jan. 31, 2020, All Day
We will discuss two problems that have different application spaces but share a common mathematical core. These problems combine stochastic approximation, an iterative method for finding the fixed point of a function from noisy observations, and consensus, a general averaging technique for multiple agents to cooperatively solve a distributed problem.
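As a toy sketch of how the two ingredients combine (our own example, not the speakers' algorithm; the map and constants are hypothetical), each agent below runs a stochastic-approximation update toward the fixed point of f(x) = 0.5x + 1 from noisy evaluations, and a consensus step averages the agents' iterates:

```python
import random

# Toy sketch: stochastic approximation + consensus. The fixed point of
# f(x) = 0.5*x + 1 is x* = 2; each agent only sees noisy evaluations of f.
random.seed(0)

def noisy_f(x):
    # Noisy observation of the map whose fixed point we seek.
    return 0.5 * x + 1 + random.gauss(0, 0.1)

num_agents = 5
x = [0.0] * num_agents
for t in range(1, 2001):
    step = 1.0 / t  # diminishing step size, as stochastic approximation requires
    # Stochastic-approximation update: x <- x + step * (f(x) - x)
    x = [xi + step * (noisy_f(xi) - xi) for xi in x]
    # Consensus: replace each agent's iterate with the network average.
    avg = sum(x) / num_agents
    x = [avg] * num_agents

print(round(x[0], 2))  # close to the fixed point x* = 2
```

The diminishing step size averages out the observation noise, while the consensus step keeps the agents' iterates synchronized; the talk studies the general interaction of these two mechanisms.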
Dec. 6, 2019, All Day
Submodular functions model the intuitive notion of diminishing returns. Due to their far-reaching applications, they have been rediscovered in many fields such as information theory, operations research, statistical physics, economics, and machine learning. They also enjoy computational tractability, as they can be minimized exactly or maximized approximately. The goal of this talk is simple: we will see how a little bit of randomness, a little bit of greediness, and the right combination of the two can lead to pretty good methods for offline, streaming, and distributed solutions.
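A standard concrete example (ours, not from the talk) is coverage: the number of elements covered by a collection of sets is monotone submodular, and the classic greedy heuristic, which repeatedly adds the set with the largest marginal gain, achieves a (1 - 1/e) approximation for cardinality-constrained maximization:

```python
# Set cover as a monotone submodular function, maximized greedily.

def coverage(selected, sets):
    """Number of elements covered by the chosen sets (monotone submodular)."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Pick k sets, each time adding the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max(
            (i for i in range(len(sets)) if i not in chosen),
            key=lambda i: coverage(chosen + [i], sets),
        )
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked = greedy_max(sets, 2)
print(picked, coverage(picked, sets))  # covers 7 elements with 2 sets
```

Diminishing returns here means that the marginal gain of adding a set can only shrink as the chosen collection grows, which is exactly what makes the greedy analysis go through.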
Nov. 12, 2019, All Day
Join us for the 17th Texas Wireless Summit on November 12, 2019.
This year's Summit will highlight advances and opportunities at the intersection of human-centered computing, sensing and connectivity. Sessions and panels will focus on wearables, virtual and mixed reality, bio-interfaces, and perception. We will explore the challenges and demands of the communication infrastructure required to support and enhance devices and experiences.
Nov. 8, 2019, All Day
I will talk about finite sample expressivity, i.e., the memorization power, of ReLU networks. Recent results showed (unsurprisingly) that arbitrary input data can be perfectly memorized by a shallow ReLU network with a single hidden layer of N hidden nodes. I will describe a more careful construction that trades off width for depth, showing that a ReLU network with 2 hidden layers, each with 2*sqrt(N) hidden nodes, can perfectly memorize arbitrary datasets. Moreover, we prove that a width of Θ(sqrt(N)) is both necessary and sufficient for perfect memorization.
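A back-of-the-envelope way to see why sqrt(N) width is the right scale (our own parameter-counting sketch, not the talk's proof): with two hidden layers of width w = 2*sqrt(N), the w-by-w weight matrix between them alone contributes 4N parameters, so the construction memorizes N labels with Θ(N) parameters, matching the obvious one-parameter-per-label lower bound:

```python
import math

def param_count(d_in, widths, d_out=1):
    """Weights + biases of a fully connected ReLU network."""
    dims = [d_in] + list(widths) + [d_out]
    return sum(dims[i] * dims[i + 1] + dims[i + 1] for i in range(len(dims) - 1))

N = 10_000
w = 2 * math.isqrt(N)  # 2*sqrt(N) hidden nodes per layer (N a perfect square here)
# Two hidden layers of width w give roughly w*w = 4N parameters: Theta(N) total.
print(param_count(d_in=10, widths=[w, w]))
```

A single hidden layer of N nodes, by contrast, uses Θ(N·d) parameters, which is why the two-layer construction is the more economical one.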
Sept. 6, 2019, All Day
May 10, 2019, All Day
May 3, 2019, All Day
Many modern neural networks are trained in an over-parameterized regime, where the number of model parameters exceeds the size of the training dataset. Due to their over-parameterized nature, these models in principle have the capacity to (over)fit any set of labels, including pure noise. Despite this high fitting capacity, somewhat paradoxically, models trained via first-order methods (often with early stopping) continue to predict well on unseen test data.
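The interpolation half of this phenomenon is easy to see even in a linear model (a small numerical sketch of ours, not the speaker's experiment): with more features than samples, the model fits arbitrary labels, even pure noise, exactly, and `np.linalg.lstsq` returns the minimum-norm interpolating solution, the one gradient descent initialized at zero converges to:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 20, 100          # over-parameterized: 100 > 20
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)           # pure-noise labels

# Minimum-norm solution of the underdetermined system X w = y.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_error = np.max(np.abs(X @ w - y))
print(train_error < 1e-8)                # interpolates even random labels
```

The puzzle the talk addresses is the other half: why, among all the interpolating solutions such models can reach, the ones found by first-order methods still generalize.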