Multimedia

Abstract: Many machine learning tasks can be posed as structured prediction, where the goal is to predict a labeling or other structured object. For example, the input may be an image or a sentence, and the output a labeling such as an assignment of each pixel in the image to foreground or background, or the parse tree for the sentence. Although marginal and MAP inference are NP-hard in the worst case for many of these models, approximate inference algorithms are remarkably successful in practice, and as a result structured prediction is widely used.
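As a concrete aside (an illustrative sketch, not material from the talk): on chain-structured models such as sequence labeling, MAP inference is tractable by dynamic programming, which is one reason worst-case hardness need not bite in every application. A minimal Python sketch, assuming a hypothetical pairwise chain model given by unary and transition score arrays:

    import numpy as np

    def viterbi_map(unary, pairwise):
        # unary: (n, k) array, unary[i, s] = score of label s at position i.
        # pairwise: (k, k) array, pairwise[s, t] = score of adjacent labels (s, t).
        # Returns the label sequence maximizing the total score (the MAP labeling).
        n, k = unary.shape
        score = unary[0].copy()               # best score ending in each label
        backptr = np.zeros((n, k), dtype=int)
        for i in range(1, n):
            cand = score[:, None] + pairwise + unary[i][None, :]  # (k, k) table
            backptr[i] = cand.argmax(axis=0)  # best previous label per current label
            score = cand.max(axis=0)
        labels = [int(score.argmax())]        # best final label, then backtrack
        for i in range(n - 1, 0, -1):
            labels.append(int(backptr[i][labels[-1]]))
        return labels[::-1]

The same idea extends to tree-structured models; it is on general graphs that exact MAP inference becomes NP-hard and approximate methods take over.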

What makes these real-world instances different from worst-case instances? One key difference is that in all of these applications there is an underlying "ground truth" that structured prediction is aiming to recover. In this talk, I will introduce a new theoretical framework for analyzing structured prediction algorithms in terms of their ability to achieve small Hamming error. We study the computational and statistical trade-offs that arise, and illustrate a setting where polynomial-time algorithms achieve optimal prediction even though the corresponding MAP inference task is NP-hard.
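For reference (the abstract does not spell it out), the standard normalized Hamming error between a predicted labeling y-hat and the ground truth y-star over n variables is

    \mathrm{err}_H(\hat{y}, y^\star) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\left[ \hat{y}_i \neq y^\star_i \right],

i.e., the fraction of coordinates (pixels, words, edges) that are predicted incorrectly.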

Based on joint work with Amir Globerson, Ofer Meshi, Tim Roughgarden, and Cafer Yildirim.

Speaker Bio: David Sontag is an Assistant Professor of Computer Science and Data Science at NYU, where Computer Science is part of the Courant Institute of Mathematical Sciences. His research focuses on machine learning and probabilistic inference, with a particular emphasis on applications to clinical medicine. For example, he is developing algorithms that learn probabilistic models for medical diagnosis directly from unstructured clinical data, automatically discovering and predicting latent (hidden) variables. Prof. Sontag collaborates with the Emergency Medicine Informatics Research Lab at Beth Israel Deaconess Medical Center and with Independence Blue Cross.

Previously, he was a postdoc at Microsoft Research New England. He received his Ph.D. in Computer Science from MIT, where he worked with Tommi Jaakkola on approximate inference and learning in probabilistic models. Prof. Sontag received a bachelor's degree in Computer Science from UC Berkeley, where he worked with Stuart Russell's First-Order Probabilistic Logic group.

Abstract: Submodular functions capture a wide spectrum of discrete problems in machine learning, signal processing and computer vision. They are characterized by intuitive notions of diminishing returns and economies of scale, and often lead to practical algorithms with theoretical guarantees.
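For reference, the diminishing-returns property has a crisp standard definition: a set function f on a ground set V is submodular if, for all sets A \subseteq B \subseteq V and every element e outside B,

    f(A \cup \{e\}) - f(A) \geq f(B \cup \{e\}) - f(B),

that is, adding an element to a smaller set helps at least as much as adding it to a larger one (coverage functions, graph cuts, and entropy all satisfy this).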

In the first part of this talk, I will give a general introduction to the concept of submodular functions, their optimization and example applications in machine learning.

In the second part, I will demonstrate how the close connection of submodularity to convexity leads to fast algorithms for minimizing a subclass of submodular functions: those that decompose as a sum of simpler submodular functions. Using a specific relaxation, the algorithms solve the discrete submodular optimization problem as a "best approximation" problem. They are easy to use and parallelize, and they solve both the convex relaxation and the original discrete problem. Their convergence analysis combines elements of geometry and spectral graph theory.
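The standard bridge between submodularity and convexity is the Lovász extension: a set function with f(∅) = 0 is submodular exactly when its Lovász extension is convex, which is what lets continuous methods attack the discrete problem. A minimal Python sketch of evaluating the extension (an illustration of the connection, not the algorithms from the talk):

    import numpy as np

    def lovasz_extension(f, x):
        # f: set function taking a Python set of indices, with f(set()) == 0.
        # x: 1-D numpy array of length n.
        # The resulting extension is convex if and only if f is submodular.
        order = np.argsort(-x)            # visit coordinates in decreasing order
        value, prev, chosen = 0.0, 0.0, set()
        for i in order:
            chosen.add(int(i))
            cur = f(chosen)
            value += x[i] * (cur - prev)  # marginal gain of i, weighted by x_i
            prev = cur
        return value

    # Example: for f(S) = |S| the extension is linear, summing the coordinates.
    print(lovasz_extension(len, np.array([0.3, 0.9, 0.1])))  # -> 1.3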

This is joint work with Robert Nishihara, Francis Bach, Suvrit Sra and Michael I. Jordan.

Speaker Bio: Stefanie Jegelka is the X-Consortium Career Development Assistant Professor in the Department of EECS at MIT, and a member of CSAIL and the Institute for Data, Systems, and Society. Before joining MIT, she was a postdoctoral scholar in the AMPLab at UC Berkeley. She earned her PhD from ETH Zurich in collaboration with the Max Planck Institutes in Tübingen, Germany, and a Diplom from the University of Tübingen. Her research interests lie in algorithmic and combinatorial machine learning, with applications in computer vision, materials science, and biology. She has been a fellow of the German National Academic Foundation, and has received several other fellowships, as well as a Best Paper Award at ICML and the 2015 Award of the German Society for Pattern Recognition.

The Television Academy announced today that Alan Bovik, professor in the Cockrell School of Engineering at The University of Texas at Austin and member of the Wireless Networking and Communications Group (WNCG), and his team of former students and collaborators will be honored with a 2015 Primetime Engineering Emmy Award for Outstanding Achievement in Engineering Development. The team will be recognized for their development of an advanced algorithm that enhances the video viewing experience for tens of millions of people throughout the world.

The awards were presented on October 28 at the 67th Engineering Emmy Awards in Hollywood, a ceremony hosted by Josh Brener of the HBO show “Silicon Valley.”
