Stefanie Jegelka: An introduction to submodularity, Part 2


Published: November 16, 2015

Abstract: Submodular functions capture a wide spectrum of discrete problems in machine learning, signal processing and computer vision. They are characterized by intuitive notions of diminishing returns and economies of scale, and often lead to practical algorithms with theoretical guarantees.
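For concreteness, the diminishing returns property has a standard formal statement (the notation below is the usual one in this literature, not taken from the talk itself): a set function f on subsets of a ground set V is submodular if

    f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B) \qquad \text{for all } A \subseteq B \subseteq V, \; e \in V \setminus B,

that is, adding an element to a smaller set gains at least as much as adding it to a larger one. Coverage functions, where f(A) counts the items covered by the union of a family of sets indexed by A, are a standard example.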

In the first part of this talk, I will give a general introduction to the concept of submodular functions, their optimization and example applications in machine learning.

In the second part, I will demonstrate how the close connection of submodularity to convexity leads to fast algorithms for minimizing a subclass of submodular functions: those that decompose as a sum of simpler submodular functions. Using a specific relaxation, the algorithms solve the discrete submodular optimization problem as a "best approximation" problem. They are easy to use and parallelize, and solve both the convex relaxation and the original discrete problem. Their convergence analysis combines elements of geometry and spectral graph theory.
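To give the flavor of this reduction (again with generic notation, not taken from the talk's slides): for a decomposition f = f_1 + f_2 with base polytopes B(f_1) and B(f_2), the base polytope of the sum satisfies B(f_1 + f_2) = B(f_1) + B(f_2), so Fujishige's minimum-norm-point characterization of submodular minimization becomes the best approximation problem

    \min \; \| y_1 + y_2 \|^2 \quad \text{subject to} \quad y_1 \in B(f_1), \; y_2 \in B(f_2),

i.e., finding a closest pair of points between -B(f_1) and B(f_2); a minimizer of f is then recovered by thresholding, S^* = \{ i : (y_1^* + y_2^*)_i < 0 \}. The sketch below uses entirely hypothetical convex sets standing in for the base polytopes, purely so the projections have closed forms; it shows only the alternating-projection skeleton behind such best-approximation solvers, while the talk's actual methods use related projection and reflection schemes built on per-summand projection oracles.

import numpy as np

# Toy best-approximation demo: find a closest pair of points between two
# disjoint convex sets by alternating Euclidean projections. A box and a
# halfspace stand in for base polytopes here; this illustrates the
# mechanics only, not the algorithm from the talk.

def project_box(z, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(z, lo, hi)

def project_halfspace(z, a, b):
    # Euclidean projection onto the halfspace {x : a @ x <= b}.
    viol = a @ z - b
    return z if viol <= 0 else z - viol * a / (a @ a)

a = np.array([1.0, 2.0])
b = -4.0                        # {x : x1 + 2*x2 <= -4} does not meet the box

x = np.zeros(2)
for _ in range(100):
    y = project_halfspace(x, a, b)  # nearest point in one set
    x = project_box(y)              # nearest point in the other
print("closest pair:", x, y, "distance:", np.linalg.norm(x - y))

Once the iterates settle, x and y form a closest pair between the two sets, mirroring how, in the decomposable setting, the optimal points in the base polytopes are read off and thresholded to produce the discrete minimizer.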

This is joint work with Robert Nishihara, Francis Bach, Suvrit Sra and Michael I. Jordan.

Speaker Bio: Stefanie Jegelka is the X-Consortium Career Development Assistant Professor in the Department of EECS at MIT, and a member of CSAIL and the Institute for Data, Systems and Society. Before joining MIT, she was a postdoctoral scholar in the AMPLab at UC Berkeley. She earned her PhD from ETH Zurich in collaboration with the Max Planck Institutes in Tuebingen, Germany, and a Diplom from the University of Tuebingen. Her research interests lie in algorithmic and combinatorial machine learning, with applications in computer vision, materials science, and biology. She has been a fellow of the German National Academic Foundation, and has received several other fellowships, as well as a Best Paper Award at ICML and the 2015 Award of the German Society for Pattern Recognition.
