Convex Schmonvex: Dropping Convexity for Faster Matrix Estimation


Published: September 22, 2015
Abstract: Fitting a low-rank matrix to data is a fundamental and widely used primitive in machine learning. For most problems beyond basic PCA, theoretically sound methods have overwhelmingly combined statistical models of the data with convex optimization. As the size and dimensionality of data grow, this approach becomes computationally wasteful, not least because it represents an nr-dimensional object (a rank-r, n x n matrix) with n^2 parameters.
 
In this talk we present several of our recent results in understanding and designing much faster non-convex algorithms, and characterizing their statistical performance.
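To make the parameter-count contrast concrete, here is a minimal sketch (not the speaker's specific algorithm) of the non-convex factored approach: instead of optimizing over a full n x n matrix with n^2 entries, one writes the estimate as U V^T with U, V of size n x r (about 2nr parameters) and runs plain gradient descent on the factors. All dimensions, step sizes, and iteration counts below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# Ground-truth rank-r matrix we would like to recover.
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Factored parameterization: ~2nr parameters instead of n^2.
# Small random initialization for both factors.
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))

step = 1e-3  # illustrative step size
for _ in range(2000):
    R = U @ V.T - M                      # residual of the current fit
    # Gradient steps on f(U, V) = 0.5 * ||U V^T - M||_F^2
    U, V = U - step * (R @ V), V - step * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

The objective is non-convex in (U, V), yet from a small random start this kind of factored gradient descent typically drives the relative error to near zero on well-conditioned low-rank targets, while each iteration costs only O(n r) memory for the iterates rather than O(n^2).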
 
Prof. Sujay Sanghavi Bio: Sujay Sanghavi is an Associate Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Dr. Sanghavi joined UT ECE in July 2009. He received his PhD from UIUC and completed a postdoc at MIT. His research interests lie at the intersection of two central challenges in modern systems: large-scale networking and high-dimensional data analysis, with a focus on algorithm design and evaluation. He received the NSF CAREER award in January 2010.