Virtual Seminar: Representation and Learning in Graph Neural Networks

Seminar
Friday, November 20, 2020
11:00 AM – 12:00 PM
Online

Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in computational chemistry, recommender systems, drug discovery, reasoning, and many other areas.

This talk presents recent results on representational power and learning in GNNs. First, we address the representational power and important limitations of popular message passing networks and of some recent extensions of these models. Second, we consider learning and provide new generalization bounds for GNNs. Third, although many networks may be able to represent a task, some architectures learn it better than others; I will show results that connect architectural structure to learning behavior, both in and out of the training distribution.
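For context, the "message passing networks" mentioned above update each node's representation by aggregating features from its neighbors and combining them with the node's own state. Below is a minimal, illustrative NumPy sketch of one such layer; it is not code from the talk, and the function and parameter names (message_passing_layer, w_self, w_neigh) are hypothetical.

    # Illustrative sketch of one message-passing layer (not from the talk).
    import numpy as np

    def message_passing_layer(adj, h, w_self, w_neigh):
        """One round of message passing.

        adj     : (n, n) adjacency matrix of the graph
        h       : (n, d) node feature matrix
        w_self  : (d, d) weight applied to each node's own features
        w_neigh : (d, d) weight applied to aggregated neighbor messages
        """
        # Aggregate: sum each node's neighbor features.
        messages = adj @ h
        # Combine own and neighbor information, then apply a ReLU.
        return np.maximum(0.0, h @ w_self + messages @ w_neigh)

    # Toy usage: a 4-node path graph with 2-dimensional node features.
    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    h = rng.normal(size=(4, 2))
    w_self, w_neigh = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    print(message_passing_layer(adj, h, w_self, w_neigh).shape)  # (4, 2)

The choice of aggregator matters for representational power: sum aggregation, for example, can distinguish neighborhoods that mean- or max-aggregation cannot, which is one of the limitations the talk's first part concerns.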

This talk is based on joint work with Keyulu Xu, Behrooz Tahmasebi, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Vikas Garg, and Tommi Jaakkola.


Access: The seminar will be delivered live on Friday, November 20, from 11:00 AM to 12:00 PM Central (CST; UTC−6) via the following link: Zoom link

A Zoom account is required to access the seminar. Zoom is available to UT faculty, staff, and students with support from ITS; otherwise, you can sign up for a free personal account on the Zoom website.


Speaker

Stefanie Jegelka
Associate Professor
Massachusetts Institute of Technology

Stefanie Jegelka is an X-Window Consortium Career Development Associate Professor in the Department of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Before joining MIT, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at ICML. Her research interests span the theory and practice of algorithmic machine learning, including discrete and continuous optimization, discrete probability, and learning with structured data.