Virtual Seminar - Invariant Risk Minimization Games

Event Status
Scheduled
Abstract
The standard risk minimization paradigm of machine learning is brittle when models are deployed in environments whose test distributions differ from the training distribution because of spurious correlations. Training on data from many environments and finding invariant predictors reduces the effect of spurious features by concentrating models on features that have a causal relationship with the outcome. In this work, we pose such invariant risk minimization as finding the Nash equilibrium of an ensemble game played among several environments. This view yields a simple training algorithm based on best response dynamics that, in our experiments, achieves similar or better empirical accuracy with much lower variance than the challenging bi-level optimization problem of Arjovsky et al. (2019). A key theoretical contribution is showing that the set of Nash equilibria of the proposed game is equivalent to the set of invariant predictors for any finite number of environments, even with nonlinear classifiers and transformations. As a result, our method also retains the generalization guarantees for a large set of environments established by Arjovsky et al. (2019). The proposed algorithm adds to the collection of successful game-theoretic machine learning algorithms, such as generative adversarial networks.
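The best response dynamics described in the abstract can be sketched on a toy problem: each environment keeps its own linear classifier, the ensemble predicts with the average of the per-environment classifiers, and environments take turns updating their own classifier against their own risk while the others are held fixed. The sketch below is an illustrative approximation under stated assumptions, not the authors' implementation; the synthetic data generator, spurious-correlation strengths, learning rate, and iteration count are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: feature 0 is causal for the label, feature 1 is
# spuriously correlated with it at a strength that varies per environment.
def make_env(n, spurious_corr):
    y = rng.integers(0, 2, n).astype(float)
    x_causal = y + 0.5 * rng.standard_normal(n)
    agree = rng.random(n) < spurious_corr
    x_spur = np.where(agree, y, 1.0 - y) + 0.5 * rng.standard_normal(n)
    X = np.column_stack([x_causal, x_spur, np.ones(n)])  # bias column
    return X, y

envs = [make_env(2000, 0.9), make_env(2000, 0.6)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def risk_grad(X, y, w_avg):
    # Gradient of the logistic risk with respect to the ensemble weights.
    p = sigmoid(X @ w_avg)
    return X.T @ (p - y) / len(y)

# Each environment e controls its own linear classifier w[e]; the ensemble
# predictor uses the average of all classifiers.
w = [np.zeros(3) for _ in envs]
lr = 0.5

for _ in range(200):
    # Approximate best response dynamics: each environment in turn takes a
    # gradient step on its own risk of the ensemble, others held fixed.
    for e, (X, y) in enumerate(envs):
        w_avg = sum(w) / len(w)
        # d(risk_e)/d(w_e) = (1/|E|) * d(risk_e)/d(w_avg)
        w[e] -= lr * risk_grad(X, y, w_avg) / len(w)

w_avg = sum(w) / len(w)
print("ensemble weights (causal, spurious, bias):", np.round(w_avg, 2))
```

Because a Nash equilibrium requires each environment's gradient to vanish at the shared ensemble simultaneously, predictors leaning on the spurious feature (whose correlation differs across environments) cannot be equilibria, which is the intuition behind the invariance result stated above.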
 
Access:
This seminar will be delivered live via Zoom (sign-in required). The conference room will be active at the date and time specified below via THIS LINK.
 
Zoom is a remote conferencing system, available to UT students, faculty, and staff with support from IT Services. Members of the public can create a free login via the Zoom website.
Date and Time
May 22, 2020, All Day