We consider the problem of training and evaluating generative models with Generative Adversarial Networks (GANs). Although GANs can accurately model complex distributions, they are notoriously hard to train owing to instabilities in the underlying minimax optimization. In this work, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. This leads to a new training method that builds upon ideas from game theory and online learning.
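As a minimal sketch of the game-theoretic principle invoked here (not the paper's algorithm), consider two players of a zero-sum matrix game who each run multiplicative weights, a classic online-learning algorithm: their time-averaged strategies converge to a mixed-strategy (minimax) equilibrium. The payoff matrix, learning rate, and horizon below are illustrative choices.

```python
import numpy as np

# Matching-pennies payoffs; the row player maximizes x @ A @ y,
# the column player minimizes it. The unique equilibrium is the
# mixed strategy (0.5, 0.5) for both players.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
eta, T = 0.05, 5000              # learning rate and horizon (illustrative)

x = np.array([0.9, 0.1])         # row player's mixed strategy (off-equilibrium start)
y = np.array([0.2, 0.8])         # column player's mixed strategy
avg_x, avg_y = np.zeros(2), np.zeros(2)

for _ in range(T):
    avg_x += x / T               # accumulate the time-averaged strategies
    avg_y += y / T
    # Multiplicative-weights updates: reweight actions by exponentiated
    # payoffs (row player) and exponentiated negative losses (column player).
    x = x * np.exp(eta * (A @ y)); x /= x.sum()
    y = y * np.exp(-eta * (x @ A)); y /= y.sum()

print(avg_x, avg_y)              # both averages approach the equilibrium (0.5, 0.5)
```

The individual iterates cycle around the equilibrium rather than converging to it, which mirrors the oscillatory instabilities seen in standard GAN training; it is the averaged (mixed) strategy that is well-behaved.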
We establish theoretical guarantees and, guided by them, develop an efficient heuristic that we apply to commonly used GAN architectures. On several tasks our approach exhibits improved stability and performance compared to standard GAN training.