Fast Convergence of Regularized Learning in Games
We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games. When each player in a game uses an algorithm from our class, their individual regret decays at O(T^-3/4), while the sum of utilities converges to an approximate optimum at O(T^-1), an improvement upon the worst-case O(T^-1/2) rates. We show a black-box reduction for any algorithm in the class to achieve Õ(T^-1/2) rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan, and of Daskalakis et al., who only analyzed two-player zero-sum games for specific algorithms.
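To make the algorithm class concrete, here is a minimal sketch of Optimistic Hedge, one representative regularized learning algorithm with recency bias: ordinary Hedge, except that the most recent loss vector is counted twice, serving as a prediction of the next round's loss. The step size eta, the game size, and the random payoff matrix A are arbitrary choices for illustration, and the demo uses self-play in a two-player zero-sum game (the setting of Rakhlin and Sridharan and of Daskalakis et al.) rather than the general multiplayer games covered by the paper.

```python
import numpy as np

def optimistic_hedge_step(cum_loss, last_loss, eta):
    """One Optimistic Hedge update: Hedge on the cumulative loss plus
    a recency-bias term that counts the most recent loss twice,
    using it as a prediction of the next round's loss."""
    logits = -eta * (cum_loss + last_loss)
    logits -= logits.max()               # for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Self-play in a random two-player zero-sum game; A[i, j] is the row
# player's loss and the column player's gain. (Illustrative setup,
# not from the paper.)
rng = np.random.default_rng(0)
A = rng.random((3, 3))
T, eta = 10_000, 0.5
cum_r, last_r = np.zeros(3), np.zeros(3)
cum_c, last_c = np.zeros(3), np.zeros(3)
avg_x, avg_y = np.zeros(3), np.zeros(3)

for t in range(T):
    x = optimistic_hedge_step(cum_r, last_r, eta)   # row strategy
    y = optimistic_hedge_step(cum_c, last_c, eta)   # column strategy
    avg_x += x / T
    avg_y += y / T
    last_r, last_c = A @ y, -A.T @ x                # realized loss vectors
    cum_r, cum_c = cum_r + last_r, cum_c + last_c

# Duality gap of the time-averaged strategies; with recency bias it
# shrinks faster than the worst-case O(T^-1/2) rate of plain Hedge.
gap = (avg_x @ A).max() - (A @ avg_y).min()
print(f"duality gap after {T} rounds: {gap:.2e}")
```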