The Many Faces of Exponential Weights in Online Learning

Authors
Dirk van der Hoeven, Tim van Erven, Wojciech Kotłowski

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods. Here we explore the alternative approach of putting exponential weights (EW) first. We show that many standard methods and their regret bounds then follow as a special case by plugging in suitable surrogate losses and playing the EW posterior mean. For instance, we easily recover Online Gradient Descent by using EW with a Gaussian prior on linearized losses, and, more generally, all instances of Online Mirror Descent based on regular Bregman divergences also correspond to EW with a prior that depends on the mirror map. Furthermore, appropriate quadratic surrogate losses naturally give rise to Online Gradient Descent for strongly convex losses and to Online Newton Step. We further interpret several recent adaptive methods (iProd, Squint, and a variation of Coin Betting for experts) as a series of closely related reductions to exp-concave surrogate losses that are then handled by Exponential Weights. Finally, a benefit of our EW interpretation is that it opens up the possibility of sampling from the EW posterior distribution instead of playing the mean. As already observed by Bubeck and Eldan, this recovers the best-known rate in Online Bandit Linear Optimization.
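To make the first of these reductions concrete, here is a minimal derivation sketch (the notation is ours, not quoted from the paper: $g_t$ is the gradient of the $t$-th loss at the current iterate, $\eta$ is the EW learning rate, and $\sigma^2$ is the prior variance). Running EW with prior $\mathcal{N}(0, \sigma^2 I)$ on the linearized losses $u \mapsto \langle g_t, u \rangle$ keeps the posterior Gaussian, and completing the square identifies its mean:

\[
\pi_{t+1}(u) \;\propto\; \exp\!\Big(-\frac{\|u\|^2}{2\sigma^2} - \eta \sum_{s=1}^{t} \langle g_s, u\rangle\Big)
\;\propto\; \exp\!\Big(-\frac{1}{2\sigma^2}\Big\|u + \eta\sigma^2 \sum_{s=1}^{t} g_s\Big\|^2\Big),
\]

so the posterior mean is $w_{t+1} = -\eta\sigma^2 \sum_{s=1}^{t} g_s = w_t - \eta\sigma^2 g_t$, which is Online Gradient Descent (in its origin-started, lazy form) with step size $\eta\sigma^2$.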
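The same identity can be checked numerically. The following is a small self-contained sketch, not code from the paper (all variable names and toy sizes are our own assumptions): it estimates the EW posterior mean by importance sampling from the Gaussian prior and compares it to the closed-form OGD iterate.

```python
import numpy as np

# Illustrative check: the mean of the EW posterior with prior N(0, sigma2*I)
# on linearized losses <g_s, u> equals the lazy OGD iterate
# -eta * sigma2 * sum_s g_s.

rng = np.random.default_rng(0)
d, T = 3, 5                        # toy dimension and number of rounds
eta, sigma2 = 0.1, 1.0             # EW learning rate and prior variance
grads = rng.normal(size=(T, d))    # stand-in gradients g_1, ..., g_T
G = grads.sum(axis=0)

# Closed-form EW posterior mean = OGD iterate after T rounds.
w_ogd = -eta * sigma2 * G

# Importance-weighted estimate of the EW posterior mean:
# sample u from the prior, weight by exp(-eta * sum_s <g_s, u>).
u = rng.normal(scale=np.sqrt(sigma2), size=(200_000, d))
logw = -eta * (u @ G)
w = np.exp(logw - logw.max())      # subtract max for numerical stability
w_ew = (w[:, None] * u).sum(axis=0) / w.sum()

print(w_ogd)
print(w_ew)                        # agrees up to Monte Carlo error
```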
