Constant Regret, Generalized Mixability, and Mirror Descent

Authors
Zakaria Mhammedi, Robert C. Williamson

We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and with the right choice of loss function and "mixing" algorithm, it is possible for the learner to achieve constant regret regardless of the number of prediction rounds. For example, constant regret can be achieved with \emph{mixable} losses using the \emph{Aggregating Algorithm} (AA). The \emph{Generalized Aggregating Algorithm} (GAA) is a name for a family of algorithms parameterized by convex functions on simplices (entropies), which reduce to the AA when using the \emph{Shannon entropy}. For a given entropy $\Phi$, losses for which constant regret is possible using the GAA are called $\Phi$-mixable. Which losses are $\Phi$-mixable was previously left as an open question. We fully characterize $\Phi$-mixability, and answer other open questions posed by \cite{Reid2015}. We also elaborate on the tight link between the GAA and the \emph{mirror descent algorithm} which minimizes the weighted loss of experts.
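As a concrete illustration of the classical mixable case mentioned above (not taken from the paper), the following is a minimal sketch of the Aggregating Algorithm for the binary log loss, which is 1-mixable. With learning rate $\eta = 1$ the AA's substitution step for the log loss is simply the weighted mixture of expert predictions, and the resulting regret against the best expert is at most $\ln N$, independent of the number of rounds $T$. All function and variable names are illustrative.

```python
import numpy as np

def aggregating_algorithm_log_loss(expert_probs, outcomes):
    """Aggregating Algorithm (AA) for the binary log loss with learning rate eta = 1.

    expert_probs : (T, N) array; entry [t, i] is expert i's predicted probability
                   that the outcome at round t equals 1.
    outcomes     : (T,) array of observed outcomes in {0, 1}.
    Returns (learner_loss, expert_losses): the learner's cumulative log loss and
    each expert's cumulative log loss.
    """
    T, N = expert_probs.shape
    log_w = np.zeros(N)              # log-weights; uniform prior over experts
    learner_loss = 0.0
    expert_losses = np.zeros(N)

    for t in range(T):
        p = expert_probs[t]
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                 # normalized posterior weights
        # For the log loss, the AA's substitution step is the weighted mixture.
        pred = float(w @ p)
        y = outcomes[t]
        losses = -np.log(np.where(y == 1, p, 1.0 - p))   # experts' losses this round
        learner_loss += -np.log(pred if y == 1 else 1.0 - pred)
        expert_losses += losses
        log_w -= losses              # exponential-weights update (eta = 1)

    return learner_loss, expert_losses

# Illustrative usage: the learner's regret stays below log(N) for every horizon T.
rng = np.random.default_rng(0)
T, N = 1000, 5
expert_probs = rng.uniform(0.05, 0.95, size=(T, N))
outcomes = rng.integers(0, 2, size=T)
learner_loss, expert_losses = aggregating_algorithm_log_loss(expert_probs, outcomes)
print(learner_loss - expert_losses.min(), "<=", np.log(N))
```

The exponential-weights update above corresponds to the Shannon-entropy case; the GAA discussed in the paper replaces it with an update driven by a general entropy $\Phi$, which is where the connection to mirror descent arises.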
