Online Variance Reduction for Stochastic Optimization.
Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting which poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling datapoints, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
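To make the core idea concrete, below is a minimal sketch of non-uniform importance sampling for stochastic gradients. It is not the paper's bandit-feedback algorithm; the least-squares objective, the fixed norm-based sampling distribution, and all variable names are illustrative assumptions. The point it shows is that replacing uniform sampling with a distribution p, together with a 1/(n * p_i) reweighting, keeps the gradient estimate unbiased while its variance depends on the choice of p.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize (1/n) * sum_i (x_i @ w - y_i)^2
n, d = 1000, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_i(w, i):
    """Gradient of the i-th squared loss at w."""
    return 2.0 * (X[i] @ w - y[i]) * X[i]

# Heuristic fixed importance distribution: probability proportional to
# ||x_i||, a common smoothness proxy. This is an assumption for the
# sketch, not the adaptive distribution learned by the paper's method.
norms = np.linalg.norm(X, axis=1)
p = norms / norms.sum()

w = np.zeros(d)
step = 0.01
for t in range(2000):
    i = rng.choice(n, p=p)
    # Reweighting by 1/(n * p_i) keeps the estimate unbiased:
    # E[g] = sum_i p_i * grad_i(w) / (n * p_i) = (1/n) * sum_i grad_i(w).
    g = grad_i(w, i) / (n * p[i])
    w -= step * g
```

A well-chosen p (e.g., one that upweights examples with large gradients) can substantially reduce the variance of g relative to uniform sampling; the paper's contribution is learning such a distribution online from bandit feedback.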