High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm.

Authors
Alain Durmus, Eric Moulines

We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization factor $x \mapsto \pi(x)=\mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \,\mathrm{d}y$. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumption that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz, and $U$ is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity, in Wasserstein distance of order $2$ and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of the obtained bounds on the dimension of the state space is studied to demonstrate the applicability of this method. The convergence of an appropriately weighted empirical measure is also investigated, and bounds on the mean square error and an exponential deviation inequality are reported for functions that are measurable and bounded. An illustration to Bayesian inference for binary regression is presented.
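As a minimal sketch of the sampler the abstract describes, the snippet below implements the Unadjusted Langevin Algorithm with a constant step size: the Euler discretization $X_{k+1} = X_k - \gamma \nabla U(X_k) + \sqrt{2\gamma}\, Z_{k+1}$ with $Z_{k+1} \sim \mathcal{N}(0, I_d)$. The function name `ula`, the standard Gaussian test target, and the step size $\gamma = 0.05$ are illustrative assumptions, not taken from the paper; the paper's theory additionally covers decreasing step sizes and weighted empirical averages, which are not shown here.

```python
import numpy as np

def ula(grad_U, x0, step, n_iter, rng=None):
    """Unadjusted Langevin Algorithm with a constant step size (illustrative sketch).

    Iterates the Euler discretization of the Langevin SDE:
        X_{k+1} = X_k - step * grad_U(X_k) + sqrt(2 * step) * Z_{k+1},
    where Z_{k+1} ~ N(0, I_d). No Metropolis correction is applied,
    hence "unadjusted".
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

if __name__ == "__main__":
    # Hypothetical test target: U(x) = ||x||^2 / 2, so grad_U(x) = x.
    # This U is strongly convex with a globally Lipschitz gradient,
    # matching the assumptions under which the paper's bounds hold.
    d = 10
    samples = ula(grad_U=lambda x: x, x0=np.zeros(d), step=0.05, n_iter=50_000)
    print(samples[10_000:].mean(axis=0))  # near zero after discarding burn-in
```

Because the Gaussian noise is never rejected, the chain's stationary distribution is only an approximation of $\pi$ whose bias shrinks with the step size; in practice the step is kept well below the reciprocal of the Lipschitz constant of $\nabla U$.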
