
Estimation of KL Divergence: Optimal Minimax Rate.

Authors
Yuheng Bu, Shaofeng Zou, Yingbin Liang, Venugopal V. Veeravalli

The problem of estimating the Kullback-Leibler divergence $D(P\|Q)$ between two unknown distributions $P$ and $Q$ is studied, under the assumption that the alphabet size $k$ of the distributions can scale to infinity. The estimation is based on $m$ independent samples drawn from $P$ and $n$ independent samples drawn from $Q$. It is first shown that there does not exist any consistent estimator that guarantees asymptotically small worst-case quadratic risk over the set of all pairs of distributions. A restricted set that contains pairs of distributions with density ratio bounded by a function $f(k)$ is further considered. An augmented plug-in estimator is proposed, and its worst-case quadratic risk is shown to be within a constant factor of $(\frac{k}{m}+\frac{kf(k)}{n})^2+\frac{\log^2 f(k)}{m}+\frac{f(k)}{n}$, if $m$ and $n$ exceed a constant factor of $k$ and $kf(k)$, respectively. Moreover, the minimax quadratic risk is characterized to be within a constant factor of $(\frac{k}{m\log k}+\frac{kf(k)}{n\log k})^2+\frac{\log^2 f(k)}{m}+\frac{f(k)}{n}$, if $m$ and $n$ exceed a constant factor of $k/\log k$ and $kf(k)/\log k$, respectively. The lower bound on the minimax quadratic risk is established by employing a generalized Le Cam's method. A minimax optimal estimator is then constructed by employing both the polynomial approximation and the plug-in approaches.
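The abstract does not spell out the exact form of the augmented plug-in estimator, so the sketch below shows one natural reading: plug in the empirical distribution for $P$ and an add-constant-smoothed empirical distribution for $Q$ (the smoothing keeps every $\hat{q}_i > 0$, which the density-ratio bound $f(k)$ is meant to control). The function name `augmented_plugin_kl` and the constant `c = 0.5` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def augmented_plugin_kl(samples_p, samples_q, k, c=0.5):
    """Sketch of an add-constant plug-in estimator of D(P||Q).

    samples_p, samples_q: 1-D integer arrays of symbols in {0, ..., k-1},
        i.e. m i.i.d. draws from P and n i.i.d. draws from Q.
    c: smoothing constant added to each count from Q (assumed value;
        the abstract does not specify the augmentation).
    """
    m = len(samples_p)
    n = len(samples_q)
    # Plain empirical frequencies for P.
    p_hat = np.bincount(samples_p, minlength=k) / m
    # Augmented empirical frequencies for Q: add c to every count so q_hat > 0.
    q_hat = (np.bincount(samples_q, minlength=k) + c) / (n + c * k)
    # Plug into D(P||Q) = sum_i p_i log(p_i / q_i), with 0 * log 0 := 0.
    mask = p_hat > 0
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / q_hat[mask])))

# Example: estimate D(P||Q) for two random distributions on k = 1000 symbols.
rng = np.random.default_rng(0)
k = 1000
p = rng.dirichlet(np.ones(k))
q = rng.dirichlet(np.ones(k))
samples_p = rng.choice(k, size=5000, p=p)
samples_q = rng.choice(k, size=20000, p=q)
print(augmented_plugin_kl(samples_p, samples_q, k))
```

The paper's minimax-optimal estimator goes further, combining a polynomial-approximation correction for small probabilities with a plug-in estimate for large ones; the sketch above only illustrates the plug-in baseline.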
