Convergence rates for smooth k-means change-point detection.
In this paper, we consider the estimation of a change-point for possibly high-dimensional data in a Gaussian model, using a k-means method. We prove that, up to a logarithmic term, this change-point estimator has a minimax rate of convergence. Then, considering the case of sparse data with a Sobolev regularity, we propose a smoothing procedure based on Lepski's method and show that the resulting estimator attains the optimal rate of convergence. Our results are illustrated by some simulations. As the theoretical statement relying on Lepski's method depends on some unknown constant, practical strategies are suggested to perform an optimal smoothing.
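To illustrate the basic idea, here is a minimal sketch (not the paper's implementation) of a single change-point estimator for a mean shift in Gaussian data: the split point is chosen to minimize the within-segment sum of squares, which is equivalent to running 2-means along the time axis. The function name, the simulated data, and the shift location are illustrative assumptions.

```python
import numpy as np

def kmeans_changepoint(y):
    """Estimate a single change-point by minimizing the within-segment
    sum of squares (2-means along the time axis); illustrative sketch."""
    n = len(y)
    best_tau, best_cost = 1, np.inf
    for tau in range(1, n):
        left, right = y[:tau], y[tau:]
        # total squared deviation from each segment's own mean
        cost = ((left - left.mean(axis=0)) ** 2).sum() + \
               ((right - right.mean(axis=0)) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# simulate Gaussian observations with a mean shift at index 60 (assumed example)
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(3.0, 1.0, 40)])
tau_hat = kmeans_changepoint(y)
```

With a clear mean shift, `tau_hat` lands at or very near the true change-point; the paper's contribution concerns the rate at which this localization error shrinks, and the smoothed variant for sparse, Sobolev-regular signals.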