Spatial Decompositions for Large Scale SVMs

Authors
Philipp Thomann, Ingrid Blaschzyk, Mona Meister, Ingo Steinwart

Although support vector machines (SVMs) are theoretically well understood, their underlying optimization problem becomes very expensive if, for example, hundreds of thousands of samples and a non-linear kernel are considered. Several approaches have been proposed in the past to address this serious limitation. In this work we investigate a decomposition strategy that learns on small, spatially defined data chunks. Our contributions are twofold: On the theoretical side we establish an oracle inequality for the overall learning method using the hinge loss, and show that the resulting rates match those known for SVMs solving the complete optimization problem with Gaussian kernels. On the practical side we compare our approach to learning SVMs on small, randomly chosen chunks. Here it turns out that, for comparable training times, our approach is significantly faster during testing and in most cases also significantly reduces the test error. Furthermore, we show that our approach easily scales up to 10 million training samples: including hyper-parameter selection using cross-validation, the entire training only takes a few hours on a single machine. Finally, we report an experiment on 32 million training samples. All experiments used liquidSVM (Steinwart and Thomann, 2017).
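To make the general idea of spatially decomposed SVM training concrete, the following is a minimal sketch: partition the input space into cells, train one Gaussian-kernel SVM per cell on only that cell's samples, and route each test point to the SVM of its cell. It uses scikit-learn's SVC and a k-means partition purely as illustrative stand-ins; it is not the authors' liquidSVM implementation, and the actual spatial decomposition and hyper-parameter selection in the paper differ.

```python
# Illustrative sketch of learning SVMs on small, spatially defined data chunks.
# Assumptions: scikit-learn's KMeans/SVC as stand-ins for the spatial partition
# and local Gaussian-kernel SVMs; this is NOT the liquidSVM implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class SpatiallyDecomposedSVM:
    def __init__(self, n_cells=10, gamma="scale", C=1.0):
        self.partition = KMeans(n_clusters=n_cells, n_init=10)
        self.gamma, self.C = gamma, C
        self.models = {}

    def fit(self, X, y):
        # Assign each training sample to a spatial cell, then solve one small
        # SVM optimization problem per cell instead of one global problem.
        cells = self.partition.fit_predict(X)
        for cell in np.unique(cells):
            idx = cells == cell
            if len(np.unique(y[idx])) < 2:
                # Degenerate cell containing a single class: store the constant label.
                self.models[cell] = int(y[idx][0])
            else:
                clf = SVC(kernel="rbf", gamma=self.gamma, C=self.C)
                self.models[cell] = clf.fit(X[idx], y[idx])
        return self

    def predict(self, X):
        # Route each test point to the cell it falls into and use that cell's SVM.
        cells = self.partition.predict(X)
        y_pred = np.empty(len(X), dtype=int)
        for cell in np.unique(cells):
            idx = cells == cell
            model = self.models[cell]
            y_pred[idx] = model if isinstance(model, int) else model.predict(X[idx])
        return y_pred
```

Because every local problem sees only a small chunk of the data, both training and, in particular, prediction stay cheap: a test point is evaluated against the support vectors of a single cell rather than against a global model, which is the practical advantage over training on small, randomly chosen chunks that the abstract refers to.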
