Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks

Authors: Tom Veniat, Ludovic Denoyer

We focus on the problem of discovering neural network architectures that are efficient in terms of both prediction quality and cost. For instance, our approach is able to solve tasks such as: 'learn a neural network able to predict well in less than 100 milliseconds' or 'learn an efficient model that fits in a 50 Mb memory'. Our contribution is a novel family of models called Budgeted Super Networks. They are learned using gradient descent techniques applied to a budgeted learning objective function which integrates a maximum authorized cost, where this cost can be of different natures. We present a set of experiments on computer vision problems and analyze the ability of our technique to deal with three different costs: the computation cost, the memory consumption cost, and a distributed computation cost. In particular, we show that our model can discover neural network architectures that achieve better accuracy than the ResNet and CNF architectures on CIFAR-10 and CIFAR-100, at a lower cost.
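To make the idea of a budgeted objective more concrete, below is a minimal PyTorch-style sketch, not the authors' exact formulation: the variable names (task_loss, architecture_cost, max_cost, penalty_weight) are hypothetical, and the penalty term is only one simple way of integrating a maximum authorized cost into a loss that gradient descent can optimize.

```python
import torch

def budgeted_objective(task_loss, architecture_cost, max_cost, penalty_weight=1.0):
    """Task loss plus a penalty that is active only when the (differentiable)
    cost estimate of the sampled architecture exceeds the authorized budget."""
    over_budget = torch.relu(architecture_cost - max_cost)
    return task_loss + penalty_weight * over_budget

# Toy usage with dummy scalar tensors standing in for the real quantities.
task_loss = torch.tensor(0.8, requires_grad=True)            # e.g. cross-entropy
architecture_cost = torch.tensor(120.0, requires_grad=True)  # e.g. latency in ms
loss = budgeted_objective(task_loss, architecture_cost, max_cost=100.0)
loss.backward()  # gradients flow through both the prediction and the cost terms
```

In the paper the cost depends on which parts of the super network are selected, so the penalty is optimized jointly with the network weights; the snippet above only conveys the general shape of trading prediction quality against a cost budget.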
