Stable recovery of deep linear networks under sparsity constraints.

Authors
Francois Malgouyres (IMT), Joseph Landsberg (TAMU)

We study a deep linear network expressed in the form of a matrix factorization problem. It takes as input a matrix $X$ obtained by multiplying $K$ matrices (called factors, each corresponding to the action of a layer). Each factor is obtained by applying a fixed linear operator to a vector of parameters satisfying a sparsity constraint. In machine learning, the error between the product of the estimated factors and $X$ (i.e., the reconstruction error) relates to the statistical risk. The stable recovery of the parameters defining the factors is required in order to interpret the factors and the intermediate layers of the network. In this paper, we provide sharp conditions on the network topology under which the error on the parameters defining the factors (i.e., the stability of the recovered parameters) scales linearly with the reconstruction error (i.e., the risk). Therefore, under these conditions on the network topology, any successful learning task leads to robust and therefore interpretable layers. The analysis is based on the recently proposed Tensorial Lifting. The particularity of this paper is to consider a sparse prior. As an illustration, we detail the analysis and provide sharp guarantees for the stable recovery of a convolutional linear network under a sparsity prior. As expected, the conditions are rather strong.
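To make the setup concrete, here is a minimal numerical sketch of the factorization model described above. All shapes, sparsity levels, and names (the operators `A`, the parameter vectors `theta`, the perturbation used as a stand-in for an estimate) are illustrative assumptions, not the paper's actual construction: each factor is a fixed linear operator applied to a sparse parameter vector, the network output $X$ is the product of the $K$ factors, and the reconstruction error is the norm of the difference between $X$ and the product of the estimated factors.

```python
# Illustrative sketch of the abstract's factorization model.
# Shapes, sparsity levels, and the operators A_k are assumptions
# made for this example, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

K = 3   # number of factors / layers (assumed)
n = 8   # each factor is n x n (assumed square for simplicity)
p = 20  # dimension of each parameter vector (assumed)
s = 4   # sparsity level: nonzeros per parameter vector (assumed)

# Fixed linear operators mapping R^p to R^{n x n}, one per layer.
A = [rng.standard_normal((n * n, p)) for _ in range(K)]

def sparse_params():
    """Draw a parameter vector with at most s nonzero entries."""
    theta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    theta[support] = rng.standard_normal(s)
    return theta

# Ground-truth parameters and the resulting factors M_k = A_k(theta_k).
thetas = [sparse_params() for _ in range(K)]
factors = [(A[k] @ thetas[k]).reshape(n, n) for k in range(K)]

# The matrix X is the product of the K factors.
X = np.linalg.multi_dot(factors)

# A hypothetical estimate: perturb the true parameters slightly.
thetas_hat = [t + 1e-3 * rng.standard_normal(p) for t in thetas]
factors_hat = [(A[k] @ thetas_hat[k]).reshape(n, n) for k in range(K)]
X_hat = np.linalg.multi_dot(factors_hat)

# Reconstruction error (relates to the statistical risk) and parameter
# error; the paper's stability result bounds the latter linearly in
# the former, under its conditions on the network topology.
recon_err = np.linalg.norm(X - X_hat)
param_err = sum(np.linalg.norm(t - th) for t, th in zip(thetas, thetas_hat))
print(f"reconstruction error: {recon_err:.3e}, parameter error: {param_err:.3e}")
```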
