Out-distribution training confers robustness to deep neural networks.

Authors
Mahdieh Abbasi, Christian Gagné

The ease with which adversarial instances can be generated in deep neural networks raises some fundamental questions on their functioning and concerns on their use in critical systems. In this paper, we draw a connection between over-generalization and adversaries: a possible cause of adversaries lies in models designed to make decisions all over the input space, leading to inappropriate high-confidence decisions in parts of the input space not represented in the training set. We empirically show that an augmented neural network, which is not trained on any type of adversary, can increase robustness by detecting black-box one-step adversaries, i.e. assimilated to out-distribution samples, and by making the generation of white-box one-step adversaries harder.
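The augmented network described in the abstract can be thought of as an ordinary classifier with one extra output class reserved for out-distribution samples: inputs assigned to that class (or classified with low confidence) are rejected rather than labelled. The sketch below illustrates this idea; the architecture, the MNIST-like input shape, the source of out-distribution data, and the rejection threshold are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the paper's exact configuration): a small convnet
# "augmented" with a (K+1)-th "dustbin" class trained on out-distribution
# samples, so that inputs resembling out-distribution data can be rejected.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KNOWN_CLASSES = 10          # e.g. 10 digit classes (assumption)
DUSTBIN = NUM_KNOWN_CLASSES     # index of the extra "reject" class

class AugmentedNet(nn.Module):
    """Plain convnet whose output layer has K+1 units instead of K."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 28x28 single-channel inputs (MNIST-like), hence 64 * 7 * 7.
        self.classifier = nn.Linear(64 * 7 * 7, NUM_KNOWN_CLASSES + 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def training_step(model, optimizer, in_x, in_y, out_x):
    """One step: in-distribution samples keep their true labels, while
    out-distribution samples are all labelled with the dustbin class."""
    optimizer.zero_grad()
    x = torch.cat([in_x, out_x])
    y = torch.cat([in_y,
                   torch.full((out_x.size(0),), DUSTBIN, dtype=torch.long)])
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_or_reject(model, x, threshold=0.5):
    """Reject an input (flag it as out-distribution) when the dustbin class
    wins or the top softmax confidence falls below a threshold (assumed value)."""
    probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    reject = (pred == DUSTBIN) | (conf < threshold)
    return pred, reject
```

Note that the dustbin label is only applied to natural out-distribution training data, so the model never sees adversarial examples during training, consistent with the abstract's claim.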
