Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets

Authors
Elias Chaibub Neto

The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from that of DNNs trained with real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual (i.e., using the real labels to guide the learning, rather than shuffled labels). The evaluation of whether the DNN has learned and/or memorized happens in a separate step, where we compare the predictive performance of a shallow classifier trained with the features learned by the DNN against multiple instances of the same classifier trained on the same input but using shuffled labels as outputs. By evaluating these shallow classifiers on validation sets that share structure with the training set, we are able to tell learning apart from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns about how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.
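To make the shuffled-label comparison concrete, here is a minimal sketch in Python with scikit-learn. It is an illustration under stated assumptions, not the paper's implementation: a logistic regression stands in for the shallow classifier, the feature matrices stand in for penultimate-layer activations extracted from the already-trained DNN (simulated here with random data so the snippet runs), and the construction of the shared-structure validation set itself is not shown. Names such as `val_accuracy` are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for DNN-learned features and labels; in practice these would be
# activations extracted from the trained network, and the validation set
# would be constructed to share structure with the training set.
n_train, n_val, n_feat = 500, 500, 64
X_train = rng.normal(size=(n_train, n_feat))
X_val = rng.normal(size=(n_val, n_feat))
y_train = rng.integers(0, 2, size=n_train)
y_val = rng.integers(0, 2, size=n_val)

def val_accuracy(labels):
    """Fit a shallow classifier on the DNN features and score it on the validation set."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return clf.score(X_val, y_val)

# Observed score: shallow classifier trained with the real labels.
observed = val_accuracy(y_train)

# Permutation null: same features, same classifier, shuffled labels.
null_scores = [val_accuracy(rng.permutation(y_train)) for _ in range(100)]

# If the observed score clearly exceeds the shuffled-label null, the features
# carry label-relevant (learned) signal beyond what memorization alone yields.
p_value = (1 + sum(s >= observed for s in null_scores)) / (1 + len(null_scores))
print(f"observed={observed:.3f}, null mean={np.mean(null_scores):.3f}, p={p_value:.3f}")
```

On purely random features like the stand-ins above, the observed score should be indistinguishable from the null; on features from a DNN that has genuinely learned, it should exceed it.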
