Learning Inductive Biases with Simple Neural Networks.

Authors: Reuben Feinman, Brenden M. Lake

People use rich prior knowledge about the world in order to efficiently learn new concepts. These priors - also known as "inductive biases" - pertain to the space of internal models considered by a learner, and they help the learner make inferences that go beyond the observed data. A recent study found that deep neural networks optimized for object recognition develop the shape bias (Ritter et al., 2017), an inductive bias possessed by children that plays an important role in early word learning. However, these networks use unrealistically large quantities of training data, and the conditions required for these biases to develop are not well understood. Moreover, it is unclear how the learning dynamics of these networks relate to developmental processes in childhood. We investigate the development and influence of the shape bias in neural networks using controlled datasets of abstract patterns and synthetic images, allowing us to systematically vary the quantity and form of the experience provided to the learning algorithms. We find that simple neural networks develop a shape bias after seeing as few as 3 examples of 4 object categories. The development of these biases predicts the onset of vocabulary acceleration in our networks, consistent with the developmental process in children.
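To make the kind of controlled setup described above more concrete, the sketch below is a rough illustration only, not the authors' experiments or code. It assumes abstract pattern stimuli encoded as one-hot (shape, color, texture) feature vectors, trains a one-hidden-layer network on 3 exemplars from each of 4 shape-defined categories, and then probes whether items with unseen color/texture pairings are still classified by shape, a simple proxy for a shape bias. The feature vocabulary sizes, network dimensions, and probe design are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stimulus vocabulary: these sizes are illustrative, not from the paper.
N_SHAPES, N_COLORS, N_TEXTURES = 4, 8, 8
N_CATEGORIES = 4   # 4 object categories, each named by its shape
N_EXEMPLARS = 3    # 3 examples per category, as in the abstract

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def make_item(shape, color, texture):
    """Encode an abstract object as concatenated one-hot shape/color/texture codes."""
    return np.concatenate([one_hot(shape, N_SHAPES),
                           one_hot(color, N_COLORS),
                           one_hot(texture, N_TEXTURES)])

# Training set: exemplars within a category share a shape; color and texture vary.
X, y = [], []
for cat in range(N_CATEGORIES):
    for _ in range(N_EXEMPLARS):
        X.append(make_item(cat, rng.integers(N_COLORS), rng.integers(N_TEXTURES)))
        y.append(cat)
X, y = np.array(X), np.array(y)

# One-hidden-layer network trained with softmax cross-entropy and gradient descent.
D_in, D_h = X.shape[1], 32
W1 = rng.normal(0.0, 0.1, (D_in, D_h)); b1 = np.zeros(D_h)
W2 = rng.normal(0.0, 0.1, (D_h, N_CATEGORIES)); b2 = np.zeros(N_CATEGORIES)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return h, p / p.sum(axis=-1, keepdims=True)

lr = 0.5
for _ in range(200):
    h, p = forward(X)
    d_logits = p.copy()
    d_logits[np.arange(len(y)), y] -= 1.0     # softmax cross-entropy gradient
    d_logits /= len(y)
    d_h = d_logits @ W2.T * (1.0 - h ** 2)    # backprop through tanh
    W2 -= lr * (h.T @ d_logits); b2 -= lr * d_logits.sum(0)
    W1 -= lr * (X.T @ d_h);      b1 -= lr * d_h.sum(0)

# Shape-bias probe: probe items reuse a trained shape but pair it with randomly
# drawn color/texture codes, so good accuracy requires relying mainly on shape
# rather than memorized color/texture pairings.
n_probe, correct = 200, 0
for _ in range(n_probe):
    cat = rng.integers(N_CATEGORIES)
    probe = make_item(cat, rng.integers(N_COLORS), rng.integers(N_TEXTURES))
    _, p = forward(probe[None, :])
    correct += int(p.argmax() == cat)
print(f"shape-consistent responses: {correct / n_probe:.2f} (chance = {1 / N_CATEGORIES:.2f})")
```

In this toy setting, probe accuracy well above chance (0.25) would suggest the network has come to rely on the shape features, while accuracy near chance would indicate it latched onto color or texture instead; the actual study varies the quantity and form of experience far more systematically than this sketch does.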
