
Learning to Play with Intrinsically-Motivated Self-Aware Agents

Authors
Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins

Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a "world-model" network that learns to predict the dynamic consequences of the agent's actions. Simultaneously, we train a separate explicit "self-model" that allows the agent to track the error map of its own world-model, and then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization, and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in complex novel physical environments.
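The core loop described above — a world-model trained to predict action consequences, a self-model trained to predict the world-model's own error, and a policy that adversarially seeks high predicted error — can be sketched in miniature. This is a toy illustration only, not the paper's neural-network implementation: the linear world-model, the two-action environment, the epsilon-greedy exploration, and all variable names here are assumptions introduced for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment (an assumption, not the paper's simulator):
# states and actions are 2-D vectors; true dynamics are linear, and one
# action triggers much noisier (harder-to-predict) transitions.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.eye(2)

def step(s, a):
    noise_scale = 0.5 if a[0] > 0 else 0.01  # action 0 -> noisy dynamics
    return A_true @ s + B_true @ a + rng.normal(0.0, noise_scale, 2)

# World-model: linear map from (state, action) to next state, trained online.
W = np.zeros((2, 4))

def world_model_predict(s, a):
    return W @ np.concatenate([s, a])

# Self-model: per-action running estimate of the world-model's prediction error.
actions = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
self_model_err = np.zeros(len(actions))

s = rng.normal(0.0, 1.0, 2)
lr, beta, eps = 0.05, 0.1, 0.2
for t in range(2000):
    # Adversarial policy: usually take the action whose predicted
    # world-model error is largest (epsilon-greedy keeps both estimates fresh).
    if t < 10 or rng.random() < eps:
        i = t % 2
    else:
        i = int(np.argmax(self_model_err))
    a = actions[i]
    s_next = step(s, a)
    pred = world_model_predict(s, a)
    err = float(np.mean((pred - s_next) ** 2))

    # Train the world-model on the observed transition (normalized SGD step).
    x = np.concatenate([s, a])
    W += lr * np.outer(s_next - pred, x) / (x @ x + 1e-8)

    # Train the self-model: exponential moving average of observed error.
    self_model_err[i] += beta * (err - self_model_err[i])
    s = s_next

# The self-model should discover that the noisy action stays hard to predict,
# so the adversarial policy keeps steering the agent toward it.
print(self_model_err[0] > self_model_err[1])
```

The design point this sketch isolates is that the agent never sees an extrinsic reward: the only training signal is the world-model's prediction error, and the self-model turns that error into an action-selection criterion, concentrating experience where the world-model is still wrong.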
