
This article is part of the supplement: Sixteenth Annual Computational Neuroscience Meeting: CNS*2007

Open Access Poster presentation

Unsupervised learning is crucial to learning the names of objects

Timothy P Lillicrap1*, Blake A Richards2 and Stephen H Scott1,3

Author Affiliations

1 Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada, K7L 3N6

2 Centre for Cognitive Neuroscience, Oxford University, Oxford, UK

3 Dept. of Anatomy and Cell Biology, Queen's University, Kingston, Ontario, Canada, K7L 3N6


BMC Neuroscience 2007, 8(Suppl 2):P205  doi:10.1186/1471-2202-8-S2-P205


Published: 6 July 2007

© 2007 Lillicrap et al; licensee BioMed Central Ltd.

Poster presentation

Children learn to name the objects they see by forming general associations between the words they hear and the images arriving at their retinas. Discriminative neural network models can also be taught to classify objects, but to do so they require more information about how images pair with words (i.e., supervised data) than the brain seems to receive. We propose that the brain exploits unsupervised learning on raw sensory input to compensate for the scarcity of supervised data in its environment. Here we show that artificial neural networks that first develop a statistical model of the world in an unsupervised fashion can learn good image-word pairings from dramatically less supervised data. This idea may help to explain how the brain learns to solve sensorimotor problems for which there is little feedback available about the success of selected actions.
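
The abstract does not specify the model, but the general recipe it describes, unsupervised learning on unlabelled input followed by supervised learning from only a few labelled pairings, can be sketched roughly as follows. The autoencoder architecture, the synthetic data, and every name in this snippet are illustrative assumptions, not the authors' actual network.

```python
# Minimal sketch (assumed, not the authors' model): learn an unsupervised
# representation of the inputs first, then train a classifier head on a
# small labelled subset (the scarce "image-word pairings").
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data: 2000 unlabelled samples, only 50 with labels.
n_unlabelled, n_labelled, dim, n_classes = 2000, 50, 64, 5
x_unlabelled = torch.randn(n_unlabelled, dim)
x_labelled = torch.randn(n_labelled, dim)
y_labelled = torch.randint(0, n_classes, (n_labelled,))

# Unsupervised stage: an autoencoder learns a compressed code for the inputs.
encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, dim))
ae_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
for _ in range(200):
    ae_opt.zero_grad()
    recon = decoder(encoder(x_unlabelled))
    loss = nn.functional.mse_loss(recon, x_unlabelled)
    loss.backward()
    ae_opt.step()

# Supervised stage: a linear classifier is trained on the frozen codes,
# using only the small set of labelled examples.
classifier = nn.Linear(16, n_classes)
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
for _ in range(200):
    clf_opt.zero_grad()
    with torch.no_grad():
        codes = encoder(x_labelled)   # unsupervised features, not updated here
    loss = nn.functional.cross_entropy(classifier(codes), y_labelled)
    loss.backward()
    clf_opt.step()

print(f"final supervised loss on {n_labelled} labelled examples: {loss.item():.3f}")
```

Freezing the encoder during the supervised stage is one possible choice in this sketch; fine-tuning the whole network on the labelled pairs is an equally plausible variant of the same idea.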