
This article is part of the supplement: Nineteenth Annual Computational Neuroscience Meeting: CNS*2010

Open Access Poster Presentation

Where-what networks for motor invariance without any internal master map

Juyang Weng1,2,3* and Matthew D Luciw1,2

Author Affiliations

1 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824 USA

2 Cognitive Science Program, Michigan State University, East Lansing, MI 48824 USA

3 Neuroscience Program, Michigan State University, East Lansing, MI 48824 USA

BMC Neuroscience 2010, 11(Suppl 1):P132  doi:10.1186/1471-2202-11-S1-P132

Published: 20 July 2010

© 2010 Weng and Luciw; licensee BioMed Central Ltd.

Poster Presentation

The adult brain appears to have a capability of location invariance: no matter where on the retina an object appears, the brain recognizes it. Yet this does not mean that the brain discards location information, since it needs that information for tasks such as arm reaching. Mishkin and coworkers (1983) [1] reported, based largely on brain lesion studies, that the dorsal and ventral streams of the brain carry space ("where") and object ("what") information, respectively. Many later experimental studies have verified and enriched this discovery, but how these two streams work and learn has remained elusive (Deco & Rolls 2004 [2]). Feedback connections are known to be widely present along both streams, but computational understanding and analysis are lacking.

On the other hand, the sensory cortex alone seems to use distributed representations. Each feature neuron has a receptive field corresponding to a patch of the retina. Multiple nearby neurons have almost completely overlapping receptive fields, yet they detect different features of the overlapping patches (e.g., each a different edge orientation). However, such distributed "patch representations" must somehow be combined to give rise to behaviors that demonstrate invariant object recognition. Anne Treisman [3] and Van Essen and colleagues [4] proposed the existence of a master feature map.

Following neuroanatomical data, our visuomotor model, the Where-What Network (WWN), suggests that to understand the causality of the above phenomena, it is beneficial to go beyond PP and IT and include the premotor and motor areas of the frontal cortex. We introduce two motor areas as integral parts of cortical object representation: location motor (LM) and type motor (TM). The former corresponds to the frontal eye field (FEF) and the location-relevant control areas in the premotor and motor areas; the latter corresponds to the ventral frontal cortex (VFC) and the verbal control areas in the premotor and motor areas. The dorsal stream plus LM learns type invariance and location specificity (e.g., for arm reaching). The ventral stream plus TM learns location invariance and type specificity (e.g., for pronouncing the object type). Bottom-up and top-down connections from LM and TM dynamically wire and shape the corresponding streams, resulting in complementary representations: invariance in one is specificity in the other.
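The combination of bottom-up input with top-down motor signals can be illustrated with a toy sketch. This is not the published WWN implementation (which uses more elaborate neuronal learning and layer structure); the function names and the single-winner competition are illustrative assumptions. A neuron's pre-response sums its match to the bottom-up feature vector and to the top-down signal from a motor area; winners fire and update by a Hebbian-like rule, so motor feedback shapes what each neuron comes to represent:

```python
# Toy sketch of one WWN-style cortical layer (illustrative only, not the
# authors' implementation). Each neuron combines a bottom-up input vector
# with a top-down motor signal; the top-k winners fire, and firing
# neurons update their weights by a Hebbian-like rule.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def layer_response(bottom_up, top_down, w_bu, w_td, top_k=1):
    """Pre-response = bottom-up match + top-down match; only winners fire."""
    pre = [dot(w_bu[i], bottom_up) + dot(w_td[i], top_down)
           for i in range(len(w_bu))]
    ranked = sorted(range(len(pre)), key=lambda i: pre[i], reverse=True)
    winners = set(ranked[:top_k])
    return [pre[i] if i in winners else 0.0 for i in range(len(pre))]

def hebbian_update(w, x, response, lr=0.1):
    """Firing neurons move their weights toward the current input."""
    for i, r in enumerate(response):
        if r > 0.0:
            w[i] = [(1 - lr) * wi + lr * xi for wi, xi in zip(w[i], x)]
    return w

# Two neurons over a 2-D bottom-up input and a 1-D top-down line.
w_bu = [[1.0, 0.0], [0.0, 1.0]]
w_td = [[1.0], [0.0]]
resp = layer_response([0.6, 0.5], [1.0], w_bu, w_td)  # top-down biases neuron 0
w_bu = hebbian_update(w_bu, [0.6, 0.5], resp)
```

Here the top-down signal tips the competition toward neuron 0 even though the two neurons match the bottom-up input almost equally; only the winner then adapts, which is how top-down wiring can shape a stream's representation.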

WWNs were tested on the tightly intertwined problems of attention and recognition in vision with complex backgrounds. Attention and recognition have each been modeled separately in previous work; e.g., visual saliency has been used to guide covert attention shifts. How the visual cortex handles attention and recognition conjunctively with complex natural backgrounds has been elusive. WWN gives the first biologically plausible theory for this joint problem. With general objects in complex new backgrounds, WWN reached a 95% classification rate with under 2-pixel location error, when about 75% of the image area came from unknown complex backgrounds. Each WWN epigenetically generates and adapts emergent representations using Hebbian-like neuronal learning mechanisms. WWN explains how top-down attention originates: from LM for location-based attention and from TM for type-based attention. This model does not need the appearance-preserving internal master feature map proposed earlier.
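The two kinds of top-down attention described above can be sketched with a minimal example. The tagging of neurons with a learned (location, type) pair and the additive boost are simplifying assumptions for illustration, not the published model:

```python
# Toy illustration of top-down attention from the two motor areas
# (a sketch under simplified assumptions, not the published WWN).
# Each feature neuron is tagged with the (location, type) pair it has
# learned. A signal from TM boosts all neurons of one type regardless
# of location (type-based attention); a signal from LM boosts all
# neurons at one location regardless of type (location-based attention).

neurons = [
    {"loc": "left",  "type": "cat", "resp": 0.4},
    {"loc": "right", "type": "cat", "resp": 0.5},
    {"loc": "left",  "type": "dog", "resp": 0.6},
    {"loc": "right", "type": "dog", "resp": 0.3},
]

def attend(neurons, lm=None, tm=None, boost=1.0):
    """Boost neurons matching the top-down LM (location) or TM (type)
    signal, then return the winning neuron."""
    def score(n):
        s = n["resp"]
        if lm is not None and n["loc"] == lm:
            s += boost
        if tm is not None and n["type"] == tm:
            s += boost
        return s
    return max(neurons, key=score)

# Type-based attention: TM says "cat"; the strongest cat neuron wins
# wherever it is on the retina.
winner = attend(neurons, tm="cat")
# Location-based attention: LM says "left"; the strongest left-field
# neuron wins whatever its type.
winner2 = attend(neurons, lm="left")
```

The complementarity is visible in the two calls: the TM signal selects by type while leaving location free (the winner reports where the cat is), and the LM signal selects by location while leaving type free (the winner reports what is on the left).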


  1. Mishkin M, Ungerleider LG, Macko KA: Object vision and spatial vision: two cortical pathways. Trends in Neurosciences 1983, 6:414-417.

  2. Deco G, Rolls ET: A neurodynamical cortical model of visual attention and invariant object recognition. Vision Res 2004, 40:621-642.

  3. Treisman AM, Gelade G: A feature-integration theory of attention. Cogn Psychol 1980, 12(1):97-136.

  4. Olshausen BA, Anderson CH, Van Essen DC: A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience 1993, 13(11):4700-4719.