This article is part of the supplement: Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Open Access Poster presentation

Visualization of higher-level receptive fields in a hierarchical model of the visual system

Christian Hinze1*, Niko Wilbert1,2 and Laurenz Wiskott1,2

Author Affiliations

1 Institute for Theoretical Biology, Humboldt University, 10115 Berlin, Germany

2 Bernstein Center for Computational Neuroscience, Humboldt University, 10099 Berlin, Germany


BMC Neuroscience 2009, 10(Suppl 1):P158 doi:10.1186/1471-2202-10-S1-P158


The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/10/S1/P158


Published: 13 July 2009

© 2009 Hinze et al; licensee BioMed Central Ltd.

Poster presentation

Early visual receptive fields have been measured extensively and are fairly well mapped. Receptive fields in higher areas, on the other hand, are very difficult to characterize, because it is not clear what they are tuned to and which stimuli to use to study them. Early visual receptive fields have been reproduced by computational models. Slow feature analysis (SFA), for instance, is an algorithm that finds functions extracting the most slowly varying features from a multi-dimensional input sequence [1]. Applied to quasi-natural image sequences, i.e. image sequences derived from natural images by translation, rotation and zoom, SFA reproduces many properties of complex cells in V1 [2].
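To make the SFA step concrete, the following is a minimal NumPy sketch of linear SFA on an already vectorized input sequence. It is only an illustration of the principle, not the implementation used here: the hierarchical arrangement is omitted, and the function name and interface are ours.

    import numpy as np

    def linear_sfa(x, n_components=4):
        """Minimal linear slow feature analysis.

        x: array of shape (T, D) -- a time series of D-dimensional inputs.
        Returns the n_components slowest output signals, shape (T, n_components).
        """
        # Center and whiten the input so it has zero mean and unit covariance.
        x = x - x.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
        z = (x @ eigvec) / np.sqrt(eigval)

        # Approximate the temporal derivative with finite differences.
        dz = np.diff(z, axis=0)

        # The slowest features are the directions of minimal derivative variance:
        # eigenvectors of the derivative covariance with the smallest eigenvalues.
        deigval, deigvec = np.linalg.eigh(np.cov(dz, rowvar=False))
        return z @ deigvec[:, :n_components]

The study in [2] applies essentially this computation after a quadratic expansion of the image patches, which is what gives rise to complex-cell-like properties.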

A hierarchical network of SFA units learns invariant object representations much like those in IT [3]. These successes suggest that units in intermediate layers of the network might share properties with cells in V2 or V4. The goal of this project is therefore to develop techniques to visualize and characterize such units in order to understand how cells in V2/V4 might work. This is nontrivial because the units are highly nonlinear. Our visualization algorithm is gradient-based and is applied in a cascade within the network: we start with a natural image patch as input and optimize it by gradient ascent to maximize the output of one particular unit. Figure 1 shows such optimal stimuli for units in the first (a, b) and the second layer (c, d); the latter can be associated with cells in V2/V4. We plan to extend this approach to higher layers and larger receptive fields, and we will also develop techniques to visualize the invariances of the units, i.e. those variations of the input that have little effect on a unit's output. The long-term goal is to provide a good stimulus set for characterizing cells in V2/V4.

Figure 1. Optimal stimuli of units in the first layer (a, b) and the second layer (c, d) of a hierarchical SFA network optimized for slowness and trained with quasi-natural image sequences.
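As a rough illustration of the gradient-ascent idea described above, the sketch below adapts an input patch so as to maximize the response of a single unit. The unit is a hypothetical callable standing in for one nonlinear SFA unit of the trained network; the finite-difference gradient and the fixed-contrast normalization are simplifying assumptions of this sketch, not details taken from the poster.

    import numpy as np

    def optimal_stimulus(unit, patch, steps=200, lr=0.1, eps=1e-3):
        """Gradient-ascent sketch for visualizing a unit's preferred stimulus.

        unit:  hypothetical callable mapping a flattened image patch to a
               scalar response (stands in for one unit of the SFA hierarchy).
        patch: initial natural image patch as a 1-D float array.
        """
        x = patch.astype(float).copy()
        for _ in range(steps):
            # Finite-difference estimate of d(response)/d(input).
            grad = np.zeros_like(x)
            for i in range(x.size):
                step = np.zeros_like(x)
                step[i] = eps
                grad[i] = (unit(x + step) - unit(x - step)) / (2.0 * eps)
            # Move uphill and renormalize to keep the patch at fixed contrast
            # (an assumption made here so the response cannot grow simply by
            # scaling the input).
            x = x + lr * grad
            x = (x - x.mean()) / (x.std() + 1e-12)
        return x

In the actual network, such an optimization would be run for units in successive layers, in line with the cascaded application mentioned above.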

References

  1. Wiskott L, Sejnowski TJ: Slow feature analysis: Unsupervised learning of invariances. Neural Computation 2002, 14:715-770.

  2. Berkes P, Wiskott L: Slow feature analysis yields a rich repertoire of complex cell properties. J Vision 2005, 5:579-602.

  3. Franzius M, Wilbert N, Wiskott L: Invariant object recognition with slow feature analysis. Proc 18th Int'l Conf on Artificial Neural Networks 2008, 961-970.