This article is part of the supplement: Seventeenth Annual Computational Neuroscience Meeting: CNS*2008

Open Access Poster presentation

A simple spiking retina model for exact video stimulus representation

Aurel A Lazar and Eftychios A Pnevmatikakis*

Author Affiliations

Department of Electrical Engineering, Columbia University, New York, New York 10027, USA


BMC Neuroscience 2008, 9(Suppl 1):P130  doi:10.1186/1471-2202-9-S1-P130

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/9/S1/P130


Published: 11 July 2008

© 2008 Lazar and Pnevmatikakis; licensee BioMed Central Ltd.

Poster presentation

We present a computational model for the representation of visual stimuli by a population of spiking neurons. We show that, under mild conditions, an analog video stream can be faithfully encoded into a sequence of spike trains, and we provide an algorithm that recovers the video input using only the spike times of the population.

In our model, an analog video stream, bandlimited in time, is presented to the dendritic trees of a neural population. At each neuron, the multi-dimensional video input is filtered by the neuron's spatiotemporal receptive field, and the resulting one-dimensional dendritic current enters the soma (see Figure 1). The set of spatial receptive fields is modeled as a Gabor filterbank. The spike generation mechanism is threshold-based: each time the dendritic current exceeds a threshold, a spike is fired and the membrane potential is reset through a negative feedback loop triggered by the spike. This simple spiking mechanism has been shown to accurately model the responses of various neurons in the early visual system [1].
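The encoding stage described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: it assumes a purely spatial Gabor receptive field (rather than a full spatiotemporal one) and an ideal integrate-to-threshold mechanism with subtractive reset as the negative feedback; all function and parameter names are our own.

```python
import numpy as np

def gabor(shape, freq, theta, sigma):
    """2-D Gabor spatial receptive field (illustrative parameterization)."""
    h, w = shape
    y, x = np.mgrid[-(h // 2):h // 2, -(w // 2):w // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def encode(video, rf, threshold, dt=1.0):
    """Threshold-and-reset encoding of a video (frames, h, w) into spike times.

    The dendritic current is the frame-by-frame inner product of the
    stimulus with the receptive field; the membrane potential integrates
    this current and is reduced by `threshold` after each spike,
    modeling the negative-feedback reset.
    """
    # One-dimensional dendritic current: project each frame onto the RF.
    current = np.tensordot(video, rf, axes=([1, 2], [0, 1]))
    v, spikes = 0.0, []
    for k, I in enumerate(current):
        v += I * dt
        if v >= threshold:
            spikes.append(k * dt)
            v -= threshold  # negative feedback resets the potential
    return spikes
```

A population encoder would simply run this loop once per neuron, each with a different Gabor filter from the filterbank.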

Figure 1. Encoding and decoding mechanisms for video stimuli: the stimulus is filtered by the receptive fields of the neurons and enters the soma. Spike generation is threshold-based, and a negative feedback mechanism resets the membrane potential after each spike. In the decoding stage, each spike, represented by a delta pulse, is weighted by an appropriate coefficient and then filtered by the same receptive field for stimulus reconstruction. The total sum is passed through a low-pass filter to recover the original input stimulus.
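The decoding idea can be illustrated in one dimension. For a threshold-and-reset encoder, the integral of the input between consecutive spikes equals the threshold (the so-called t-transform); expanding the input over low-pass kernels centered at the interspike midpoints turns these constraints into a linear system solved with a pseudoinverse. The sketch below is a simplified one-dimensional analogue of the scheme, with illustrative names; the full model operates on video through the receptive fields.

```python
import numpy as np

def iaf_encode(t, x, threshold):
    """Ideal integrate-and-fire encoding of a sampled signal x(t)."""
    dt = t[1] - t[0]
    v, spikes = 0.0, []
    for ti, xi in zip(t, x):
        v += xi * dt
        if v >= threshold:
            spikes.append(ti)
            v -= threshold
    return np.asarray(spikes)

def iaf_decode(t, spikes, threshold, omega):
    """Recover the stimulus from spike times via the t-transform.

    Each interspike integral of the input equals `threshold`; the input
    is expanded over sinc kernels of bandwidth `omega` centered at the
    interspike midpoints, and the kernel weights are obtained by solving
    the resulting linear system with a pseudoinverse.
    """
    s = (spikes[:-1] + spikes[1:]) / 2          # kernel centers
    g = lambda tau: omega / np.pi * np.sinc(omega * tau / np.pi)
    dt = t[1] - t[0]
    # G[j, k] = integral of g(t - s_k) over the j-th interspike interval
    G = np.empty((len(s), len(s)))
    for j in range(len(s)):
        mask = (t >= spikes[j]) & (t < spikes[j + 1])
        G[j] = [g(t[mask] - sk).sum() * dt for sk in s]
    q = np.full(len(s), threshold)              # t-transform right-hand side
    c = np.linalg.pinv(G) @ q
    return sum(ck * g(t - sk) for ck, sk in zip(c, s))
```

With the spike rate well above the Nyquist rate of the bandlimited input, the reconstruction matches the stimulus away from the boundaries, illustrating the recovery guarantee in miniature.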

We prove and demonstrate that the entire video stream can be recovered from knowledge of the spike times alone, provided that the neural population is sufficiently large. Increasing the number of neurons to achieve a better representation is consistent with basic neurobiological thought [2].

Although highly precise, the responses of visual neurons show some variability across repeated presentations of the same stimulus, which can be attributed to various noise sources [1]. We examine the effect of noise on our algorithm and show that the reconstruction quality degrades gracefully when white noise is present at the input or in the feedback loop.

Acknowledgements

This work is supported by NIH grant R01 DC008701-01 and NSF grant CCF-06-35252. EA Pnevmatikakis is also supported by the Onassis Public Benefit Foundation.

References

  1. Keat J, Reinagel P, Reid RC, Meister M: Predicting every spike: A model for the responses of visual neurons. Neuron 2001, 30:803-817.

  2. Lazar AA, Pnevmatikakis E: Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation 2008, to appear.