
Microsaccades enable efficient synchrony-based visual feature learning and detection

Fixational eye movements are common across vertebrates, yet their functional roles, if any, remain debated [1]. To investigate this issue, we exposed the Virtual Retina simulator [2] to natural images, generated realistic drifts and microsaccades using the model of ref. [3], and analyzed the output spike trains of the parvocellular retinal ganglion cells (RGCs).
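
A minimal sketch of this kind of input is given below: drift modeled as a plain Gaussian random walk plus Poisson-timed microsaccades of ~0.5° spread over ~25 ms. This is an illustrative stand-in, not the integrated model of ref. [3] (which generates drift as a self-avoiding random walk coupled to microsaccade triggering); the function name eye_trace and all parameter values are assumptions for illustration. The resulting positions would be used to translate the image frame by frame before feeding it to Virtual Retina.

```python
import numpy as np

def eye_trace(duration_s=2.0, dt=1e-3, drift_speed_deg=20 / 60.0,
              sacc_rate_hz=1.5, sacc_amp_deg=0.5, sacc_dur_s=0.025, seed=0):
    """Simplified fixational eye trace: random-walk drift plus microsaccades.

    Returns an (n, 2) array of horizontal/vertical eye positions in degrees,
    sampled every dt seconds.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    # Slow drift: Gaussian random-walk steps (~drift_speed_deg deg/s overall).
    steps = rng.normal(scale=drift_speed_deg * dt, size=(n, 2))
    # Microsaccades: Poisson-like onsets, each ~0.5 deg spread over ~25 ms
    # (i.e. ~20 deg/s), in a random direction.
    sacc_onsets = np.flatnonzero(rng.random(n) < sacc_rate_hz * dt)
    n_sacc_steps = max(1, int(sacc_dur_s / dt))
    for t0 in sacc_onsets:
        direction = rng.normal(size=2)
        direction /= np.linalg.norm(direction)
        steps[t0:t0 + n_sacc_steps] += sacc_amp_deg * direction / n_sacc_steps
    return np.cumsum(steps, axis=0)

# Example: eye positions used to shift the image before each simulator frame.
xy = eye_trace(duration_s=2.0)
```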

We first computed cross-correlograms between pairs of RGCs that are strongly excited by the image corresponding to the mean eye position. Unsurprisingly, in the absence of eye movements, that is, when analyzing the tonic (sustained) response to a static image, these cross-correlograms are flat. Adding a slow drift (~20 arcmin/s, self-avoiding random walk) creates long-timescale (>1 s) correlations, because both cells tend to have high firing rates for central positions. Adding microsaccades (~0.5° in 25 ms, i.e. ~20°/s) creates short-timescale (tens of ms) correlations: cells that are strongly excited at a particular landing location tend to spike synchronously shortly after the landing.
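
For concreteness, a basic cross-correlogram between two RGC spike trains can be computed as in the sketch below; the lag range and bin size are illustrative choices, not necessarily those used in our analysis.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.1, bin_size=0.002):
    """Histogram of lags (t_b - t_a), in seconds, over all pairs within +/- max_lag."""
    spikes_b = np.sort(np.asarray(spikes_b))
    lags = []
    for t in np.asarray(spikes_a):
        # Collect all spikes of cell b falling within max_lag of this spike of cell a.
        lo = np.searchsorted(spikes_b, t - max_lag)
        hi = np.searchsorted(spikes_b, t + max_lag)
        lags.extend(spikes_b[lo:hi] - t)
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges
```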

What do these patterns of synchronous spikes represent? To investigate this, we fed the RGC spike trains to neurons equipped with spike-timing-dependent plasticity (STDP) and lateral inhibitory connections, as in ref. [4]. The neurons self-organized: each one selected a set of afferents that consistently fired synchronously. We then reconstructed the corresponding visual stimuli by convolving the synaptic weight matrices with the RGC receptive fields. In most cases, what was learned was easily recognizable (e.g. a face), and the neuron was selective (e.g. it responded only to microsaccades that landed on a face). Without eye movements, or with drift alone, STDP-based learning failed, because it requires correlations at a timescale roughly matching the STDP time constants [5].
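
The sketch below illustrates the principle of competitive STDP learning in a deliberately simplified form: leaky integrate-and-fire neurons driven by the afferent spike trains, an additive STDP update based on a presynaptic trace, and a hard winner-take-all reset standing in for lateral inhibition. It is not the model of ref. [4]; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def run_competitive_stdp(spike_times, afferents, n_neurons=10, n_afferents=1000,
                         dt=1e-3, tau_m=0.02, tau_stdp=0.02, threshold=10.0,
                         a_plus=0.01, a_minus=0.012, seed=0):
    """spike_times: 1-D array of spike times (s); afferents: matching afferent indices."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, size=(n_neurons, n_afferents))  # weights in [0, 1]
    v = np.zeros(n_neurons)                                   # membrane potentials
    last_pre = np.full(n_afferents, -np.inf)                  # last presynaptic spike times
    for t in np.arange(0.0, spike_times.max() + dt, dt):
        active = afferents[(spike_times >= t) & (spike_times < t + dt)]
        last_pre[active] = t
        # Leaky integration of the weighted input spikes.
        v = v * np.exp(-dt / tau_m) + w[:, active].sum(axis=1)
        winner = int(np.argmax(v))
        if v[winner] >= threshold:
            # LTP for recently active afferents, simplified LTD for the rest.
            trace = np.exp(-(t - last_pre) / tau_stdp)
            w[winner] += a_plus * trace - a_minus * (1.0 - trace)
            np.clip(w[winner], 0.0, 1.0, out=w[winner])
            v[:] = 0.0  # winner-take-all reset: stands in for lateral inhibition
    return w
```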

Microsaccades are thus necessary to generate a synchrony-based coding scheme. More specifically, after each microsaccade landing, cells that are strongly excited by the image at the landing location tend to fire their first spikes synchronously. These patterns of synchronous spikes can be decoded rapidly, as soon as the first spikes are received, by downstream "coincidence detector" neurons, which do not need to know the landing times. Finally, the required connectivity can emerge spontaneously through STDP. Taken together, these results suggest a new role for microsaccades, namely to enable efficient visual feature learning and detection through synchronization, which differs from other proposals such as time-to-first-spike coding with respect to microsaccade landing times.
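
As a toy illustration of such decoding, the sketch below implements a "coincidence detector" that signals whenever at least k of its preferred afferents spike within a short window, without any knowledge of the landing times; the window length, threshold and function name are illustrative assumptions.

```python
import numpy as np

def detect_coincidences(spike_times, afferents, preferred, window=0.01, k=30):
    """Times at which at least k 'preferred' afferents spiked within `window` seconds."""
    mask = np.isin(afferents, preferred)
    t = np.sort(spike_times[mask])
    detections = []
    for i in range(len(t)):
        # Index of the earliest preferred spike inside (t[i] - window, t[i]].
        j = np.searchsorted(t, t[i] - window, side='right')
        if i - j + 1 >= k:
            detections.append(t[i])
    return np.array(detections)
```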

References

  1. Martinez-Conde S, Otero-Millan J, Macknik SL: The impact of microsaccades on vision: towards a unified theory of saccadic function. Nat Rev Neurosci. 2013, 14: 83-96.

  2. Wohrer A, Kornprobst P: Virtual Retina: a biological retina model and simulator, with contrast gain control. J Comput Neurosci. 2009, 26: 219-249. doi:10.1007/s10827-008-0108-4.

  3. Engbert R, Mergenthaler K, Sinn P, Pikovsky A: An integrated model of fixational eye movements and microsaccades. Proc Natl Acad Sci U S A. 2011, 108: E765-E770. doi:10.1073/pnas.1102730108.

  4. Masquelier T, Guyonneau R, Thorpe SJ: Competitive STDP-based spike pattern learning. Neural Comput. 2009, 21: 1259-1276. doi:10.1162/neco.2008.06-08-804.

  5. Gilson M, Masquelier T, Hugues E: STDP allows fast rate-modulated coding with Poisson-like spike trains. PLoS Comput Biol. 2011, 7: e1002231. doi:10.1371/journal.pcbi.1002231.


Acknowledgements

We thank A. Wohrer for developing the Virtual Retina simulator and for his excellent support, and M. Gilson for insightful discussions. This research received partial financial support from the European Commission's 7th Framework Programme for Research under grant agreement no. 600847: the RENVISION project of the Future and Emerging Technologies (FET) programme (Neuro-bio-inspired systems (NBIS) FET-Proactive Initiative).

Author information


Correspondence to Timothée Masquelier.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Masquelier, T., Portelli, G. & Kornprobst, P. Microsaccades enable efficient synchrony-based visual feature learning and detection. BMC Neurosci 15 (Suppl 1), P121 (2014). https://doi.org/10.1186/1471-2202-15-S1-P121
