  • Poster presentation
  • Open access

Neurodynamical model for visual action recognition

The recognition of body motion requires the temporal integration of information over time. This integration is likely achieved by dynamic neural networks composed of neurons that are selective for form and optic-flow patterns [1]. Physiological evidence also indicates that many action-selective visual neurons are view-dependent, so that such representations likely encode not only the temporal structure of actions but also the stimulus view. The dynamic and self-organization properties of such representations have rarely been studied.

We propose a neurodynamical model for the visual encoding of actions that is based on a two-dimensional neural field with the defining equations

$$\tau_u\,\dot{u}(\varphi,\theta,t) = -u(\varphi,\theta,t) + w(\varphi,\theta) * [u(\varphi,\theta,t)]_+ + s(\varphi,\theta,t) - \alpha\,a(\varphi,\theta,t) + \xi(\varphi,\theta,t)$$

$$\tau_a\,\dot{a}(\varphi,\theta,t) = -a(\varphi,\theta,t) + [u(\varphi,\theta,t)]_+$$

Here u specifies the membrane potential and a the adaptation state. The recurrent interaction kernel w is symmetric with respect to the origin in the φ-direction and asymmetric in the θ-direction, with an additional strong inhibitory component. The spatial convolution * is periodic. The stimulus signal s models an (idealized) activity distribution of shape- (or optic-flow-) selective neurons that respond maximally to stimulus frame θ and view angle φ of an ongoing action stimulus. The noise variable ξ is a Gaussian process whose kernel was fitted to coarsely reproduce the correlation statistics as a function of the tuning similarity of the neurons. Time scales and adaptation strength are specified by the positive constants τ_u, τ_a, and α. The first equation defines a dynamic neural field that stabilizes a stimulus-locked travelling peak solution in the θ-direction and a winner-takes-all competition in the φ-direction. The second equation specifies a simple adaptation process.
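The field dynamics above can be integrated numerically with a simple Euler scheme. The following Python sketch illustrates this; all kernel shapes, grid sizes, and constants are illustrative assumptions (not the fitted parameters of the model), and the periodic convolution is computed via the FFT.

```python
# Euler-integration sketch of the two-dimensional neural field.
# All kernel shapes and constants are illustrative assumptions.
import numpy as np

N_PHI, N_THETA = 36, 50                  # grid: view angle x stimulus frame
TAU_U, TAU_A, ALPHA = 10.0, 200.0, 0.5   # time scales and adaptation strength
DT, STEPS = 1.0, 400

phi = np.linspace(-np.pi, np.pi, N_PHI, endpoint=False)
theta = np.linspace(-np.pi, np.pi, N_THETA, endpoint=False)
PHI, THETA = np.meshgrid(phi, theta, indexing="ij")

# Interaction kernel: symmetric in phi, asymmetric (shifted) in theta so that
# excitation leads the travelling pulse, plus a constant inhibitory component.
dA = (2 * np.pi / N_PHI) * (2 * np.pi / N_THETA)  # cell area for the integral
w = (np.exp(-PHI**2 / 0.3 - (THETA - 0.4)**2 / 0.3) - 0.2) * dA
w_hat = np.fft.fft2(np.fft.ifftshift(w))  # kernel centred for periodic conv.

def relu(x):
    """Threshold nonlinearity [.]_+."""
    return np.maximum(x, 0.0)

u = np.zeros((N_PHI, N_THETA))            # membrane potential
a = np.zeros_like(u)                      # adaptation state
rng = np.random.default_rng(0)

for t in range(STEPS):
    # Idealized stimulus s: a bump travelling in theta at fixed view phi = 0.
    center = -np.pi + 2 * np.pi * t / STEPS
    d = np.angle(np.exp(1j * (theta - center)))          # wrapped distance
    s = np.exp(-phi[:, None]**2 / 0.2) * np.exp(-d**2 / 0.1)[None, :]
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(relu(u))))  # w * [u]_+
    u += DT / TAU_U * (-u + conv + s - ALPHA * a
                       + 0.05 * rng.standard_normal(u.shape))
    a += DT / TAU_A * (-a + relu(u))
```

Plotting `u` over the iterations (e.g. with matplotlib) shows a stimulus-locked activity peak travelling in the θ-direction while activity at the stimulated view angle suppresses the other views.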

Results

The model simultaneously accounts for a variety of dynamic properties of visual action-selective neurons and of visual body-motion perception:

a) Temporal sequence selectivity, as observed in the STS and area F5 (e.g. [2]).

b) Bistable perception of biological motion, with spontaneous perceptual switching between multiple perceived heading directions for walker silhouettes [3]. The model makes it possible to investigate the roles of adaptation and noise in such perceptual switching.

c) The prediction of a bifurcation between a single stable travelling pulse solution and two competing ones, depending on the view angle of the stimulus.

d) A comparison of adaptation for static and dynamic adaptor stimuli, which provides a possible explanation for the observed lack of adaptation in action-selective neurons (e.g. [4]). The adaptation characteristics reproduce the behavior of neurons in area IT ('input fatigue model') [5].
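The switching mechanism in b) can be illustrated with a one-dimensional reduction of the field along the view dimension: local excitation with global inhibition implements the winner-takes-all competition, while slow adaptation and noise drive spontaneous alternations between two equally supported peaks. All parameter values and the two-bump stimulus below are assumptions made for this sketch, not the model's fitted values.

```python
# 1D ring-field sketch of adaptation- and noise-driven perceptual switching.
# Parameter values and the ambiguous two-bump stimulus are assumptions.
import numpy as np

N = 64
TAU_U, TAU_A, ALPHA, DT = 10.0, 50.0, 1.0, 1.0
phi = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Kernel w(phi_i - phi_j): local excitation minus global inhibition (WTA).
dphi = np.angle(np.exp(1j * (phi[:, None] - phi[None, :])))
W = (2.0 * np.exp(-dphi**2 / 0.2) - 0.6) * (2 * np.pi / N)

def bump(c):
    """Gaussian bump on the ring, centred at view angle c."""
    return np.exp(-np.angle(np.exp(1j * (phi - c)))**2 / 0.1)

s = bump(1.5) + bump(-1.5)     # ambiguous stimulus: two equal view peaks

u = np.zeros(N)
a = np.zeros(N)
rng = np.random.default_rng(1)
winner = []                    # dominant view angle at each time step

for t in range(3000):
    r = np.maximum(u, 0.0)                       # threshold-linear rate
    u += DT / TAU_U * (-u + W @ r + s - ALPHA * a
                       + 0.2 * rng.standard_normal(N))
    a += DT / TAU_A * (-a + r)                   # slow fatigue of the winner
    winner.append(phi[np.argmax(u)])
# The 'winner' trace records which peak currently dominates; with adaptation
# and noise it typically alternates between the two stimulus positions.
```

The design choice mirrors the full model: the winner's slow fatigue weakens the inhibition it exerts on the competing peak, and noise determines when the suppressed peak takes over.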

Conclusions

The new model provides a basis for the quantitative study of dynamic and self-organization phenomena in action perception. The simplicity of the model makes it accessible to mathematical analysis, e.g. of its bifurcations.

References

  1. Giese MA, Poggio T: Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci. 2003, 4 (3): 179-192.

  2. Barraclough NE, Keith RH, Xiao D, Oram MW, Perrett DI: Visual adaptation to goal-directed hand actions. J Cogn Neurosci. 2009, 21: 1806-1820.

  3. Vanrie J, Dekeyser M, Verfaillie K: Bistability and biasing effects in the perception of ambiguous point-light walkers. Perception. 2004, 33 (5): 547-560. 10.1068/p5004.

  4. Caggiano V, et al: Mirror neurons in monkey area F5 do not adapt to the observation of repeated actions. Nat Commun. 2013, 4: 1433.

  5. De Baene W, Vogels R: Effects of adaptation on the stimulus selectivity of macaque inferior temporal spiking activity and local field potentials. Cereb Cortex. 2010, 20 (9): 2145-2165. 10.1093/cercor/bhp277.


Acknowledgements

Supported by EC FP7 projects ABC, HBP, AMARSI, and KOIROBOT, DFG projects GI 305/4-1, and KA 1258/15-1, and BMBF, FKZ: 01GQ1002A.

Author information


Corresponding author

Correspondence to Martin A Giese.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Giese, M.A., Fedorov, L. Neurodynamical model for visual action recognition. BMC Neurosci 15 (Suppl 1), P164 (2014). https://doi.org/10.1186/1471-2202-15-S1-P164
