
This article is part of the supplement: Twentieth Annual Computational Neuroscience Meeting: CNS*2011

Open Access Poster presentation

The influence of behavioral context on sensory encoding

Matthew Chalk*, Iain Murray and Peggy Seriès

Author Affiliations

School of Informatics, University of Edinburgh, Edinburgh, EH8 8AB, UK


BMC Neuroscience 2011, 12(Suppl 1):P295  doi:10.1186/1471-2202-12-S1-P295

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/12/S1/P295


Published: 18 July 2011

© 2011 Chalk et al; licensee BioMed Central Ltd.

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Poster presentation

The properties of sensory neurons are not fixed, but change dynamically according to the task being performed [1]. While there has been a huge experimental effort put into characterizing these effects, a clear understanding of why they occur is lacking.

A large body of research focuses on the idea that the visual system learns a probabilistic model of natural image statistics. Typically, the goal of the visual system is seen as inferring the hidden causes underlying a given sensory input [2]. While this framework is successful in helping to understand the properties of neurons in the early sensory cortex, it presents a passive view of learning: the sensory representation is optimized independently of behavioral demands.

We propose an alternative normative framework for modeling visual processing, with the representation optimized adaptively in order to facilitate interaction with the environment [3,4]. This framework is used to ask the following questions: (1) under what conditions should behavioral context influence the responses of sensory neurons; (2) what are the expected changes in receptive field properties?

We simulate a visual detection task where an agent is presented with stimuli at various locations, and has to report whether or not a stimulus is present at a single task-relevant location. Stimuli are represented by binary latent variables (each variable corresponding to a different spatial location), which combine linearly to produce the sensory input. The task is thus to infer the state of a single task-relevant latent variable based on the sensory input: correct responses result in an immediate reward.
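
As a rough illustration of this setup, the sketch below (in Python; this is not the authors' code, and the mixing weights, noise level, dimensions and prior are illustrative assumptions) generates a single trial: binary latent variables, one per spatial location, combine linearly to produce the sensory input, and the agent receives an immediate reward for correctly reporting the state of the task-relevant variable.

    import numpy as np

    rng = np.random.default_rng(0)

    n_locations = 8      # one binary latent variable per spatial location (illustrative)
    n_inputs = 32        # dimensionality of the sensory input (illustrative)
    p_present = 0.3      # prior probability that a stimulus is present at a location (assumption)
    task_relevant = 0    # index of the single task-relevant location
    noise_std = 0.1      # additive sensory noise (assumption; the abstract does not specify noise)

    # Linear generative weights mapping latent causes to the sensory input (assumption).
    W_true = rng.normal(size=(n_inputs, n_locations))

    def generate_trial():
        """Sample the hidden causes and the resulting sensory input for one trial."""
        z = (rng.random(n_locations) < p_present).astype(float)  # binary latent states
        x = W_true @ z + noise_std * rng.normal(size=n_inputs)   # linear combination plus noise
        return z, x

    def reward(response, z):
        """Immediate reward: 1 if the agent correctly reports the task-relevant state."""
        return float(bool(response) == bool(z[task_relevant]))

    z, x = generate_trial()
    r = reward(response=True, z=z)   # e.g. the agent reports 'stimulus present' at the relevant location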

In performing the task, the agent is assumed to follow a strategy whereby it parameterizes the expected value of state-response pairs using a probabilistic model describing how sensory data are generated from hidden causes (‘sensory encoding’), and a utility function describing the expected reward for different responses, given the hidden state (‘reward encoding’). The parameters of the model and of the utility function are learned simultaneously to fit both the distribution of inputs and the ‘desired’ responses in the task (estimated using the received reward).
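
A minimal sketch of this strategy, under strong simplifying assumptions (this is not the authors' algorithm): the sensory encoding is a small linear generative model whose hidden-unit posterior is approximated by a sigmoid readout, the reward encoding is a table of expected rewards for each response given the inferred hidden state, and a single update step nudges both toward explaining the input and predicting the reward that was received.

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs = 32    # sensory input dimensionality (matching the sketch above)
    n_hidden = 4     # hidden units in the learned model (may be fewer than the true latent causes)
    lr = 0.01        # learning rate (illustrative)

    W_model = 0.01 * rng.normal(size=(n_inputs, n_hidden))  # 'sensory encoding': learned generative weights
    U = np.zeros((2, n_hidden))                             # 'reward encoding': expected reward per (response, hidden unit)

    def infer(W_model, x):
        """Crude approximate posterior over hidden units: a sigmoid linear readout (assumption)."""
        return 1.0 / (1.0 + np.exp(-W_model.T @ x))

    def choose_response(W_model, U, x):
        """Pick the response with the higher expected value under the inferred hidden state."""
        return int(np.argmax(U @ infer(W_model, x)))

    def update(W_model, U, x, response, r):
        """One joint learning step: fit the input distribution and the received reward (in place)."""
        h = infer(W_model, x)
        # Sensory-encoding term: reduce the reconstruction error of the input (a stand-in for the likelihood).
        W_model += lr * np.outer(x - W_model @ h, h)
        # Reward-encoding term: move the expected value of the chosen response toward the obtained reward.
        U[response] += lr * (r - U[response] @ h) * h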

When there were no limitations on what could be learned, we found that the task had no influence on the learned model. Therefore, we hypothesized that task-dependent changes in sensory encoding occur due to a computational resource limitation, when there is a compromise between explaining all the sensory inputs, and enabling good task performance. Specifically, we considered the case where there are fewer hidden units in the learned model than in the true model generating the data.

In this condition, we found that the task strongly affected the learned model, with basis functions corresponding to task-relevant hidden units achieving the closest fit to the true model. Significantly, task-relevant hidden units showed increased activation in response to preferred stimuli, compared to task-irrelevant hidden units (due to the reduction in uncertainty associated with learning a better model), analogous to the experimental effects of attention found in low- to mid-level areas of the visual cortex [1]. Finally, we tested the ability of the model to account for other observed effects of attention, such as multiplicative scaling of neuronal tuning curves, biased competition and modulation of centre-surround interactions [1].

Acknowledgements

EPSRC, MRC, BBSRC

References

  1. Reynolds JH, Heeger DJ: The normalization model of attention. Neuron 2009, 61(2):168-185.

  2. Hyvärinen A: Statistical models of natural images and cortical visual representation. Top Cogn Sci 2010, 2(2):251-264.

  3. Sahani M: A biologically plausible algorithm for reinforcement-shaped representational learning. NIPS 2006.

  4. Gershman S, Wilson R: The neural costs of optimal control. NIPS 2010.