
This article is part of the supplement: Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Open Access Poster presentation

Multilinear models for the auditory brainstem

Bernhard Englitz1,2*, Misha Ahrens3, Sandra Tolnai2,4, Rudolf Rübsamen2, Maneesh Sahani3 and Jürgen Jost1

Author Affiliations

1 Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany

2 Institute for Biology II, University of Leipzig, 04103 Leipzig, Germany

3 Gatsby Computational Neuroscience Unit, UCL, London, WC1N 3AR, UK

4 Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK


BMC Neuroscience 2009, 10(Suppl 1):P312  doi:10.1186/1471-2202-10-S1-P312

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/10/S1/P312


Published: 13 July 2009

© 2009 Englitz et al; licensee BioMed Central Ltd.

Poster presentation

The representation of acoustic stimuli at the level of the brainstem forms the basis for further auditory processing. While some simple characteristics of this representation are widely accepted, it remains a challenge to predict the firing rate at high temporal resolution in response to arbitrary stimuli. Such predictive models would be helpful tools for further investigations, in particular of sound localization. Devising a model involves several choices: the stimulus representation, the modeling framework, and the performance measure. In this study we explore these choices for single-cell responses from the medial nucleus of the trapezoid body (MNTB), which constitutes a well-identifiable and homogeneous neuronal population; detailed models of MNTB responses have not been studied before. We estimate a recently introduced family of models, the multilinear models ([1], Figure 1), which encompass the classical spectrotemporal receptive field (STRF) and allow arbitrary input nonlinearities and certain multiplicative time-frequency interactions. To reliably quantify the explained variance for noisy responses, we use the predictive power [2] as performance measure. We find that nonlinear models and a cochlea-like (gammatone) stimulus representation lead to significant improvements in predictive power; on average, 75% of the explainable variance can be predicted. Since the models deliver faithful predictions, a meaningful interpretation of the estimated model structures becomes possible. Including multiplicative interactions strongly reduces the inhibitory fields in the linear kernels; together with their spectrotemporal location, this suggests cochlear suppression as their source. Similar improvements in predictive power are obtained for input and output nonlinearities, with the best performance for the combination of both. In conclusion, the context model provides a rich and still interpretable extension over other nonparametric models for responses in the MNTB.
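The model family described above can be illustrated with a minimal sketch: an input nonlinearity applied pointwise to a spectrogram-like stimulus, a linear (STRF) stage summing weighted stimulus history, and an output nonlinearity rescaling to a firing rate. All dimensions, weights, and the specific nonlinearities below are hypothetical toy choices for illustration; the actual models in [1] are fit to data, not hand-specified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only):
F, T, D = 16, 200, 10            # frequency channels, time bins, lag depth
S = rng.random((F, T))           # spectrogram-like stimulus level per bin

# Input nonlinearity applied pointwise to the stimulus level before the
# linear stage; a compressive log is one illustrative choice.
X = np.log1p(S)

# STRF kernel w[f, d]: weight of frequency channel f at lag d.
w = rng.standard_normal((F, D)) * 0.1
b = 0.5                          # baseline firing rate

# Linear stage: at each time t, sum the weighted recent stimulus history.
r_lin = np.full(T, b)
for t in range(D, T):
    hist = X[:, t - D + 1:t + 1][:, ::-1]   # columns reversed: lag 0 = now
    r_lin[t] += np.sum(w * hist)

# Output nonlinearity rescales to a non-negative firing rate
# (rectification here; the estimated form would come from data).
r = np.maximum(r_lin, 0.0)
```

The multiplicative time-frequency ("context") interactions of the full model would add a further term modulating each stimulus bin by its spectrotemporal neighborhood, which is omitted here for brevity.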

Figure 1. Schematic overview of the estimated models. An acoustic stimulus is created from broadband amplitude modulations. Three spectrotemporal representations of the sound are used as input for the following models. First, a multilinear model (dimensions: time, frequency, level) is estimated, e.g. a STRF, an input-nonlinearity model (IN+STRF), or a context model. Second, an estimated output nonlinearity rescales the multilinear prediction to the final firing-rate prediction. We compare the performance contributed by the individual parts.
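The performance comparison rests on the predictive power of [2], which estimates how much of the explainable (signal) variance a prediction captures, given responses to repeated presentations of the same stimulus. The following is a simplified sketch of that signal/noise power decomposition, assuming only the basic trial-averaging correction; the full measure in [2] includes further bias corrections.

```python
import numpy as np

def predictive_power(responses, prediction):
    """Normalized predictive power: fraction of the explainable
    (signal) variance captured by a model prediction.

    responses  : (n_trials, T) firing rates to repeats of one stimulus
    prediction : (T,) model prediction

    Simplified sketch of the Sahani & Linden (2003) decomposition.
    """
    n = responses.shape[0]
    power = lambda x: np.var(x, axis=-1)      # power = variance over time
    mean_r = responses.mean(axis=0)
    # Signal power: variance of the trial mean, corrected for the
    # residual noise that survives averaging over n trials.
    p_signal = (n * power(mean_r) - power(responses).mean()) / (n - 1)
    # Power of the prediction error relative to the mean response.
    p_resid = power(mean_r - prediction)
    return (p_signal - p_resid) / p_signal
```

Under this sketch, a prediction matching the underlying signal scores near 1, while a constant prediction scores near 0, which is the sense in which the abstract's "75% of the explainable variance" is normalized.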

References

  1. Ahrens MB, Linden JF, Sahani M: Nonlinearities and contextual influences in auditory cortical responses modeled with multilinear spectrotemporal methods. J Neurosci 2008, 28:1929-1942.

  2. Sahani M, Linden J: How linear are auditory cortical responses? Advances in Neural Information Processing Systems 2003, 15:301-308.