This article is part of the supplement: Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Poster presentation | Open Access

Intrinsic neural response properties are sufficient to achieve time coding

Thomas Voegtlin* and Sam McKennoch

Author Affiliations

INRIA Lorraine, Campus Scientifique, F-54506 Vandoeuvre les Nancy, France

BMC Neuroscience 2009, 10(Suppl 1):P133  doi:10.1186/1471-2202-10-S1-P133


The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/10/S1/P133


Published: 13 July 2009

© 2009 Voegtlin and McKennoch; licensee BioMed Central Ltd.

Poster presentation

A fundamental question in time coding is how action potentials arriving from different synapses at different times are integrated at the soma. It has long been proposed that, in order to achieve interesting generalization capabilities, neural networks should use distributed representations, in which the activities of different neurons can be combined in a meaningful way. In the context of time coding, this means that spikes arriving from different synapses at different times must be combined meaningfully.

Classically, computational models of spiking neural networks have achieved this combinatorial capability by exploiting the shape of post-synaptic currents [1,2]. In this approach, post-synaptic potentials (PSPs) arriving from different synapses at different times are combined linearly at the soma; a neuron's spike time is determined by when this linear combination of PSPs crosses the firing threshold. This approach, however, has serious limitations. For example, the effective coding interval is limited by the length of the rising segment of the PSPs, which is very short in practice [2].
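
To make the classical scheme concrete, the following is a minimal Python sketch of PSP-based spike-time integration: input spikes trigger alpha-function PSPs that are summed linearly, and the output spike is the first threshold crossing. The kernel, weights, threshold and time step are illustrative assumptions, not parameters taken from [1] or [2].

    import numpy as np

    def alpha_psp(t, tau=3.0):
        """Alpha-function PSP kernel (zero for t <= 0); tau is illustrative."""
        return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

    def psp_spike_time(input_spikes, weights, threshold=1.0,
                       t_max=50.0, dt=0.1, tau=3.0):
        """First time the weighted linear sum of PSPs crosses the threshold.

        The membrane potential is a linear combination of PSPs triggered by
        the input spike times; the neuron fires when this sum first reaches
        the threshold. Returns None if it never does.
        """
        t = np.arange(0.0, t_max, dt)
        v = np.zeros_like(t)
        for w, t_in in zip(weights, input_spikes):
            v += w * alpha_psp(t - t_in, tau)
        above = np.nonzero(v >= threshold)[0]
        return t[above[0]] if above.size else None

    # Two input spikes (ms) with illustrative weights.
    print(psp_spike_time([5.0, 8.0], [0.8, 0.7]))

In this scheme, shifting an input spike only changes the output time while the summed potential is still on the rising segments of the PSPs, which is what limits the effective coding interval mentioned above.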

We have developed a different method for combining spike times, based on the fact that a neuron's response to synaptic currents depends on its internal state [1]. In general, this dependency is expressed by the neuron's Phase Response Curve (PRC). In PSP-based models, this dependency has either been neglected or treated only as a possible refinement of the PSP-based approach. However, we have shown that rich computations can be performed by considering only the response properties of the neurons (the PRC) and completely neglecting the shape of the PSPs (synaptic currents are modeled as Dirac pulses).
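
As an illustration of this idea, the sketch below simulates a generic theta neuron whose synaptic inputs are Dirac pulses: each incoming spike shifts the phase instantaneously (through the change of variables V = tan(theta/2), the pulse adds its weight to V), so only the neuron's response properties matter and no PSP shape enters. The baseline current, weights and time step are illustrative choices, not the specific network described in the abstract.

    import numpy as np

    def theta_spike_time(input_spikes, weights, I0=-0.005,
                         theta0=0.0, t_max=50.0, dt=0.01):
        """First output spike time of a theta neuron driven by Dirac pulses.

        Between inputs the phase obeys
            dtheta/dt = (1 - cos(theta)) + I0 * (1 + cos(theta)).
        An input spike of weight w acts as a Dirac current pulse: in
        membrane coordinates V = tan(theta/2) it adds w, so
            theta -> 2 * arctan(tan(theta / 2) + w).
        A spike is emitted when theta crosses pi; returns None otherwise.
        """
        theta = theta0
        pending = sorted(zip(input_spikes, weights))
        for step in range(int(round(t_max / dt))):
            t = step * dt
            # Deliver any Dirac-pulse inputs scheduled up to the current time.
            while pending and pending[0][0] <= t:
                _, w = pending.pop(0)
                theta = 2.0 * np.arctan(np.tan(theta / 2.0) + w)
            # Euler step of the theta-neuron phase dynamics.
            theta += dt * ((1.0 - np.cos(theta)) + I0 * (1.0 + np.cos(theta)))
            if theta >= np.pi:
                return t
        return None

    # The output time depends on the phase at which each pulse arrives,
    # i.e. on the neuron's phase response, not on any PSP shape.
    print(theta_spike_time([5.0, 8.0], [0.3, 0.4]))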

We have derived a learning rule for theta neurons that is adapted to their PRC. The result is a network with learning and generalization capabilities similar to those of a non-linear perceptron. In addition, our approach does not suffer from the limitations of PSP-based models: the coding interval is much longer than the length of a PSP, extending over the monotonic segments of the PRC. Our results suggest that the response properties of neurons are more relevant to neural computation than the exact shape of synaptic currents.
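
The claim about the coding interval can be checked directly from the theta neuron's response to a Dirac pulse. The sketch below computes the phase shift as a function of the phase at which an input arrives and measures its longest monotonic segment; the pulse weight is an illustrative assumption.

    import numpy as np

    def prc(theta, w=0.1):
        """Phase shift caused by a Dirac-pulse input of weight w arriving
        when the theta neuron is at phase theta (V = tan(theta/2) -> V + w)."""
        return 2.0 * np.arctan(np.tan(theta / 2.0) + w) - theta

    theta = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, 10001)
    shift = prc(theta)

    # Split the curve into monotonic segments and report the longest one.
    slope_sign = np.sign(np.diff(shift))
    breaks = np.nonzero(np.diff(slope_sign) != 0)[0] + 1
    edges = np.concatenate(([0], breaks, [len(theta) - 1]))
    longest = np.diff(theta[edges]).max()
    print("longest monotonic segment: %.2f rad of a %.2f rad cycle"
          % (longest, 2 * np.pi))

For a small positive weight the phase shift rises and falls roughly like 1 + cos(theta), so each monotonic segment covers close to half of the firing cycle, whereas the rising segment of a typical PSP covers only a few milliseconds.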

References

  1. Gütig R, Sompolinsky H: The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience 2006, 9:420-428.

  2. McKennoch S, Voegtlin T, Bushnell L: Spike-timing error backpropagation in theta neuron networks. Neural Computation 2009, 21:9-45.