
This article is part of the supplement: Seventeenth Annual Computational Neuroscience Meeting: CNS*2008

Poster presentation (Open Access)

Spike-based reinforcement learning of navigation

Eleni Vasilaki1*, Robert Urbanczik2, Walter Senn2 and Wulfram Gerstner1

Author Affiliations

1 Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, CH-1015, Switzerland

2 Institute of Physiology, University of Bern, Buehlplatz 5, 3012 Bern, Switzerland


BMC Neuroscience 2008, 9(Suppl 1):P72  doi:10.1186/1471-2202-9-S1-P72

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/9/S1/P72


Published: 11 July 2008

© 2008 Vasilaki et al; licensee BioMed Central Ltd.

Introduction

We have studied a spiking reinforcement learning model derived from reward maximization [1,2], in which causal relations between pre- and postsynaptic activity set a synaptic eligibility trace [2,3]. Neurons are modeled as integrate-and-fire units with escape noise. Synapses are binary and are modulated via their release probability, which is updated when a global reward signal (such as dopamine) arrives.
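To make the scheme concrete, the following is a minimal sketch of one integrate-and-fire neuron with escape noise, binary synapses with plastic release probabilities, and a decaying eligibility trace that a global reward signal converts into plasticity. All parameter values, variable names, and the exact form of the update rule are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (hypothetical values, not taken from the paper)
n_syn = 100      # synapses onto the neuron
dt = 1.0         # time step (ms)
tau_m = 20.0     # membrane time constant (ms)
tau_e = 500.0    # eligibility-trace time constant (ms)
beta = 5.0       # escape-noise sharpness
theta = 1.0      # firing threshold
w_eff = 0.1      # efficacy of one successful release (hypothetical)
eta = 0.05       # learning rate for the release probabilities

p_rel = np.full(n_syn, 0.5)   # binary synapses: plastic release probability
elig = np.zeros(n_syn)        # synaptic eligibility traces
u = 0.0                       # membrane potential

def step(pre_spikes, reward=None):
    """One time step: stochastic release, escape-noise spiking,
    eligibility decay, and reward-gated plasticity."""
    global u, p_rel
    # each binary synapse transmits a presynaptic spike with prob. p_rel
    released = pre_spikes & (rng.random(n_syn) < p_rel)
    u += dt * (-u / tau_m) + w_eff * released.sum()
    # escape noise: spiking is probabilistic near the threshold
    rho = 1.0 / (1.0 + np.exp(-beta * (u - theta)))
    post_spike = rng.random() < rho
    # causal pre-post coincidences charge a slowly decaying trace
    elig[:] = elig * np.exp(-dt / tau_e)
    if post_spike:
        elig[released] += 1.0
        u = 0.0  # reset after a spike
    # a global reward signal (e.g. dopamine) converts the eligibility
    # traces into changes of the release probabilities
    if reward is not None:
        p_rel = np.clip(p_rel + eta * reward * elig, 0.0, 1.0)
    return post_spike

# one trial: sparse random input, reward delivered at the end
for t in range(1000):
    step(rng.random(n_syn) < 0.02)
step(np.zeros(n_syn, dtype=bool), reward=1.0)
```

The key point of such a scheme is that the eligibility trace bridges the delay between a causal pre-post coincidence and the later, global reward signal, so that only recently active synapses are modified.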

We have applied the learning algorithm to a model of the Morris water maze task. The simulated rat initially explores the environment by random search. After only a few trials it has learned to approach the goal from arbitrary start positions, see Figure 1. The model generalizes automatically in state and action space because positions and actions are coded by overlapping tuning profiles of place cells and action cells [4].
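The generalization mechanism can be illustrated with a short sketch of the population coding: overlapping Gaussian place fields make nearby positions produce similar activity patterns, and action cells with preferred headings on a ring select a movement direction stochastically. The arena layout, cell counts, tuning widths, and the softmax action selection below are illustrative assumptions, not the exact model of [4].

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population sizes and tuning widths (unit-radius arena)
n_place = 100        # place cells tiling the arena
n_action = 36        # action cells with preferred headings on a ring
sigma = 0.2          # width of the Gaussian place fields

centers = rng.uniform(-1.0, 1.0, size=(n_place, 2))        # field centres
pref_angle = np.linspace(0.0, 2 * np.pi, n_action, endpoint=False)
w = rng.normal(0.0, 0.1, size=(n_action, n_place))         # plastic weights

def place_activity(pos):
    """Overlapping Gaussian place fields: nearby positions yield
    similar population vectors, which drives generalization in space."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def choose_heading(pos):
    """Action cells pool place-cell input; a softmax over their
    activations selects a movement direction stochastically."""
    a = w @ place_activity(pos)
    p = np.exp(a - a.max())
    p /= p.sum()
    k = rng.choice(n_action, p=p)
    return pref_angle[k]

heading = choose_heading(np.array([0.3, -0.2]))
```

Because neighbouring place cells are co-active, a reward-driven weight update triggered at one location also improves the choice of heading at nearby locations; the same argument applies to neighbouring action cells.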

Figure 1. Escape latency versus number of trials. Escape latency measures the time the simulated rat needs to reach the hidden platform from arbitrary initial conditions. Learning is achieved in fewer than 20 trials. Error bars indicate the 25th and 75th percentiles.

References

  1. Pfister JP, Toyoizumi T, Barber D, Gerstner W: Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation 2006, 18(6):1309-1339.

  2. Florian RV: Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation 2007, 19(6):1468-1502.

  3. Izhikevich EM: Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex 2007, 17:2443-2452.

  4. Strösslin T, Sheynikhovich D, Chavarriaga R, Gerstner W: Robust self-localisation and navigation based on hippocampal place cells. Neural Networks 2005, 18(9):1125-1140.