  • Poster presentation
  • Open access

Controlling neuronal fluctuations for directed exploration during reinforcement learning

Introduction

Neuronal and synaptic fluctuations have both been proposed to underlie reward-controlled learning [1, 2] and have been used to explain song learning in songbird area RA [3]. The songbird area LMAN provides perturbations to area RA that are necessary for learning [4], suggesting that LMAN might target specific subsets of RA neurons and control the corresponding noise level for directed experimentation. Here we explore this hypothesis by investigating algorithms that control the amount of noise so as to yield efficient reinforcement learning in large networks. Our research is guided by previous work on exploration for learning which exploits information gain [5]. We find that noise control can strongly increase learning efficiency, thereby attenuating the curse of dimensionality. Our results suggest that area LMAN directs experimentation through the targeted injection and control of noise in RA, which might also have testable implications for learning in other motor pathways.
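As a rough illustration of the idea (a sketch, not part of the original abstract), the Python snippet below implements node-perturbation reinforcement learning on a toy linear network in which the exploration-noise amplitude of each output neuron is adapted individually. The per-neuron error magnitude is used as a crude stand-in for expected information gain, and all parameter values and the quadratic toy task are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: learn a linear map y = W x that matches a fixed target map.
    n_in, n_out = 20, 20
    W_target = rng.normal(size=(n_out, n_in))
    W = np.zeros((n_out, n_in))

    sigma = np.full(n_out, 0.5)     # per-neuron exploration noise amplitude ("LMAN-like" drive)
    eta_w, eta_sigma = 0.05, 0.01   # learning rates for weights and for noise control
    reward_baseline = 0.0

    for trial in range(5000):
        x = rng.normal(size=n_in)
        xi = sigma * rng.normal(size=n_out)   # targeted perturbation of each output neuron
        y = W @ x + xi                        # perturbed motor output
        err = y - W_target @ x
        reward = -np.mean(err ** 2)           # scalar reward: negative squared error

        # Node-perturbation update: correlate each neuron's perturbation with
        # the reward change relative to a running baseline (cf. [3]).
        W += eta_w * (reward - reward_baseline) * np.outer(xi / (sigma ** 2 + 1e-8), x)
        reward_baseline += 0.05 * (reward - reward_baseline)

        # Noise control: keep exploring neurons whose error is still large and
        # quiet those that already perform well (error magnitude as a crude
        # proxy for expected information gain).
        sigma += eta_sigma * (np.abs(err) - sigma)
        sigma = np.clip(sigma, 0.01, 1.0)

    print("final mean squared weight error:", np.mean((W - W_target) ** 2))

In this sketch, uniform fixed noise makes convergence slow in high dimensions because every trial perturbs all neurons equally; concentrating noise on the neurons that still contribute most of the error is one simple way to realize the directed exploration discussed above.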

References

  1. Xie X, Seung HS: Learning in neural networks by reinforcement of irregular spiking. Physical Review E. 2004, 69: 041909. 10.1103/PhysRevE.69.041909.

  2. Seung HS: Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron. 2003, 40: 1063-1073. 10.1016/S0896-6273(03)00761-X.

  3. Fiete I, Fee M, Seung HS: Model of birdsong learning based on gradient estimation by dynamic perturbation of neural conductances. Journal of Neurophysiology. 2007, 98: 2038-2057. 10.1152/jn.01311.2006.

  4. Ölveczky B, Andalman A, Fee M: Vocal experimentation in the juvenile songbird requires a basal ganglia circuit. PLoS Biol. 2005, 3: e153. 10.1371/journal.pbio.0030153.

  5. Si B, Pawelzik K: Robot exploration by subjectively maximizing objective information gain. IEEE International Conference on Robotics and Biomimetics. 2004, 930-935.


Author information

Corresponding author

Correspondence to Orlando Areval.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License 2.0 (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Areval, O., Pawelzik, K. Controlling neuronal fluctuations for directed exploration during reinforcement learning. BMC Neurosci 10 (Suppl 1), P138 (2009). https://doi.org/10.1186/1471-2202-10-S1-P138
