This article is part of the supplement: Nineteenth Annual Computational Neuroscience Meeting: CNS*2010

Open Access Poster Presentation

General form of learning algorithms for neuromorphic hardware implementation

Anatoli Gorchetchnikov1*, Massimiliano Versace1, Heather M Ames1, Ben Chandler1, Arash Yazdanbakhsh1, Jasmin Léveillé1, Ennio Mingolla1 and Greg Snider2

Author Affiliations

1 Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA

2 HP Labs, Palo Alto, CA 94304, USA


BMC Neuroscience 2010, 11(Suppl 1):P91  doi:10.1186/1471-2202-11-S1-P91


Published: 20 July 2010

© 2010 Gorchetchnikov et al; licensee BioMed Central Ltd.


The DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative aims to create a new generation of high-density, low-power chips capable of replicating the adaptive and intelligent behavior observed in animals. To ensure high speed, low power consumption, and parallel learning across billions of synapses, the learning laws that govern this adaptive behavior must be implemented in hardware. Over the past decades, a multitude of learning laws has been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To implement as many of these laws as possible in hardware, a general, easily parameterized form of learning law must be designed and implemented on the chip. Such a general form would allow multiple learning laws to be instantiated through different parameterizations, without rewiring the hardware.

From the perspectives of usefulness, stability, homeostatic properties, and spatial and temporal locality, this project analyzes four categories of existing learning rules:

1. Hebb rule derivatives with various methods for gating learning and decay;

2. Threshold rule variations including the covariance and BCM families;

3. Error-based learning rules; and

4. Reinforcement rules.

For each category, a general form that can be implemented in hardware was derived. Even more general forms that span multiple categories are also suggested.
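The abstract does not state the derived general form itself. Purely as an illustration of the idea, the following sketch shows one hypothetical parameterization (not the authors' actual form) in which a single update equation, dw = eta * gate * pre * (post - theta) - decay * w, instantiates members of each of the four categories through different settings of `gate`, `theta`, and `decay`:

```python
def general_rule(w, pre, post, eta=0.01, theta=0.0, gate=1.0, decay=0.0):
    """Hypothetical general learning-law form (illustrative only):

        w_new = w + eta * gate * pre * (post - theta) - decay * w

    Example parameterizations:
      theta = 0, gate = 1, decay = 0      -> plain Hebb rule
      theta = running mean of post        -> covariance-family threshold rule
      gate  = post (dw ~ post*(post - theta)*pre)  -> BCM-like rule
      theta = target output               -> error-based (delta-style) rule
      gate  = reward signal               -> three-factor reinforcement rule
    """
    return w + eta * gate * pre * (post - theta) - decay * w


# Plain Hebbian step: correlated pre/post activity strengthens the weight.
w = general_rule(0.5, pre=1.0, post=1.0, eta=0.1)
```

The appeal of such a form for hardware is that every synapse runs the same fixed update circuit, and switching learning laws only changes a few stored parameters rather than the wiring.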


AG, MV, HMA, BC, AY, JL, and EM were partially supported by CELEST, an NSF Science of Learning Center [SBE-0354378]. AG, MV, HMA, BC, EM, and GS were partially supported by the SyNAPSE program of the Defense Advanced Research Projects Agency [HR001109-03-0001]. BC was partially supported by the National Science Foundation's IGERT Program [DGE-0221680].