
This article is part of the supplement: Sixteenth Annual Computational Neuroscience Meeting: CNS*2007

Open Access Oral presentation

Efficient supervised learning in networks with binary synapses

Carlo Baldassi1*, Alfredo Braunstein1, Nicolas Brunel1,2 and Riccardo Zecchina1,3

Author Affiliations

1 ISI Foundation, Torino, Italy

2 Laboratory of Neurophysics and Physiology, CNRS-Un. Paris 5, Paris, France

3 Int. Centre for Theoretical Physics (ICTP), Trieste, Italy


BMC Neuroscience 2007, 8(Suppl 2):S13  doi:10.1186/1471-2202-8-S2-S13

Published: 6 July 2007

© 2007 Baldassi et al; licensee BioMed Central Ltd.

Oral presentation

Recent experiments [1,2] have suggested that single synapses could behave as noisy binary switches. Binary synapses would have the advantage of robustness to noise and hence could preserve memory over longer time scales than analog systems. Learning in systems with discrete synapses is known to be a computationally hard problem. We developed and studied a neurobiologically plausible on-line learning algorithm derived from Belief Propagation algorithms. This algorithm performs remarkably well in a model neuron with N binary synapses and a discrete number of 'hidden' states per synapse, which has to learn a random classification problem. Such a system is able to learn a number of associations close to the information-theoretic limit, in a time that is sub-linear in system size, corresponding to very few presentations of each pattern. Furthermore, performance is optimal for a finite number of hidden states, which scales as N^(1/2) for dense coding but is much lower (~10) for sparse coding (see Figure 1). To our knowledge, this is the first on-line algorithm able to efficiently achieve a finite capacity (number of patterns learned per synapse) with binary synapses.
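To make the setting concrete, the following is a minimal sketch in Python of the model neuron described above; the variable names and the numbers chosen for N, the number of hidden states and the pattern statistics are illustrative assumptions, not values taken from the abstract. Each synapse carries a discrete hidden state whose sign gives its binary weight, and the neuron must learn to classify random input patterns.

import numpy as np

# Minimal sketch of the model: a neuron with N binary synapses, each backed by
# a discrete hidden state, trained on a random classification problem.
# All names and numbers here are illustrative assumptions.
rng = np.random.default_rng(0)

N = 1000                                  # number of synapses
K = 10                                    # hidden states per sign (sparse-coding regime)
P = 400                                   # number of random associations to learn

# Hidden states take values in {-K, ..., -1, +1, ..., +K};
# the visible binary weight is simply the sign of the hidden state.
h = rng.choice([-1, 1], size=N)           # start in the shallowest states
w = np.sign(h)                            # binary synaptic weights in {-1, +1}

patterns = rng.integers(0, 2, size=(P, N))   # random binary input patterns
labels = rng.choice([-1, 1], size=P)         # desired classifications

def output(w, x, theta=0.0):
    """Neuron response: sign of the synaptic drive relative to a threshold."""
    return 1 if w @ x - theta > 0 else -1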

Figure 1. Learning capacity and learning time. (Left) Achieved capacity versus the number of synapses N, for different numbers of hidden states, in the sparse-coding case: the algorithm can achieve up to 70% of the maximal theoretical capacity at N ~ 10000 with 10 hidden states. (Right) Average learning time (number of presentations per pattern) versus the number of patterns to be learned, for N = 64000: fewer than 100 presentations are required up to the critical point where learning fails.

The algorithm is similar to the standard 'perceptron' learning algorithm, but with an additional rule for synaptic transitions that applies only when a currently presented pattern is 'barely correct' (that is, a single synaptic flip would have caused an error). In this case, the synaptic changes are meta-plastic only (a change in the hidden state, not in the actual synaptic state) and act to stabilize the synapse in its current state. This rule is crucial to the algorithm's performance, and we suggest that it is simple enough to be easily implemented by neurobiological systems.
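A simplified sketch of this update rule, continuing the Python setup above, is given below. It is a hedged illustration of the rule as described in the text, not the authors' exact algorithm: the margin used to detect a 'barely correct' pattern and the clipping of hidden states at +/-K are assumptions.

def present_pattern(h, x, y, K, theta=0.0, margin=2.0):
    """One on-line presentation of pattern x with desired output y (+1 or -1).

    Hidden states h (in {-K,...,-1,+1,...,+K}) are updated in place;
    the visible binary weight is always sign(h).
    """
    w = np.sign(h)
    drive = y * (w @ x - theta)           # stability of the current response
    active = x != 0                       # synapses receiving input

    if drive <= 0:
        # Error: perceptron-like step on the hidden states of active synapses,
        # moving them towards the desired output (this may flip the weight).
        h[active] = np.clip(h[active] + y, -K, K)
        h[h == 0] = y                     # hidden states skip zero
    elif drive <= margin:
        # 'Barely correct' (a single synaptic flip would have caused an error):
        # meta-plastic change only, pushing active synapses deeper into their
        # current state so the visible binary weight does not change.
        h[active] = np.clip(h[active] + np.sign(h[active]), -K, K)
    return h

Repeatedly presenting the P patterns in random order and applying this update would correspond to the on-line protocol whose capacity and learning times are summarized in Figure 1.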

References

  1. Petersen CC, Malenka RC, Nicoll RA, Hopfield JJ: All-or-none potentiation at CA3-CA1 synapses. Proc Natl Acad Sci USA 1998, 95:4732-4737.

  2. O'Connor DH, Wittenberg GM, Wang SSH: Graded bidirectional synaptic plasticity is composed of switch-like unitary events. Proc Natl Acad Sci USA 2005, 102:9679-9684.