
This article is part of the supplement: Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Poster presentation

Back-engineering of spiking neural networks parameters

Horacio Rostro-Gonzalez1*, Bruno Cessac1,2, Juan Carlos Vasquez1 and Thierry Viéville3

Author Affiliations

1 NEUROMATHCOMP, INRIA Sophia-Antipolis Méditerranée, France

2 LJAD, University of Nice, Sophia, France



BMC Neuroscience 2009, 10(Suppl 1):P289  doi:10.1186/1471-2202-10-S1-P289


Published: 13 July 2009

© 2009 Rostro-Gonzalez et al; licensee BioMed Central Ltd.


We consider the deterministic evolution of a time-discretized spiking neural network with delayed connection weights, based on a generalized integrate-and-fire (gIF) neuron model with synapses [1]. The purpose is to study a class of algorithmic methods able to calculate the proper parameters (weights and delayed weights) allowing the reproduction of a spike train produced by an unknown neural network.
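As a toy illustration of such time-discretized dynamics with delayed weights (a minimal sketch, not the exact gIF model of [1]: the update rule, leak factor and parameter names below are our simplification), one may write:

```python
import numpy as np

def simulate_toy_gif(W, gamma=0.9, theta=1.0, I_ext=0.25, T=50):
    """Simulate a toy discrete-time gIF-like network.

    W has shape (D, N, N): W[d, i, j] is the weight from neuron j to
    neuron i at delay d+1 (a simplified stand-in for the gIF dynamics).
    """
    D, N, _ = W.shape
    V = np.zeros(N)               # membrane potentials
    Z = np.zeros((T + D, N))      # spike raster, padded for the delays
    for t in range(D, T + D):
        # delayed synaptic input: sum over delays of W[d] @ Z[t-1-d]
        syn = sum(W[d] @ Z[t - 1 - d] for d in range(D))
        # leak, reset after a spike, plus a constant external current
        V = gamma * V * (1 - Z[t - 1]) + syn + I_ext
        Z[t] = (V >= theta).astype(float)
    return Z[D:]                  # drop the padding rows

rng = np.random.default_rng(0)
raster = simulate_toy_gif(rng.normal(0, 0.2, size=(3, 5, 5)))
print(raster.shape)  # (50, 5)
```

Given the observed raster, the potential at every time step is a linear function of the weights, which is what the back-engineering methods below exploit.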


The problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear Programming (LP) problem, which admits an efficient resolution. Clearly this does not change the worst-case complexity of the problem, but the practical complexity is dramatically reduced at the implementation level. More precisely, we make explicit the fact that back-engineering a spike train (i.e., finding a set of parameters, given a set of initial conditions) is a Linear (L) problem if the membrane potentials are observed, and an LP problem if only spike times are observed, for a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, rather than a leaky IF model, that allows this algorithm to be derived. Furthermore, we point out that the L or LP adjustment mechanism is distributed and has the same architecture as a "Hebbian" rule. Going one step further, this paradigm generalizes easily to the design of input-output spike train transformations.
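The LP idea can be sketched on a toy single-neuron problem (the setup and all names below are hypothetical, and scipy's linprog stands in for a dedicated solver): each observed spike or silence at time t yields one linear inequality on the unknown weights, with a margin eps for numerical robustness.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, K = 40, 6                         # observed time steps, delayed weights

# Toy setup: the potential at time t is linear in the unknown weights w,
# with coefficients phi[t] determined by the observed raster.
phi = rng.random((T, K))             # delayed synaptic contributions
w_true = rng.normal(0, 0.5, K)       # hidden parameters to recover
p = phi @ w_true                     # resulting potentials

# Place the threshold mid-gap so a positive margin eps is feasible.
ps = np.sort(p)
k = T // 2
theta = 0.5 * (ps[k - 1] + ps[k])
eps = 0.25 * (ps[k] - ps[k - 1])
z = (p >= theta).astype(float)       # spike train to reproduce

# One inequality per time step: potential >= theta + eps where a spike
# is observed, <= theta - eps elsewhere.
sign = 1.0 - 2.0 * z
res = linprog(c=np.zeros(K),
              A_ub=sign[:, None] * phi,
              b_ub=sign * theta - eps,
              bounds=[(-5, 5)] * K, method="highs")
w_hat = res.x
print(np.array_equal((phi @ w_hat >= theta).astype(float), z))  # True
```

Any feasible point of this LP reproduces the observed spike train exactly; the margin eps plays the role of the numerical-robustness discussion above.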


Numerical implementations are proposed in order to verify that it is always possible to simulate an expected spike train. The results obtained show that this is true, except for singular cases. In a first experiment, we consider the linear problem and use the singular value decomposition (SVD) to obtain a solution, allowing a better understanding of the geometry of the problem. When the aim is to find the proper parameters from the observation of spikes only, we consider the related LP problem, and the numerical solutions are derived thanks to the well-established improved simplex method as implemented in the GLPK library. Several variants and generalizations are carefully discussed, showing the versatility of the method.
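For the linear case (membrane potentials observed), a minimal sketch of the SVD-based resolution, using an arbitrary toy regressor matrix rather than the actual gIF observability matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
T, K = 30, 8
Phi = rng.normal(size=(T, K))       # toy regressors built from the raster
w_true = rng.normal(size=K)
v = Phi @ w_true                    # observed membrane potentials

# Solve Phi w = v through the SVD: small singular values expose the
# near-singular directions in which the parameters are poorly constrained.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
tol = 1e-10 * s[0]
r = int(np.sum(s > tol))            # numerical rank
w_hat = Vt[:r].T @ ((U[:, :r].T @ v) / s[:r])
print(np.allclose(w_hat, w_true))   # True
```

Truncating at the numerical rank r is what reveals the singular cases mentioned above: directions with near-zero singular values correspond to parameter combinations the observed potentials do not constrain.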


Learning the parameters of a neural network model is a complex issue. In a biological context, this learning mechanism is mainly related to synaptic weight plasticity and, as far as spiking neural networks are concerned, to STDP [2]. In the present study the point of view is quite different, since we consider supervised learning in order to implement the previous capabilities. To what extent we can "back-engineer" the neural network parameters in order to constrain the neural network activity is the key question addressed here.


Partially supported by the ANR MAPS & the MACCAC ARC projects.


  1. Cessac B, Viéville T: On dynamics of integrate-and-fire neural networks with adaptive conductances. Front Comput Neurosci 2008, 2:2.

  2. Bohte SM, Mozer MC: Reducing the variability of neural responses: A computational theory of spike-timing-dependent plasticity. Neural Computation 2007, 19:371-403.

  3. Baudot P: Nature is the code: high temporal precision and low noise in V1. PhD thesis. 2007.