This article is part of the supplement: Nineteenth Annual Computational Neuroscience Meeting: CNS*2010

Open Access Poster Presentation

Using GLMs to recover sparse connectivity in complex networks

Daniel Cook1*, Duncan Gillies1 and Simon Schultz2

Author Affiliations

1 Department of Computing, Imperial College, London, UK

2 Department of Bioengineering, Imperial College, London, UK

BMC Neuroscience 2010, 11(Suppl 1):P53  doi:10.1186/1471-2202-11-S1-P53

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/11/S1/P53


Published: 20 July 2010

© 2010 Cook et al; licensee BioMed Central Ltd.

Information processing at the network level depends strongly on the underlying network structure. Existing functional connectivity measures are useful, but it is unclear how well they coincide with the true underlying anatomical connectivity of the network. In particular, it is difficult to avoid false positives: inferring links between neurons that are correlated but not directly connected. Moreover, the literature does not currently appear to define standardized measures for evaluating the performance of connectivity recovery algorithms.

We introduce an efficient technique for discovering the sparse network structure of complex neuronal networks from spike train data, and a simple metric by which the performance of this algorithm (and others) may be judged. We then test our technique by generating simulated spike trains from networks with a known structure, and comparing the recovered connectivity against the original (anatomical) one.

Network connectivity is recovered in two stages. We begin by fitting a generalized linear model (GLM) to the spike train data using maximum likelihood. The model includes interneuronal coupling terms to capture the network's pairwise connectivity. We then use the Bayesian Information Criterion (BIC) to remove extraneous edges: any coupling whose removal (setting it to zero) improves the BIC score is pruned. This technique requires only a single trial of data (multiple trials can be combined through simple concatenation) and can be used in both evoked and spontaneous contexts.
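The two-stage procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Poisson GLM with an exponential link and lag-1 coupling terms on a small, synthetic three-neuron example (the ground-truth coupling from neuron 0 to neuron 2 is invented for demonstration).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: 3 neurons, T time bins. Neuron 2 is driven by neuron 0's
# previous-bin spikes (hypothetical ground truth for illustration).
T = 2000
spikes = np.zeros((T, 3))
spikes[:, 0] = rng.random(T) < 0.10
spikes[:, 1] = rng.random(T) < 0.10
drive = 0.05 + 0.60 * np.roll(spikes[:, 0], 1)
spikes[:, 2] = rng.random(T) < drive

def neg_ll(w, X, y):
    """Negative Poisson log-likelihood with exponential link."""
    log_rate = X @ w
    return np.sum(np.exp(log_rate)) - y @ log_rate

def fit_glm(X, y):
    res = minimize(neg_ll, np.zeros(X.shape[1]), args=(X, y))
    return res.x, res.fun

def bic(nll, n_params, n_obs):
    return 2.0 * nll + n_params * np.log(n_obs)

# Stage 1: maximum-likelihood fit of the full model for target neuron 2,
# with a bias term plus lag-1 couplings from the other neurons.
lagged = np.roll(spikes, 1, axis=0)[1:]
y = spikes[1:, 2]
X = np.column_stack([np.ones(len(y)), lagged[:, 0], lagged[:, 1]])
w_full, nll_full = fit_glm(X, y)
bic_full = bic(nll_full, X.shape[1], len(y))

# Stage 2: BIC pruning. Refit with each coupling removed in turn; keep
# the edge only if removing it worsens (raises) the BIC.
kept = []
for j, src in enumerate([0, 1]):
    X_red = np.delete(X, j + 1, axis=1)
    _, nll_red = fit_glm(X_red, y)
    if bic(nll_red, X_red.shape[1], len(y)) > bic_full:
        kept.append(src)  # removing this edge hurts the fit: keep it

print("recovered inputs to neuron 2:", kept)
```

In practice each neuron is treated as a target in turn, so the full recovered adjacency matrix is assembled one row of couplings at a time; the BIC's log(n) penalty is what drives the recovered structure toward sparsity.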

To test the technique's performance, we simulate spontaneous spike trains from a medium-sized network (150 neurons) of Izhikevich neurons. This network has a "clustered" structure, as depicted in Figure 1. Qualitatively, we can see that the clustered structure of the network is recovered. We can also quantitatively characterize reconstruction quality by treating our algorithm as a binary classifier (each edge either exists or does not). Figure 2 shows the receiver operating characteristic (ROC) curve of our classifier for the medium-sized simulation as data length increases from 100 to 2000 spikes per neuron. Our technique provides adequate reconstructions with even moderate amounts of data, and more data leads to drastic improvements in performance (roughly ten-fold more true positives than false positives). It requires few computational resources and identifies sparse network structures quite successfully.
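The binary-classifier evaluation works by comparing the recovered adjacency matrix against the known ground-truth one, entry by entry. A minimal sketch, using small illustrative matrices (not the paper's 150-neuron simulation), computes the true-positive and false-positive rates that define one point on the ROC curve:

```python
import numpy as np

# Illustrative 3x3 adjacency matrices: entry (i, j) = 1 means an edge
# from neuron i to neuron j. Values are invented for demonstration.
true_adj = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [1, 0, 0]])
recovered = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]])

mask = ~np.eye(3, dtype=bool)        # ignore self-connections
t, r = true_adj[mask], recovered[mask]

tp = np.sum((t == 1) & (r == 1))     # edges correctly found
fp = np.sum((t == 0) & (r == 1))     # spurious edges
fn = np.sum((t == 1) & (r == 0))     # missed edges
tn = np.sum((t == 0) & (r == 0))     # absent edges correctly absent

tpr = tp / (tp + fn)                 # hit rate (y-axis of the ROC)
fpr = fp / (fp + tn)                 # false-alarm rate (x-axis)
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```

Sweeping a threshold on the strength of the fitted coupling terms (rather than a single keep/prune decision) traces out the full ROC curve, which is how performance can be compared across data lengths.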

Figure 1. Connectivity matrices.

Figure 2. Recovery performance.

Future work will test the method under noise regimes including dropped spikes and “dark” neurons. The technique will be applied to large-scale spike train recordings from electrophysiological or optogenetic experiments, to test hypotheses concerning cortical connectivity patterns.