
This article is part of the supplement: Abstracts from the Twenty Third Annual Computational Neuroscience Meeting: CNS*2014

Open Access Keynote speaker presentation

How to build large, multi-scale, functional brain models

Chris Eliasmith

Author Affiliations

Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1 Canada

BMC Neuroscience 2014, 15(Suppl 1):A1  doi:10.1186/1471-2202-15-S1-A1


Published: 21 July 2014

© 2014 Eliasmith; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


Recently, several large-scale brain models have been presented, including those from the Human Brain Project and IBM's SyNAPSE Project. However, these large, complex models do not exhibit interesting psychological (i.e., motor, perceptual, and cognitive) behaviors. Consequently, they are difficult to compare to much of what we know about the brain. In this talk, I describe the methods (e.g., the Neural Engineering Framework) and tools (e.g., Nengo) used to construct what is currently the largest *functional* brain simulation. This model is called the Semantic Pointer Architecture Unified Network (Spaun) and uses 2.4 million spiking neurons organized to respect known anatomical and physiological constraints. I demonstrate the variety of behaviors the model exhibits and show that it is similar in many respects to human and animal behavior. I show how Spaun allows comparison of the model to data across scales and across measurement modalities (e.g., spike trains, reaction times, error rates). I argue that constructing such large-scale simulations that permit this broad range of comparison to data is critical for advancing our understanding of neural and cognitive function, and I suggest that it helps to unify our understanding of how the mind works.
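To give a flavor of the Neural Engineering Framework mentioned above, the sketch below illustrates its first principle, representation, in plain NumPy: a scalar value is encoded into the activities of a population of neurons (here, rectified-linear tuning curves with randomly chosen encoders, gains, and biases, all illustrative choices, not parameters from Spaun or Nengo) and then decoded back out with least-squares decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 50
x = np.linspace(-1.0, 1.0, 100)  # scalar values to be represented

# Illustrative neuron parameters: random preferred direction (+1/-1),
# gain, and bias for each neuron in the population.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

# Encoding: rectified-linear tuning curves a_i(x) = max(0, gain_i * e_i * x + bias_i).
# A has shape (len(x), n_neurons): one activity column per neuron.
A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

# Decoding: solve for decoders d minimizing ||A d - x||^2 (least squares),
# so the represented value can be read back out as a weighted sum of activities.
decoders, *_ = np.linalg.lstsq(A, x, rcond=None)

x_hat = A @ decoders
rmse = np.sqrt(np.mean((x - x_hat) ** 2))
```

In the NEF proper, the same encode/decode scheme is applied with spiking neuron models, and connection weights between populations are computed from the decoders of one population and the encoders of the next; this NumPy version only shows the static representation step. The decoding error `rmse` shrinks as `n_neurons` grows, which is one reason models at Spaun's scale can represent structured quantities accurately.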