Sean Hill on the Human Brain Project: computing challenges to research advances

Posted by Biome on 9th July 2014


The Human Brain Project is an ambitious initiative that aims to simulate a complete human brain in a supercomputer, in the hope of bringing about a better understanding of the human brain and the diseases that afflict it, as well as stimulating progress in computing technologies. Funded by the European Commission, this ten-year programme is tasked with developing six Information and Communications Technology (ICT) platforms to provide neuroscientists with access to data across the six major areas of research underpinning the project, from neuroinformatics and neurorobotics to high performance computing and medical informatics. We asked Sean Hill, Professor at the École polytechnique fédérale de Lausanne (EPFL), Switzerland, and co-Director of neuroinformatics at the Human Brain Project, to talk us through the aims of the project, the challenges it faces, and the potential benefits to be gained.

 

What is the rationale behind the Human Brain Project?

The Human Brain Project is a ten-year effort, funded by the European Commission as a Future and Emerging Technologies (FET) Flagship programme. Its goal is really to develop and accelerate our understanding of the human brain, both for the purposes of neuroscience and for better diagnosis and treatment of disease, as well as to drive the development of new computing technologies.

 

What will the six Information and Communication Technology (ICT) platforms planned for the Human Brain Project provide?

I think the core idea of the Human Brain Project is that we’ll provide these platforms where neuroscientists can come to find data, to search for data and to identify relationships between datasets. One of these platforms is the neuroinformatics platform, whose goal is to help organise data and make it accessible in a way where you can find it both spatially and semantically, and where you can also build models from it.

If I want to understand a particular brain structure – for example, if I want to build a model of the thalamus – where can I find all the data that I need to build that model and also to validate it? There is a lot of data out there that maybe you can’t use to build a model but can use to validate it. These platforms are the place where you can come to find data (not just for building models but for any type of analysis), to build a model, to validate it, and to couple it to, for example, a virtual agent with simulated sensory input and motor output. You then have a tool for a neuroscientist to really understand the function of a region of the brain under particular conditions.
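As a rough, hypothetical sketch of what this kind of semantic search workflow could look like in practice, the Python below models a toy client that filters datasets by annotated metadata such as brain region, species and data type. The NeuroDataClient class, its search method and all field names are invented for illustration; they are not the Human Brain Project’s actual API.

```python
# Hypothetical sketch of a semantic data query against a neuroinformatics
# platform. NeuroDataClient and its fields are invented for illustration;
# they do not correspond to any real Human Brain Project API.

class NeuroDataClient:
    """Toy stand-in for a platform client that indexes datasets by
    annotated metadata (brain region, species, data type)."""

    def __init__(self, datasets):
        self.datasets = datasets

    def search(self, region=None, species=None, data_type=None):
        # Semantic filter: keep datasets whose metadata match the query.
        return [d for d in self.datasets
                if (region is None or d["region"] == region)
                and (species is None or d["species"] == species)
                and (data_type is None or d["data_type"] == data_type)]


# Example: gather one set of data to build a thalamus model, and a
# separate set to hold back for validating it.
datasets = [
    {"name": "thalamic_morphologies", "region": "thalamus",
     "species": "rat", "data_type": "morphology"},
    {"name": "lgn_firing_rates", "region": "thalamus",
     "species": "rat", "data_type": "electrophysiology"},
]
client = NeuroDataClient(datasets)
build_data = client.search(region="thalamus", data_type="morphology")
validation_data = client.search(region="thalamus",
                                data_type="electrophysiology")
print([d["name"] for d in build_data])       # data for building the model
print([d["name"] for d in validation_data])  # data held back for validation
```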

To take a concrete example, say I want to understand how visual perception operates during wakefulness. I want to configure a visual thalamocortical circuit, provide the simulated retinal input, and put it into an environment where I can see what actually happens, and then compare that with all the available data showing the real circuit under those conditions.
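To make the idea of “simulated retinal input” concrete, here is a deliberately minimal sketch, assuming nothing beyond NumPy: a Poisson spike train standing in for aggregate retinal drive feeds a single leaky integrate-and-fire cell standing in for a thalamic relay neuron. All parameters are invented round numbers; the project’s actual circuit models are vastly larger and built from measured data.

```python
import numpy as np

# Minimal illustrative sketch (not the Human Brain Project's models):
# a Poisson "retinal" spike train drives one leaky integrate-and-fire
# thalamic relay neuron. All parameters are invented placeholders.

rng = np.random.default_rng(0)
dt = 0.1e-3          # time step: 0.1 ms
T = 1.0              # simulate 1 second
steps = int(T / dt)

rate = 800.0         # aggregate rate across converging retinal fibres (Hz)
w = 1.5e-3           # synaptic weight: 1.5 mV depolarisation per spike (V)

tau = 20e-3          # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -55e-3, -70e-3

v = v_rest
spike_times = []
for step in range(steps):
    # Leak toward resting potential.
    v += dt * (v_rest - v) / tau
    # Poisson retinal input: one spike with probability rate*dt per step.
    if rng.random() < rate * dt:
        v += w
    # Threshold crossing -> relay cell fires and resets.
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"Relay cell fired {len(spike_times)} spikes in {T:.0f} s")
```

In a real workflow the output spike train would then be compared against recordings of the actual circuit under the same conditions, which is exactly the validation step described above.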

 

How is funding for the project being prioritised across different branches of neuroscience?

Our focus is really on these ICT platforms. We’re not changing the neuroscience research agenda. What we are doing is contributing a tool so that those who are doing the research can find relationships across the different subdomains of neuroscience. These platforms are the core.

 

The ultimate goal of the Human Brain Project is to build a computer simulation of the human brain. What challenges will arise from this?

I think the real challenge is that there are so many principles we still have to learn in order to predict the composition and wiring of the human brain. The challenge is to build the tools you can use to predict those relationships from gene expression. If you want to understand why a neuron in the brain has a particular morphology, then what are the genes required to define that morphology? Or its electrical properties, its connectivity, its synapses? It’s about putting in place the tools to find the principles that let you predict, within a species but ideally across species, so that we can really start to understand how the brain is wired up and how it is built. That will be a tool for helping us find the right experiments to perform next. Ideally we want to make the discovery process more efficient and more rapid.

 

Previous work has shown that patterns in the brain, such as where synapses form, are amenable to modelling. How important is pattern recognition in the Human Brain Project?

This is an example of one of those principles that are so important for us to keep discovering – principles of how the nervous system is built, so that we can learn what happens when something goes wrong. For example, from a project where we learned how to predict the positions of the neurons, we can see right away that the connectivity will change in a very specific way if the morphologies of neurons are altered, for example through developmental problems. We also learned that cortical circuits are actually incredibly robust even if you start removing individual populations of neurons.

These principles are extremely valuable, first in understanding the way the brain is built, and therefore aspects of its function, but also in interpreting and understanding where problems are likely to occur and where there is built-in robustness.
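As a toy numerical illustration of this kind of principle (not the project’s actual wiring algorithm), the sketch below places model neurons at random positions, connects them with a distance-dependent probability rule, and then removes one subpopulation to see how much connectivity survives. Every parameter is an invented placeholder; real models derive such rules from measured morphologies.

```python
import numpy as np

# Toy illustration of a wiring principle: connection probability falls
# off with distance between neuron positions. All parameters are
# invented; real models derive such rules from measured morphologies.

rng = np.random.default_rng(42)
n = 200
positions = rng.uniform(0.0, 100.0, size=(n, 3))  # microns, arbitrary box

# Distance-dependent connection probability: p = p0 * exp(-d / lam).
p0, lam = 0.5, 30.0
d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
prob = p0 * np.exp(-d / lam)
np.fill_diagonal(prob, 0.0)                       # no self-connections
connected = rng.random((n, n)) < prob

print("Connections in intact circuit:", connected.sum())

# Remove one subpopulation (say, a quarter of the neurons) and see
# how much connectivity survives among the remaining cells.
keep = rng.random(n) > 0.25
surviving = connected[np.ix_(keep, keep)]
print(f"Kept {keep.sum()} of {n} neurons; "
      f"connections remaining: {surviving.sum()}")
```

Perturbing the positions (a stand-in for altered morphologies) changes the resulting connectivity in a specific, predictable way, while deleting a subpopulation still leaves most of the remaining wiring intact – a crude echo of the robustness described above.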

 

What are the computing challenges faced by the Human Brain Project?

We have the challenge, first of all, of federating and accessing large amounts of data, and extracting features, from all around the world. But we also have the supercomputing challenge of simulation. I am also the Scientific Director of the International Neuroinformatics Coordinating Facility in Sweden, and one of the challenges we are tackling there is exactly that – data federation. It’s a familiar problem and it’s solvable, but then running simulations and managing all of that output takes another set of expertise. In the Blue Brain Project we also built up a lot of experience doing that, so we have a handle on the boundaries of the problem. It is going to grow as we move up in scale, but luckily we have a number of leaders in these areas as part of the project. For example, colleagues at the Jülich Supercomputing Centre in Germany and the Swiss National Supercomputing Centre in Switzerland have been thinking about these challenges and are eager to tackle them.

 

How does the European Human Brain Project compare to the US BRAIN Initiative?

It’s wonderful, because whenever I meet a representative of that project they always say that you couldn’t have designed two more perfectly complementary projects. They’re really focused on getting multiple levels and layers of data about the brain, and on developing the technologies to measure more and more information about the brain, and we’re focused on data integration. So for every bit of data that they produce, we will be in a position to provide a platform where they can put it together and ask questions of it, and everybody in the world can collaboratively advance our understanding of the brain.

 

How open is data from the Human Brain Project?

The clinical data that we are talking about is not available at an individually identifiable level but, in principle, the platform should be available for clinicians to ask questions of and to run queries against. The general policy is one of open access; that doesn’t mean there is no use agreement, but it does mean the data will be openly accessible.

 

Despite the open neuroscience movement, motivating scientists to share their data is still an issue. Why do you think this is?

I think it is very clear: there is no incentive right now. At a fundamental, institutional level there is no incentive to share your data. What we are seeing, though, is that there are many good reasons to share your data, both in terms of reproducibility and in terms of gaining insight by combining your data with other datasets and being able to grow collaborations. I think new models of publication that value and reward making data available, and new indices that give impact scores to those who share data, are important. Different ways of recognising and rewarding data sharing are essential, and they are coming.

 

More about the researcher

 

Sean Hill, Professor, École polytechnique fédérale de Lausanne, Switzerland.

Sean Hill is Professor at the École polytechnique fédérale de Lausanne (EPFL), Switzerland, co-Director of the Blue Brain Project and co-Director of neuroinformatics at the Human Brain Project. He received his PhD in computational neuroscience from the University of Lausanne, Switzerland, and went on to pursue postdoctoral research in the USA at the Neurosciences Institute and the University of Wisconsin. Hill then joined the IBM T J Watson Research Center, where he was project manager for computational neuroscience in the Blue Brain Project, after which he joined the EPFL. He currently also serves as the Scientific Director of the International Neuroinformatics Coordinating Facility (INCF) at the Karolinska Institutet, Sweden. His research expertise centres on building and simulating large-scale models of brain circuitry, with a particular interest in the structure and dynamics of neocortical and thalamocortical microcircuitry.