
IBM supercomputer used to simulate a typical human brain

IBM researchers have simulated a virtual brain comparable in complexity to that of a human (Image: Shutterstock)

Using the world's fastest supercomputer and a new scalable, ultra-low power computer architecture, IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain – in an important step toward creating a true artificial brain.

Cognitive computing

The human brain, arguably the most complex object in the known universe, is a truly remarkable power-saver: it can simultaneously gather thousands of sensory inputs, interpret them in real time as a whole and react appropriately – abstracting, learning, planning and inventing – all on a strict power budget of about 20 W. According to IBM's own estimates, a computer of comparable complexity built with current technology would drain about 100 MW of power.

Clearly, such power consumption would be highly impractical, and the problem begs for an entirely new approach. IBM's answer is cognitive computing, a newly coined discipline that combines the latest discoveries in the fields of neuroscience, nanotechnology and supercomputing.

Neuroscience has taught us that the brain consumes so little power mainly because it is "event-driven": in simple terms, individual neurons, synapses and axons consume power only when they are activated – e.g. by an external sensory input or by other neurons – and consume none otherwise. This, however, is not the case with today's computers, which by comparison waste enormous amounts of power.

IBM's engineers leveraged this knowledge to build a novel computer architecture, and then used it to simulate a number of neurons and synapses comparable to what would be found in a typical human brain. The result is not a biologically or functionally accurate simulation of the human brain – it cannot sense, conceptualize, or "think" in any traditional sense of the word – but it is still a crucial step toward the creation of a machine that, one day, might do just that.

How it works

The advantages of the TrueNorth architecture, which was developed as part of DARPA’s SyNAPSE cognitive computing program (Image: IBM)

The researchers' starting point was CoCoMac, a comprehensive but incomplete database detailing the wiring of the macaque brain. After four years of painstaking work patching the database, the team obtained a workable dataset, which then guided the layout of their artificial brain.

Inside the system, the two main components are neurons and synapses.

Neurons are the computing centers: each neuron can receive input signals from up to ten thousand neighboring neurons, process the data, and then fire an output signal. Approximately 80 percent of the neurons are excitatory, meaning that when they fire they tend to excite neighboring neurons; the remaining 20 percent are inhibitory, tending to suppress their neighbors when they fire.
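
To make the excitatory/inhibitory split concrete, here is a minimal toy sketch in Python. It is purely illustrative: the integrate-and-fire behavior, class names, threshold value and 80/20 ratio below mirror the description above, not IBM's actual implementation.

```python
import random

class Neuron:
    """Toy integrate-and-fire neuron (illustrative, not IBM's model)."""

    def __init__(self, excitatory: bool, threshold: float = 1.0):
        # Excitatory neurons push their neighbors toward firing;
        # inhibitory neurons push them away from it.
        self.sign = 1.0 if excitatory else -1.0
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, weighted_input: float) -> None:
        # Accumulate signals arriving from upstream neurons
        # (in the real system, up to ten thousand of them).
        self.potential += weighted_input

    def fire(self) -> bool:
        # Emit an output signal once enough input has accumulated.
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

# Roughly 80 percent excitatory, 20 percent inhibitory.
population = [Neuron(excitatory=random.random() < 0.8) for _ in range(1000)]
```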

Synapses link neurons together, and it is here that memory and learning actually take place. Each synapse has an associated "weight value" that changes with the number of neuron-fired signals traveling across it: when many signals travel through the same synapse its weight increases, and the virtual brain begins to learn by association.
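
Continuing the toy sketch above, a synapse can be modeled as a weighted link whose weight creeps upward each time a signal crosses it – a crude, Hebbian-flavored stand-in for the learning behavior just described. The starting weight and learning rate are arbitrary assumptions.

```python
class Synapse:
    """Toy plastic synapse linking the Neuron objects sketched above."""

    def __init__(self, pre: "Neuron", post: "Neuron", weight: float = 0.1):
        self.pre = pre      # sending neuron
        self.post = post    # receiving neuron
        self.weight = weight

    def transmit(self, learning_rate: float = 0.01) -> None:
        # Deliver the signal, scaled by this synapse's weight and by
        # the sign of the sender (excitatory vs. inhibitory).
        self.post.receive(self.pre.sign * self.weight)
        # The more traffic crosses this synapse, the stronger it gets:
        # this is where "learning by association" happens.
        self.weight += learning_rate
```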

The algorithm periodically checks whether each neuron is firing: if it is, the adjacent synapses are notified, update their weight values, and interact with the downstream neurons accordingly. The crucial point is that the algorithm expends CPU time only on the very small fraction of synapses that are actually active at any given moment, rather than on all of them – saving massive amounts of time and energy.
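
Putting the two toy classes together, the event-driven idea can be sketched as a loop that cheaply checks which neurons fired, then spends real work only on the synapses adjacent to those neurons. Everything about the wiring below (the fan-out of 10, the 50 external inputs) is an arbitrary illustration, not a description of IBM's algorithm.

```python
from collections import defaultdict

# Hypothetical sparse wiring: each neuron feeds 10 random targets.
outgoing = defaultdict(list)
for pre in population:
    for post in random.sample(population, k=10):
        outgoing[pre].append(Synapse(pre, post))

def event_driven_step(population, outgoing):
    """One tick: a cheap check of every neuron, with expensive synapse
    work done only for the small fraction that actually fired."""
    spiking = [n for n in population if n.fire()]
    for neuron in spiking:
        for synapse in outgoing[neuron]:
            synapse.transmit()  # notify only the adjacent synapses
    return len(spiking)

# Inject some external "sensory" input, then run one tick.
for n in random.sample(population, k=50):
    n.receive(1.0)
print(event_driven_step(population, outgoing), "neurons fired")
```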

The beauty of this new computer architecture is that – just like an organic brain – it is event-driven, distributed, highly power-conscious, and bypasses some of the well-known limitations intrinsic to the way standard computers are designed.

IBM's end goal is to eventually build a machine of human-brain complexity in a comparably small package, with power consumption approaching 1 kW. For the time being, however, this milestone has been accomplished by the not-so-portable (nor particularly power-conscious) Blue Gene/Q Sequoia supercomputer, using 1,572,864 processor cores, 1.5 PB (1.5 million GB) of memory, and 6,291,456 threads.

Neurosynaptic core (Image: IBM)

In an effort to dramatically reduce power consumption, IBM is also building its own custom chips – so-called "neurosynaptic cores" – that harness the full potential of the new computer architecture and will eventually replace the supercomputer for these simulations.

Each core is made up of "neurons," "synapses" and "axons." Despite the names, these components weren't designed to be biologically faithful; rather, they were highly optimized to minimize manufacturing costs and maximize performance.

Applications

The new computer architecture could be used to better assist patient diagnosis (Image: IBM)

Because of the extreme parallelism built into this architecture, chips based on it could be well-suited to any problem that requires feeding very large amounts of input data into a machine – not unlike a standard neural network, but with massively better performance and power efficiency.

The experiment allowed IBM to better understand the limitations of standard computer architecture, including the trade-offs between memory, computation and communication at very large scale. Looking forward, the team also gathered know-how that will inform the design of even better low-power, massively parallel chips.

Future applications could include dramatically improved weather forecasts, stock market predictions, intelligent patient monitoring systems that can perform diagnoses in real time, and optical character recognition (OCR) and speech recognition software matching human performance, to name just a few.

As for recreating the actual behavior of a human brain, we're still many, many years away by all accounts. But at least, it seems, progress is being made.

The video below is a short introduction to the cognitive computing paradigm by IBM's Dharmendra Modha.

Sources: IBM (PDF), Dharmendra S Modha, Design Automation Conference

Cognitive Computing: The SyNAPSE Project

9 comments
DemonDuck
I think the old adage, "...be careful what you wish for, you might get it..." applies here.
Joseph Mertens
So you have achieved intuition from experience in a technology. When you get to the level of a human mind, what are you going to do with it when it asks what the meaning of existence is? Or who am I? Why should I obey you?
mhmm
@ Joseph Mertens
We would answer those questions as simply as we do when a child asks them. Humanity isn't completely in the dark in terms of explaining such subjects.
Q: What is the meaning of existence? A: Nobody knows for certain.
Q: Who am I? A: That is for you to find out.
Q: Why should I obey you? A: We have your best interests in mind.
Will they believe it all? Maybe not. Just like a child. Will they wish to be completely independent from their creators? Only a matter of time. Just like a teenager. Will they, at times, make poor decisions? Absolutely. Just like an adult.
Now, I'm not making plans for when this happens or even saying it would or could happen. All I'm saying is to say we wouldn't know what to do with a confused self-aware being is a tad absurd. We deal in the attempts to explain the unexplainable constantly. A baby is born every few seconds.
PS2013
The Matrix is just around the corner, is it not? And we reaaaally need that.
Asoka Nelson
Creativity requires synergy codes; all computer code now has fixed parameters, and creativity requires desire pulses... Perhaps military objectives will dominate the first wave of AI?
Mick Perger
Might as well make something that will live on this planet after we have destroyed it for ourselves.
Gadgeteer
So if Moore's Law continues to hold in the microprocessor field, in about 20-30 years, there will be laptop-sized systems with the power of Blue Gene/Q and maybe, just maybe, I'll be able to transfer my consciousness into one. Immortality, here I come!
kalqlate
The synergy between the Human Brain Project (formerly the Blue Brain Project), Watson, and this SyNAPSE project, all IBM-related projects, will certainly take synthetic intelligence far beyond human intelligence in an almost frighteningly short 20 to 30 years.
@Gadgeteer - Absolutely! The Human Connectome Project (see Wikipedia) is working on just that.
Its data, mapped to a hybrid system resulting from the aforementioned projects, will allow a connectome scanner to first copy *YOU*, then upload you to a connectome database accessible by a multi-connectome compute engine that will execute thousands of connectomes at once. (Or, with significant miniaturization, a pocket-sized connectome computer.) Not only would you be digi-immortalized, but you would also be able to run several copies of *YOU* for whatever purposes you need, when you need them... for a price, of course. For another small fee, any of your replicant minds could be linked wirelessly to a surrogate robot for telepresence purposes.
Ed Smith
This is a fascinating development which has design implications in many different disciplines and can greatly improve the functional applications that currently struggle with the limitations of von Neumann designs. (Also, I applaud their workable goal of mimicking a *typical* human brain.)