
Neuromorphic chips could help reverse-engineer the human brain

Swiss researchers have taken an important step towards imitating the brain’s information processing (Image: Shutterstock)

Researchers at the University of Zurich and ETH Zurich have designed a sophisticated computer system that is comparable in size, speed and energy consumption to the human brain. Based on the development of neuromorphic microchips that mimic the properties of biological neurons, the research is seen as an important step in understanding how the human brain processes information and opens the door to fast, extremely low-power electronic systems that can assimilate sensory input and perform user-defined tasks in real time.

Neuromorphic engineering

Layout of a multi-neuron chip comprising an array of analog/digital silicon neurons and synapse circuits (Photo: ETH Zurich)

The human brain is a remarkable machine: with a power consumption of only about 20 W, it can outclass the fastest supercomputers in most real-world tasks – particularly those involving the processing of sensory input. Researchers believe that the brain's astounding abilities aren't down to mere processing speed, but rather to the highly efficient way in which it handles information.

Though we lack the tools to fully investigate the brain's "computing architecture," we know that, unlike your standard CPU, the brain uses a mixture of analog and digital signals at the same time; that information is processed on a massively parallel scale at relatively slow speeds; that memory and instruction signals are often seamlessly combined; and that continuous adaptation and self-organization of its neural networks play a crucial part in its function.

Established in the late 1980s, neuromorphic engineering is an interdisciplinary amalgam of neuroscience, biology, computer science and a number of other fields that attempts first to understand how the brain manipulates information, and then to replicate the same processes on a computer chip. The goal is the development of new, powerful computing architectures that could be used to model the brain and, perhaps, even serve as a stepping stone to a sophisticated, human-like artificial intelligence.

Most attempts at replicating a human brain involve simulating a very large number of neurons on a supercomputer. The neuromorphic approach is quite different: it involves developing custom electronic circuits that reproduce the neuron-firing mechanisms of the actual brain, and that are comparable to it in terms of size, speed and energy consumption.
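To get a sense of what "simulating the neuron firing mechanism" means in software, here is a bare-bones sketch of a leaky integrate-and-fire neuron in Python. All of the parameter values are illustrative assumptions rather than figures from the Zurich chips; the point is simply that a supercomputer has to run millions of updates like this at every timestep, whereas a neuromorphic chip realizes the same dynamics directly in analog circuitry.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, simulated in software.
# All parameter values are illustrative assumptions, not taken from the
# Zurich chips.

tau_m = 0.02        # membrane time constant in seconds (assumed)
v_rest = 0.0        # resting potential (arbitrary units)
v_thresh = 1.0      # spike threshold (arbitrary units)
v_reset = 0.0       # potential after a spike
dt = 0.001          # simulation step: 1 ms

def simulate(input_current, steps=1000):
    """Integrate a constant input current and record spike times."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # Leaky integration: the potential decays toward rest and is
        # pushed up by the input current.
        dv = (-(v - v_rest) + input_current) / tau_m * dt
        v += dv
        if v >= v_thresh:       # threshold crossed: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

if __name__ == "__main__":
    print(simulate(input_current=1.5)[:5])  # first few spike times
```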

"The neurons implemented with our approach have programmable time constants," Prof. Giacomo Indiveri, who led the research efforts, told Gizmag. "They can go as slow as real neurons or they can go significantly faster (e.g. >1000 times), but we slow them down to realistic time scales to be able to have systems that can interact with the environment and the user efficiently."

The silicon neurons, Indiveri told us, are comparable in size to actual neurons and they consume very little power. Compared to the supercomputer approach, their system consumes approximately 200,000 times less energy – only a few picojoules per spike.
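To put "a few picojoules per spike" in perspective, here is a back-of-envelope estimate. The firing rate and network size below are assumed values chosen purely to illustrate the scaling, not figures quoted by the researchers.

```python
# Back-of-envelope power estimate for a spiking neuromorphic system.
# The per-spike energy reflects the article's "a few picojoules per spike";
# the firing rate and neuron count are illustrative assumptions.

energy_per_spike = 5e-12      # joules (roughly "a few picojoules")
firing_rate = 10              # spikes per neuron per second (assumed)
num_neurons = 1_000_000       # network size (assumed)

power = energy_per_spike * firing_rate * num_neurons
print(f"Estimated spiking power: {power * 1e6:.1f} microwatts")
# -> about 50 microwatts for a million silicon neurons, versus roughly
#    20 W for the entire human brain
```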

A neuromorphic chip uses its most basic components in a radically different way from a standard CPU. Transistors, which normally serve as on/off switches, can here also act as analog dials. The end result is that neuromorphic chips require far fewer transistors than the standard, all-digital approach. Neuromorphic chips also implement mechanisms that modify synapses on the fly as data is processed, simulating the brain's neuroplasticity.
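The on-chip synapse modification alluded to here boils down to update rules of roughly this shape. The sketch below uses a simple Hebbian rule with a decay term, which is one common textbook choice and not necessarily the plasticity rule implemented on the Zurich hardware; the learning rate and decay values are assumptions.

```python
# Simple Hebbian-style plasticity: a synapse is strengthened when the
# neurons on both sides are active together, and slowly decays otherwise.
# Learning rate, decay and bounds are illustrative assumptions.

def update_weight(w, pre_active, post_active,
                  learning_rate=0.01, decay=0.001, w_max=1.0):
    """Return the new synaptic weight after one time step."""
    if pre_active and post_active:
        w += learning_rate * (w_max - w)   # correlated activity strengthens
    else:
        w -= decay * w                     # otherwise the weight slowly decays
    return max(0.0, min(w, w_max))         # keep the weight bounded

# Example: repeated correlated activity drives the weight up.
w = 0.2
for _ in range(100):
    w = update_weight(w, pre_active=True, post_active=True)
print(round(w, 3))  # the synapse has strengthened toward its upper bound
```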

Soft state machines

The neuromorphic chips are subjected to a visual cognitive test (Image: ETH Zurich)

Promising as they may be, neuromorphic neurons have proven difficult to organize in cooperative networks to perform a user-defined task. The Zurich researchers have now solved this problem by developing a sort of elementary structure – what they called a "soft state machine" (SSM) – that can be used to describe and implement complex behaviors in a neuromorphic system.

In computer science, a finite state machine (FSM) is a mathematical model, similar to a flowchart, that can be used to design computer programs and logic circuits. FSMs can implement context-dependent decision-making and "if-A-then-do-B" clauses, and can draw on a short-term memory of sorts.
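As a concrete, if trivial, illustration, here is a dictionary-based finite state machine in Python, loosely modeled on the cued bar-crossing task described further down. The state and input names are placeholders for illustration, not code from the paper.

```python
# A tiny finite state machine: the current state and an input symbol
# jointly determine the next state, which acts as the FSM's short-term memory.
# States and input symbols are generic placeholders, not from the paper.

TRANSITIONS = {
    ("waiting", "cue_A"): "watch_vertical",
    ("waiting", "cue_B"): "watch_horizontal",
    ("watch_vertical", "vertical_crossed"): "report",
    ("watch_horizontal", "horizontal_crossed"): "report",
}

def step(state, symbol):
    """Apply one input symbol; unknown combinations leave the state unchanged."""
    return TRANSITIONS.get((state, symbol), state)

state = "waiting"
for symbol in ["cue_A", "horizontal_crossed", "vertical_crossed"]:
    state = step(state, symbol)
print(state)  # -> "report": the decision depended on the earlier cue
```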

SSMs are neuronal state machines, similar to FSMs, that combine analog and digital signal processing; as such, they can be used to describe complex behavior in a neuromorphic chip. A behavior can first be described in terms of a standard finite state machine, and then automatically translated into an SSM that can be implemented on a neuromorphic chip.

A smarter silicon retina

The researchers tested their findings on an advanced electronic camera known as a silicon retina, using a visual-processing task inspired by those used to evaluate the cognitive abilities of human subjects.

"The subject (our neuromorphic system in our case) is presented with a cue at the beginning of the experiment which specifies the rule to use for the task," Indiveri explained. "The subject is required to look at a screen in which a horizontal bar and a vertical bar are moving, and depending on the initial cue, the subject is supposed to report if and when a vertical bar crosses the middle of the screen from left to right, or if a horizontal bar crosses it from right to left."

Aside from real-time visual processing, the task also requires memory and context-dependent decision making, elements that are commonly accepted as signs of cognition. Interestingly, the neural structures that form as this visual test is performed show a remarkable similarity to neural structures in the mammalian brain.

"The recurrent neural circuits implemented in the system have the same type of connectivity patterns found in the visual cortex of the cat," says Indiveri. "In particular, they implement soft winner-take-all circuits that are based on descriptions of canonical microcircuits found in the visual cortex."

Applications

This work sheds light on how the neural networks in the brain implement higher cognitive functions, and offers some valuable insights into how future neuromorphic chips could push performance even further.

"One of the goals of our work, and neuromorphic engineering in general, is to use this technology as a medium for understanding the principles that underlie neural computation. So my hope is that our work can contribute to the task of reverse engineering the way a brain works," says Indiveri.

In the more immediate future, the researchers will combine the chips with several sensory components at once, such as an artificial cochlea or retina, to create complex cognitive systems that interact with their surroundings on multiple levels, all in real time.

A paper detailing the team's work was published in the journal Proceedings of the National Academy of Sciences.

Source: University of Zurich

5 comments
MBadgero
It will be interesting to see what these can do if they become commercially available. I'm also looking forward to NVidia's Logan, http://www.gizmag.com/nvidia-project-logan/28455/. I could probably build a half-decent robot with four Logan chips.
John Sweet
Build it all with synthetic muscle tissue constructs.
Phyzzi
Hum, the most interesting part of this article (the ability to change "neuron" connections based on experience) is sort of hand-waved. More detail, or a link to another article talking about that part would be great.
MBadgero
Phyzzi,
They never tell you how these custom neural chips work, and so far, they have never made it to DigiKey. Still waiting for a commercially available chip.
Basically, neural chips use Hebbian learning and lateral inhibition to simulate real neurons. A connection that doesn't trigger is ignored, and a connection that triggers often is 'strengthened' (Hebbian learning). If a neuron in a layer triggers often, it suppresses close neurons in the same layer (lateral inhibition). The two combined have been shown to be equivalent to principal component analysis, but I don't remember who showed this (grad school was ten years ago, and I don't work in this field).
The trick for these chips is how to set up the inputs, layers and outputs, and this is usually (always?) confidential. It would have to be in the spec sheet if they were commercial, but I haven't seen any yet.
Facebook User
Cognition is defined as the ability to acquire knowledge: "The mental faculty or process of acquiring knowledge by the use of reasoning, intuition, or perception."
Based on the described artificial retina experiment I fail to follow how this feat can be described as cognition, unless the feature extraction is learned independently by the neuro chip.
Self directed learning algorithms have been around for quite some time, so I assume, this is what the experimenters did using their custom hardware?
At any rate, it is certainly nice to see analog computing circuitry back in the news, given that this is where it all began (http://wp.me/p2lHU6-53).