Harvard scientists develop a transistor that learns

Schematic of the ionic liquid-gated SmNiO3 synaptic transistor (Photo: Harvard Univ.)

Gallery image: The rare earth metal samarium, while silvery in appearance, quickly tarnishes in air (Photo: Images of Elements)

In a development that may enable a wholly new approach to artificial intelligence, researchers at Harvard University's School of Engineering and Applied Sciences (SEAS) have invented a type of transistor that can learn in ways similar to a neural synapse. Called a synaptic transistor, the new device self-optimizes its properties for the functions it has carried out in the past.

One of the more remarkable features of the human brain is that it gets better at whatever it does. While your first day on an assembly line may be full of fumbling and confusion, within a week or two you will find yourself seemingly on autopilot, performing the required tasks without much mental effort. After a few months, you will respond automatically when a part comes through damaged or improperly oriented. Plasticity is the name for the brain's ability to change its own structure through thought and activity.

Diagram of the key features of neurons and synapses in the brain (Image: Mariana Ruiz Villarreal)

Most of this plasticity results from changes in the 100 trillion or so synapses, or interconnections, between brain cells. One of the ways in which sets of behaviors are reinforced, or learned, is called spike-timing-dependent plasticity, or STDP.

Often summed up by the aphorism "Cells that fire together, wire together", the effect works like this: when neuron A repeatedly sends a signal across a synapse that causes neuron B to fire, the synapse strengthens, in effect making that decision easier to make in the future.

Comparison of the structures of a field effect transistor (left) and Harvard's synaptic transistor (right) (Image: B. Dodson)

The synaptic transistor developed at Harvard mimics this behavior. So how does it work? As shown above, its structure closely resembles that of a field-effect transistor, with two key differences: a film of ionic liquid takes the place of the gate insulating layer between the gate electrode and the conducting channel, and the channel itself is composed of samarium nickelate (SmNiO3, or SNO) rather than doped silicon.

A synaptic transistor has both an immediate response and a much slower response related to learning. The immediate response is basically the same as that of a field-effect transistor: the amount of current that passes between the source and drain contacts varies with the voltage applied to the gate electrode. The learning response is that the conductivity of the SNO layer changes according to the transistor's STDP history, essentially by shuttling oxygen ions between the SNO and the ionic liquid.

The electrical analog of strengthening a synapse is to increase the conductivity of the SNO, which essentially increases the gain of the synaptic transistor. Similarly, weakening a synapse is analogous to decreasing the electrical conductivity of the SNO, thereby lowering the gain.
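
To make the two timescales concrete, here is a minimal toy model in Python. This is an illustrative sketch, not the Harvard team's model: the class name, the linear current-voltage relation, and the sign convention for ion shuttling are all assumptions made for clarity.

```python
# Toy model of a synaptic transistor (illustrative sketch only, not
# the published device physics). The fast path maps gate voltage to
# drain current through the present channel conductivity; the slow
# path nudges that conductivity itself, standing in for oxygen ions
# shuttled between the SmNiO3 channel and the ionic liquid.

class SynapticTransistor:
    def __init__(self, conductivity: float = 1.0):
        self.g = conductivity  # channel conductivity: the "synaptic weight"

    def drain_current(self, v_gate: float) -> float:
        """Immediate, FET-like response: an analog current, not on/off."""
        return self.g * v_gate

    def shuttle_ions(self, delta_g: float) -> None:
        """Slow, learning response: a lasting shift in conductivity.

        Positive delta_g raises the gain, negative lowers it; the
        mapping to ion direction is an arbitrary convention here.
        """
        self.g = max(0.0, self.g + delta_g)
```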

Note that the input and output of the synaptic transistor are continuous analog values, rather than more restrictive digital on-off signals. This gives the artificial synapse the flexibility to learn "more or less" how to perform a task, and then to improve on its earlier performance.

While the physical structure of Harvard's synaptic transistor gives it the potential to learn from its history, the device by itself has no way to bias the gate so as to properly control the SNO's memory effect. That function is carried out by an external supervisory circuit, which converts the time delay between input and output into a voltage applied to the ionic liquid, either driving oxygen ions into the SNO or pulling them out. As a result, synaptic transistors become self-optimizing within a circuit subjected to learning experiences.
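
What might such a supervisory rule look like? Here is a plausible sketch, assuming the standard exponential STDP window from the neuroscience literature; the paper's actual circuit may implement something different, and the learning rates and time constant below are invented for illustration. The signed update stands in for the voltage pulse applied to the ionic liquid.

```python
import math

# Sketch of an STDP-style supervisory rule using the textbook
# exponential window; Harvard's actual circuit may differ.

A_PLUS = 0.05   # potentiation learning rate (assumed value)
A_MINUS = 0.05  # depression learning rate (assumed value)
TAU = 20e-3     # STDP time constant, 20 ms (assumed value)

def stdp_update(dt: float) -> float:
    """Map the input-to-output spike delay dt (seconds) to a
    conductivity change.

    dt > 0: input fired before output -> strengthen (potentiation).
    dt <= 0: output fired first (or together) -> weaken (depression).
    """
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)
```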

The gain of each device adjusts over time to more efficiently provide the average performance asked of it during training. The result is that when a large network of synaptic transistors is assembled, it can learn particular responses to "sensory inputs", with those responses learned through experience rather than programmed directly into the network.
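
Putting the two sketches together shows the learning behavior the article describes: repeated correlated firing ratchets a device's gain upward, so the same input eventually produces a stronger output. Again, this is a toy demonstration under the assumptions above, not a simulation of the published device.

```python
# Usage demo, reusing SynapticTransistor and stdp_update from the
# sketches above.

t = SynapticTransistor(conductivity=1.0)
print(t.drain_current(0.5))            # response before training

for _ in range(100):                   # 100 correlated spike pairings
    t.shuttle_ions(stdp_update(5e-3))  # input leads output by 5 ms

print(t.drain_current(0.5))            # stronger response after training
```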

"The transistor we've demonstrated is really an analog to the synapse in our brains," says co-lead author Jian Shi, a postdoctoral fellow at SEAS. "Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons."

The synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

"This kind of proof-of-concept demonstration carries that work into the 'applied' world," says research team leader Professor Shriram Ramanathan, "where you can really translate these exotic electronic properties into compelling, state-of-the-art devices." Hopefully those SOTA devices can someday be assembled into SOTA learning machines.

A paper detailing the team's findings was published last month in Nature Communications.

Source: Harvard University

4 comments
kilgatron
Quite an article. The beginning of artificial intelligence is really exciting in its many forms.
Mr. Dodson has written several articles in the past few weeks that establish him as one of the top science writers for Gizmag. His knowledge of science is neither too technical nor too spoon-fed. Keep up the good work!
sgdeluxedoc
OK. This, frankly, scares the living daylights out of me. Machines that can learn, and therefore think, and make decisions. But no ethical constraints? Even the Three Laws of Robotics can be turned on their head. No doubt about it... this is where it begins. A true turning point in AI history...
HighPockets
Mr. Dodson,
I have learned to recognize your work, and to deeply appreciate it. You might consider writing for Science News where your kind of journalism is valued and rewarded.
This particular piece is one of the most exciting I have read in some time. SOTA devices hold the promise/threat of using fuzzy logic to reach conclusions and, yeah, that is the way our minds work. Please write more on this issue: you may not have a choice.
Travis Moore
It would seem that if it cannot write its own programming code, then there is no risk of a Terminator-like event. It will only be like the Terminator if it can learn to think for itself and become self-aware.