Scientists use brain activity analysis to reconstruct words heard by test subjects
By Ben Coxworth
February 1, 2012
Last September, scientists from the University of California, Berkeley announced that they had developed a method of visually reconstructing images from people's minds by analyzing their brain activity. Much to the dismay of tinfoil hat-wearers everywhere, researchers from that same institution have now developed a somewhat similar system that is able to reconstruct words that people have heard spoken to them. Rather than being used to violate our civil rights, however, the technology could one day allow the vocally-disabled to "speak."
Enlisted for the study were epilepsy patients who were already having arrays of electrodes placed on the surface of their brains to identify the source of their seizures. The scientists used those electrodes to monitor electrical activity in a region of the brain's auditory system known as the superior temporal gyrus (STG). From there, it was a matter of observing the specific activity patterns that occurred when the subjects heard certain words.
When the electrodes' data was fed into a computational model, the computer was able to actually reproduce the sounds that had been heard - sort of. Although the noises made by the computer were somewhat garbled, they were close enough to the original words that the scientists could identify those words more reliably than they otherwise could have.
According to study leader Brian N. Pasley, there is evidence that the perception of real sounds and imagined ones may result in similar STG activity. If so, then the technology could perhaps someday be used in a gadget that "vocalizes" words or sentences thought out by people unable to speak.
"This research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech," he explained. "If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device."