February 1, 2012
Device Could One Day Read Your Mind By Decoding Brain Waves
[ Watch the Video ]
Have you ever imagined taking on the role of Spock in the popular Star Trek shows and films, using your mind-melding abilities to read the thoughts of others? Well, that could one day become a reality, in a roundabout way.
Reporting in the journal PLoS Biology, researchers at the University of California, Berkeley (UCB) said the method could one day help comatose and stroke patients communicate with the outside world. Several approaches in recent years have suggested that scientists were on track to tap into the minds of their fellow humans.
Robert Knight of UCB and Edward Chang of UCSF, senior authors of the study, said the process gives great insight into how the brain processes language. The brain breaks down words into complex patterns of electrical activity, which can be detected and translated back into an approximate version of the original sound.
Because the brain is believed to process thought similarly to how it processes sound, scientists hope the breakthrough can lead to an implant that would interpret thought into speech in people who cannot talk.
Unlike Spock, who could read people's minds just by placing his hand in a particular fashion on the subject's face, the technology behind this new breakthrough is rudimentary at best.
Any device capable of reading one's mind is a long way off, because researchers would have to make the technology far more accurate than it is now, and also find a way to decode sounds a patient merely imagines rather than hears.
It would also require electrodes to be placed beneath the skull, on the brain itself, because no sensors exist that can detect such tiny patterns of electrical activity non-invasively. But this doesn't mean it could not one day be feasible.
“This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak,” said Knight. “If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”
For the research, Knight and colleagues studied 15 epilepsy patients who were undergoing exploratory surgery to find the cause of their seizures, a procedure in which a series of electrodes is placed on the brain through a hole in the skull.
While the electrodes were attached, the team monitored activity in the temporal lobe -- the brain's speech-processing area -- as patients listened to about ten minutes of conversation. By breaking down the conversation into component sounds, the team was able to create two computer models that matched distinct signals in the brain to individual sounds.
They then tested the models by playing a recording of a single word to the patients and predicting, from the brain activity alone, which word they had heard.
One of the models produced an approximation close enough that scientists could guess what the word was 90 percent of the time.
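The idea behind that 90-percent word-guessing test can be illustrated with a toy sketch. This is not the authors' actual spectrogram-reconstruction model; the words, the four-value "activity patterns," and the `decode` helper are all invented for illustration. It simply shows the principle of matching a new neural recording against patterns learned while the patient listened:

```python
import math
import random

random.seed(0)

def correlation(a, b):
    # Pearson correlation between two equal-length feature vectors
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical "neural templates": average electrode activity patterns
# learned for each word during the listening phase (all values invented).
templates = {
    "structure": [0.9, 0.1, 0.4, 0.7],
    "doubt":     [0.2, 0.8, 0.6, 0.1],
    "property":  [0.5, 0.5, 0.9, 0.3],
}

def decode(recording):
    # Pick the word whose learned template best matches the new recording
    return max(templates, key=lambda w: correlation(templates[w], recording))

# A noisy recording of a patient hearing "doubt"
heard = [t + random.gauss(0, 0.05) for t in templates["doubt"]]
print(decode(heard))  # very likely "doubt", since the noise is small
```

The real study worked with continuous sound spectrograms rather than a fixed word list, which is why its reconstructions were approximate rather than exact matches.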
“This is exciting in terms of the basic science of how the brain decodes what we hear,” Knight, director of the Helen Wills Neuroscience Institute at UCB, told The Guardian's Ian Sample.
“The next step is to test whether we can decode a word when a person imagines it. That might sound spooky, but this could really help patients. Perhaps in 10 years it will be as common as grandmother getting a new hip,” Knight added.
Dr. Brian Pasley, the study's lead author, compared the method to a pianist who could watch a piano being played in a soundproof room and “hear” the music just by watching the movement of the keys.
“This research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech,” cautioned Pasley in a recent statement. “There is some evidence that perception and imagery may be pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.”
Knight had his doubts that the method could actually work, but was impressed with the results. “His computational model can reproduce the sound the patient heard, and you can actually recognize the word, although not at a perfect level,” Knight said of Pasley.
The ultimate goal of the study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.
“At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound,” said Pasley. “The big question is, what is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings.”
Being able to read minds is a controversial subject. Ethical concerns have arisen that such technology could be used covertly or to interrogate criminals and terrorists.
But Knight said that only exists in the realm of science fiction. “To reproduce what we did, you would have to open up someone's skull and they would have to cooperate.” Making a device to help people speak will not be easy. Brain signals that encode imagined words could be harder to decipher, and the device would have to be small and operate wirelessly. It would also prove difficult to distinguish between words a person wants to speak and thoughts they wish to keep secret.
Jan Schnupp, professor of neuroscience at Oxford University, who called the breakthrough “remarkable,” said fears that such technology could be used to eavesdrop on the privacy of our minds are unjustified.
The scientists could only get their technique to work through the cooperation of patients who were willing to participate; you aren't going to get willing parties in an interrogation setting, he noted.
“We can rest assured that our skulls will remain an impenetrable barrier for any would-be technological mind hacker for any foreseeable future,” Schnupp told Sample of The Guardian.
But the benefits of such devices could be transformative, said Mindy McCumber, a speech-language pathologist at Florida Hospital in Orlando.
“As a therapist, I can see potential implications for the restoration of communication for a wide range of disorders,” she told Jason Palmer of BBC News. “The development of direct neuro-control over virtual or physical devices would revolutionize ‘augmentative and alternative communication’, and improve quality of life immensely for those who suffer from impaired communication skills or means.”
The report is accompanied by an interview with the authors for the PLoS Biology Podcast.
Image 2: An X-ray CT scan of the head of one of the volunteers, showing electrodes distributed over the brain's temporal lobe, where sounds are processed. Credit: Adeen Flinker, UC Berkeley
On the Net:
- University of California, Berkeley
- University of California, San Francisco
- PLoS Biology
- PLoS Biology Podcast