For the first time ever, researchers have used a computer to read someone’s mind

By implanting electrodes in the temporal lobes of awake patients, scientists from the University of Washington and colleagues have decoded brain signals at nearly the speed of perception, according to a new study published Thursday in the journal PLOS Computational Biology.

In addition, UW computational neuroscientist Rajesh Rao, neurosurgeon Jeff Ojemann and their fellow researchers were able to analyze the study participants’ neural responses to two different categories of visual stimuli (images of faces and pictures of houses).

This, in turn, made it possible for them to predict which type of image the patients were viewing and when. Their predictions were 96 percent accurate, the study authors explained in a statement, and could help researchers better understand how the temporal lobe perceives different objects.

Rao, who is also a professor of computer science and engineering at the university and the head of the National Science Foundation’s Center for Sensorimotor Neural Engineering, noted that he and his colleagues were also “trying to understand… how one could use a computer to extract and predict what someone is seeing in real time.”

Medication had failed to control participants’ epilepsy symptoms

The study, which involved seven epilepsy patients receiving care at Harborview Medical Center in Seattle, could be considered “a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked-in,” he added.

Each of the patients had been experiencing epileptic seizures, and medication was doing little to alleviate their symptoms, explained Ojemann. So they decided to undergo a surgical procedure in which they had electrodes temporarily implanted in the temporal lobes of their brains in the hope that it would help doctors locate the focal points of those seizures.

As the authors explained, the temporal lobes, which are located behind the eyes and ears, process sensory input and are often the source of a patient’s epileptic seizures. They are also often linked to Alzheimer’s disease and dementia, and are more vulnerable to head trauma than other parts of the brain.

Each patient had electrodes at multiple locations on the temporal lobes connected to computer software that extracted two different brain signal properties: event-related potentials, which result from hundreds of thousands of neurons being activated after initial exposure to an image, and broadband spectral changes, which involve additional processing of already-presented data.
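Those two signal properties can be illustrated with a brief sketch. The following Python function is a hypothetical example, not the study’s actual analysis code: it computes an event-related potential as the mean voltage trace across trials, and a broadband spectral measure as power in a 70–150 Hz band (a common “broadband gamma” choice; the study’s exact parameters are assumptions here).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_features(trials, fs=1000):
    """Extract two signal properties from epoched brain-signal trials.

    trials: array of shape (n_trials, n_samples) -- one electrode's
    voltage traces time-locked to image onset (hypothetical layout).
    fs: sampling rate in Hz (the study digitized 1,000 samples/second).
    """
    # Event-related potential: the average voltage trace across trials,
    # reflecting synchronized activation of large neural populations.
    erp = trials.mean(axis=0)

    # Broadband spectral change: per-trial power in a high-frequency
    # band (70-150 Hz here -- an illustrative assumption).
    b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
    broadband = np.array([np.mean(filtfilt(b, a, t) ** 2) for t in trials])
    return erp, broadband
```

A decoder could then use both the ERP waveform and the broadband power values as inputs, which is roughly how the two complementary signal types described above would be combined.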

Predictions of an image’s content were 96 percent accurate

Once the procedure was complete, each patient was shown a random sequence of pictures on a computer monitor. Each image was either a face or a house, lasted just 400 milliseconds and was interspersed with blank gray screens. The subjects were asked to look for a picture of an upside-down house, while the software sampled and digitized brain signals 1,000 times per second.
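The numbers in that paragraph imply a simple epoching step: at 1,000 samples per second, each 400-millisecond image corresponds to 400 samples of signal. A minimal sketch of that slicing, assuming a single continuous voltage trace and known stimulus-onset times (both hypothetical), might look like this:

```python
import numpy as np

def epoch_signal(signal, onsets, fs=1000, dur_ms=400):
    """Slice a continuous, digitized brain signal into per-image trials.

    signal: 1-D voltage trace sampled at fs Hz (1,000/s in the study).
    onsets: sample indices where each image appeared on screen.
    Each epoch covers the 400 ms an image stayed on the monitor.
    """
    n = int(fs * dur_ms / 1000)  # 400 samples per 400 ms image at 1 kHz
    return np.stack([signal[o:o + n] for o in onsets])
```

Each row of the returned array is then one trial, time-locked to an image onset, ready for the ERP and spectral analyses described earlier.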

This illustrates brain signals representing activity spurred by visual stimuli experienced by the subjects in this study. In this example, images of human faces generated more brain activity than images of houses. (This was not the result in every case.) (Credit: Illustration by Kai Miller and Brian Donohue)

Rao said that the researchers received “different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses.” The program also analyzed the information to determine which combination of electrode locations and signal types most closely matched what each of the patients actually saw.

“By training an algorithm on the subjects’ responses to the (known) first two-thirds of the images,” the university said, the researchers were able to “examine the brain signals representing the final third of the images… and predict with 96 percent accuracy whether and when (within 20 milliseconds) the subjects were seeing a house, a face or a gray screen.”
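The train-on-two-thirds, test-on-the-final-third procedure can be sketched in a few lines. This is an illustrative stand-in, not the study’s actual decoder: it uses a simple nearest-class-mean rule on hypothetical feature vectors, with 0 standing for “house” and 1 for “face.”

```python
import numpy as np

def train_and_test(features, labels):
    """Train on the first two-thirds of trials, test on the final third.

    features: (n_trials, n_features) array of per-trial signal features.
    labels: 0 = house, 1 = face (hypothetical encoding).
    Returns accuracy on the held-out final third of trials.
    """
    split = 2 * len(features) // 3
    X_train, y_train = features[:split], labels[:split]
    X_test, y_test = features[split:], labels[split:]

    # Class templates: the mean feature vector for each image category,
    # learned from the known first two-thirds of the images.
    templates = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

    # Each held-out trial is assigned the category whose template it
    # most closely resembles (Euclidean distance).
    dists = np.linalg.norm(X_test[:, None, :] - templates[None], axis=2)
    preds = dists.argmin(axis=1)
    return (preds == y_test).mean()
```

The study’s decoder additionally predicted *when* an image appeared (to within 20 milliseconds) and distinguished images from gray screens; this sketch shows only the category-prediction step.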

“Traditionally scientists have looked at single neurons. Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object,” Rao said, adding that their technique was a step forward for brain mapping technology and could determine, in real time, what areas of the brain are sensitive to different kinds of data.
