New Study Relates Selective Hearing to Brain Function

Connie K. Ho for RedOrbit.com
Imagine this: you're in a noisy room and you spot an old acquaintance. You go up to speak to him, and you can distinctly hear what he's saying. His words come through clearly while you tune out the rest of the room, ignoring the background noise. This selective hearing, dubbed the “cocktail party effect,” is the subject of a new study by two scientists at the University of California, San Francisco (UCSF).
The research, published in the journal Nature, dissects the “cocktail party effect”: the ability to focus on a single speaker in a noisy environment. The study was conducted by UCSF neurosurgeon Edward F. Chang, MD, a faculty member in the UCSF Department of Neurological Surgery and the Keck Center for Integrative Neuroscience, and UCSF postdoctoral fellow Nima Mesgarani, PhD, who worked with three patients undergoing brain surgery for severe epilepsy.
With the help of the UCSF epilepsy team, the surgery was meant to identify the parts of the brain responsible for the patients' seizures so they could be disabled. In the experiment, a thin sheet of 256 electrodes was placed under each patient's skull to record activity in the temporal lobe, where the auditory cortex sits. Chang believes that these recordings allowed him and Mesgarani to find out more about how the brain functions.
“The combination of high-resolution brain recordings and powerful decoding algorithms opens a window into the subjective experience of the mind that we’ve never seen before,” Chang commented in a prepared statement.
In the study, the patients listened to two speech samples played at the same time, each containing different phrases read by different speakers, and were asked to focus on just one speaker. Afterward, the subjects reported the phrases they had heard and remembered. To analyze the results, Chang and Mesgarani applied decoding methods to “reconstruct” what the patients had heard based on their brain activity. The decoding method revealed which speaker and which phrases each subject had picked up from the neural patterns alone, and it even detected when a listener's attention strayed to another speaker.
“The algorithm worked so well that we could predict not only the correct responses, but also even when they paid attention to the wrong word,” Chang noted in the statement.
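For readers curious about how such decoding works in principle, below is a minimal sketch of stimulus-reconstruction decoding in Python. It is not the study's actual algorithm: all of the data is simulated, and the array sizes, the ridge-regression decoder, and the correlation-based comparison are illustrative assumptions. The idea is to learn a linear map from multi-electrode recordings to a speech spectrogram, reconstruct the stimulus from held-out neural data, and ask which of two competing speakers the reconstruction resembles more.

```python
# Minimal sketch of stimulus-reconstruction decoding on simulated data.
# Learn a linear map from multi-electrode neural activity to a speech
# spectrogram, reconstruct the "heard" spectrogram from held-out neural
# data, and correlate it with each candidate speaker to infer which one
# the listener attended. All shapes and parameters are illustrative
# assumptions, not the UCSF study's actual configuration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq = 2000, 256, 32  # 256 electrodes, as in the study

# Two competing speakers' spectrograms (frequency bands over time).
speaker_a = rng.standard_normal((n_samples, n_freq))
speaker_b = rng.standard_normal((n_samples, n_freq))

# Simulate neural activity that tracks the *attended* speaker (A)
# through an unknown linear mixing, plus noise.
mixing = rng.standard_normal((n_freq, n_electrodes))
neural = speaker_a @ mixing + 0.5 * rng.standard_normal((n_samples, n_electrodes))

# Fit the reconstruction model on the first half of the data.
split = n_samples // 2
decoder = Ridge(alpha=1.0)
decoder.fit(neural[:split], speaker_a[:split])

# Reconstruct the stimulus from held-out neural data.
recon = decoder.predict(neural[split:])

def similarity(recon, spec):
    """Mean per-band correlation between a reconstruction and a spectrogram."""
    return np.mean([np.corrcoef(recon[:, f], spec[:, f])[0, 1]
                    for f in range(recon.shape[1])])

score_a = similarity(recon, speaker_a[split:])
score_b = similarity(recon, speaker_b[split:])
print(f"speaker A: {score_a:.2f}, speaker B: {score_b:.2f}")
print("attended:", "A" if score_a > score_b else "B")
```

Because the simulated neural signal tracks speaker A, the reconstruction correlates far more strongly with A than with B, mirroring in spirit how a decoder can reveal which voice a listener is attending to.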
Others in the scientific community believe that the algorithm Chang and Mesgarani used is groundbreaking.
“I’ve never seen anything like this before,” said Martin Vestergaard, a neuroscientist at Cambridge University, in an article in New Scientist.
The results of the experiment represent progress in research on how the brain processes language. The findings could inform studies of people with language-learning disorders, autism, or aging-related impairments.
“People with these disorders have problems with the ability to focus on a certain aspect of the environment,” Chang remarked in an interview with ABC News. “They can't always hear things correctly.”
Companies that work in voice recognition software also find these results of interest; the engineering required to separate a single voice from a group of voices has long been difficult for tech and medical developers.
“It's something that humans are remarkably good at, but it turns out that machine emulation of this human ability is extremely difficult,” commented Mesgarani in a prepared statement.
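As a point of contrast, the classic machine approach to the cocktail party problem is blind source separation, for example independent component analysis (ICA). The sketch below is a standard textbook demonstration, not anything from the UCSF work: two synthetic “voices” are mixed as if recorded by two microphones, and scikit-learn's FastICA tries to unmix them. The waveforms and mixing matrix are made-up stand-ins.

```python
# Toy demonstration of blind source separation for the cocktail party
# problem, using independent component analysis (ICA). The signals are
# synthetic stand-ins for voices; this is a common machine baseline,
# not the method from the UCSF study.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Two independent "voices" (simple waveforms standing in for speech).
voice1 = np.sin(2 * t)              # smooth tone
voice2 = np.sign(np.sin(3 * t))     # square wave
sources = np.c_[voice1, voice2]
sources += 0.1 * rng.standard_normal(sources.shape)  # background noise

# Each "microphone" in the room hears a different mixture of both voices.
mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
mics = sources @ mixing.T

# ICA tries to undo the mixing by finding statistically independent sources.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mics)

# Check recovery (ICA cannot determine order, sign, or scale, so we
# compare each recovered component against both true sources).
for i in range(2):
    corrs = [abs(np.corrcoef(recovered[:, i], sources[:, j])[0, 1])
             for j in range(2)]
    print(f"recovered component {i}: best match |r| = {max(corrs):.2f}")
```

Even this toy case hints at the difficulty Mesgarani describes: ICA needs roughly as many microphones as voices and recovers sources only up to order and scale, whereas a human listener manages with two ears and attention.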