
Speech Identification: It’s Not Just Acoustics

September 27, 2011

Grammatical Structures Key in Tuning Us to Speech

In a study published in the open-access journal Frontiers in Language Sciences, researchers led by Prof. Iris Berent of Northeastern University show that our ability to identify sounds as speech depends critically on linguistic structure, not on acoustic properties alone.

In a world where the acoustic environment is a buzzing, booming mixture of noises, bellows, and musical tunes, humans identify speech rapidly and effortlessly from a very young age. How they do so, however, is unclear. Acoustic properties alone cannot account for speech identification, as the same acoustic stimulus may be classified as speech or non-speech depending on the context. But without first recognizing speech as such, how do we understand its contents? In other words, how can the human language system produce its outputs (linguistic structure) if it cannot first detect its inputs (speech)?

In this new linguistic take on the chicken-or-egg dilemma, Berent and her colleagues Evan Balaban and Vered Vaknin-Nusbaum demonstrate for the first time that when people hear an acoustic stimulus — whether nonspeech or speech sounds that do not exist as words — they automatically assign linguistic structure to it. If the resulting structure is well-formed in people's native language, they are more likely to identify those sounds as speech.

The intimate link between speech recognition and language structure carries broad practical implications, ranging from clinical applications for language disorders to speech recognition technology. Future research will further explore the nature of these grammatical restrictions and the extent to which they are partly shared across different languages.
