How does the brain organize sounds into auditory scenes?


Did you hear the one about how the brain organises sounds?

By uncovering some of the mechanisms that allow the brain to distinguish and group sounds, researchers have opened the door to creating advanced hearing aids.

Our ability to hear is rather remarkable. As a case in point, consider how one can follow a conversation even in a noisy pub. “This is possible because the brain is able to separate out frequency elements and appropriately group them so that sounds arising from the same source are perceptually distinct from those coming from other sources,” says Jennifer Bizley, a professor of Auditory Neuroscience at University College London. “This ability essentially allows us to block out the background noise and concentrate on the conversation in front of us.” Unfortunately, this ability tends to diminish with age. Because scientists do not yet understand the mechanisms that enable such focused hearing, they have been unable to recreate it with digital technology. “Figuring out how the brain separates competing sounds could open the door to building machines that can hear as well as a young person or aids that can restore an ageing listener’s hearing,” adds Bizley. Helping to do exactly that is the EU-funded SOUNDSCENE project.

How humans and animals hear

The project, which received support from the European Research Council, developed a range of listening tasks for both humans and animals to perform. Based on this work, researchers found that humans are particularly adept at estimating the statistical properties of background sounds. Furthermore, they demonstrated that a listener can improve their ability to discriminate speech in noise within just a few hundred milliseconds of hearing a new background sound. The project also studied how an animal’s brain estimates the statistics of a sound source and groups frequency elements into a single source. Using an animal model, researchers identified that the auditory cortex – a key brain region for processing sound – is essential for listening in noise. “We provided the first evidence that a non-human animal can extract temporal regularities to group sound elements together, and using modelling approaches we were able to show that, when additional memory constraints are imposed, ferret data look very much like human data,” explains Bizley, the project’s principal investigator. SOUNDSCENE researchers also developed three behavioural paradigms of speech discrimination in the project’s animal model, and pioneered machine learning approaches to help make sense of its rich datasets.
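To give a flavour of why estimating the statistics of a background sound is useful, here is a minimal, illustrative Python sketch (not the SOUNDSCENE team’s actual method): it assumes the first few hundred milliseconds of a recording contain only the background, measures the average noise spectrum from that segment, and then subtracts it from the rest of the signal so a foreground sound stands out more clearly. The function name and parameters are hypothetical, chosen only for this example.

```python
# Illustrative sketch only: estimate background-noise statistics from the
# first few hundred milliseconds of audio, then suppress that background
# with simple spectral subtraction. Assumes the opening segment is
# noise-only; this is not the project's algorithm.
import numpy as np


def suppress_background(signal, sample_rate, noise_ms=300, frame_len=512):
    """Estimate the mean noise spectrum from the first `noise_ms`
    milliseconds and subtract it from every frame of `signal`."""
    hop = frame_len // 2
    window = np.hanning(frame_len)

    # Split the signal into overlapping, windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop: i * hop + frame_len] * window for i in range(n_frames)
    ])
    spectra = np.fft.rfft(frames, axis=1)

    # Background statistics: mean magnitude spectrum of the frames
    # that fall within the initial noise-only segment.
    noise_frames = max(1, int(noise_ms / 1000 * sample_rate) // hop)
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)

    # Spectral subtraction: remove the expected noise magnitude,
    # keep the original phase, and floor negative values at zero.
    cleaned_mag = np.maximum(np.abs(spectra) - noise_mag, 0.0)
    cleaned = cleaned_mag * np.exp(1j * np.angle(spectra))

    # Overlap-add the cleaned frames back into a waveform.
    out = np.zeros(len(signal))
    for i, frame in enumerate(np.fft.irfft(cleaned, n=frame_len, axis=1)):
        out[i * hop: i * hop + frame_len] += frame * window
    return out
```

The longer the noise-only segment used for the estimate, the more reliable the statistics become; the human listeners in the project’s experiments, by contrast, improved after only a few hundred milliseconds of exposure to a new background.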

Next-generation listening devices and hearing aids

The SOUNDSCENE project’s work and resources have advanced scientists’ understanding of the mechanisms that allow the brain to distinguish and group sounds. In particular, its demonstration that non-human animals can extract statistical regularities in a similar way to humans paves the way for understanding how the underlying neural circuit achieves this. “Given society’s ageing population and the recent identification of hearing loss as a potentially modifiable risk factor for dementia, the deeper understanding of the underlying neuronal mechanisms provided by our work is of great value for developing next-generation machine listening devices and hearing aids,” concludes Bizley.
