
How does the brain organize sounds into auditory scenes?

Periodic Reporting for period 2 - SOUNDSCENE (How does the brain organize sounds into auditory scenes?)

Reporting period: 2020-03-01 to 2021-08-31

Listening involves making sense of the numerous competing sound sources that surround us. The neuro-computational challenge faced by the brain is to pull apart the sound mixture that arrives at the ear and reconstruct the original sound sources; this process is known as auditory scene analysis. While young, normally hearing listeners can parse an auditory scene with ease, the neural mechanisms that allow the brain to do this are unknown, and we are not yet able to recreate them with digital technology. Hearing loss, aging, impairments in central auditory processing, or an inability to appropriately engage attentional mechanisms can negatively impact the ability to listen in complex and noisy situations. An understanding of how the healthy brain organizes a sound mixture into perceptual sources may therefore guide rehabilitative strategies targeting these problems.

While functional imaging studies in humans highlight a network of brain regions that support auditory scene analysis, little is known about the cellular and circuit-based mechanisms that operate within these brain networks. A critical barrier to advancing our understanding of how the brain solves the challenge of scene analysis has been a failure to combine behavioural testing, which provides a crucial measure of how any given sound mixture is perceived, with methods to record and manipulate neuronal activity in animal models. In SOUNDSCENE we combine complex behavioural tasks that mimic those human listeners face in everyday situations with methods to observe and manipulate neural activity. Our goal is to understand how a network of brain regions (auditory cortex, prefrontal cortex and hippocampus) enables scene analysis during active listening, and how processing within each area, and the interactions between these areas, underpin auditory scene analysis. This work will deepen our understanding of fundamental brain function, and may contribute to biologically inspired machine-listening devices and to improved signal-processing methods for hearing aids and cochlear implants.
During the first phase of this project we have laid the foundation for our work to understand how the brain supports listening. This has required establishing a number of technical approaches, developing behavioural tasks, and training our animal model to perform them. Since we cannot ask our animal model how it perceives a particular sound mixture, the development of appropriate behavioural paradigms that allow us to assess an animal's perception is an essential prerequisite for our work. We have established two distinct behavioural paradigms. The first requires that animals detect the emergence of a repeating pattern in a sequence of randomly presented tones. Detecting such statistical regularities allows the brain to better detect significant changes in an acoustic scene. This paradigm has been used successfully in humans, and we have provided the first demonstration that a non-human animal can detect and report the emergence of regularity. The second behavioural paradigm is based on human speech sounds and requires that animals listen for the occurrence of a target word presented at random positions within a string of non-target words. While our animal model lacks the language capabilities of humans, the processing of the acoustic building blocks on which language is built is likely to be conserved across species. Animals are able to learn this task, and we have demonstrated that they can generalise their performance across different voices and across variation in voice pitch. Animals are also able to perform this task in the presence of competing background noise.
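To make the first paradigm concrete, the minimal sketch below illustrates the general random-to-regular stimulus design described above: a stretch of tone pips drawn at random from a frequency pool, followed by a short cycle of frequencies that repeats, so that a statistical regularity emerges partway through the sequence. All names and parameter values (frequency pool, pip duration, cycle length) are illustrative assumptions, not the stimulus parameters used in the SOUNDSCENE experiments.

```python
import numpy as np

def make_rand_reg_sequence(n_random=40, cycle_len=8, n_cycles=5,
                           pip_dur=0.05, fs=44100, seed=0):
    """Illustrative random-to-regular tone-pip sequence (placeholder parameters)."""
    rng = np.random.default_rng(seed)
    # Pool of candidate pip frequencies, log-spaced between 250 Hz and 4 kHz.
    freq_pool = np.logspace(np.log10(250), np.log10(4000), 20)

    # Random phase: each pip's frequency is drawn independently from the pool.
    rand_freqs = rng.choice(freq_pool, size=n_random)
    # Regular phase: a fixed cycle of frequencies is repeated several times.
    cycle = rng.choice(freq_pool, size=cycle_len, replace=False)
    reg_freqs = np.tile(cycle, n_cycles)

    # Synthesise each pip as a short sine tone and concatenate into one waveform.
    t = np.arange(int(pip_dur * fs)) / fs
    pips = [np.sin(2 * np.pi * f * t) for f in np.concatenate([rand_freqs, reg_freqs])]
    waveform = np.concatenate(pips)
    transition_time = n_random * pip_dur  # time at which the regularity emerges
    return waveform, transition_time

waveform, transition_time = make_rand_reg_sequence()
print(f"Regularity emerges at {transition_time:.2f} s")
```

In the behavioural task, the animal's job is to report the moment the repeating cycle begins, which requires tracking the statistics of the tone sequence rather than any single tone.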

In addition to establishing these behavioural paradigms, we have made considerable progress in developing the technical expertise to record from multiple brain regions simultaneously, and in our ability to manipulate neural activity.
In the second half of the project we anticipate deploying the technological advances we have made to adapt state-of-the-art recording methods to our purpose. These approaches to recording neural activity will be combined with the behavioural paradigms we have developed, and together they will give us unrivalled insight into how the listening brain processes sound. Our work will, for the first time, combine tasks that engage listeners in the sort of complex listening demands humans routinely face in the real world with the ability to record from a number of brain regions at single-cell resolution. This combination of sophisticated listening behaviours and large-scale, high-resolution neural recording will allow us to test different hypotheses about how the brain represents and selects target sounds in the presence of distractors.
Illustration of the challenge of auditory scene analysis. Image credit: EU Research SUM20/P14