Periodic Reporting for period 2 - SOUNDSCENE (How does the brain organize sounds into auditory scenes?)
Reporting period: 2020-03-01 to 2021-08-31
While functional imaging studies in humans highlight a network of brain regions that support auditory scene analysis, little is known about the cellular and circuit-based mechanisms that operate within these networks. A critical barrier to understanding how the brain solves the challenge of scene analysis has been the failure to combine behavioural testing, which provides a crucial measure of how any given sound mixture is perceived, with methods to record and manipulate neuronal activity in animal models.

In SOUNDSCENE we combine complex behavioural tasks that mimic those human listeners face in everyday situations with methods to observe and manipulate neural activity. Our goal is to understand how a network of brain regions (auditory cortex, prefrontal cortex and hippocampus) enables scene analysis during active listening. We aim to determine how processing within each area, and the interactions between these areas, underpins auditory scene analysis. This knowledge will advance our understanding of fundamental brain function, and may contribute to biologically inspired machine listening devices and to improved signal processing methods for hearing aids and cochlear implants.
In addition to establishing these behavioural paradigms, we have made considerable progress in developing the technical expertise to record from multiple brain regions simultaneously, and in our ability to manipulate neural activity.