Representational Mechanisms of Neural Location Encoding of Real-life Sounds in Normal and Hearing Impaired Listeners.

Periodic Reporting for period 1 - SOLOC (Representational Mechanisms of Neural Location Encoding of Real-life Sounds in Normal and Hearing Impaired Listeners.)

Reporting period: 2020-06-01 to 2021-05-31

Humans make continuous use of spatial hearing to make sense of the world around them, often without realizing it. For example, spatial hearing enables us to rapidly localize events in the environment: when you ride your bike to work, your sound localization skills warn you of a car approaching from behind. Sound localization is also important for communication in noisy situations: once you arrive at your office, spatial hearing helps you focus on the voice of a colleague amidst the noise generated by other colleagues' voices and whirring computers. Thus, spatial hearing is crucial for humans, and the inability to localize sounds hampers communication in everyday life. Yet it is still unknown how the human brain computes the location of real-life sounds in real-world listening situations, because prior research concentrated on the localization of simple sounds (for example, pure tones) in strictly controlled listening situations and experiments.

Importantly, knowledge of these brain mechanisms is needed to help hearing impaired (HI) listeners. HI listeners (over 34 million EU citizens and 5% of the worldwide population) experience great difficulties understanding speech in noisy environments. These problems persist even when HI listeners use an assistive hearing device such as a cochlear implant, and are partially caused by reduced spatial hearing, which makes it difficult to filter out the voice of interest to the listener based on its position. As a result of these persistent communication problems, HI listeners are more prone to social isolation, low academic achievement, and unemployment. Besides the high personal impact, this also imposes a high economic cost on society.

In this research project, I take a novel approach that brings together multiple scientific disciplines to address this problem: I will take a major step forward in understanding how the brain encodes sound location in real-life listening situations using cutting-edge neuroscience and artificial intelligence techniques, and utilize this knowledge to investigate clinical applications that improve spatial processing in the brain of cochlear implant users and thereby improve their speech-in-noise perception. The objectives of this Marie Sklodowska-Curie Action (MSCA) are to (1) develop a neurobiologically inspired deep neural network (DNN) model of location encoding of real-life sounds in the human brain; (2) validate deep neural networks as models of sound location encoding in the human brain using measurements of neural activity; and (3) employ the DNNs to investigate the neural representation of sound location in cochlear implant users and to develop signal processing strategies for cochlear implants that optimize subsequent spatial processing in the brain.
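The report does not specify the model architecture; as a rough illustration of objective (1), the following is a minimal sketch in Python (PyTorch) of a DNN that maps a two-channel (left/right ear) spectrogram to a sound-azimuth class. The input dimensions, layer sizes, and number of location classes are all illustrative assumptions, not the project's actual design.

```python
# Minimal sketch (not the project's actual model): a convolutional network
# that maps a binaural spectrogram (2 channels: left/right ear) to an
# azimuth class. Input shape and class count are illustrative assumptions.
import torch
import torch.nn as nn

N_AZIMUTHS = 36           # assumed: 10-degree resolution on the horizontal plane
N_FREQ, N_TIME = 64, 100  # assumed spectrogram dimensions

class LocationDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),  # 2 ears in, 32 maps out
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool over time/frequency
        )
        self.classifier = nn.Linear(64, N_AZIMUTHS)

    def forward(self, x):                # x: (batch, 2, N_FREQ, N_TIME)
        h = self.features(x).flatten(1)  # (batch, 64)
        return self.classifier(h)        # logits over azimuth classes

# Example forward pass on random input standing in for a binaural spectrogram.
model = LocationDNN()
logits = model(torch.randn(8, 2, N_FREQ, N_TIME))
print(logits.shape)  # torch.Size([8, 36])
```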
Work was divided into three work packages (WPs) focused on research and four additional WPs dedicated to management of the project (WP4), training and transfer of knowledge (WP5), dissemination and exploitation (WP6), and communication (WP7).

In WP1, I created deep neural network (DNN) models of neural location processing of real-life sounds in real-world listening environments. This WP yielded a journal publication and a repository of spatialized, real-life sounds that will be released to the public in the second phase of the Fellowship; the repository can be used by the wider scientific community for further research in neuroscience, audition, and computational modelling (a sketch of the spatialization step is given below).

WP2 evaluates the validity of the deep neural networks as a model of sound location processing in the human brain using measurements of neural activity. I conducted a study utilizing invasive intracranial recordings in neurosurgical patients to gain insight into single-source sound location processing in multi-source listening scenes, in order to better understand one of the fundamental problems of hearing impaired listeners: speech-in-noise perception. WP2 yielded a conference presentation; another conference presentation and a journal manuscript are currently underway.

To promote transfer of knowledge, the Fellow organized three workshops for academics and assisted in the supervision of early career researchers. Executing the projects of the Fellowship in a timely manner strengthened the management and administrative skills of the Fellow. Finally, during the first period, the Fellow earned a teaching qualification in the Netherlands (University Teaching Qualification) and was appointed to a part-time position at Columbia University to continue her successful collaboration with the outgoing host after the end of the Fellowship.
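As a rough illustration of how such a repository of spatialized sounds can be produced, the sketch below convolves a monaural signal with a left/right head-related impulse response (HRIR) pair, a standard spatialization technique. The random signals are placeholders for the project's real recordings and measured HRIRs.

```python
# Minimal sketch of spatializing a monaural recording with a head-related
# impulse response (HRIR) pair. Random noise stands in for a measured
# left/right HRIR pair (one pair per source direction) and the source sound.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
fs = 44_100
mono = rng.standard_normal(fs)         # 1 s placeholder "real-life" sound
hrir_left = rng.standard_normal(256)   # stand-in for a measured left-ear HRIR
hrir_right = rng.standard_normal(256)  # stand-in for a measured right-ear HRIR

# Convolve the source with the direction-specific impulse response per ear.
binaural = np.stack([
    fftconvolve(mono, hrir_left),
    fftconvolve(mono, hrir_right),
])
print(binaural.shape)  # (2, 44355): a two-channel, ear-specific signal
```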

The research outcomes were presented in a high-quality scientific paper in a computational neuroscience journal and at a scientific conference in the field of cognitive neuroscience and audition. Future research outcomes will also be presented in high-quality scientific papers in (1) computational neuroscience journals, (2) cognitive neuroscience journals and (3) audition and clinical audiology journals. Furthermore, the research materials and data sets collected over the course of the MSCA will propel numerous scientific studies forward and contribute to future publications of other research groups.
This MSCA progresses research beyond the current state of the art by investigating the computational and representational mechanisms underlying the transformation from real-life sounds in ecologically valid listening settings to a neural representation of sound location. The results are expected to spark a range of new questions and to open up further research avenues concentrating on the neural mechanisms of sound processing in real-life, ecologically valid listening situations. Further, this MSCA pushes the frontiers of the use of deep neural network models to comprehend neurophysiological processing mechanisms by developing a novel, neurobiologically inspired DNN model of location encoding of real-life sounds in the human auditory pathway. As the large majority of existing DNN-based research in neuroscience concentrates on neural processing in the visual system, the outcomes of the present MSCA highlight the potential of artificial intelligence techniques such as deep learning to increase our understanding of auditory neural processing.
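The report does not name the method used to relate DNN activations to neural measurements; representational similarity analysis (RSA) is shown below as one representative technique for this kind of model validation, with random placeholder data in place of real activations and recordings.

```python
# Sketch of one common way to compare a DNN to neural recordings:
# representational similarity analysis (RSA). Both data matrices are random
# placeholders (stimuli x features); the method shown is an assumption,
# not necessarily the project's analysis.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 50
dnn_layer = rng.standard_normal((n_stimuli, 64))  # DNN unit activations
neural = rng.standard_normal((n_stimuli, 120))    # e.g. electrode responses

# Representational dissimilarity matrices: pairwise distances across stimuli.
rdm_dnn = pdist(dnn_layer, metric="correlation")
rdm_neural = pdist(neural, metric="correlation")

# Second-order similarity: how well the DNN's stimulus geometry matches the
# neural one (Spearman rank correlation of the two RDMs).
rho, p = spearmanr(rdm_dnn, rdm_neural)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```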

This MSCA also boosts clinical developments in speech-in-noise understanding in cochlear implant users beyond the current state of the art by using a cutting-edge computational modelling approach to (1) investigate the distortions in the brain's representation of sound location in cochlear implant users, and (2) explore signal processing algorithms that maximize the availability of spatial cues to cochlear implant users for later neural processing (a sketch of the two main binaural cues follows below). More specifically, the MSCA will result in new insights into optimal signal processing for cochlear implant users, and these results can be utilized directly by industry professionals (e.g. cochlear implant manufacturers) to guide the development of cochlear implants. Thus, this Fellowship takes a uniquely multidisciplinary and intersectoral approach which connects experts and expertise across cognitive neuroscience, computational modelling, and clinical audiology, and brings together scientists, clinicians, and industry professionals to progress research and applications beyond the current state of the art.
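For context on what "availability of spatial cues" means, the sketch below computes the two main binaural cues a signal processing strategy could preserve: the interaural time difference (ITD) and the interaural level difference (ILD). The signals and the assumed delay are illustrative placeholders.

```python
# Sketch: estimating the two primary binaural spatial cues from a synthetic
# two-ear signal. ITD via the lag of the cross-correlation peak, ILD via the
# per-ear level ratio. Delay and attenuation values are arbitrary examples.
import numpy as np

fs = 44_100
rng = np.random.default_rng(2)
left = rng.standard_normal(fs)
shift = 20                          # ~0.45 ms assumed interaural delay
right = 0.5 * np.roll(left, shift)  # delayed, attenuated copy at the far ear

# Interaural time difference: lag of the peak of the cross-correlation.
lags = np.arange(-50, 51)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd_ms = 1000 * lags[int(np.argmax(xcorr))] / fs

# Interaural level difference in dB (RMS ratio between ears).
ild_db = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
print(f"ITD = {itd_ms:.2f} ms, ILD = {ild_db:.1f} dB")
```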
[Figure: schematic of the model-based approach cycle and its relation to the work packages (modelbasedapproach-schemacycle-withwps-v3.jpg)]