Deciphering deep architectures underlying structured perception in auditory networks

Periodic Reporting for period 2 - DEEPEN (Deciphering deep architectures underlying structured perception in auditory networks)

Reporting period: 2020-03-01 to 2021-08-31

Despite continuous progress through decades of research, the computational mechanisms by which neural networks of the brain generate sensory perception are still incompletely understood. As artificial intelligence (AI) methods spread through our society, better knowledge of the neural algorithms the brain uses to generate perception is crucial both to evaluate the similarity between AI systems and human and animal brains, and eventually to improve AI systems with computational or engineering features that are efficient in the brain.

The difficulty of establishing computational principles for sensory perception is partly due to limitations in the exploration of brain activity across sensory systems, in particular in the sampling of the tremendous number of neurons that they include. The idea of the DEEPEN project is to acquire datasets of neuronal activity matching the scale of the networks underlying auditory perception in the mouse, and to use these data to construct precise data-driven models of auditory processing throughout the system, which can be compared to artificial intelligence networks built for similar purposes. In addition, the DEEPEN project proposes to induce perturbations in biological networks to further refine the obtained models and to test the causal role of particular brain networks in perception.
Since the beginning of the project we have established methods to perform large-scale sampling of auditory system activity and to process the resulting large datasets. We have then acquired complete datasets for the three main auditory networks in the mammalian brain: the auditory cortex, thalamus, and colliculus. In addition, we have obtained, through a collaboration, simulations of the main input to the auditory system: the cochlea. These four datasets have allowed us to identify systematically some of the important transformations of auditory information across these stages of the auditory system. These transformations were compared to transformations operated in deep neural networks (currently the most widely used AI tool) trained to classify sounds. This comparison revealed some striking similarities that were, to some extent, expected, but also some clear differences on which we now concentrate our efforts to understand their computational purposes.
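Comparisons of this kind between neural recordings and deep-network activations are typically made with representational similarity measures. As a purely illustrative sketch (the report does not specify the metric used here; the matrix names and sizes below are hypothetical), linear centered kernel alignment (CKA) scores the similarity between two response matrices recorded for the same set of stimuli, regardless of how many neurons or units each representation contains:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices.

    X: (n_stimuli, n_neurons_a) responses from one representation,
    Y: (n_stimuli, n_neurons_b) responses from another, for the SAME stimuli.
    Returns a similarity in [0, 1]; 1 means the representations are
    identical up to rotation and isotropic scaling.
    """
    # Center each feature (neuron/unit) across stimuli
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# Hypothetical example: 200 sounds, 80 recorded neurons vs. 120 network units
rng = np.random.default_rng(0)
cortex_responses = rng.normal(size=(200, 80))
dnn_layer_activations = rng.normal(size=(200, 120))
score = linear_cka(cortex_responses, dnn_layer_activations)
```

Computing such a score between each recorded auditory stage and each layer of a sound-classification network yields a stage-by-layer similarity matrix, one common way of locating where biological and artificial processing hierarchies align or diverge.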
In addition, we have established perturbation methods based on optogenetic tools, and have shown that, in the mouse, the regions of the auditory system involved in perception depend on the difficulty of the perceptual task through which perception is measured. We have also used these perturbation methods to demonstrate that targeted activation of the final stage of the auditory system can influence auditory perception directly.
The DEEPEN project has already provided several results beyond the state of the art. The first one is a dataset whose size and versatility far exceed anything previously achieved. Second, we have used this novel dataset to identify important analogies and dissimilarities between biological and artificial networks for sound processing. These analogies and dissimilarities can be refined, and we will do so during the next stage of the project. Third, we have shown that it is possible to change auditory perception by targeted activation of the last stage of the auditory system. This will be used in the next stage of the project to better relate computations observed in the auditory system to perception. It also opens interesting avenues towards a new type of biomedical implant for the hearing impaired, which we will explore in another project.