
Deciphering deep architectures underlying structured perception in auditory networks

Periodic Reporting for period 4 - DEEPEN (Deciphering deep architectures underlying structured perception in auditory networks)

Reporting period: 2023-03-01 to 2024-02-29

Despite continuous progress through decades of research, the computational mechanisms by which neural networks of the brain generate sensory perception remain incompletely understood. As artificial intelligence (AI) methods spread through our society, better knowledge of the neural algorithms the brain uses to generate perception is crucial, both to evaluate the similarity between AI systems and the brains of humans and animals, and eventually to improve AI systems with computational or engineering features that are efficient in the brain.

The difficulty of establishing computational principles for sensory perception is partly due to the limited data available about brain activity across different parts of sensory systems. The idea of the DEEPEN project was to acquire datasets of neuronal activity matching the scale of the networks underlying auditory perception in the mouse, and to use these data to construct precise data-driven models of auditory processing throughout the system. These models can then be compared to artificial intelligence networks built for sound processing. In addition, the DEEPEN project aimed to perturb the biological network, both to further refine the obtained models and to test the causal role of specific brain networks in perception.

The project has successfully acquired an unmatched dataset of neural activity from all stages of the auditory system during awake perception. It has demonstrated the importance of performing such large-scale recordings in the awake state by showing that anesthesia drastically modifies sound representations. The project has then demonstrated a strong similarity between the transformations of sound representations in the mouse auditory system and the transformations observed in deep neural networks for sound processing, and it has shown that simple mathematical operations explain these transformations. An important output of the project is the demonstration that the transformations arising in the last stage of the auditory system have a causal impact on the association of sounds with behavioral decisions. Beyond their fundamental implications, these discoveries open new avenues for designing devices that interface with the brain to channel auditory information, for example as a new high-resolution solution to severe hearing loss.
1/ We have established new methods to perform large-scale sampling of auditory system activity and to process the resulting large datasets. We have then acquired complete datasets for the main auditory networks in the mammalian brain: the auditory cortex, the thalamus, the colliculus, the superior olivary complex and the cochlear nucleus. In addition, through a collaboration with J. Bourien of the University of Montpellier, we have obtained detailed simulations of the main input to the auditory system: the cochlea.
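
As an illustration of the kind of preprocessing such recordings require, the sketch below bins spike times into trial-by-time count matrices and organizes them by brain region. It is a minimal, hypothetical example: the report does not detail the project's actual acquisition and processing pipeline, and the `bin_spikes` helper and toy spike data are our own assumptions.

```python
import numpy as np

def bin_spikes(spike_times, trial_starts, window=0.5, bin_size=0.01):
    """Bin spike times (in seconds) into a (trials x time bins) count matrix."""
    edges = np.linspace(0.0, window, int(round(window / bin_size)) + 1)
    counts = np.empty((len(trial_starts), len(edges) - 1))
    for i, t0 in enumerate(trial_starts):
        rel = spike_times[(spike_times >= t0) & (spike_times < t0 + window)] - t0
        counts[i], _ = np.histogram(rel, bins=edges)
    return counts

# Toy spike trains standing in for the recordings; region names are from the report.
rng = np.random.default_rng(0)
regions = ["cochlear nucleus", "superior olivary complex",
           "colliculus", "thalamus", "auditory cortex"]
trial_starts = np.arange(0.0, 90.0, 1.0)
dataset = {r: bin_spikes(np.sort(rng.uniform(0, 100, 5000)), trial_starts)
           for r in regions}
print({r: m.shape for r, m in dataset.items()})  # (trials, time bins) per region
```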

2/ These datasets have allowed us to systematically identify important transformations of auditory information across the stages of the auditory system that had not been systematically described before. First, we have observed that the temporal information in sounds is transformed into spatial information, without loss of temporal resolution, over the scale of a few hundred milliseconds. This transformation had been hypothesized in specific models of auditory processing but never clearly demonstrated through a systematic comparison of the emerging properties of the auditory code across regions of the auditory system. This transformation goes together with a complexification of sound offset responses, which carry more and more information about recent acoustic cues. Another novel computation that we have discovered is the transformation of sound amplitude information into a spatial code, with neurons that are specific to both sound identity and sound loudness. This transformation emerges early in the auditory system and is refined up to the cortex. We have also observed that neurons become more and more specific to feature combinations from the beginning to the end of the auditory system: in the cortex, responses to multicomponent sounds are no longer the sum of the responses to the single components, whereas the early representations are more linear.
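
One simple way to quantify such a departure from linearity is to compare the measured response to a two-component sound against the sum of the responses to each component presented alone. The sketch below implements an additivity index of this kind; the metric, the variable names and the synthetic responses are illustrative assumptions, not the project's actual analysis.

```python
import numpy as np

def additivity_index(r_ab, r_a, r_b):
    """Correlation between the response to a compound sound (r_ab) and the
    linear prediction r_a + r_b; values near 1 indicate additive responses."""
    pred = r_a + r_b
    return np.corrcoef(r_ab.ravel(), pred.ravel())[0, 1]

rng = np.random.default_rng(1)
# Synthetic single-component responses, neurons x time bins.
r_a, r_b = rng.poisson(5, (2, 200, 50)).astype(float)
# Early station: nearly additive; cortex: saturating, strongly nonlinear mixing.
early = 0.9 * (r_a + r_b) + rng.normal(0, 1, r_a.shape)
cortex = 12.0 * np.tanh((r_a + r_b) / 6.0) + rng.normal(0, 1, r_a.shape)
print("early :", round(additivity_index(early, r_a, r_b), 2))
print("cortex:", round(additivity_index(cortex, r_a, r_b), 2))
```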

3/ We compared these transformations to the transformations performed in deep neural networks used to classify sounds. We observed that the classification of multiple sound feature categories generates a structure in deep network representations that resembles the auditory system in many respects, including all the transformations we identified in the auditory system. This indicates that the auditory system transforms sound representations to perform multipurpose categorization.
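
A standard tool for this kind of brain-to-network comparison is a representational similarity measure such as linear centered kernel alignment (CKA). The sketch below shows how a population response matrix could be matched against candidate deep-network layers; the report does not specify which similarity measure the project used, so CKA here is a stand-in, and all data are synthetic.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two (stimuli x units)
    response matrices; 1 means identical representational geometry."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(2)
stimuli = rng.normal(size=(300, 40))                 # 300 sounds, 40 latent features
brain_region = np.tanh(stimuli @ rng.normal(size=(40, 120)))       # mock population
dnn_layers = [np.maximum(stimuli @ rng.normal(size=(40, 256)), 0)  # mock ReLU layers
              for _ in range(4)]
scores = [linear_cka(brain_region, layer) for layer in dnn_layers]
best = int(np.argmax(scores))
print(f"best-matching layer: {best}, CKA = {scores[best]:.2f}")
```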

4/ We have shown that the transformations observed across the auditory system can to a large extent be explained by linear-nonlinear operations applied between stages of the auditory system, with a stronger nonlinear step between the thalamus and the cortex. This allowed us to model the transformations of the auditory system step by step.
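
A minimal sketch of such stage-by-stage modelling, assuming a ridge-regularized linear map followed by a fixed rectifying nonlinearity, is given below. The project's actual models are certainly richer; `fit_ln_stage` and the toy thalamus/cortex data are illustrative only.

```python
import numpy as np

def fit_ln_stage(X, Y, lam=1e-2):
    """Fit a ridge-regularized linear map from stage X (trials x input units)
    to stage Y (trials x output units), followed by a fixed rectification."""
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    def predict(X_new):
        return np.maximum(X_new @ W, 0.0)  # pointwise output nonlinearity
    return W, predict

rng = np.random.default_rng(3)
thalamus = rng.normal(size=(500, 80))                          # mock thalamic responses
cortex = np.maximum(thalamus @ rng.normal(size=(80, 150)), 0)  # mock cortical target
W, predict = fit_ln_stage(thalamus, cortex)
resid = cortex - predict(thalamus)
print("relative error:", round(np.linalg.norm(resid) / np.linalg.norm(cortex), 3))
```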

5/ We have established perturbation methods based on optogenetic tools and have shown that, in the mouse, the regions of the auditory system that are involved in perception depend on the difficulty of the perceptual task through which perception is measured. We have also used these perturbation methods to demonstrate that targeted activation of the final stage of the auditory system can directly influence auditory perception.
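
To make the behavioral readout concrete, the sketch below fits a logistic psychometric curve to simulated choice data with and without a hypothetical optogenetic perturbation and compares the slopes. The fitting routine and the data are assumptions for illustration, not the project's analysis.

```python
import numpy as np

def fit_psychometric(x, correct, lr=0.1, n_iter=2000):
    """Fit p(correct) = 1 / (1 + exp(-(a + b*x))) by gradient descent on the
    logistic loss; returns intercept a and slope b."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a -= lr * np.mean(p - correct)
        b -= lr * np.mean((p - correct) * x)
    return a, b

rng = np.random.default_rng(4)
evidence = rng.uniform(-2, 2, 400)  # signed stimulus evidence (easy at the extremes)
p_ctrl = 1 / (1 + np.exp(-(0.2 + 2.0 * evidence)))   # control behavior
p_opto = 1 / (1 + np.exp(-(0.2 + 0.8 * evidence)))   # flattened by perturbation
ctrl = (rng.uniform(size=400) < p_ctrl).astype(float)
opto = (rng.uniform(size=400) < p_opto).astype(float)
print("control slope     :", round(fit_psychometric(evidence, ctrl)[1], 2))
print("perturbation slope:", round(fit_psychometric(evidence, opto)[1], 2))
```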

6/ Following these studies, we have also established optogenetic tools to design spatio-temporal activity patterns in the auditory cortex. These tools have allowed us to causally establish the role of some of the transformations performed by the auditory system, in particular the emergence of spatial codes for temporal cues. We have observed that the temporal information that exits the auditory system cannot be associated with discriminative behavioral decisions, which provides a fundamental reason why it must be transformed into spatial information. We are further developing these tools in two follow-up projects: one for the design of a cortical implant for auditory rehabilitation (EIC project Hearlight, 2021-25), and one for high-resolution stimulation within neural networks.
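
The following toy sketch illustrates the underlying idea of a spatial code for a temporal cue: a click rate (a temporal feature) is mapped onto the sustained activation of one of several cortical stimulation sites. The site layout, the rate bins and all parameters are hypothetical, not taken from the project's stimulation protocols.

```python
import numpy as np

def rate_to_spatial_pattern(click_rate_hz, n_sites=8, duration=0.5, dt=0.001,
                            rate_bins=(2, 5, 10, 20, 40, 60, 80)):
    """Map a click rate onto sustained activation of one cortical site,
    returning a (sites x time) stimulation pattern."""
    site = min(int(np.searchsorted(rate_bins, click_rate_hz)), n_sites - 1)
    pattern = np.zeros((n_sites, int(duration / dt)))
    pattern[site] = 1.0  # hold the selected site active for the whole stimulus
    return pattern

for rate in (4, 18, 90):  # slow, medium and fast click trains
    p = rate_to_spatial_pattern(rate)
    print(f"{rate:>3} Hz -> stimulation site {int(p.sum(axis=1).argmax())}")
```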

7/ We have shown that the transformation of auditory information within the auditory system is highly sensitive to anesthesia, during which representations are strongly disrupted. During sleep, on the contrary, the transformations are preserved.
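
A simple way to express this contrast is to correlate stimulus-evoked population patterns across brain states. The sketch below compares a mock awake condition against a mildly perturbed (sleep-like) and a strongly disrupted (anesthesia-like) copy; the measure and the synthetic data are illustrative assumptions.

```python
import numpy as np

def state_similarity(resp_a, resp_b):
    """Mean correlation between matching stimulus-evoked population patterns
    recorded in two brain states."""
    sims = [np.corrcoef(a, b)[0, 1] for a, b in zip(resp_a, resp_b)]
    return float(np.mean(sims))

rng = np.random.default_rng(5)
awake = rng.normal(size=(20, 150))                          # 20 stimuli x 150 neurons
sleep = awake + rng.normal(0, 0.3, awake.shape)             # mildly perturbed copy
anesthesia = 0.2 * awake + rng.normal(0, 1.0, awake.shape)  # strongly disrupted copy
print("awake vs sleep     :", round(state_similarity(awake, sleep), 2))
print("awake vs anesthesia:", round(state_similarity(awake, anesthesia), 2))
```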

These results have been published in scientific journals or on preprint servers, or their publication is in preparation.
Our results unveil novel transformations in auditory processing: the conversion of temporal information into spatial information without loss, the emergence of complex sound response patterns, and the enhancement of neuronal specificity across auditory stages.
Comparisons with deep neural networks indicate that the auditory system performs multipurpose sound categorization based on linear-nonlinear operations.

Perturbation methods, including optogenetics, reveal region-specific involvement in auditory perception and a direct influence of the final auditory stage on perception.
Optogenetic tools confirm the emergence of spatial codes for temporal cues and offer potential applications such as auditory rehabilitation implants.
[Figure: 3D reconstruction of the neurons imaged in the auditory cortex]