The multisensory dimension of memory, from single neuron to neural network. A multiscale electrophysiological approach to reveal the mechanism of face-voice association for person identity recognition

Periodic Reporting for period 1 - MIMe (The multisensory dimension of memory, from single neuron to neural network. A multiscale electrophysiological approach to reveal the mechanism of face-voice association for person identity recognition)

Reporting period: 2018-05-01 to 2020-04-30

Multisensory integration is a fundamental aptitude of our brain: it links the information provided by the senses. This process allows us to build (e.g. in memory) a multimodal object as a whole from its different sensory features, like a face and a voice that are bound to form a person's identity. Multisensory integration is also crucial for making decisions: our senses provide us with complementary information that we combine optimally when making a choice. For instance, walking in the woods at dusk, we may decide to change our path if, while hearing a faint howl, we start to distinguish a barely visible wolf emerging from the rising mist. Currently, two major unknowns reside in how our brain constructs multimodal associations and how it implements the interplay between multisensory integration, memory and decision making.
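The notion of combining complementary sensory information "optimally" can be made concrete with a minimal sketch of maximum-likelihood cue combination, a standard model in the multisensory literature (not a method specific to this project): each unisensory estimate is weighted by its reliability (inverse variance), and the fused estimate is more reliable than either sense alone. All numbers below are illustrative.

```python
import numpy as np

def combine_cues(est_a, var_a, est_v, var_v):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Each estimate is weighted by its reliability (inverse variance);
    the fused variance is lower than either unisensory variance.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    fused_est = w_a * est_a + w_v * est_v
    fused_var = 1 / (1 / var_a + 1 / var_v)
    return fused_est, fused_var

# Hypothetical auditory (noisy) and visual (reliable) estimates of one event:
# the fused estimate leans toward the more reliable visual cue.
est, var = combine_cues(est_a=2.0, var_a=4.0, est_v=1.0, var_v=1.0)
```

Here the auditory cue, being four times noisier, receives only a fifth of the weight, and the fused variance (0.8) is below the best unisensory variance (1.0).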

A recent stream of research in neuroscience proposed that the transfer of information within brain circuits relies on neuronal oscillations. This premise is based on the observation that ongoing oscillations reflect variations in neuronal activity. Thus, when two groups of neurons present an optimal alignment of their phases of excitability, they are more likely to exchange information. While it remains to be generalized, this framework provides a neuronal mechanism by which the brain may associate sensory inputs to create a multimodal construct. The first aim of our research project was to assess whether multimodal association relies on phase synchronization between distant neuronal populations.
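Phase synchronization between two neuronal populations is commonly quantified with the phase-locking value (PLV): extract each signal's instantaneous phase via the Hilbert transform and measure how consistent their phase difference is. The sketch below is a generic illustration on synthetic signals, not the project's analysis code.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two band-limited signals.

    PLV near 1 indicates a stable phase relation (a candidate channel for
    communication); PLV near 0 indicates no consistent phase alignment.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two synthetic 10 Hz signals with a fixed phase lag are strongly locked...
t = np.arange(0, 2, 1 / 500)                # 2 s at 500 Hz
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4)
# ...while a signal at a different frequency (17 Hz) is not.
z = np.sin(2 * np.pi * 17 * t)
```

`phase_locking_value(x, y)` is close to 1, while `phase_locking_value(x, z)` is close to 0 because the 7 Hz frequency offset makes the phase difference drift continuously.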

So far, perceptual decision making has been studied mainly in the context of one modality at a time and has been described as a chain of processing steps from perception to the realization of an action. First, a sensory signal is encoded in the corresponding sensory cortex. This inherently noisy sensory signal is then accumulated over time in associative regions to form a decision. Lastly, once a decision criterion is reached, an appropriate motor response is triggered. Formally, then, perceptual decision making can be divided into sensory encoding and decision formation stages. Given these two stages, the second aim of our project was to evaluate whether multisensory integration takes place during sensory encoding only (i.e. before a supramodal decision formation step), during decision formation (which would then be fed by information originating from the different senses), or during both sensory encoding and decision formation.
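The accumulation-to-bound account above is commonly formalized as a drift-diffusion model: noisy evidence accumulates until it crosses a decision bound, and a non-decision time absorbs sensory encoding and motor execution. The following is a minimal illustrative simulation of that generic model (parameter values are arbitrary, not fitted values from this project).

```python
import numpy as np

def simulate_ddm(drift, bound, noise_sd=1.0, dt=0.001, ndt=0.3,
                 max_t=3.0, rng=None):
    """Simulate one trial of a drift-diffusion decision.

    Noisy evidence (drift + Gaussian noise) is accumulated until it crosses
    +bound (choice 1) or -bound (choice 0); `ndt` is the non-decision time
    standing in for sensory encoding and motor execution.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + rng.normal(0, noise_sd * np.sqrt(dt))
        t += dt
    choice = 1 if x >= bound else 0
    return choice, t + ndt          # response time = decision time + ndt

rng = np.random.default_rng(0)
trials = [simulate_ddm(drift=1.5, bound=1.0, rng=rng) for _ in range(200)]
accuracy = np.mean([c for c, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
```

A faster response can arise from stronger drift (better evidence) or a shorter non-decision time, which is why model parameters can dissociate decision formation from sensory encoding.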

The aims of the project were twofold: (i) to reveal the neuronal mechanism underlying multimodal association within brain networks, and (ii) to determine whether multisensory integration is accomplished before and/or during perceptual decision making (i.e. during sensory encoding and decision formation).
To achieve these objectives, we conducted two streams of investigation: at the intracranial scale, to access the neuronal mechanism of multisensory integration, and at the whole-brain level, to gain a global understanding of the interplay between multisensory integration, memory and decision making.

Our first approach relied on the rare opportunity to record intracranial neuronal oscillatory activity in epileptic patients while they were undergoing a strictly independent clinical procedure. During the experiment, participants were exposed to illusory contours accompanied or not by a sound. Illusory contours are visual illusions that evoke the perception of a shape without the presence of edges (e.g. the Kanizsa triangle). Our analysis focused on oscillatory activity to investigate the modulation of functional connectivity between brain regions. To precisely localize the intracranial electrodes in the brain, we developed a semi-automated analysis pipeline utilizing multimodal imaging (i.e. CT scan and MRI). We report two main findings. First, the phase of ongoing oscillations in auditory cortex is strongly re-aligned by illusory contours compared to non-illusory contours. That is, the auditory region was informed about the visual binding of illusory contours into a shape. Second, when the illusory contours were associated with a sound, ongoing oscillations in auditory and visual regions were strongly synchronized. This communication between distant neuronal populations represents the encoding of a multimodal object: the sound was associated with the shape formed by the illusory contours.
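Phase re-alignment of ongoing oscillations by a stimulus is typically quantified with inter-trial phase coherence (ITC): at each time point, phases are averaged across trials, and a consistent stimulus-driven phase reset yields values near 1. The sketch below illustrates the measure on synthetic trials (it is not the project's pipeline, which operated on patient recordings).

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(trials):
    """Inter-trial phase coherence over a (n_trials, n_times) array.

    ITC near 1 at a time point means the stimulus re-aligned the phase of
    ongoing oscillations consistently across trials; ITC near 0 means the
    phase stayed random from trial to trial.
    """
    phases = np.angle(hilbert(trials, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(1)
t = np.arange(0, 1, 1 / 250)                # 1 s at 250 Hz
# Trials whose 8 Hz phase is reset by the stimulus, plus noise...
aligned = np.array([np.sin(2 * np.pi * 8 * t)
                    + 0.3 * rng.standard_normal(t.size)
                    for _ in range(50)])
# ...versus trials with a random phase on every trial.
random_phase = np.array([np.sin(2 * np.pi * 8 * t + rng.uniform(0, 2 * np.pi))
                         + 0.3 * rng.standard_normal(t.size)
                         for _ in range(50)])
```

Averaged over time, the ITC of the phase-reset trials is high while that of the random-phase trials hovers near the chance level of roughly 1/sqrt(n_trials).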

Our second approach consisted of a study in healthy participants using EEG. While their brain activity was recorded, the participants were exposed to dynamic sequences of stimuli. Each sequence consisted of a cacophony of audio-visual noise in which they had to detect, or to categorize, an unpredictable target cue presented either in the auditory domain, in the visual domain, or in both. To investigate the behavioral benefit of multisensory integration, we utilized computational models of decision making. The idea of a computational model is to map different cognitive processes onto different psychologically meaningful parameters. This approach revealed that two key parameters explain the faster multisensory responses: one corresponding to the sensory encoding process, the other to the process of decision formation. Next, to analyze the EEG signal, we utilized a machine learning method. That is, we trained an algorithm to decode experimental conditions based on the recorded brain activity. First, using the decoding from unisensory conditions, we mapped cognitive processes over time. Then we performed a cross-condition decoding, generalized in time, to decode the multisensory condition. The results demonstrated that multisensory integration accelerated brain dynamics both during the encoding of sensory information and when the decision was formed before the participant's response. In both tasks, the results from the electrophysiological data were consistent with our behavioral modeling.
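The core of time-generalized cross-condition decoding can be sketched in a few lines: a classifier is trained at each training time point (here on one condition) and tested at every test time point (on another condition), yielding a training-time by testing-time accuracy matrix. The code below is a simplified illustration on synthetic data with a generic scikit-learn classifier, not the project's released tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def temporal_generalization(X_train, y_train, X_test, y_test):
    """Time-generalized cross-condition decoding.

    X arrays have shape (trials, channels, times). A classifier fitted at
    each training time point is scored at every test time point; training
    and test sets may come from different experimental conditions, which
    is the cross-condition part.
    """
    n_train_t, n_test_t = X_train.shape[2], X_test.shape[2]
    scores = np.zeros((n_train_t, n_test_t))
    for ti in range(n_train_t):
        clf = LogisticRegression().fit(X_train[:, :, ti], y_train)
        for tj in range(n_test_t):
            scores[ti, tj] = clf.score(X_test[:, :, tj], y_test)
    return scores

# Synthetic example: two conditions carrying the same class signal
# (channel 0) so decoders transfer across conditions and across time.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 40)
X_a = rng.standard_normal((80, 8, 10))      # "training" condition
X_b = rng.standard_normal((80, 8, 10))      # "test" condition
X_a[:, 0, :] += y[:, None] * 3.0
X_b[:, 0, :] += y[:, None] * 3.0
scores = temporal_generalization(X_a, y, X_b, y)
```

Because the synthetic class signal is sustained, the whole matrix is above chance; in real data, the shape of the above-chance region (diagonal versus square) is what reveals how quickly processing stages unfold.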

In summary, our research indicated that multisensory integration is pervasive in the human brain and contributes to different processes along the cortical hierarchy. First, we showed that to build a multimodal association, sensory regions exchange information through the synchronization of their oscillatory activity. Second, we showed that after sensory encoding, multisensory integration engages associative regions to mediate the formation of a decision, in connection with representations stored in memory.
Indeed, while the interplay between multisensory research and the fields of memory and perceptual decision making was recognized, the basis of their reciprocal influences remained largely unknown. This research thus paves the way to bridging these cognitive processes.
The outcomes of this project are also methodological, with two tools developed within the framework of our research. First, based on machine learning, we implemented a time-generalized cross-condition decoding approach for electrophysiological signals. Our comparison with classical analyses revealed a clear advantage of this innovation for analyzing brain dynamics, both in terms of sensitivity and in terms of timing bias. Second, utilizing multimodal imaging, we developed a semi-automated pipeline to localize intracranial electrodes in the brain. These innovations provide the research community with systematic approaches that are freely available in open access, and thus support reproducibility and open science, cornerstones of scientific progress.
Mechanisms of multisensory integration: from encoding to the making of a decision