CORDIS - EU research results

Understanding the neural mechanisms of multisensory perception based on computational principles

Periodic Reporting for period 4 - DynaSens (Understanding the neural mechanisms of multisensory perception based on computational principles)

Reporting period: 2019-04-01 to 2020-10-31

Our brain has access to information from different sensory modalities, such as sight, hearing, or touch. Depending on our goal and environment, we can selectively combine the information from some senses and choose to neglect that from others. For example, when trying to cross a street on a foggy morning, we may emphasize what we hear over what we see, but still use the collective information to decide when to move. While our brain is adept at combining multisensory information almost instantly, we still have a limited understanding of how it implements this feat. Knowledge of how the brain implements the relevant sensory and cognitive processes is critical for understanding how conscious perception emerges in general, but also for understanding known perceptual deficits in the elderly and in neurocognitive disorders, where it has been speculated that some deficits emerge specifically in the process of combining multisensory information.
This project aims to advance our basic understanding of the neurophysiological mechanisms underlying multisensory perception by addressing the following questions: What are the neural processes transforming and combining multiple sensory inputs to guide behaviour? How does the brain decide which information to combine, and which neurophysiological processes underpin this computation? Which of the involved processes are affected in the elderly or in individuals with disorders such as autism, and contribute to potential behavioural deficits? And in what regard does the organization of multisensory perception differ between stimuli of distinct kinds, such as spatial or temporal information, or speech? We address these questions using neuroimaging to record brain activity in humans. The overall goal is to obtain a more principled and comprehensive understanding of how the brain handles multiple sensory inputs, and to pave the way towards a framework for addressing pressing problems associated with multisensory perceptual deficits in cognitive disorders and across the life span.
Our research focused on the neurophysiological and computational mechanisms underlying the selection and combination of acoustic and visual information for human behaviour in young and older participants. To link behaviour and brain activity, volunteer participants performed perceptual tasks, such as localizing an object in space or time, or performing a comprehension task on a speech stimulus; these tasks were structured to allow dissociating the individual processing steps involved in multisensory perception. Participants' brain activity was measured using neuroimaging methods such as electroencephalography or magnetoencephalography, which capture the electromagnetic correlates of local brain activity at high temporal resolution. As a key aspect of this agenda, we developed statistical tools to link brain activity with specific sensory computations or behaviour, capitalizing on ideas from information theory and multivariate statistics.
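To give a rough illustration of this information-theoretic linking approach, the minimal sketch below estimates how much information a simulated single-trial neural feature carries about a binary stimulus condition. The data, variable names, and binning choices are purely illustrative assumptions and do not correspond to the project's actual analysis pipeline.

import numpy as np

def mutual_information(x, y, n_bins=8):
    # Histogram-based estimate of the mutual information I(X;Y) in bits.
    # x: continuous single-trial neural feature (e.g. an EEG amplitude); y: discrete stimulus labels.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    x_binned = np.digitize(x, edges)
    joint, _, _ = np.histogram2d(x_binned, y, bins=[n_bins, len(np.unique(y))])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# Toy data: amplitudes that weakly reflect the stimulus condition (0 = auditory-only, 1 = audiovisual)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
amplitude = 0.8 * labels + rng.normal(size=500)
print(f"I(amplitude; condition) = {mutual_information(amplitude, labels):.3f} bits")

In a real analysis such estimates would typically be computed per sensor and time point and tested against permutation-based null distributions, but the core quantity, the mutual information between a neural signal and a stimulus or behavioural variable, is the one shown here.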
Our results show that multisensory perception generally involves a number of computational steps and brain regions, and depends on the momentarily available sensory evidence, on past experience, and on individual factors. Often, the information from two senses is in conflict. For example, when watching a movie using headphones, the seen location of the actor and the apparent location of her voice are distinct. Our results show that in such a scenario the brain engages processes related to the inference about whether the two sensory cues are causally related or not, and that these processes emerge in prefrontal cortex. To facilitate this inference, the brain first establishes neural representations of the auditory and visual information and creates a representation of the merged multisensory information. These uni- and multisensory representations emerge in sensory cortices and parieto-temporal regions, respectively. Only once these have been established does the brain decide whether to combine the apparently disparate multisensory information. This hierarchical organization of multisensory neural processing is similarly engaged across qualitatively distinct types of sensory information, such as spatial information, motion, or speech. However, our work also shows that the specific brain regions fusing the information at the intermediate step differ with the type of sensory information received.
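A textbook Bayesian causal-inference model of audio-visual localization captures the computation described above: infer whether two cues share a common cause, and weight the fused and segregated estimates accordingly. The sketch below is a minimal illustration of that principle; the noise levels, prior width, and prior probability of a common cause are arbitrary assumed values, not parameters estimated in this project.

import numpy as np

# Illustrative parameters (assumed values for this sketch, not fitted to data)
sigma_a, sigma_v, sigma_p = 4.0, 1.5, 10.0   # auditory noise, visual noise, width of the spatial prior (deg)
p_common = 0.5                                # prior probability that both cues share one cause

def causal_inference_estimate(x_a, x_v):
    # Returns the auditory location estimate and the posterior probability of a common cause,
    # given noisy auditory (x_a) and visual (x_v) measurements in degrees.
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the two measurements under a single shared source (C = 1)
    var1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / var1) / (2 * np.pi * np.sqrt(var1))

    # Likelihood under two independent sources (C = 2)
    like_c2 = (np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
               * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))

    # Posterior probability that the cues share a common cause
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal location estimates under each causal structure (prior centred at 0 deg)
    fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)   # reliability-weighted fusion
    audio_alone = (x_a / va) / (1 / va + 1 / vp)                 # auditory cue treated separately

    # Model averaging: weight the two estimates by the causal posterior
    return post_c1 * fused + (1 - post_c1) * audio_alone, post_c1

# Small disparity -> cues are mostly fused; large disparity -> mostly segregated
for x_a, x_v in [(5.0, 4.0), (5.0, -20.0)]:
    estimate, pc1 = causal_inference_estimate(x_a, x_v)
    print(f"x_a={x_a:+.1f}, x_v={x_v:+.1f} -> p(common)={pc1:.2f}, auditory estimate={estimate:+.1f} deg")

With a small audio-visual disparity the model mostly fuses the two cues; with a large disparity the causal posterior drops and the auditory estimate is governed largely by the auditory cue alone.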
Our work also revealed that the manner in which two sensory cues are exploited for behaviour is not fixed across individuals and is not tied only to the current sensory evidence. Rather, these processes are shaped by past experience over multiple time scales, by individual biases, and by spontaneous fluctuations in brain state. For example, our ability to discern two temporally proximal multisensory stimuli is biased by idiosyncratic and stable individual tendencies, and by patterns of brain activity just prior to the stimulus. Similarly, our work shows that inference about the sources of multisensory information in prefrontal cortex is guided not just by the current stimuli but also by the previous composition of the sensory environment, and that memory traces of previously received multisensory information persist over several seconds in multisensory regions of the parietal lobe.
Based on experiments involving distinct types of sensory information, our work suggests that the same core principles support multisensory perception regardless of the nature of the stimulus. However, speech engages dedicated brain regions to fuse acoustic and visual information about word identity. For example, in a noisy environment, seeing the speaker greatly facilitates speech comprehension, and the same prefrontal regions involved in general multisensory causal inference contribute to this benefit. Yet speech also engages specific premotor and parietal regions not involved in the fusion of other types of multisensory evidence.
These findings were published in 20 peer-reviewed publications and 15 preprints, and were presented at international conferences, workshops, and invited talks at universities. The scientific topic was presented at a science exhibition, and team members organized 3 symposia at international conferences. The overall research topic was highlighted in two newspaper articles in the host city of Bielefeld.
By comparing the mechanisms underlying the flexible use of multisensory information across different tasks, we were able to disentangle common mechanistic patterns as well as task- or stimulus-specific processes, an issue often neglected. Our results show that accounting for the flexibility of human observers, who combine sensory information when it is meaningful and of benefit and refrain from combining sensory evidence that does not seem to belong together, is paramount for understanding the underlying brain mechanisms. By linking the neural processes implementing sensory perception with computational models of multisensory perception, we have been able to directly link local brain activity with sensory-specific and mechanistically interpretable computations. This allowed us to dissociate the processes that actually combine information from those that control this integration based on task demands or contextual evidence. Untangling the cascade of uni- and multisensory processes underlying perception in even finer detail will be required to pinpoint the processes that contribute to perceptual deficits with age or in disorders.
Figure: Hierarchy of multisensory integration