Periodic Reporting for period 4 - DynaSens (Understanding the neural mechanisms of multisensory perception based on computational principles)
Reporting period: 2019-04-01 to 2020-10-31
This project aims to advance our basic understanding of the neurophysiological mechanisms underlying multisensory perception by addressing the following questions: What are the neural processes transforming and combining multiple sensory inputs to guide behaviour? How does the brain decide which information to combine, and which neurophysiological processes underpin this computation? Which of the involved processes are affected in the elderly or in individuals with disorders such as autism, and contribute to potential behavioural deficits? And in what regard does the organization of multisensory perception differ between stimuli of distinct kinds, such as spatial or temporal information, or speech? We address these questions using neuroimaging to record brain activity in humans. The overall goal is to obtain a more principled and comprehensive understanding of how the brain handles multiple sensory inputs, and to pave the way for a framework for addressing pressing problems associated with multisensory perceptual deficits in cognitive disorders and across the life span.
Our results show that multisensory perception generally involves a number of computational steps and brain regions, and depends on the momentarily available sensory evidence, past experience, and individual factors. Often, the information from two senses is mismatched. For example, when watching a movie using headphones, the spatial locations of an actor and her voice are distinct. Our results show that in such a scenario the brain engages processes related to inferring whether the two sensory cues are causally related or not, and these processes emerge in prefrontal cortex. To support this inference, the brain first establishes neural representations of the auditory and visual information and creates a representation of the merged multisensory information. These uni- and multisensory representations emerge in sensory cortices and parieto-temporal regions, respectively. Only once these have been established does the brain decide whether to combine the apparently disparate multisensory information. This hierarchical organization of multisensory neural processing is similarly engaged across qualitatively distinct types of sensory information, such as spatial information, motion, or speech. However, our work also shows that the specific brain regions fusing the information at the intermediate step differ with the type of sensory information received.
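The causal-inference computation described above can be illustrated with a minimal sketch, following the standard Bayesian causal-inference model for two Gaussian cues. This is an illustrative assumption on our part, not the project's actual model: the function name, parameters (cue values, noise levels, spatial prior, prior probability of a common cause) and the model-averaging read-out are all hypothetical choices for the example.

```python
import math

def causal_inference_estimate(x_a, x_v, sigma_a, sigma_v,
                              sigma_p=10.0, p_common=0.5):
    """Illustrative Bayesian causal inference for two noisy cues.

    x_a, x_v      observed auditory and visual cue values (e.g. degrees azimuth)
    sigma_a/v     sensory noise (standard deviation) of each cue
    sigma_p       std dev of a zero-mean prior over source locations
    p_common      prior probability that both cues share one cause
    Returns (posterior probability of a common cause, location estimate).
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the cue pair under a common cause (C=1):
    # both cues arise from one source s ~ N(0, sigma_p^2), integrated out.
    var1 = va * vv + va * vp + vv * vp
    like_c1 = math.exp(-((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                       / (2 * var1)) / (2 * math.pi * math.sqrt(var1))

    # Likelihood under independent causes (C=2): each cue has its own source.
    var_a, var_v = va + vp, vv + vp
    like_c2 = (math.exp(-x_a**2 / (2 * var_a)) / math.sqrt(2 * math.pi * var_a)
               * math.exp(-x_v**2 / (2 * var_v)) / math.sqrt(2 * math.pi * var_v))

    # Posterior probability that the cues are causally related.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal estimates under each causal structure (reliability weighting).
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # fuse cues
    s_a_only = (x_a / va) / (1 / va + 1 / vp)                     # segregate

    # Model averaging: weight each estimate by its causal posterior.
    s_hat = post_c1 * s_fused + (1 - post_c1) * s_a_only
    return post_c1, s_hat
```

With identical cues the posterior probability of a common cause is high and the fused estimate dominates; with widely discrepant cues the posterior drops towards zero and the estimate falls back on the single (here: auditory) cue, mirroring the fuse-or-segregate decision described above.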
Our work also revealed that the manner in which two sensory cues are exploited for behaviour is not fixed across individuals and is not tied only to the current sensory evidence. Rather, these processes are shaped by past experience over multiple time scales, by individual biases, and by spontaneous fluctuations in brain state. For example, our ability to discern two temporally proximal multisensory stimuli is shaped by idiosyncratic and stable biases and by patterns of brain activity just prior to the stimulus. Similarly, our work shows that inference about the sources of multisensory information in prefrontal cortex is guided not just by the current stimuli but also by the previous composition of the sensory environment, and that memory traces of previously received multisensory information persist over several seconds in multisensory regions of the parietal lobe.
Based on experiments involving distinct types of sensory information, our work suggests that the same core principles support multisensory perception regardless of the nature of the stimulus. However, speech engages dedicated brain regions to fuse acoustic and visual information about word identity. For example, in a noisy environment, seeing the speaker greatly facilitates speech comprehension, and the same prefrontal regions involved in general multisensory causal inference support this benefit. Yet speech also engages specific pre-motor and parietal regions not involved in the fusion of other types of multisensory evidence.
These findings were published in 20 peer-reviewed publications and 15 preprints, and were presented at international conferences and workshops and in invited talks at universities. The scientific topic was presented at a science exhibition, and team members organized 3 symposia at international conferences. The overall research topic was highlighted in two newspaper articles in the host city of Bielefeld.