Neural & Computational Principles of Multisensory Integration during Active Sensing and Decision-Making

Periodic Reporting for period 1 - NeuCoDe (Neural & Computational Principles of Multisensory Integration during Active Sensing and Decision-Making)

Reporting period: 2019-04-01 to 2021-03-31

Imagine attempting to cross the road on a rainy night. You need to process the incoming stimuli (e.g. car lights, slippery ground) to decide whether, when and how it is safe to do so. To make such choices, we interact with the environment by directing our sensors (e.g. moving eyes or fingers) to extract relevant information. Importantly, the processing of information acquired actively from different senses requires the interaction of multiple brain areas implementing sensory, motor and cognitive functions over time. In this project, I wished to answer the following questions: How do we direct our sensors in order to accumulate evidence from the environment? How do we weigh the information obtained from different senses to create a reliable percept of the external stimuli? How do we translate this percept into decisions and how do these decisions drive subsequent actions? Importantly, I did not want to merely measure how people behave in such scenarios but understand how the human brain samples and processes the relevant information and how this process informs the formation of perceptual choices and ultimately guides subsequent actions.

Here, by bringing together behavioural neuroscience, biomedical engineering, computational modelling and neuroimaging, I studied active multi-sensing and decision-making at the behavioural and neural levels. In particular, the proposed research elucidated a) the strategies used by human participants to actively sample the stimulus (by moving their eyes and fingers to reduce uncertainty), b) the behavioural benefit offered by combining multiple sources of information (by integrating the sensory cues depending on their reliability, as sketched below) as well as c) the neural mechanisms underlying this information gain and its translation into perceptual decisions.
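The reliability-weighted combination in b) follows the standard maximum-likelihood account of cue integration, in which each cue is weighted in inverse proportion to its variance, so the fused percept is never less reliable than the best single cue. A minimal Python sketch of this computation (the values and function name are illustrative, not the project's fitted parameters):

```python
import numpy as np

def fuse_cues(estimate_v, var_v, estimate_h, var_h):
    """Reliability-weighted (maximum-likelihood) fusion of two cues.

    Weights are inversely proportional to each cue's variance, so the
    more reliable cue dominates the combined estimate.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    w_h = 1 - w_v
    fused = w_v * estimate_v + w_h * estimate_h
    # Fused variance is always <= min(var_v, var_h): the integration benefit
    fused_var = (var_v * var_h) / (var_v + var_h)
    return fused, fused_var

# Example: a reliable visual cue combined with a noisier haptic cue
fused, fused_var = fuse_cues(estimate_v=1.0, var_v=0.5, estimate_h=1.4, var_h=2.0)
print(f"fused estimate = {fused:.2f}, fused variance = {fused_var:.2f}")
# fused estimate = 1.08, fused variance = 0.40
```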

In the future, I hope that the findings of this project will foster a) neuroscientific studies in similar naturalistic setups, b) the development and application of data-analytical methodologies for multimodal neuroscientific signals and c) the use of the obtained knowledge to design prosthetic devices suitable for active sensing (e.g. restoration of active touch).
We performed two complementary studies investigating a) audio-visual and b) visuo-haptic interaction in the human brain.

First, I employed a well-established visual object categorization task, in which early sensory evidence and post-sensory decision evidence can be properly dissociated based on electroencephalography (EEG) recordings. Specifically, using a face-vs-car categorization task, we have previously profiled two temporally distinct neural components that discriminate between the two stimulus categories: an early component, appearing ~170–200 ms after stimulus onset, and a late component, seen 300–400 ms after stimulus presentation. We hypothesized that using audio-visual (AV) information to discriminate complex object categories, rather than more primitive visual features, would lead primarily to enhancements in the Late, as opposed to the Early, component, consistent with a post-sensory account. Importantly, by combining single-trial modelling and EEG data, we exploited the trial-by-trial variability in the strength of the Early and Late neural components in a neurally informed drift-diffusion model (DDM) to derive mechanistic insights into the specific role of these representations in decision-making with AV information. In short, we demonstrated in this work that multisensory behavioural improvements in accuracy arise from enhancements in the quality of post-sensory, rather than early sensory, decision evidence, consistent with the emergence of multisensory information in higher-order brain networks.
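To make the neurally informed DDM concrete: on each trial the drift rate is allowed to vary with the single-trial amplitude of a neural component, so that trials with stronger component activity accumulate evidence faster. A minimal simulation sketch in Python; the parameters (v0, beta, a, z) and their values are hypothetical, not the estimates fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_informed_ddm(neural_amp, v0=0.1, beta=0.3, a=1.0, z=0.5,
                          dt=0.001, sigma=1.0, max_t=3.0):
    """Simulate one DDM trial with drift modulated by a neural regressor.

    neural_amp : z-scored single-trial amplitude of the (e.g. Late) component
    v0, beta   : baseline drift and neural modulation coefficient (illustrative)
    a, z       : boundary separation and relative starting point
    Returns (choice, reaction_time); choice is 1 at the upper boundary.
    """
    v = v0 + beta * neural_amp          # trial-specific drift rate
    x = z * a                           # evidence starts between 0 and a
    t = 0.0
    while 0.0 < x < a and t < max_t:    # Euler simulation of the diffusion
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

# Trials with stronger component amplitude tend to be faster and more accurate
for amp in (-1.0, 0.0, 2.0):
    choice, rt = simulate_informed_ddm(amp)
    print(f"neural amplitude {amp:+.1f}: choice={choice}, RT={rt:.3f}s")
```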

Second, I employed an active sensing paradigm coupled with neuroimaging, multivariate analysis and computational modelling to probe how the human brain actively samples multisensory information to make perceptual judgments. Participants of both sexes actively explored two texture stimuli to discriminate them using visual (V) or haptic (H) information or the two sensory cues together (VH). We showed that the simultaneous exploration of different modalities (multi-sensing) enhances the neural encoding of active sensing movements. To strengthen the mechanistic interpretation of this result, we exploited an informed drift-diffusion model to link single-trial perceptual choices with the neural encoding of active sensing within a clearly interpretable mechanistic account of decision-making. This modelling approach demonstrated that the neural encoding of active sensing modulates the decision evidence regardless of the sensing modality and that multi-sensing results in significantly faster evidence accumulation. Then, to identify crossmodal interactions in the human brain and characterise their functional roles in decision-making behaviour, we implemented a novel information-theoretic analysis, namely Partial Information Decomposition (PID). This revealed an interaction of the two unisensory representations in the human brain, in particular over motor and somatosensory cortex. Crucially, this cross-modal representational interaction correlates with multisensory performance, thus constituting a putative neural mechanism for forging active multisensory perception.
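For reference, PID decomposes the information that two source variables (here, the V and H representations) carry about a target into redundant, unique and synergistic parts. Below is a minimal, illustrative implementation of the original Williams & Beer I_min measure for discrete variables; the estimator actually applied to the neural data may well differ:

```python
import numpy as np
from itertools import product

def mutual_info(p_sx):
    """I(S;X) in bits from a joint probability table p[s, x]."""
    p_s = p_sx.sum(axis=1, keepdims=True)
    p_x = p_sx.sum(axis=0, keepdims=True)
    nz = p_sx > 0
    return float((p_sx[nz] * np.log2(p_sx[nz] / (p_s @ p_x)[nz])).sum())

def specific_info(p_sx, s):
    """I(S=s; X): information X carries about the specific outcome s."""
    p_s = p_sx.sum(axis=1)
    total = 0.0
    for x in range(p_sx.shape[1]):
        if p_sx[s, x] > 0:
            p_x_given_s = p_sx[s, x] / p_s[s]
            p_s_given_x = p_sx[s, x] / p_sx[:, x].sum()
            total += p_x_given_s * np.log2(p_s_given_x / p_s[s])
    return total

def pid_williams_beer(p):
    """Two-source PID (Williams & Beer I_min) from a table p[s, x1, x2]."""
    p_s = p.sum(axis=(1, 2))
    p_s_x1 = p.sum(axis=2)
    p_s_x2 = p.sum(axis=1)
    p_s_x12 = p.reshape(p.shape[0], -1)   # treat (x1, x2) as one joint source
    red = sum(p_s[s] * min(specific_info(p_s_x1, s), specific_info(p_s_x2, s))
              for s in range(p.shape[0]))
    i1, i2, i12 = mutual_info(p_s_x1), mutual_info(p_s_x2), mutual_info(p_s_x12)
    return {"redundant": red, "unique_x1": i1 - red,
            "unique_x2": i2 - red, "synergistic": i12 - i1 - i2 + red}

# Sanity check with XOR: S = X1 xor X2 yields 1 bit of purely synergistic info
p = np.zeros((2, 2, 2))
for x1, x2 in product((0, 1), repeat=2):
    p[x1 ^ x2, x1, x2] = 0.25
print(pid_williams_beer(p))
```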

The above findings have been presented at conferences:
Society for Neuroscience Meeting
Symposium on Biology of Decision-Making

as well as at invited talks and seminars:
Behavioural Data Analytics Seminar Series, Imperial College London, UK
Centre for Mathematical Neuroscience, City University, London, UK
Unit for visually-impaired children, Istituto Italiano di Tecnologia, Genoa, Italy
MultiTime Lab, Panteion University, Athens, Greece

I have also been invited to present this work to the general audience at a) a TEDx event at Glasgow Caledonian University (October 2021) and b) a Pint of Science event at the University of Leeds (2022).
I expect longer-term impact for patients and clinicians in the diagnosis and treatment of cognitive or developmental disorders and in the development of better brain-computer interfaces, biomedical devices and neural prostheses, for example for hand amputees or visually impaired individuals.
The input technology used in this project (i.e. a haptic device) can serve as a graphical user interface for people with visual disabilities. Specifically, it can be used for cross-modal training that will allow them to reliably increase their reliance on the tactile modality. It can also serve as a rehabilitation platform that will support restoration of visual capacities via the combination of visual and tactile stimulation.
Another translational aspect of this work relates to the restoration of the sense of touch in hand amputees. Our haptic technology providing tactile feedback in conjunction with the decoded brain signals can inform the design of prosthetic devices, such as artificial hands.
Finally, decision-making faculties are often impaired with cognitive ageing. Being able to predict such cognitive deficits is a long-term objective in neuroscience. These methods could potentially be used to characterise the neural changes that are causal to impairments incurred by ageing. Our findings will enable identification of diagnostic and prognostic indicators of such deficits and inform the development of timely and effective treatments.