
Modelling cortical information flow during visuomotor adaptation as active inference in the human brain

Periodic Reporting for period 1 - ViMoAct (Modelling cortical information flow during visuomotor adaptation as active inference in the human brain)

Reporting period: 2017-11-01 to 2019-10-31

Controlling the body’s actions in a constantly changing environment is one of the earliest and most important tasks of the human brain. Failure of the underlying mechanisms would have profound implications for our experience of selfhood and self-other distinction—and thus, for our normal functioning in society. But the mechanisms by which the brain uses information from various senses to control bodily actions remain unclear. This project aimed to address this question, using the specific example of visual (seen) and proprioceptive (felt) sensory feedback from the moving hand.

We used a virtual reality environment to decouple seen and felt hand postures during a task requiring target-tracking with either hand (Figure 1). Participants had to match the phase of grasping movements—sensed from their unseen real hand or a seen virtual hand—to a virtual target, under varying congruence of proprioceptive (real) and visual (virtual) signals. Thus, either visual or proprioceptive information was task-relevant, while the respective other modality was a distractor. This experimental design was novel in that it implemented a manipulation of our participants’ ‘cognitive-attentional set’; in other words, we manipulated how seen vs felt feedback from the moving hand was weighted depending on cognitive-attentional factors (in this case, task-relevance).

The specific objectives of this project were to illuminate the neuronal mechanisms underlying the above processes, using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings. Crucially, we aimed to model these data with a computational network analysis called ‘Dynamic causal modelling’ (DCM). DCM is a computational framework that allows one to compare multiple alternative hypotheses (models) about how some observed data feature (in our case: fMRI signal activation or MEG spectral power across the scalp) was most likely generated by underlying interactions between and/or within neuronal populations across a network of brain sources.
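At its core, this model comparison amounts to Bayesian model selection: each candidate model is scored by its (approximate) log evidence, and posterior model probabilities follow from a softmax over those scores. The sketch below illustrates the principle only; the model names and the log-evidence values are invented for illustration and are not taken from the study.

```python
import math

# Hypothetical approximate log model evidences (e.g. variational free
# energies) from fitting three alternative network models to the same data.
# The values below are made up purely for illustration.
log_evidence = {
    "forward_only": -1230.0,
    "backward_only": -1228.5,
    "both": -1224.0,
}

def posterior_model_probs(log_ev):
    """Posterior probability of each model under flat model priors:
    a numerically stable softmax of the log evidences."""
    m = max(log_ev.values())
    weights = {k: math.exp(v - m) for k, v in log_ev.items()}
    z = sum(weights.values())
    return {k: w / z for k, w in weights.items()}

probs = posterior_model_probs(log_evidence)
best = max(probs, key=probs.get)  # the model that best explains the data
```

A log-evidence difference of about 3 or more (a Bayes factor of roughly 20) is conventionally treated as strong evidence for one model over another, which is why the winning model here dominates the posterior.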

The design and methods used allowed us, furthermore, to interpret the results within the framework of ‘active inference’. In brief, active inference is a neurobiologically inspired computational account of perception and action. For us, the framework provided concrete predictions of how visual vs proprioceptive sensory inputs should be weighted depending on cognitive-attentional set, and how this should manifest itself in behaviour and brain data. Finally, the active inference framework provided us with an opportunity to relate our experimental findings to philosophical accounts of minimal selfhood.
In our first experiment (Limanowski & Friston 2020 Cerebral Cortex), we used fMRI and DCM to investigate the neuronal mechanisms underlying the task-dependent weighting of visual vs proprioceptive information from the moving body, as described above. We found increased activity of visual and multisensory brain areas during the 'virtual hand' task and increased activity of proprioceptive brain areas during the 'real hand' task. DCM then showed that these activity changes were the result of selective, diametrical modulations of excitability of sensory (visual vs proprioceptive) brain areas (Figure 2). These results showed that endogenous attention can balance the excitability (i.e. cortical 'gain') of visual vs proprioceptive brain areas during action.

In a combined simulation and behavioural study (Limanowski & Friston 2020 Scientific Reports), we simulated a simple agent based on predictive coding formulations of active inference as situated within a free energy principle of brain function. The behaviour of our 'real' participants and the results of the computational simulations jointly confirmed that precision estimates of vision vs proprioception within the agent’s model of its body directly determined the degree to which each modality was used for driving goal-directed action (Figure 3). Thus, we established the hypothesized link between sensory precision weighting and behaviour.
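The role of precision in this kind of scheme can be illustrated with a minimal sketch of precision-weighted cue combination: each modality's estimate is weighted by its precision (inverse variance), so up-weighting one modality pulls the posterior estimate—and hence the action it drives—toward that modality. The function and numbers below are illustrative assumptions, not the simulation used in the study.

```python
def fuse(visual, proprio, pi_v, pi_p):
    """Precision-weighted fusion of two noisy estimates of hand posture.
    pi_v and pi_p are precisions (inverse variances); the modality with
    the higher precision dominates the combined estimate."""
    return (pi_v * visual + pi_p * proprio) / (pi_v + pi_p)

# Illustrative positions (not from the study): seen and felt hand conflict.
seen, felt = 1.0, 0.0

# 'Virtual hand' task: vision up-weighted -> estimate pulled to the seen hand.
virtual_task = fuse(seen, felt, pi_v=4.0, pi_p=1.0)  # 0.8

# 'Real hand' task: proprioception up-weighted -> estimate stays near the felt hand.
real_task = fuse(seen, felt, pi_v=1.0, pi_p=4.0)     # 0.2
```

Changing only the precision ratio, with identical sensory input, shifts which modality effectively drives the goal-directed movement—the link between precision weighting and behaviour described above.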

Building upon the fMRI and simulation results, we next examined cortical oscillations with MEG while participants performed an analogous task (Limanowski, Litvak, & Friston 2020 bioRxiv). Crucially, the rich temporal structure of MEG data allowed us to use a neural mass model for DCM comprising three interconnected cell populations, which thus distinguished between ‘extrinsic’ (‘forward’ and ‘backward’) between-area connections, and ‘intrinsic’ connections. The latter connections model effects of self-inhibition, determining the input-output balance or ‘excitability’ of a given source, and are therefore usually associated with cortical gain control. We could thus, in our model comparison, test whether the condition-specific effects were best explained by changes in extrinsic (forward and/or backward between-region) and/or intrinsic (within-region) connectivity. Our MEG spectral results revealed that relative to the congruent movement conditions, occipital oscillatory power in the ‘beta’ range (12-30 Hz) was suppressed in the incongruent ‘virtual hand’ task but enhanced in the incongruent ‘real hand’ task. Our DCM analysis identified diametrical changes in the cortical gain of visual areas as the most likely causes of these spectral differences; i.e. increased gain during the incongruent ‘virtual hand’ task and decreased gain during the incongruent ‘real hand’ task relative to movements without visuo-proprioceptive conflict (Figure 4). These results strongly support the hypothesis that visual (vs proprioceptive) bodily action information can be weighted differently depending on the prevalent cognitive-attentional set; i.e. for integration with the current action plan.
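The notion of intrinsic gain control can be made concrete with a toy input-output (activation) function: changing a single gain parameter—which in neural mass models is effectively set by self-inhibition—changes how strongly a population responds to the same input. This is a deliberately simplified illustration with assumed numbers, not the neural mass model used in the DCM analysis.

```python
import math

def population_rate(input_current, gain):
    """Sigmoid firing-rate response of a neural population. Lower
    self-inhibition corresponds to higher gain, i.e. a steeper
    input-output curve ('excitability') around baseline."""
    return 1.0 / (1.0 + math.exp(-gain * input_current))

drive = 0.5  # identical synaptic input in both conditions

high_gain_rate = population_rate(drive, gain=4.0)  # excitable: strong response
low_gain_rate = population_rate(drive, gain=1.0)   # less excitable: weak response
```

The same input thus produces a markedly different output depending on gain alone, which is why diametrical gain changes in visual areas can account for the opposite spectral effects observed in the two tasks.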

The implications of our experimental work for the understanding of minimal selfhood within the larger framework of active inference were further discussed in two theoretical papers (Limanowski & Friston 2018 Frontiers in Psychology, 2020 Philosophy and the Mind Sciences).
Together, our results suggest a striking flexibility in the brain’s body modelling for action. They show a direct link between cognitive-attentional set and the neuronal processes determining sensory gating, indicating that cognitive-attentional factors may directly ‘gate’ visual vs proprioceptive action feedback by adjusting neuronal gain control.

This means that, to an extent, people can deliberately up- or down-weight sensory information from the moving body received via various channels.

This novel finding has important implications for our understanding of how the brain flexibly represents the body, and the degree of cognitive-attentional control humans have over these processes. Not least, this is a crucial question when it comes to cyber-physical interaction, for example when embodying virtual avatars. Current virtual reality setups still require a more or less active focus on the seen (virtual) body, while the user tries to attenuate or ignore the fact that the physical body may be somewhat incongruent. We believe these immersive experiences rely on the very same processes that our project has identified. In this way, we have shown that, in principle, basic mechanisms of self-modelling can be illuminated by a theoretically informed brain imaging approach.
Figure 1: Task design.
Figure 2: DCM of fMRI activation.
Figure 3: Simulated agent performing the same task.
Figure 4: DCM of MEG oscillations.