Periodic Reporting for period 1 - ViMoAct (Modelling cortical information flow during visuomotor adaptation as active inference in the human brain)
Reporting period: 2017-11-01 to 2019-10-31
We used a virtual reality environment to decouple seen and felt hand postures during a task requiring target-tracking with either hand (Figure 1). Participants had to match the phase of grasping movements—sensed from their unseen real hand or a seen virtual hand—to a virtual target, under varying congruence of proprioceptive (real) and visual (virtual) signals. Thus, either visual or proprioceptive information was task-relevant, while the respective other modality was a distractor. The novelty of this experimental design lay in its manipulation of our participants’ ‘cognitive-attentional set’; in other words, we manipulated how seen versus felt feedback from the moving hand was weighted depending on cognitive-attentional factors (in this case, task-relevance).
The specific objectives of this project were to illuminate the neuronal mechanisms underlying the above processes, using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings. Crucially, we aimed to model these data with ‘Dynamic causal modelling’ (DCM), a computational framework that allows one to compare multiple alternative hypotheses (models) about how an observed data feature (in our case, fMRI signal activation or MEG spectral power across the scalp) was most likely generated by underlying interactions between and/or within neuronal populations across a network of brain sources.
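To make the model-comparison logic concrete, the following is a minimal sketch, not the actual DCM implementation: once each candidate model has been inverted, its approximate log-evidence (in DCM, the variational free energy) can be converted into a posterior probability over models. The model labels and log-evidence values below are purely illustrative.

```python
import numpy as np

# Hypothetical (approximate) log-evidences for three candidate network models,
# e.g. modulation of forward, backward, or both extrinsic connection types.
# In DCM, these would be the variational free energies from model inversion.
log_evidence = np.array([-1234.6, -1231.2, -1228.9])
labels = ["forward", "backward", "forward + backward"]

# Posterior model probabilities under a flat prior over models:
# p(m | y) is proportional to exp(log-evidence); use a stable softmax.
rel = log_evidence - log_evidence.max()
posterior = np.exp(rel) / np.exp(rel).sum()

for label, p in zip(labels, posterior):
    print(f"Model '{label}': posterior probability = {p:.3f}")
```

In practice, model inversion and comparison are performed with established DCM software (e.g. SPM); the sketch only shows the final scoring step that turns model evidence into relative plausibility.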
The design and methods used allowed us, furthermore, to interpret the results within the framework of ‘active inference’. In brief, active inference is a neurobiologically inspired computational account of perception and action. For us, the framework provided concrete predictions about how visual versus proprioceptive sensory inputs should be weighted depending on cognitive-attentional set, and how this weighting should manifest in behaviour and brain data. Finally, the active inference framework provided us with an opportunity to relate our experimental findings to philosophical accounts of minimal selfhood.
In a combined simulation and behavioural study (Limanowski & Friston 2020 Scientific Reports), we simulated a simple agent based on predictive coding formulations of active inference, as situated within a free energy principle of brain function. The behaviour of our 'real' participants and the results of the computational simulations jointly confirmed that precision estimates of vision versus proprioception within the agent’s model of its body directly determined the degree to which each modality was used to drive goal-directed action (Figure 3). Thus, we established the hypothesized link between sensory precision weighting and behaviour.
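The core mechanism can be illustrated with a minimal sketch, a drastic simplification of the generative model used in the actual simulations, with made-up parameter values: the agent’s posture estimate is driven by precision-weighted visual and proprioceptive prediction errors, so up-weighting the precision of one modality makes the estimate (and hence the action it drives) follow that modality’s signal.

```python
def track_posture(visual, proprio, pi_vis, pi_prop, lr=0.1, n_iter=200):
    """Precision-weighted estimate of hand posture from two sensory channels.

    Gradient descent on the sum of precision-weighted squared prediction errors;
    the fixed point is the precision-weighted average of the two signals.
    """
    mu = 0.0  # initial posture estimate (arbitrary units)
    for _ in range(n_iter):
        eps_vis = visual - mu    # visual prediction error
        eps_prop = proprio - mu  # proprioceptive prediction error
        mu += lr * (pi_vis * eps_vis + pi_prop * eps_prop)
    return mu


# Incongruent signals: the seen (virtual) hand at posture 1.0, the felt (real) hand at 0.0.
visual, proprio = 1.0, 0.0

# 'Virtual hand' task: visual precision up-weighted -> the estimate follows the seen hand.
print(track_posture(visual, proprio, pi_vis=4.0, pi_prop=1.0))  # ~0.8

# 'Real hand' task: proprioceptive precision up-weighted -> the estimate follows the felt hand.
print(track_posture(visual, proprio, pi_vis=1.0, pi_prop=4.0))  # ~0.2
```

Under incongruent input, the estimate converges to the precision-weighted average of the two signals, which is why changing the relative precision of vision versus proprioception shifts which modality dominates goal-directed tracking.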
Building upon the fMRI and simulation results, we next examined cortical oscillations with MEG while participants performed an analogous task (Limanowski, Litvak, & Friston 2020 bioRxiv). Crucially, the rich temporal structure of MEG data allowed us to use a neural mass model for DCM comprising three interconnected cell populations, which distinguished between ‘extrinsic’ (‘forward’ and ‘backward’) between-area connections and ‘intrinsic’ within-area connections. The latter model effects of self-inhibition, determining the input-output balance or ‘excitability’ of a given source, and are therefore usually associated with cortical gain control. In our model comparison, we could thus test whether the condition-specific effects were best explained by changes in extrinsic (forward and/or backward between-region) and/or intrinsic (within-region) connectivity. Our MEG spectral results revealed that, relative to the congruent movement conditions, occipital oscillatory power in the ‘beta’ range (12-30 Hz) was suppressed in the incongruent ‘virtual hand’ task but enhanced in the incongruent ‘real hand’ task. Our DCM analysis identified opposite changes in the cortical gain of visual areas as the most likely causes of these spectral differences; i.e. increased gain during the incongruent ‘virtual hand’ task and decreased gain during the incongruent ‘real hand’ task, relative to movements without visuo-proprioceptive conflict (Figure 4). These results strongly support the hypothesis that visual (versus proprioceptive) information about bodily action can be weighted differently for integration with the current action plan, depending on the prevalent cognitive-attentional set.
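As a toy illustration of why changes in a source’s self-inhibition show up in its spectral output, the following sketch (a deliberately simplified stand-in for the three-population neural mass model used in the actual DCM, with made-up parameters) simulates a noise-driven source with a resonance in the beta band and varies a damping term playing the role of self-inhibition; the band power changes accordingly. The mapping between self-inhibition, gain and spectral power in the full model is more involved than in this caricature.

```python
import numpy as np
from scipy.signal import welch

def simulate_source(self_inhibition, f0=20.0, fs=600.0, duration=60.0, seed=0):
    """Noise-driven damped oscillator with a resonance near f0 Hz (beta band).

    The `self_inhibition` parameter acts as a damping term: it controls how
    strongly activity is pulled back, i.e. the source's excitability.
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / fs
    omega = 2.0 * np.pi * f0
    x, v = 0.0, 0.0
    out = np.empty(int(duration * fs))
    for t in range(out.size):
        accel = -omega**2 * x - self_inhibition * v + omega**2 * rng.standard_normal()
        v += accel * dt  # semi-implicit Euler step
        x += v * dt
        out[t] = x
    return out

def beta_power(signal, fs=600.0):
    """Summed spectral power in the 12-30 Hz (beta) band."""
    freqs, pxx = welch(signal, fs=fs, nperseg=2048)
    band = (freqs >= 12) & (freqs <= 30)
    return float(pxx[band].sum())

# In this toy model, weaker self-inhibition yields a sharper beta resonance
# and hence more beta power; stronger self-inhibition suppresses it.
print(beta_power(simulate_source(self_inhibition=20.0)))
print(beta_power(simulate_source(self_inhibition=60.0)))
```

In the reported analysis, the neural mass model was fitted to the observed spectral data and, as described above, models allowing different combinations of extrinsic and intrinsic connectivity changes were compared to identify the most likely cause of the condition-specific spectral effects.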
The implications of our experimental work for the understanding of minimal selfhood within the larger framework of active inference were further discussed in two theoretical papers (Limanowski & Friston 2018 Frontiers in Psychology, 2020 Philosophy and the Mind Sciences).
In sum, our findings show that, to an extent, people can deliberately up- or down-weight sensory information received from the moving body via different channels.
This novel finding has important implications for our understanding of how the brain flexibly represents the body, and of the degree of cognitive-attentional control humans have over these processes. Not least, this is a crucial question when it comes to cyber-physical interaction, for example when embodying virtual avatars. Current virtual reality setups still require a more or less active focus on the seen (virtual) body, while the user tries to attenuate or ignore the fact that the physical body may be somewhat incongruent. We believe that these immersive experiences rely on the very same processes that our project has identified. In this way, we have shown that, in principle, basic mechanisms of self-modelling can be illuminated by a theoretically informed brain imaging approach.