
The Motor Representation of Sensory Experience

Periodic Reporting for period 3 - moreSense (The Motor Representation of Sensory Experience)

Reporting period: 2021-04-01 to 2022-09-30

How do we perceive space and time? Traditional views of perception assume that information from the outside world enters cortical areas of the brain, which reconstruct an internal image of the external spatial and temporal relations. However, this view does not explain how the brain can know the true scale between sensory and external events. In other words, how is the size of a distance in the retinal image related to a distance in external space? Only actions can provide the necessary comparison between internal processing and external space and time. By reaching out for objects and experiencing the resulting errors, we can match the relations in internal maps to those of the outside world. The project ‘moreSense’ investigates whether and how motor actions build up our experience of space and time. Virtual reality combined with motion tracking is used to artificially distort the contingencies between movements and visual perception. Prolonged exposure to distorted contingencies between action and perception can reveal how motor errors influence where we see things in the world. Similarly for the perception of time, adaptation to distortions in the durations between actions and their resulting sensory consequences leads to a change in temporal estimates.
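As an illustrative sketch (not code from the project), the core manipulation can be thought of as a gain applied between the tracked and the rendered hand position; the function name and values below are hypothetical:

```python
import numpy as np

def distort_hand_position(tracked_pos, origin, gain):
    """Render the hand displaced from the movement origin by a scaled
    version of its real displacement: gain > 1 makes the virtual hand
    appear to move farther than the real hand, gain < 1 less far."""
    tracked_pos = np.asarray(tracked_pos, dtype=float)
    origin = np.asarray(origin, dtype=float)
    return origin + gain * (tracked_pos - origin)

# A real 10 cm reach along x, rendered as a 12 cm reach (gain 1.2).
origin = [0.0, 0.0, 0.0]       # movement start (metres)
real_hand = [0.10, 0.0, 0.0]   # position reported by the motion tracker
print(distort_hand_position(real_hand, origin, gain=1.2))  # -> [0.12 0. 0.]
```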
The results of the project ‘moreSense’ shall lead to a deeper understanding of our perceptual experience. This knowledge can help in developing diagnostics and rehabilitation procedures for perceptual impairments following stroke.
Within the previous project duration, the research team developed an experimental virtual reality setup that allows measurement of head, eye, and arm movements with the speed and precision necessary for scientific behavioral studies. Using this setup, we successfully induced adaptation of movements to artificial visual contingencies. For instance, we made participants believe that their arm movements were longer or shorter than they really were. With these methods we could determine the influence of motor processing on visual perception. Amongst other findings, we demonstrated that adaptive alterations in arm movements led to a shift in the perceived position of one's own hand. We have also established a cooperation with a rehabilitation clinic to study motor influences on perception in patients with a visual field loss following a stroke. We developed virtual reality software that is particularly suited to measuring eye movements in older patients while they observe an enjoyable virtual scene or game. The results from these measurements shall yield further insights into basic research on space perception on the one hand, and contribute to the development of a novel rehabilitation procedure on the other.
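One way such a shift in perceived hand position could be quantified is sketched below with hypothetical numbers and a paired t-test; this is an assumed analysis for illustration, not the project's published pipeline:

```python
import numpy as np
from scipy import stats

# Hypothetical localisation judgements of the unseen hand (cm along the
# distorted axis), before and after adapting to lengthened arm movements.
pre  = np.array([ 0.2, -0.1, 0.0, 0.3, 0.1, -0.2])
post = np.array([ 1.4,  1.1, 0.9, 1.6, 1.2,  1.0])

shift = np.mean(post - pre)         # mean proprioceptive shift (cm)
t, p = stats.ttest_rel(post, pre)   # paired test: is the shift reliable?
print(f"shift = {shift:.2f} cm, t = {t:.2f}, p = {p:.4f}")
```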
We have developed a novel method to diagnose spatial hemifield neglect. This neurological disorder, which follows a stroke, prevents patients from exploring the left half of their visual field. Previous methods involved pencil-and-paper tests, which can be conducted quickly but lack precision. We have created an integrated virtual reality setup that captures eye movements while patients watch an entertaining scene. The method takes about a minute and delivers a complete picture of the neglected visual field. We are currently expanding this method to include a tool that promotes rehabilitation training. We aim to make full use of the possibilities provided by virtual reality to overcome the limitations of previous rehabilitation attempts. We will test our novel approach in an additional patient study to confirm its efficacy.
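A minimal sketch of how an eye-movement screening of this kind could score hemifield exploration; the index and the numbers are illustrative assumptions, not the clinical measure used in the project:

```python
import numpy as np

def neglect_index(fixation_x, midline=0.0):
    """Asymmetry of fixations about the vertical midline, (R - L) / (R + L):
    0 = balanced exploration; values near +1 mean the left hemifield is
    barely explored, which would be suggestive of left-sided neglect."""
    x = np.asarray(fixation_x)
    right = np.count_nonzero(x > midline)
    left = np.count_nonzero(x < midline)
    return (right - left) / max(right + left, 1)

# Hypothetical minute of fixations (horizontal position in degrees),
# heavily biased toward the right side of the display.
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0, 20, 180), rng.uniform(-20, 0, 20)])
print(f"neglect index = {neglect_index(x):+.2f}")   # approx. +0.80
```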
In a more basic science direction, we developed a new method to study the influence of action on perception. By alternating action and perception trials, we were able to measure serial dependencies between motor errors and visual localization. Previous research mostly used adaptation methods to modify action and then observe putative changes in perception. Our novel approach, however, reveals that every single action we perform has direct, measurable consequences for future perception. Thus, we are in a constant mode of learning: every motor error is used to recalibrate visual perception. We are currently transferring this method, which we first demonstrated with eye movements, to other motor modalities (head movements, reaching movements, etc.) in a series of experiments. This way, we can determine quantitatively how each motor modality contributes to the perception of space. Similar approaches are currently being applied to the perception of time. We expect to present a full picture of the contribution of motor maps to the experience of space and time by the end of the project period.
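Under the simplifying assumption that the carry-over is linear, such a serial dependence can be estimated as the regression slope of each perception trial's localization error on the preceding action's motor error. The sketch below uses simulated data and is not the project's actual analysis:

```python
import numpy as np

# Simulated alternating trials: each saccade's landing error (deg) is
# followed by a visual localization trial whose error partially inherits it.
rng = np.random.default_rng(1)
motor_error = rng.normal(0.0, 1.0, size=200)
localization_error = 0.3 * motor_error + rng.normal(0.0, 0.5, size=200)

# Serial dependence = slope of localization error on the preceding motor
# error; a positive slope means perception is recalibrated in the
# direction of the last motor error.
slope, intercept = np.polyfit(motor_error, localization_error, 1)
print(f"serial-dependence slope = {slope:.2f}")   # close to the simulated 0.3
```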
Figure: How motor maps structure visual space