Content archived on 2024-05-29

Eye-hand coordination and control: Neural network identification and experimental characterization of the brain's visual-motor reference frame transformation

Final Activity Report Summary - EYE-HAND CONTROL (Eye-hand coordination and control: Neural network identification and ... characterization of the brain's visual-motor reference frame transformation)

Approximately half of the human brain is devoted to processing visual information in the service of behaviour, and one of the most common forms of behaviour is motor action, e.g. moving our arm to reach for a cup of coffee. The present research project therefore investigated how visual information is transformed into accurate arm movements.

More specifically, visual information is encoded relative to the line of sight, i.e. in a retinal frame of reference, whereas a motor command for the arm must be specified in a reference frame centred on the shoulder, i.e. the point where the arm joins the upper body. Accurate motor planning therefore requires a geometrical transformation between a visual and an effector-related reference frame. In this project, a complete mathematical and experimental framework was developed for the first time to describe this highly non-linear, three-dimensional visuomotor reference frame transformation in its full complexity.

In a first step, an explicit mathematical framework was developed, modelling the complete geometry of the eye-to-head-to-arm transformation. We used Clifford's dual quaternion formalism, as employed in robotics, to describe the three-dimensional geometry involved. The model made quantitative predictions about the precision and accuracy of reaching movements to visual targets in cases where non-visual signals about eye and head orientation were not taken into account by the brain. These predictions were then tested experimentally in a second step. The reaching errors made by human subjects were compatible with the hypothesis that the brain uses an internal model of the complete body geometry for reach planning. This computation was costly, as reflected by the increased reaching variability at eccentric eye or head positions.
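To give a flavour of the dual quaternion formalism (this is a minimal illustrative sketch, not the project's actual model), the code below composes an eye rotation with a hypothetical eye-to-shoulder offset and maps a retinal target into a shoulder-centred frame. All numbers (the 20° gaze rotation, the offset vector, the target position) are assumptions chosen for illustration:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_rt(axis, angle_deg, t):
    """Dual quaternion (real, dual) for 'rotate about axis, then translate by t'."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    qr = np.concatenate([[np.cos(half)], np.sin(half) * axis])
    qt = np.concatenate([[0.0], t])      # translation as a pure quaternion
    qd = 0.5 * qmul(qt, qr)              # dual part encodes the translation
    return qr, qd

def dq_mul(A, B):
    """Compose two rigid transformations; B is applied first, then A."""
    ar, ad = A
    br, bd = B
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_apply(D, p):
    """Apply the rigid transformation D to a 3D point p (p' = R p + t)."""
    qr, qd = D
    conj = qr * np.array([1, -1, -1, -1])
    t = 2.0 * qmul(qd, conj)[1:]         # recover the translation part
    pq = np.concatenate([[0.0], p])
    return qmul(qmul(qr, pq), conj)[1:] + t

# Hypothetical example: a 20 deg eye rotation about the vertical axis,
# followed by a fixed eye-to-shoulder offset (values in metres, assumed).
eye = dq_from_rt([0, 0, 1], 20.0, np.zeros(3))
eye_to_shoulder = dq_from_rt([0, 0, 1], 0.0, np.array([0.0, -0.30, -0.25]))
chain = dq_mul(eye_to_shoulder, eye)     # eye rotation applied first

# A target 0.5 m ahead in retinal coordinates, expressed shoulder-centred.
target_shoulder = dq_apply(chain, np.array([0.0, 0.5, 0.0]))
```

The point of the formalism is that rotations and translations compose through a single multiplication (`dq_mul`), so a chain of body segments reduces to one dual quaternion product.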

Once we had demonstrated that such a three-dimensional visuomotor conversion was indeed performed by the brain, an artificial neural network was trained and analysed to uncover a possible neural mechanism implementing reference frame conversions in distributed processing. Individual neurons in such a network were shown to perform partial, fixed reference frame transformations through their input-output relationships, and these partial transformations were then combined in a gain-weighted fashion to produce the overall transformation at the population level. In addition, we demonstrated that different physiological techniques for identifying the reference frame of a neuron could yield different reference frames within the same neuron. This major advance calls for a more careful interpretation of results obtained across different methods.
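The gain-weighted combination described above can be sketched in a one-dimensional toy model (a deliberate simplification under assumed parameters, not the network trained in the study): hidden units with fixed Gaussian retinal tuning are multiplicatively modulated by an eye-position gain field, and a linear readout of the population recovers the target in head-centred coordinates (retinal position plus eye position in this 1-D reduction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters: 60 hidden units with random Gaussian tuning
# centres (deg) and random linear eye-position gain slopes.
n_hidden = 60
centres = rng.uniform(-30, 30, n_hidden)
gains = rng.uniform(-0.05, 0.05, n_hidden)

def hidden(x_ret, x_eye):
    """Population response: retinal tuning times multiplicative gain field."""
    tuning = np.exp(-(x_ret[:, None] - centres) ** 2 / (2 * 10.0 ** 2))
    gain = 1.0 + gains * x_eye[:, None]
    return tuning * gain

# Random training examples of retinal target and eye position (deg).
x_ret = rng.uniform(-30, 30, 2000)
x_eye = rng.uniform(-20, 20, 2000)
H = hidden(x_ret, x_eye)

# Fit a linear readout so the population output equals the head-centred
# target x_ret + x_eye (least squares over the training set).
w, *_ = np.linalg.lstsq(H, x_ret + x_eye, rcond=None)

# Each unit's contribution is a fixed partial transformation; the
# gain-weighted sum approximates the full transformation.
estimate = (hidden(np.array([10.0]), np.array([-15.0])) @ w).item()
```

No single unit computes the head-centred position; the transformation only emerges at the population level, which is the mechanism the network analysis pointed to.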

In an attempt to uncover the neural substrates in the real brain that implement this three-dimensional reference frame transformation, functional brain activity was recorded at high temporal resolution during a memory-guided pointing task using magnetoencephalography (MEG). We could show that the visual to effector-centred transformation occurred in a network of occipito-parietal areas as early as 300 ms after target presentation. The transformation between an extrinsic (higher-level, spatial) and an intrinsic (e.g. muscle-related) motor plan occurred in premotor and motor areas around the time of movement execution. This study identified for the first time both the spatial and the temporal characteristics of the three-dimensional reference frame transformation for arm movements.

Finally, the current theory of reference frame transformations was extended to velocity signals. This was important because static position signals and dynamic velocity information are processed in anatomically distinct brain structures that separate before any reference frame transformation can occur. We demonstrated in model-based experiments that smooth pursuit eye movements, whose main input is velocity, are geometrically accurate. Thus, velocity signals also undergo a three-dimensional reference frame transformation, and this line of work opens up a whole new field of research.
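A minimal sketch of why velocity signals need such a transformation (illustrative only; the rotation axis and angle are assumptions, not data from the study): when the eye is rotated about an oblique axis, a purely horizontal retinal slip corresponds to head-centred motion in a different direction, so a pursuit command that ignored three-dimensional eye orientation would be geometrically wrong:

```python
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    x, y, z = axis
    K = np.array([[0.0, -z, y],
                  [z, 0.0, -x],
                  [-y, x, 0.0]])       # cross-product matrix of the axis
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

# Assumed eye orientation: 25 deg about an oblique (torsion-inducing) axis.
R_eye = rot([1, 1, 0], 25.0)

# A purely horizontal retinal slip of 10 deg/s ...
v_ret = np.array([10.0, 0.0, 0.0])

# ... corresponds to this velocity in head-centred coordinates.
v_head = R_eye @ v_ret

# Angle between the retinal and head-centred velocity directions.
cos_dev = v_ret @ v_head / (np.linalg.norm(v_ret) * np.linalg.norm(v_head))
deviation = np.degrees(np.arccos(cos_dev))
```

The rotation preserves speed but changes direction; a controller driving the eye along the raw retinal direction would therefore miss the target's true motion, which is why geometrically accurate pursuit implies that velocity is transformed too.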