In recent years, evidence has accumulated that the perception of motor actions may be supported by mechanisms very similar to those in control of these actions. This would make sense, as the perceptual and the control tasks share a large part of their computational problems.
This series of meetings concerns progress in the field of spatial coding, especially as it refers to our sense of motion and orientation, and as it leads to specific voluntary or reflex motor actions. We live in a three-dimensional world. Sensory inputs define our position in space and that of objects. Motor output is used for self-motion as well as for the manipulation of objects. We experience one unique sensory space, which requires the fusion of visual, acoustic, somatosensory, vestibular, and other inputs. This is computationally a highly non-trivial process, as the sensory systems utilise different coding formats and co-ordinate frames.
The first meeting will address the role of the cortex and the cerebellum in the generation of eye movements. Eye movements have been studied extensively as a model for sensory-motor transformations because they are relatively simple movements: pure rotations of a constant mass, without intervening obstacles. The primary pathways for the generation of eye movements lie within the brainstem and the midbrain. However, the role of cortical and cerebellar areas in the control of eye movements has recently received increasing attention.
Their role in the plasticity and performance of eye movement generation is currently being investigated at the behavioural, electrophysiological, and molecular levels.
The second meeting will focus on the computational mechanisms and strategies that allow us to generate and perceive movement and action in space. This includes the control of body stance, locomotion, and goal-directed actions. These tasks are computationally very difficult, as many parameters (which are coded in different co-ordinate frames of reference) have to be controlled simultaneously using feedback from the body and from the external environment. The latter is usually provided by visual motion input, which is used to monitor the consequences of motor actions.