We rarely experience difficulty localizing our limbs. We can easily describe the location of our left hand or point to it with our right, and we can do this with our eyes open or closed. Moreover, we never have the sensation that our hand is in two locations simultaneously, even though we simultaneously process two distinct sources of information (vision and proprioception) about its current location.

This project examines how vision and proprioception are combined to generate estimates of hand and target locations. The proposal tests the hypothesis that proprioception resists re-alignment by vision and that inter-sensory alignment is not needed for effective action. We apply an optimal cue combination approach to understanding the interaction between vision and proprioception, building from theoretical groundwork laid by Ernst and Banks (2002) and Smeets et al. (2006).

The knowledge acquired from the proposed research should advance our understanding of sensory integration; it should also improve our understanding of how people adapt to virtual reality, mixed reality, and teleoperation systems, so that these systems can be optimally structured for comfortable use and accurate performance. The project's success will rely on the combination of the applicant's prior research experience in Canada (specifically in motor adaptation) and the host organisation's resources and faculty expertise. The Vision and Control of Action (VISCA) group at the University of Barcelona provides the researcher with access to multiple virtual reality systems, and members of the faculty have expertise in sensorimotor integration, particularly as it relates to moving in virtual environments.
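Under the standard maximum-likelihood cue combination rule of Ernst and Banks (2002), each cue is weighted by its reliability (inverse variance), and the fused estimate has lower variance than either cue alone. A minimal sketch of this rule follows; the hand-position and noise values are illustrative assumptions, not data from the proposed experiments.

```python
def combine_cues(x_vision, var_vision, x_proprio, var_proprio):
    """Maximum-likelihood fusion of two position cues.

    Each cue is weighted by its inverse variance, so the more
    reliable cue dominates; the combined variance is lower than
    either single-cue variance.
    """
    w_vision = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_proprio)
    w_proprio = 1.0 - w_vision
    x_combined = w_vision * x_vision + w_proprio * x_proprio
    var_combined = 1.0 / (1.0 / var_vision + 1.0 / var_proprio)
    return x_combined, var_combined


# Illustrative example: vision (10 cm, variance 1) is more reliable
# than proprioception (14 cm, variance 4), so the fused estimate
# lies closer to the visual cue.
x_hat, var_hat = combine_cues(x_vision=10.0, var_vision=1.0,
                              x_proprio=14.0, var_proprio=4.0)
# x_hat = 10.8, var_hat = 0.8
```

The weights sum to one, so the combined estimate always lies between the two single-cue estimates; this is the sense in which the observer never perceives the hand "in two locations at once."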