Gaze is a compelling modality for human-computer interaction (HCI). Where we look implicitly reflects our goals and information needs, and we can direct our gaze at will, and at speed, across our environment to efficiently communicate interest and express intent. However, decades of research on gaze interfaces have considered eye movement in isolation, ignoring concurrent movement of the head and body. GEMINI will develop a fundamentally different approach, recognising from the ground up that visual attention and gaze control involve the natural and necessary coordination of eyes, head and body.
The project goal is to establish a new conceptual, computational and scientific foundation for gaze and eye movement in interaction, grounded in an understanding of eye-head-body coordination and of the eye movements that stabilise and adapt vision. GEMINI will pioneer motion-based principles for determining gaze attention in 3D, methods for inferring interaction context from synergistic eye, head and body movement, and a multimodal framework for "gaze in concert with the body", in which eye-head gaze is extended with gestural and proxemic interaction for a breakthrough in usable touchless interaction with ubiquitous computing.
GEMINI will radically change how eye movement is approached for interaction: not as isolated and niche, but as central to interaction and dynamically coupled with other movements of the body. The project vision is that users will be able to interact fluidly in 3D with the physical-digital world around them, close up and at a distance, using only their body for input, with eye, head, hand and body movements dynamically combined based on their natural coordination, context and complementarity. This has huge potential for impact on interactive systems where touch is not possible, practical, desirable or safe. If successful, "gaze in concert with the body" may well define the next major HCI paradigm.