CORDIS - EU research results

Levitation with localised tactile and audio feedback for mid-air interactions

Periodic Reporting for period 2 - Levitate (Levitation with localised tactile and audio feedback for mid-air interactions)

Reporting period: 2019-01-01 to 2021-03-31

The LEVITATE project creates a radically novel human-computer interaction paradigm that empowers the unadorned user to reach into a new kind of display composed of levitating matter. This tangible display allows users to see, feel, manipulate, and hear three-dimensional objects in space. Users can interact with the system in a walk-up-and-use manner, without any instrumentation; LEVITATE is the first system to achieve this, establishing a new paradigm for tangible displays. We are a multi-disciplinary team working towards this goal from three perspectives: improving the acoustic models and algorithms used to manipulate ultrasound waves (WP1); developing new physical prototypes that enable designers to implement these new multisensory user interfaces (WP2); and creating new techniques that allow people to interact efficiently with these user interfaces (WP4). We are evaluating the resulting user interfaces (WP3) to understand performance and user satisfaction, and to inform the requirements for the next generation of our prototypes.

WP1 is developing the next generation of modelling tools and algorithms to improve, and ultimately combine, levitation, ultrasound haptics and parametric audio, produced from a single array of ultrasound speakers. In the first period, we have undertaken measurement activities to characterise the behaviour of ultrasound transducers and phased ultrasound arrays. This improves our understanding of how the hardware used in our physical prototypes behaves. A direct benefit of this is improved physical prototype hardware and software (WP2): e.g. new firmware and hardware layouts have been produced by Ultrahaptics. Tools developed in WP1 have been used to combine levitation and ultrasound haptic feedback from a single array for the first time. This first modality combination provides the foundation for novel multisensory interaction techniques (WP3 and WP4) and is an important step towards our aim of combining levitation, haptics and parametric audio.
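
The core acoustic principle behind these combinations can be illustrated simply: driving each transducer with a phase offset that compensates for its propagation delay to a target point makes the emitted waves arrive in phase there, creating a focal point; adding a phase signature on top (e.g. a half-wavelength step between two halves of the array, as in the acoustic-levitation literature) turns that focus into a simple "twin trap" that can hold a small bead. The sketch below is purely illustrative and assumes a flat 16x16 array of 40 kHz transducers at 10.5 mm pitch; these are typical values, not the project's exact hardware.

```python
import numpy as np

# Illustrative assumptions (not the project's exact hardware):
# a flat 16x16 array of 40 kHz transducers at 10.5 mm pitch.
SPEED_OF_SOUND = 343.0                       # m/s, in air at ~20 C
WAVELENGTH = SPEED_OF_SOUND / 40_000.0       # ~8.6 mm at 40 kHz
WAVENUMBER = 2.0 * np.pi / WAVELENGTH

def transducer_positions(n=16, pitch=0.0105):
    """Centres of an n x n transducer grid in the z = 0 plane (metres)."""
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch
    xx, yy = np.meshgrid(coords, coords)
    return np.stack([xx.ravel(), yy.ravel(), np.zeros(n * n)], axis=1)

def focus_phases(positions, focal_point):
    """Per-transducer phase (radians) so all waves arrive in phase at focal_point."""
    distances = np.linalg.norm(positions - focal_point, axis=1)
    return (-WAVENUMBER * distances) % (2.0 * np.pi)

positions = transducer_positions()
phases = focus_phases(positions, np.array([0.0, 0.0, 0.15]))  # focus 15 cm above the array
# Adding a pi phase step between the two halves of the array on top of the
# focusing phases gives a simple "twin trap" that can levitate a small bead.
twin_trap_phases = (phases + np.where(positions[:, 0] > 0.0, np.pi, 0.0)) % (2.0 * np.pi)
```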

WP2 is designing and implementing physical prototypes to enable new user interfaces. We have taken two general approaches: improving the control mechanisms for transducer arrays, and using physical metamaterials to create complex sound fields. The former approach has been used in a variety of novel interfaces, e.g. to levitate food particles and to create interactive levitating objects. The latter approach has been used to create complex sound fields using diverse fabrication methods, including optimised reflective surfaces and metamaterial platters that sit on top of a standard ultrasound array. A key achievement in this work was creating a self-bending sound beam, enabling a complex sound field to be created behind an obstacle. This method can also be applied to modality combinations, e.g. bending a haptic sound field around a levitating object. The physical prototypes developed in WP2 are enabling new user interfaces and interaction techniques, which are investigated in WP3 and WP4.
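
One widely used way to realise a self-bending (Airy-type) beam is to impose a cubic phase profile across the aperture, whether that profile is driven electronically on a phased array or encoded passively as per-cell path-length differences in a metamaterial plate or shaped reflector. The minimal sketch below shows such a profile for a 1-D aperture; the cubic coefficient beta is an arbitrary illustrative value, and the code is not drawn from the project's prototypes.

```python
import numpy as np

def cubic_phase_profile(n=64, pitch=0.0105, beta=5.0e6):
    """Cubic phase law that launches an Airy-like, self-bending beam.

    n     : number of elements along a 1-D aperture
    pitch : element spacing in metres
    beta  : cubic coefficient in rad/m^3; larger values bend the beam more
            sharply (chosen here purely for illustration).
    """
    x = (np.arange(n) - (n - 1) / 2.0) * pitch   # element positions (m)
    return (beta * x**3) % (2.0 * np.pi)

# On a passive metamaterial platter or optimised reflector, the same phase map
# would be realised geometrically (as path-length differences per cell) rather
# than as electronically controlled transducer phases.
profile = cubic_phase_profile()
```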

WP3 is creating high-quality multisensory user interfaces. As part of this work, we have investigated ways of improving ultrasound haptic feedback, which provides the important physical aspect of our multisensory applications. New haptic rendering techniques have been developed and evaluated. Findings from WP3 have identified requirements for better physical prototypes (WP2): e.g. new haptic rendering firmware was developed to improve the quality of feedback. Another important aspect of WP3 is to evaluate the interaction techniques we develop in WP4. A series of user studies has investigated the performance and user satisfaction of our new interaction techniques. These studies demonstrate the efficacy of the techniques, and their findings are informing the design of new interfaces based on combinations of levitation, haptics, and parametric audio.
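
For context, ultrasound haptic feedback is usually made perceptible by modulating the focal point, for example amplitude-modulating its intensity at a low frequency in the range skin mechanoreceptors respond to, or sweeping the focus along a path (spatiotemporal modulation). The sketch below generates an amplitude-modulation envelope; the 200 Hz rate and update rate are illustrative assumptions rather than the project's rendering parameters.

```python
import numpy as np

def am_envelope(duration_s=0.5, mod_hz=200.0, update_rate=16_000):
    """Amplitude-modulation envelope for an ultrasound haptic focal point.

    The 40 kHz carrier itself is barely felt; scaling the focal-point
    intensity by a ~200 Hz envelope (an illustrative value within the band
    skin mechanoreceptors are sensitive to) produces a clear mid-air
    vibration sensation.
    """
    t = np.arange(int(duration_s * update_rate)) / update_rate
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))  # values in [0, 1]

envelope = am_envelope()  # multiply the carrier amplitude at the focus by this
```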

WP4 is developing and refining techniques to enable users to interact with our new user interfaces. Interfaces composed of levitating objects are highly novel, and applying existing input techniques is not straightforward. We started with object selection, a fundamental part of interacting with any type of display. Selecting an object for manipulation is not trivial because users cannot necessarily touch the levitating objects; instead, we used mid-air pointing for selection. Our pointing technique and novel feedback mechanism were successful, as verified through WP3 user studies. We then extended this technique to allow users to reposition a levitating object, by mapping the object position to an extended fingertip position, as illustrated in the sketch below. A user study investigated the degree of control users could exert over a levitating object. In the next period, the project will expand the vocabulary of interaction techniques, so that users can fully manipulate content composed of levitating objects (e.g. to rotate or scale them). Demonstrators in this work package showcase the new user interfaces we are developing on the LEVITATE project.
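
As a concrete illustration of the repositioning idea, the hypothetical sketch below maps a tracked fingertip (e.g. from a Leap Motion-style hand tracker) to a target position for the levitated object: the fingertip is extended by a fixed offset along the pointing direction, clamped to the array's working volume, and smoothed so the object is not jerked out of its acoustic trap. The offset, smoothing factor and volume bounds are invented for the example and are not the project's values.

```python
import numpy as np

# Hypothetical working volume above the array, in metres (illustrative bounds).
VOLUME_MIN = np.array([-0.08, -0.08, 0.05])
VOLUME_MAX = np.array([0.08, 0.08, 0.20])

def target_from_fingertip(fingertip, direction, prev_target,
                          offset=0.05, smoothing=0.2):
    """Map a tracked fingertip to a target position for the levitated object.

    fingertip   : fingertip position from the hand tracker (metres)
    direction   : unit vector along the pointing finger
    prev_target : previous target position, used for smoothing
    offset      : how far beyond the fingertip the object sits (illustrative)
    smoothing   : 0..1; higher values follow the finger more aggressively
    """
    raw = fingertip + offset * (direction / np.linalg.norm(direction))
    clamped = np.clip(raw, VOLUME_MIN, VOLUME_MAX)
    # Exponential smoothing avoids sudden jumps that could drop the bead
    # out of the acoustic trap.
    return (1.0 - smoothing) * prev_target + smoothing * clamped
```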

A key aim of the project is to disseminate and communicate our work to a broad variety of audiences (WP5). We have undertaken several communication activities to engage with the general public, especially younger people. Through science fairs, workshops and television appearances, we have communicated the novel aspects of our research in a non-academic context. We aim to educate and inspire the next generation of scientists through thought-provoking demonstrations of what user interfaces of the future might look, feel and sound like. We are also advancing our research agenda in the scientific community, with strong publications and dissemination activities in the HCI, haptics and acoustics fields. We communicate our work online as well, through an engaging project website and an active social media presence.

We are moving away from traditional human-computer interaction techniques like buttons, keyboards and mice towards touch (e.g. smartphones and multi-touch gestures) and touchless interactions (as with the Kinect and Leap Motion controllers). The limiting factor in these new interactions is, ironically, the lack of physicality and direct feedback. Feedback such as visual or auditory cues can be disconnected from the hands generating the gesture and touch input. In this project, we propose a highly novel vision of bringing the physical interface to the user in mid-air. In our vision, the computer can control the existence, form, and appearance of complex levitating objects composed of "levitating particles". Users can reach into the levitating matter, feel it, manipulate it, and hear it, with all feedback originating from the levitating object's position in mid-air, as with objects in real life. There are numerous applications for this advanced display technology in all aspects of human interaction with computers. For example, instead of interacting with a virtual representation of a protein behind a computer screen, scientists could gather around a physical representation of the protein in mid-air, reach in and fold it in different ways, and draw other proteins closer to see, feel and hear how they interact. The flexible medium of floating particles could be used by artists to create new forms of digital interactive installations for public spaces. Finally, engineers could walk with their clients around virtual prototypes floating in mid-air, while both are able to reach into the model and change it as they go.