CORDIS - EU research results
Content archived on 2024-06-18

3D Stereoscopic Interactive User Interfaces

Final Report Summary - 3STARS (3D Stereoscopic Interactive User Interfaces)

In the last few years, stereoscopic display devices have been introduced to the consumer market. This, together with the upcoming consumer release of stereoscopic head-mounted displays such as the Oculus Rift, has renewed interest in technologies capable of presenting three-dimensional content to the user. These displays work by simulating the way the human brain generates the perception of depth: they render two computer-generated pictures from slightly different viewpoints, then ensure that each picture is seen by the corresponding eye, by means of shutter glasses or two separate screens. The 3STARS project pursued research in this domain by designing new interaction techniques and investigating how stereoscopic displays can support interactive experiences.
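The two-viewpoint principle described above can be illustrated with a minimal sketch (not the project's code): each eye's camera position is obtained by offsetting the main camera by half the interpupillary distance (IPD) along its right vector. The IPD value and function names here are illustrative assumptions.

```python
# Minimal sketch of deriving two stereoscopic viewpoints from one camera:
# offset the camera by half the interpupillary distance (IPD) along its
# right vector, once per eye. Values and names are illustrative assumptions.

IPD = 0.064  # a commonly cited average human IPD in metres (assumed here)

def eye_positions(camera_pos, right_vec, ipd=IPD):
    """Return (left_eye, right_eye) world positions for a camera.

    camera_pos: (x, y, z) position of the virtual camera.
    right_vec:  unit vector pointing to the camera's right.
    """
    half = ipd / 2.0
    left = tuple(c - half * r for c, r in zip(camera_pos, right_vec))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vec))
    return left, right

# A camera at head height, looking down -z, with +x as its right vector:
left, right = eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0))
print(left)   # (-0.032, 1.6, 0.0)
print(right)  # (0.032, 1.6, 0.0)
```

The scene is then rendered once from each position, and the display routes each image to the matching eye.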

In the first stage of the project, the researcher investigated the current state of the art in stereoscopic 3D user interfaces (3DUIs). Following this literature review, he set out to address issues commonly associated with stereoscopic visualisation. Specialist equipment explicitly designed to manipulate 3D content in all six degrees of freedom exists, but its cost and complexity have contributed to 3DUIs remaining confined to research laboratories and industry professionals. For example, the combination of multitouch input and 3D displays has received much attention in recent years, owing to the richness of gestural interaction and its applicability to the analysis of scientific data. However, the requirement to touch the screen in order to interact leads directly to the “vergence-accommodation conflict”: the user may perceive 3D content as appearing in front of the screen or behind it, yet must still touch the display surface to express an input. This introduces a mismatch between the real depth of the screen and the perceived depth of the content, and by fixating on their fingers, users risk losing the stereoscopic effect.

To mitigate these issues, the researcher proposed an indirect multitouch interaction paradigm. A tablet can be considered an indirect device when it is used to affect another display, such as a 3D stereoscopic one. One distinct advantage over direct 3D multitouch installations is ubiquity: any tablet can potentially be turned into a device capable of full six-degree-of-freedom (6DOF) input. Furthermore, because the user does not touch the display area, there is no risk of occluding information with one’s own hands. Indirect tablet techniques also retain the richness of multitouch interaction and are less prone to context switches than the traditional mouse-and-keyboard combination.
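As a rough illustration of how indirect tablet input can drive 3D manipulation (this is a hypothetical mapping, not the project's actual techniques), a two-finger gesture on the tablet can be decomposed into a pinch component, controlling depth, and a twist component, controlling rotation:

```python
import math

# Hypothetical sketch of indirect two-finger input: the change in finger
# spread maps to a depth (z) delta, and the change in the angle of the line
# joining the fingers maps to a rotation delta. Not the project's actual code.

def interpret_two_finger(prev, curr):
    """Map two touch points before/after a movement to (dz, dtheta).

    prev, curr: pairs of (x, y) touch positions on the tablet surface.
    dz is the change in finger spread; dtheta the change in the angle of
    the finger-to-finger line, in radians.
    """
    def spread(pts):
        (x0, y0), (x1, y1) = pts
        return math.hypot(x1 - x0, y1 - y0)

    def angle(pts):
        (x0, y0), (x1, y1) = pts
        return math.atan2(y1 - y0, x1 - x0)

    return spread(curr) - spread(prev), angle(curr) - angle(prev)

# Fingers move apart along the same line: depth delta, no rotation.
dz, dtheta = interpret_two_finger([(0, 0), (1, 0)], [(0, 0), (2, 0)])
print(dz, dtheta)  # 1.0 0.0
```

A full 6DOF technique would combine mappings like this with single-finger drags for x/y translation and additional modes for the remaining rotation axes.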
Through a series of user studies based on a 3D docking task, the researcher compared two indirect techniques designed for the tablet with two state-of-the-art direct touch techniques. Results showed that the indirect techniques are as effective as the best of the direct touch techniques; moreover, they were reported to be less tiring and to provide a better quality of experience. In a navigation task, the indirect technique yielded significantly fewer errors, a reduction of 30%.

Results of these studies were collected into a paper submitted to the IEEE journal “Transactions on Visualization and Computer Graphics”; the authors were invited to revise the manuscript and resubmit it at the next available opportunity. A side project, focusing on an indirect stereoscopic 3DUI based on a foot tracker, was accepted at the 2014 IEEE Symposium on 3D User Interfaces. This work used a tracking system installed below the user’s desk to capture the user’s feet movements and map them to actions affecting the 3D environment. The researcher was invited to present this work in Minneapolis on 30 March 2014. Long-term outcomes include the potential of using commonplace multitouch devices as 3D input devices and the design of improved interaction techniques for this class of devices.

In the second part of the research fellowship, the rise in popularity of stereoscopic head-mounted displays (HMDs) prompted the researcher to appraise their research potential. A promising direction was identified, and the researcher decided to pursue this timely project with the goal of being at the forefront of this branch of 3D stereoscopic interfaces.

Today’s interactive experiences that simulate reality through stereoscopic HMDs suffer from two main issues. First, they require suitable physical spaces if users need to walk around. Second, interacting with objects requires complex augmentation of either the environment or the user. Users who want to participate in these experiences in their own home are unlikely to have these conditions available. The project therefore investigated the concept of “Substitutional Reality” in the context of Virtual Reality. The idea builds on “passive haptics”: the use of real physical props matching the 3D objects that users see in the virtual environment, so that when users reach for a virtual object they receive passive touch feedback from a real one. Past research used this concept to enhance the sense of really feeling “present” in a virtual environment. Substitutional Reality instead adapts the virtual environment to the layout of the physical environment, substituting real objects with virtual equivalents. The researcher, along with a colleague, formalised this concept and investigated the extent to which a mismatch between a physical object and its virtual representation can be tolerated before it breaks the illusion of being in a virtual environment. Perfect matches are rarely practical; on the other hand, a tolerable mismatch enables designers to create a wide variety of Substitutional Environments.
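The matching idea can be sketched as a simple scoring function (hypothetical, not the study's actual measure): a physical prop is compared to a virtual object on a few properties, and larger mismatches lower its suitability as a substitute. The weights below are illustrative assumptions, loosely reflecting the report's point that some properties matter more than others.

```python
# Hypothetical sketch of scoring a physical prop as a substitute for a
# virtual object. Property weights are illustrative assumptions only.
WEIGHTS = {"shape": 0.4, "size": 0.3, "material": 0.2, "temperature": 0.1}

def substitution_score(prop, virtual):
    """Return a 0..1 score; 1.0 means the prop matches on every property.

    prop, virtual: dicts mapping property name -> category label.
    Each mismatched property subtracts its weight from the score.
    """
    score = 1.0
    for key, weight in WEIGHTS.items():
        if prop.get(key) != virtual.get(key):
            score -= weight
    return round(score, 2)

# A real mug standing in for a virtual torch: same shape and size,
# different material and temperature.
mug = {"shape": "cylinder", "size": "small",
       "material": "ceramic", "temperature": "warm"}
torch = {"shape": "cylinder", "size": "small",
         "material": "metal", "temperature": "cold"}
print(substitution_score(mug, torch))  # 0.7
```

A designer could use such a score to decide which props in a room are "close enough" to stand in for which virtual objects.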

Results helped identify which physical properties (such as shape, size, material and temperature) had the most impact in supporting or breaking this illusion. The findings were collected into a set of design guidelines to support future designers of Substitutional Reality experiences. These results form a paper that, at the time of writing, has been accepted for inclusion in the programme of the SIGCHI Conference on Human Factors in Computing Systems, to be held in Seoul, South Korea, in April 2015. Long-term results could include further exploration of this design space, eventually leading to systems able to use the user’s physical surroundings to dynamically alter the Virtual Environment the user is immersed in. This would allow users to participate in Virtual Reality experiences in their own homes.

In addition to these research activities, the researcher was invited to give job talks at the Universities of Portsmouth, York and Birmingham. The University of Portsmouth subsequently offered him a Lectureship. Following the end of his contract with Lancaster University, he is continuing the research projects started during the fellowship at the University of Portsmouth, where he is also engaging in teaching activities.

The project website is maintained at http://www.adalsimeone.me/3STARS
The researcher can now be contacted at adalberto.simeone@port.ac.uk