CORDIS - EU research results
Content archived on 2024-06-18

Learning and rehabilitation using haptic, audio and visual feedback in virtual reality environments

Final Report Summary - LR HAV VRE (Learning and rehabilitation using haptic, audio and visual feedback in virtual reality environments)

Project objectives for the period

These studies are based on the hypothesis that supplying appropriate information (perceptual and conceptual) through compensatory sensory channels by using a virtual learning environment (VLE) may assist people who are blind in gathering the information required for the learning process. This technology is used in learning and rehabilitation environments for people with physical, mental, sensory, and learning disabilities. This project is part of a larger and longer research effort that over the years has included the design and development of two VLEs for people who are blind, usability studies, and evaluation studies of the effectiveness of VLEs on cognitive skills.

During this project, we researched and developed several technologies that allow users who are blind to explore and learn via virtual environments through haptic and auditory feedback. Over the course of the project, four studies were carried out; they are described below.

Study one: Complex VLE to support orientation skills of people who are blind - The participants explored four unknown complex real spaces after first exploring the VLE in advance of their arrival at the real spaces (RS). This study was based on the BlindAid system, which was designed and developed at the Touch Lab at the Massachusetts Institute of Technology (MIT) with Dr Srinivasan and Dr Schloerb during my previous research as a postdoctoral associate. The BlindAid system combines three-dimensional (3D) audio with a Phantom® haptic interface. This study included four participants who are totally blind. During the project period we evaluated their exploration strategies, exploration process, cognitive mapping process, and the contribution of these to their navigation in the RS. The findings provide strong evidence that the exploration of the VLEs gave participants a stimulating, comprehensive, and thorough acquaintance with the target space. During the orientation tasks in the RS the participants were able to recall their cognitive maps. They were able to manipulate their spatial knowledge very well, especially in the reverse tasks and the perspective-taking tasks. Most of the participants were able to transfer the tactile information that they had collected as tactile travellers in the VLEs to auditory landmarks and were able to use echolocation landmarks during their walk in the RS. One outcome of these results, and of the successful collaborative work with an orientation and mobility (O&M) specialist during study one, was the effort to transform this promising technology into a useful learning and rehabilitation tool (study two). This study was collaborative research with Dr Srinivasan and Dr Schloerb from the Touch Lab, MIT, and with the Carroll Center for the Blind, a rehabilitation center in Newton, Massachusetts. The study results were published in three international scientific journals and presented at three international conferences and four scientific meetings.
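To illustrate the kind of force feedback a haptic stylus interface can render, a standard penalty-based virtual-wall model can be sketched as follows. This is a hypothetical minimal example for illustration only, not the BlindAid system's actual implementation; the function name, coordinate convention, and stiffness value are assumptions:

```python
def wall_penalty_force(stylus_y, wall_y=0.0, stiffness=800.0):
    """Penalty-based haptic wall: push the stylus back out when it
    penetrates a virtual surface (F = k * penetration depth).

    stylus_y and wall_y are in metres; the return value is the force
    in newtons applied to the stylus along +y.
    """
    penetration = wall_y - stylus_y  # positive once the stylus is inside the wall
    if penetration <= 0.0:
        return 0.0  # free space: no force
    return stiffness * penetration
```

In this common rendering scheme, the deeper the stylus tip sinks into a virtual object, the harder the device pushes back, which is what lets a user feel walls and corridors of a virtual map through the stylus.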

Study two: VLE as an O&M rehabilitation tool for the newly blind - This study involved integrating the BlindAid system (see study one) in a rehabilitation center as a simulator with which clients can interact and be trained as part of an O&M rehabilitation program. The BlindAid system combines 3D audio with a Phantom® haptic interface so as to allow the user to explore a virtual map through a hand-held stylus. The use of this system allowed newly blind persons to understand their own positive and negative spatial behaviors. This study included participants who were totally blind and people with residual vision who were blindfolded during the experiments (an experimental group, n = 9; a control group, n = 9). During this study the participants learned how to explore new spaces, how to collect spatial information by using auditory and tactile information, how to solve an orientation problem, and how to apply this information in the RS. In this stage of the study, we completed the analysis of the research data and of the orientation task performance in the RSs for all the research participants (N = 15). The research results show that the exploration simulation in the VLE, with O&M specialist interventions in the training process, improved participants' orientation skills, helped their exploration paths, and prompted them to think about their exploration strategy, orientation problem solving, and the construction of a cognitive map. This study was collaborative research with Dr Srinivasan and Dr Schloerb from the Touch Lab, MIT, and with the Carroll Center for the Blind, a rehabilitation center in Newton, Massachusetts, in the United States (US).
These preliminary results were published in Disability and Rehabilitation: Assistive Technology, an international scientific journal, and presented at Rehab Week 2011, the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Virtual Rehabilitation (ICVR), Zurich, Switzerland; three more papers are in the process of submission.

Study three: Virtual cane versus BlindAid system - This research examines the influence of two pre-planning O&M systems on participants' exploration strategies, exploration process, cognitive mapping process, and the contribution of these to their navigation in the RS. This research included two systems: (a) the BlindAid system, which combines 3D audio with a Phantom® haptic interface so as to allow the user to explore a virtual map through a hand-held stylus, and (b) the virtual cane, based on the Nintendo Wii, a mainstream device that is readily available and inexpensive. This interface uses the WiiMote as a virtual cane to scan the environment in front of the user and the Nunchuck to control direct motion, and combines these with auditory feedback (Evett, Battersby, Ridley & Brown, 2009). This study included 15 participants who are totally blind, divided into three research groups: two experimental groups and one control group. Each of the two experimental groups included five participants, who explored the virtual environments (VEs) by using either the BlindAid system or the virtual cane. The control group, which also included five participants, explored the RSs directly. The participants explored two unknown real spaces, one simple and one complex, by exploring them in advance of their arrival at the RSs. During the project period we evaluated their exploration strategies, exploration process, cognitive mapping process, and the contribution of these to their navigation in the RS. In this stage of the study, we are collecting and analysing the research data on the participants and their performance of orientation tasks in the RSs. Two graduate students who participate in the VLE research group are carrying out this study. This work is a collaborative study with Battersby, Dr Brown, Dr Evett and Merritt from the Computing and Technology team, Nottingham Trent University, Nottingham, UK.
The results of this study will be published in an international scientific journal and presented at an international conference in the near future.
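A virtual cane of the kind described above typically converts the scanned distance to the nearest obstacle into an auditory cue. One common sonar-style mapping, sketched here as a hypothetical minimal example (the function name, ranges, and linear mapping are assumptions, not the cited system's implementation), makes nearer obstacles beep faster:

```python
def distance_to_beep_interval(distance_m, max_range_m=4.0,
                              min_interval_s=0.1, max_interval_s=1.0):
    """Convert a scanned obstacle distance into a beep repetition interval.

    Nearer obstacles beep faster (shorter interval); beyond max_range_m
    the virtual cane stays silent (returns None).
    """
    if distance_m >= max_range_m:
        return None  # nothing within the cane's virtual reach
    # Normalise: 0.0 = touching the obstacle, 1.0 = at the range limit.
    t = max(0.0, distance_m) / max_range_m
    return min_interval_s + t * (max_interval_s - min_interval_s)
```

With such a mapping, sweeping the WiiMote across a doorway would produce silence through the opening and progressively faster beeps toward the door frame on either side.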

Study four: VLE as a science education learning simulation for people who are blind - This system is a modified version of the one that was originally created for the GasLab curriculum by Wilensky in 2003 and then adapted for the Connected Chemistry curriculum by Levy and Wilensky in 2009. This computer model can help individuals who are blind explore and learn about complex scientific models. The particular adaptation of the model for this study involves auditory feedback on variables, locations, and events. This study was funded by the ISF (Israel Science Foundation, individual research grant, October 2011 - September 2015). This work is a collaborative study with Dr Levy from the Faculty of Education at the University of Haifa, Israel. The results of this study were published in two international scientific journals and presented at two international conferences.
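Auditory feedback on a model variable is often realised by sonification, i.e. mapping the variable's value onto an audible parameter such as pitch. The sketch below is a hypothetical minimal example of such a mapping, not the project's actual implementation; the function name, pitch range, and linear scaling are assumptions:

```python
def value_to_frequency(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a simulation variable linearly onto an audible pitch range.

    A value at v_min sounds at f_min (A3) and a value at v_max at f_max
    (A5), so a listener can track the variable's magnitude by pitch alone.
    """
    if v_max == v_min:
        return f_min
    # Clamp so out-of-range readings stay inside the audible band.
    t = max(0.0, min(1.0, (value - v_min) / (v_max - v_min)))
    return f_min + t * (f_max - f_min)


# Example: sonifying temperature readings from a gas particle model.
for temp in (0.0, 50.0, 100.0):
    freq = value_to_frequency(temp, v_min=0.0, v_max=100.0)
    print(f"temperature {temp:5.1f} -> {freq:6.1f} Hz")
```

Discrete events (e.g. a particle collision) would instead trigger short one-off sounds, while a continuous mapping like this one lets a learner follow how a variable rises and falls over the course of a simulation run.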

Implications of the project

Based on past research (Simonnet et al., 2010), further research is needed to examine if and how the VE's spatial exploration method (allocentric or geocentric representations) influences the user's spatial model (route model or map model), a topic that has been less examined and which might influence the user's ultimate ability and outcomes in using a VE. Additionally, the research must proceed to examine the real-life scenarios in which this type of O&M aid is most needed, such as outdoor and complex spaces, and must examine the user's ability to apply the new spatial knowledge in the RS. Recently, novel VE approaches have expanded from supporting a single user in a local approach to supporting multiple users in remote locations (Weiss & Klinger, 2009). The use of such systems can aid people who are blind in two main applications: (a) integration in a rehabilitation program for the newly blind and (b) short-term rehabilitation sessions that are needed mostly as a result of relocation. Additionally, these approaches will allow users to share spatial information via the VE. The potential of these new approaches should be examined. Future research should examine the possibilities of integrating hand-held device technologies with VE pre-planning and in-situ O&M applications. Like their sighted peers, people who are blind tend to use applications based on Android software. Tactile handheld models have been developed (Fukushima & Kajimoto, 2011; Youngseong & Eunsol, 2010). Such applications allow people who are blind to explore the layout of streets in virtual environments by using touch to move along a street and to receive auditory directions. The handheld device's screen fits in the user's palm, allowing the user to collect all the tactile information.
This innovative and unique system could allow users to explore the space in advance, pre-plan a new path, install landmarks, apply these landmarks through the Global Positioning System (GPS) in the RS, share this information with multiple users, and use different spatial layers through the GPS (user's landmarks, public transportation, road construction, etc.).