Natural sense of vision through acoustics and haptics

Periodic Reporting for period 2 - Sound of Vision (Natural sense of vision through acoustics and haptics)

Reporting period: 2016-07-01 to 2017-12-31

The Sound of Vision project (vision restoration through sound and haptics) designed, implemented and validated an original non-invasive hardware and software system that assists visually impaired persons (VIPs) in understanding the environment and in navigating. The Sound of Vision device works by continuously scanning the environment, extracting essential features and rendering them in real time to the user through audio and haptic means. It is a state-of-the-art device, using the latest technologies to provide rich and customizable functionality that surpasses any of the other existing solutions.
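The scan-extract-render loop described above can be illustrated with a minimal sketch. All function names, the grid layout and the distance thresholds below are hypothetical illustrations, not the project's actual algorithms:

```python
import numpy as np

def extract_obstacles(depth_frame, near=0.5, far=4.0, grid=(3, 3)):
    """Divide a depth frame (metres) into a coarse grid and report,
    per cell, the distance of the nearest surface in sensing range."""
    h, w = depth_frame.shape
    rows, cols = grid
    obstacles = []
    for r in range(rows):
        for c in range(cols):
            cell = depth_frame[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
            valid = cell[(cell > near) & (cell < far)]
            if valid.size:
                obstacles.append((r, c, float(valid.min())))
    return obstacles

def render(obstacles, far=4.0):
    """Map each detected obstacle to a cue intensity for the audio or
    haptic channel: closer surfaces produce stronger cues."""
    return [(r, c, 1.0 - dist / far) for r, c, dist in obstacles]

# One iteration of the loop on a synthetic frame:
frame = np.full((480, 640), 5.0)   # empty scene, everything out of range
frame[200:300, 300:400] = 1.2      # an obstacle 1.2 m ahead, centre cell
cues = render(extract_obstacles(frame))
```

A real implementation would run this on every camera frame and feed the cue intensities to spatialized audio and the haptic actuators.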
The overall objective was to develop a system that helps VIPs both in perceiving the environment and in independently moving in indoor or outdoor areas, without the need for predefined tags/sensors located in the surroundings.
Three functional prototypes were developed, extensively tested and validated. The last two prototypes were improved based on technical and usability testing of their predecessors. Feedback was provided by VIPs, training specialists, and specialists in psychophysics and behavioral science.
The main objective was to overcome the limitations of previous similar approaches that had prevented adoption by the visually impaired community. This goal was achieved through a complex research and development approach with an emphasis on several key aspects: an interdisciplinary approach to the design, implementation and validation of the system; direct participation of VIPs in the design and validation processes; and high importance given to training, pervasiveness, wearability, and the richness and naturalness of the representation through original combinations of audio and haptic encodings.
The end result is a solution that includes training material and instruments to help VIPs use the system. It is a concept that goes beyond the state of the art and has the potential to become a successful commercial product.
WP1(M1-M5) was dedicated to the preparation of the development of the Sound of Vision system. The first draft of the User Requirements Document (URD) was discussed by the Consortium. Guided by these, the design work progressed iteratively and produced a high-level guide and blueprint for the implementation of the system, described in the Architectural Design Document (ADD). The work produced a list of equipment to be bought and evaluated in WP2. Dissemination activities were initiated and online presence was established.

WP2 (M6-M13) was dedicated to experimenting with and evaluating technical alternatives. The design concept of the virtual training environment was defined, and the blueprint for the Sound of Vision solution was designed and written up in a Detailed Design Document (DDD).

The main focus of WP3 (M14-M21) was the development of the first prototype of the Sound of Vision system. The test environments and procedures were ready at the end of M19 and testing was conducted in M20-M21. Taking into account the feedback of sighted and blind users, the performance of the initial prototype was improved.

During WP4 (M22-M29), the consortium developed the advanced prototype: improving existing modules, implementing and integrating additional modules, and performing unit, integration and system testing. Overall stability was increased, resulting in a fully functional hardware and software solution working at interactive frame rates, supporting all the established audio-haptic encodings, and with better ergonomics. During WP4, training protocols for the advanced prototype were developed.
At the end of WP4 the prototype was subjected to:
usability and performance testing
BCI testing
technical internal testing
user feedback acquisition.

During WP5 (M30-M36), the final prototype system was developed, building on the advanced prototype from WP4. It included most of the essential functions envisaged for the system. The prototype was validated and improved through extensive testing, with very good results. Three types of tests were performed: technical tests; extensive usability and performance tests in both lab and real-world environments; and EEG tests.
The main result is the Final Prototype: a complex and powerful TRL 7-8 hardware and software solution accompanied by a comprehensive set of training resources. It works in real time, with continuous 3D scanning and analysis of the surroundings, powerful and naturalistic audio-haptic encodings of the environment, good ergonomics, and stable operation.

The next step is the transfer to industry and exploitation. Thus, the solution itself will be improved (miniaturization, ergonomics, reliability, functionality and costs) and become widely available to the visually impaired community.

The following are the key aspects of Sound of Vision:
Powerful 3D scene scanning and analysis
Rendering through naturalistic multimodal (audio-haptic) full-scene encodings
Additional tools for perception, safe and efficient mobility
Wearable
Developed and extensively tested with end-users
Rich and efficient training resources, to help users achieve proficiency
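The "naturalistic multimodal full-scene encodings" listed above can be illustrated with a toy sonification rule that turns one obstacle into a stereo audio cue. The mapping below (azimuth to pan, distance to pitch and loudness) is a hypothetical illustration, not the project's actual encoding:

```python
import math

def sonify(azimuth_deg, distance_m, max_dist=4.0):
    """Map one obstacle to a stereo cue: azimuth -> left/right pan,
    distance -> pitch and loudness (closer = higher and louder)."""
    closeness = max(0.0, 1.0 - distance_m / max_dist)
    freq = 220.0 * 2 ** (2 * closeness)  # 220 Hz (far) .. 880 Hz (near)
    pan = (math.sin(math.radians(azimuth_deg)) + 1) / 2  # 0 = left, 1 = right
    gain = 0.2 + 0.8 * closeness
    return freq, pan, gain

# An obstacle straight ahead, 2 m away:
freq, pan, gain = sonify(azimuth_deg=0.0, distance_m=2.0)
```

In a full system such cues would be spatialized with HRTFs and mixed with the haptic channel, so the user perceives the whole scene rather than a single point.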
The consortium believes, backed by the test performance of trained users, that the main results of the project have the potential to significantly impact the lifestyle of VIPs. Specifically, the Final Prototype enables better perception of the environment and improved mobility, increasing both independence and safety.
The Final Sound of Vision prototype is a TRL 7-8 solution, modular in terms of hardware and software, with unprecedented functionality. This puts Sound of Vision in an excellent position to advance quickly to a commercial product and, crucially, to benefit continuously from new and better components, such as 3D cameras or portable processing power.
Comparing the Sound of Vision project with similar actions, the following aspects stand out:
Interdisciplinary approach to design, implementation and validation
High importance given to training
High importance given to pervasiveness
Rich, natural-like perception, and additional tools - flexible and customizable
Emphasis on the use of haptics
Powerful combinations of audio and haptic representations
Extensive evaluation with end users


Additional impact is supported by the following technical innovations produced:
Adjustable multi-speakers
Haptic belt
Basic research into haptic perception
Wearable hardware and software system based on sensor fusion for acquisition of 3D information
Method for converting depth images into fluid sounds for real-time blind navigation
Method for combined haptic and auditory space representation
Method for simultaneous and dynamic haptic rendering of objects’ shape, distance, and type
Machine learning algorithm for real-time understanding of stressful moments in indoor and outdoor environments using biosignals.
GPU ray-based segmentation
GPU estimation and filtering of normal vectors
Labeling of ground, walls, ceiling, generic objects in indoor environments
Stair detection in depth images
Method for estimating ground plane equation for an arbitrary roll rotation
Method for real-time detection of doors based on color and depth
Algorithm for real-time stereo-based 3D reconstruction in dynamic scenes
Algorithm for real-time segmentation of outdoor environments
Ground plane detection and tracking in stereo sequences
Detection of negative obstacles in stereo sequences
Dual vectors based solution for the initial guess in the motion estimation problem
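Several of the innovations above (ground plane equation estimation for an arbitrary roll rotation, ground plane detection and tracking in stereo sequences) rely on fitting a plane to reconstructed 3D points. A minimal sketch of such plane fitting using a RANSAC-style approach follows; the parameters and the synthetic test scene are illustrative assumptions, not the project's actual algorithm:

```python
import numpy as np

def fit_ground_plane(points, iters=200, tol=0.05, seed=None):
    """Estimate a plane n·x + d = 0 from an (N, 3) point cloud:
    repeatedly fit a plane to 3 random points and keep the candidate
    that explains the most points to within `tol` metres."""
    rng = np.random.default_rng(seed)
    best = (None, 0.0, 0)  # (unit normal, offset, inlier count)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2]:
            best = (n, d, int(inliers.sum()))
    return best

# Synthetic scene: a flat floor at y = 0 plus off-plane clutter.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-2, 2, 500),
                         np.zeros(500),
                         rng.uniform(0.5, 4, 500)])
clutter = rng.uniform(0.5, 2, (50, 3))   # y >= 0.5, never on the plane
n, d, count = fit_ground_plane(np.vstack([floor, clutter]), seed=0)
```

The robustness to outliers is the point of the sampling scheme: clutter points never dominate the inlier count, so the recovered normal stays aligned with the floor even in busy scenes.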