The EU-backed SOUND OF VISION (Natural sense of vision through acoustics and haptics) project is all about creating and conveying an auditory representation of the surrounding environment. Composed of 3D cameras and inertial sensors, the device processes data from surrounding objects and feeds it back to the user in the form of spatial sounds and vibrations from a wearable belt. ‘Our system can identify and warn the user about potential collisions or falls, suggest the best free paths, and even scan for and read texts,’ says Prof. Rúnar Unnþórsson from the University of Iceland. The user is provided with clear, naturalistic visual-to-audio and tactile metaphors, making Sound of Vision an invaluable tool in otherwise highly stressful and insecure environments.

At first glance, the device could be considered just another offering in a growing list of similar assistive technologies already available or currently in development. But that would be to ignore the very features that make it unique and that convinced the EU to invest almost EUR 4 million in its development. ‘There are several important aspects that distinguish Sound of Vision from the alternatives,’ Prof. Unnþórsson explains. ‘It works both indoors and outdoors; it can render on audio and/or tactile channels as required; it provides additional functionality such as free-path detection and text reading; and it comes with an elaborate set of training procedures that make intensive use of virtual environments to enable self-training. Finally, it offers several alternative methods for encoding and rendering the extracted information. The user can select the most suitable one at any time, based on needs and preferences.’

Sound of Vision is also highly customisable: the user can select different sound and tactile models and fine-tune parameters at will.
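The article does not describe the project's actual encodings, but the general idea of mapping a detected obstacle to a spatial audio cue can be illustrated with a minimal sketch. Everything below — the function name, the working range, and the distance-to-loudness, distance-to-pitch and angle-to-pan mappings — is an illustrative assumption, not Sound of Vision's real algorithm:

```python
def encode_obstacle(distance_m, azimuth_deg, max_range_m=5.0):
    """Hypothetical mapping of an obstacle's distance and horizontal
    angle to simple audio cue parameters: loudness, pitch and stereo pan.
    The ranges and formulas are illustrative assumptions only."""
    # Clamp distance into an assumed working range of the 3D sensor.
    d = max(0.0, min(distance_m, max_range_m))
    # Nearer obstacles -> louder cue (linear ramp, 0.0 far .. 1.0 close).
    volume = 1.0 - d / max_range_m
    # Nearer obstacles -> higher pitch (assumed 220 Hz far, 880 Hz close).
    pitch_hz = 220.0 + (880.0 - 220.0) * volume
    # Azimuth of -90..+90 degrees maps to stereo pan of -1.0..+1.0.
    pan = max(-1.0, min(azimuth_deg / 90.0, 1.0))
    return {"volume": round(volume, 2),
            "pitch_hz": round(pitch_hz, 1),
            "pan": round(pan, 2)}

# An obstacle 1 m away, 30 degrees to the right:
print(encode_obstacle(1.0, 30.0))
# → {'volume': 0.8, 'pitch_hz': 748.0, 'pan': 0.33}
```

A design like this would let the user swap in a different encoding function (or a tactile one driving the belt's vibration motors) without changing the object-detection stage — consistent with the article's point that several alternative encodings are offered and user-selectable.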
On to the next prototype

Several prototypes have been developed over the duration of the project, starting with one that included only the basic functions, followed by a more advanced one including most of the above-mentioned features. Each new version was thoroughly tested by visually impaired volunteers, confirming the device’s good performance and enabling the team to identify issues. ‘Most complaints have been highly valuable for improving the prototypes, allowing us to keep the most useful encodings and fine-tune them,’ says Prof. Unnþórsson.

The team is currently working on its final prototype, which it expects to be ready in October 2017, with a final round of testing to take place in October/November. Users can expect better reliability, more efficient scanning, encodings and renderings, as well as improved wearability and ergonomics. ‘At the top of our list is the continuous improvement of the acquisition and processing of the 3D data. We also need to finish the physical design of the final prototype and keep fine-tuning: for instance, adjusting audio and haptic encodings, fine-tuning parameters and improving software reliability and energy efficiency, both of which are very important for a wearable device,’ Prof. Unnþórsson explains.

As soon as the project wraps up at the end of this year, the partners plan to seek follow-up projects and industrial partnerships in order to further miniaturise the system and start its mass production and commercialisation. ‘Small-scale commercialisation can start six to 12 months after the project ends. The device would first go to a small selected group of visually impaired people willing to help refine the product. Before full commercialisation, however, we estimate that two years will be needed for commercial product development, including miniaturisation, cost optimisation, refining, testing and certification,’ Prof. Unnþórsson concludes.
SOUND OF VISION, inertial sensors, tactile, visually-impaired, acoustics, haptics, wearable belt, falls, software