Periodic Reporting for period 2 - .lumen – Glasses for the blind (.lumen - Empowering the blind)
Reporting period: 2023-11-01 to 2024-10-31
The guide dog is unanimously seen as a good option, but it has a few drawbacks. Training a single guide dog costs between $30,000 and $60,000, and caring for such a companion can be a significant responsibility for a blind person.
Because of this, there are only 28,000 guide dogs for 40 million individuals with visual disabilities.
.lumen offers a solution that mimics the benefits of a guide dog without the drawbacks that make it non-scalable. Using the latest advances in autonomous driving, artificial intelligence, and robotics, .lumen behaves like a virtual guide dog in the form of a headset. Once users put it on, they become aware of their position and movements in 3D: they understand where they are and how they can interact with the environment around them. They can, for example, ask the headset to take them to a specific point of interest or to bring them home, and it will do so. Rather than pulling the hand, as a guide dog does, the .lumen system guides users by “pulling” their head. The system conveys relevant information to its user through an intuitive haptic and auditory feedback mechanism.
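The "head-pulling" guidance idea described above can be sketched as a simple control loop: compute the angular error between the user's current heading and the bearing toward the next waypoint, then map it to a directional cue. This is a minimal illustrative sketch only; the function names, dead-zone threshold, and flat 2D geometry are assumptions, not the .lumen system's actual implementation.

```python
import math

def bearing_to(pos, target):
    """Bearing (radians) from the current position to the target in a flat 2D plane."""
    return math.atan2(target[1] - pos[1], target[0] - pos[0])

def heading_error(heading, pos, target):
    """Signed angular error in (-pi, pi]; positive means the target is to the left."""
    err = bearing_to(pos, target) - heading
    return math.atan2(math.sin(err), math.cos(err))  # wrap into (-pi, pi]

def haptic_cue(heading, pos, target, dead_zone=math.radians(10)):
    """Map the heading error to a coarse directional cue (hypothetical command set)."""
    err = heading_error(heading, pos, target)
    if abs(err) <= dead_zone:
        return "forward"
    return "turn_left" if err > 0 else "turn_right"
```

In a real system this cue would be rendered continuously through the haptic interface as the user's head pose updates, rather than as discrete commands.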
The project's objective is to achieve the following milestones:
- Demonstrate the system prototype in an operational environment
- Complete and qualify the system
- Deliver the .lumen device to the first clients
Some of the main activities included:
- Refinement of the hardware so that this head-worn wearable is comfortable for a large segment of the population
- Development of specific aspects of the sensors, cooling, battery, miniaturisation, and feedback interfaces
- On the software side, extensive updates across the overall software stack, including:
  - Machine learning models for both image processing and sound processing, some surpassing their 100th iteration on tasks such as classification, segmentation, and intent understanding
  - Classical computer vision algorithms able to process up to 10 image streams in real time to perform 3D environment understanding
  - Advanced auditory and haptic feedback interfaces, further developed together with blind individuals
- Hundreds of blind individuals navigating with the guide-dog-like experience provided by the patented .lumen system
- Achieving 70% of the computing power of a self-driving car in a wearable
- A new state-of-the-art machine learning model for pedestrian segmentation, combined with mapping algorithms, enabling a level of environment understanding never before achieved in a wearable
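The last bullet describes combining per-pixel segmentation output with mapping. One common way such a combination works, shown here as a hypothetical sketch, is to project pixels labelled as pedestrians through a simple camera model into a top-down occupancy grid. The pinhole geometry, grid dimensions, and class IDs below are illustrative assumptions and not .lumen's actual pipeline.

```python
import math

PEDESTRIAN = 1  # illustrative class id in the segmentation mask

def project_to_grid(seg_mask, depth, hfov_deg=90.0, cell_m=0.25, grid_w=16, grid_d=16):
    """Project pedestrian-labelled pixels into a top-down occupancy grid.

    seg_mask, depth: 2D lists (rows of pixels) of equal shape; depth in metres.
    Returns a grid_d x grid_w grid of 0/1 cells, with the camera at the
    bottom-centre cell looking "up" the grid.
    """
    grid = [[0] * grid_w for _ in range(grid_d)]
    cols = len(seg_mask[0])
    for row, drow in zip(seg_mask, depth):
        for u, (cls, d) in enumerate(zip(row, drow)):
            if cls != PEDESTRIAN or d <= 0:
                continue
            # horizontal angle of this pixel column from the optical axis
            ang = math.radians(((u + 0.5) / cols - 0.5) * hfov_deg)
            x = d * math.sin(ang)  # lateral offset (m)
            z = d * math.cos(ang)  # forward distance (m)
            gx = int(grid_w / 2 + x / cell_m)
            gz = int(z / cell_m)
            if 0 <= gx < grid_w and 0 <= gz < grid_d:
                grid[gz][gx] = 1
    return grid
```

Feeding several camera streams into one shared grid like this is one plausible way multiple real-time image streams could contribute to a single environment model.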