Vision Inspired Driver Assistance Systems

Periodic Reporting for period 2 - VI-DAS (Vision Inspired Driver Assistance Systems)

Reporting period: 2018-03-01 to 2019-08-31

Road accidents continue to be a major public safety concern, and human error is their main cause. Intelligent driver systems that can monitor the driver’s state and behaviour show promise for our collective safety. VI-DAS progressed the design of next-generation 720° connected ADAS (scene analysis, driver status). Advances in sensors, data fusion, machine learning and user feedback provide the capability to better understand driver, vehicle and scene context, facilitating a significant step along the road towards truly semi-autonomous vehicles. On this path there is a need to design vehicle automation that can gracefully hand control over to, and back from, the driver.

VI-DAS advances in computer vision and machine learning introduce non-invasive, vision-based sensing capabilities to vehicles and enable contextual driver behaviour modelling. The technologies are based on inexpensive and ubiquitous sensors, primarily cameras. Predictions of outcomes in a scene are created to determine the best reaction to feed to a personalised HMI component that proposes optimal behaviour for safety, efficiency and comfort. VI-DAS employs a cloud platform to improve ADAS sensor and algorithm design and to store and analyse data at a large scale, thus enabling the exploitation of vehicle connectivity and cooperative systems.

VI-DAS addresses human error through the study of real accidents, in order to understand patterns and consequences as an input to the technologies. VI-DAS also addresses legal, liability and emerging ethical aspects, because such technology brings new risks and justifiable public concern. The insurance industry will be key to the adoption of next-generation ADAS and autonomous vehicles, and is a stakeholder in reaching L3.
The innovative Human-Centred Method implemented by VI-DAS started from the analysis of accidents and driving errors to support the design of the VI-DAS prototypes at each phase of the development process. The VI-DAS use cases tackle complex situations rather than simple actions and manoeuvre descriptions in different scenarios. One of the main objectives of VI-DAS is to address the hand-over and hand-back between manual and automated driving modes, focusing on the driver’s status and scene interpretation while always keeping the driver in the loop.
After the specification phase and following three RTD cycles, the main modules of the overall VI-DAS System were developed:
1) Outside sensing: multi-object detectors, trackers, depth estimators and classification modules
2) Inside sensing: driver monitoring modules for blink, head pose, gaze and action recognition
3) Understand: risk and prediction modules feeding recommendations to the action module
4) Advise/Act: a complete multimodal adaptive HMI
5) Connect: cooperative X2X modules, including V2C
6) Risk: risk modelling, including liability and insurance parameters

These modules have been integrated into the VI-DAS sensor-agnostic platform for SAE L3 onwards; the complete approach is an architecture deployable on different hardware platforms such as vehicles and simulators.
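As a purely illustrative sketch of how these module groups could be chained in a sensor-agnostic pipeline, the following Python fragment wires hypothetical Outside sensing, Inside sensing, Understand and Advise/Act stages around a shared per-frame context. All class names, fields and decision rules are placeholders and do not represent the project’s actual interfaces.

# Illustrative sketch only: a minimal pipeline chaining the VI-DAS module
# groups. All class and method names are hypothetical placeholders, not the
# project's actual interfaces.
from dataclasses import dataclass, field


@dataclass
class FrameContext:
    """Per-frame context shared across the modules."""
    scene_objects: list = field(default_factory=list)   # Outside sensing output
    driver_state: dict = field(default_factory=dict)    # Inside sensing output
    risk_estimate: float = 0.0                          # Understand output
    hmi_action: str = "none"                            # Advise/Act output


class OutsideSensing:
    def process(self, camera_frame, ctx: FrameContext) -> None:
        # Detect, track and classify objects, estimate depth (placeholder).
        ctx.scene_objects = [{"id": 1, "type": "vehicle", "distance_m": 32.0}]


class InsideSensing:
    def process(self, cabin_frame, ctx: FrameContext) -> None:
        # Driver monitoring: blink, head pose, gaze, action (placeholder).
        ctx.driver_state = {"gaze_on_road": True, "drowsiness": 0.1}


class Understand:
    def process(self, ctx: FrameContext) -> None:
        # Combine scene and driver state into a single risk estimate (toy rule).
        distracted = not ctx.driver_state.get("gaze_on_road", True)
        close_object = any(o["distance_m"] < 20.0 for o in ctx.scene_objects)
        ctx.risk_estimate = 0.8 if (distracted and close_object) else 0.2


class AdviseAct:
    def process(self, ctx: FrameContext) -> None:
        # Choose an HMI action from the risk estimate (toy threshold).
        ctx.hmi_action = "warn_driver" if ctx.risk_estimate > 0.5 else "none"


def run_pipeline(camera_frame, cabin_frame) -> FrameContext:
    ctx = FrameContext()
    OutsideSensing().process(camera_frame, ctx)
    InsideSensing().process(cabin_frame, ctx)
    Understand().process(ctx)
    AdviseAct().process(ctx)
    return ctx


if __name__ == "__main__":
    print(run_pipeline(camera_frame=None, cabin_frame=None))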

The VI-DAS approach was successfully integrated and tested in a real environment on the Eindhoven/Helmond highway infrastructure, where a final event was organised to showcase the VI-DAS use cases.
Additionally, an exhaustive validation was carried out with end users, with impressive results. The studies show high usability and acceptability of VI-DAS: utility, acceptability and satisfaction at 90%, effectiveness at 90% and efficiency at 97%.

Dissemination and communication have played a key role in the success of the project. VI-DAS has also generated the largest Multidimensional Driver State Open Database, which will be made public for further research purposes. Exploitation plans have been studied both for the project as a whole and for the individual modules, proposing a clear list of VI-DAS products to be offered to the market or to other stakeholders. The IPR study carried out in parallel has helped the consortium identify the exploitable foreground and joint exploitation opportunities.

Finally, the defined Human Machine Transition cycle for addressing graceful hand-over and hand-back, as well as the Video Content Description (VCD) format for generating metadata-based rich scene descriptions, will be put forward as proposed standards.
The main impacted areas are as follows:
- Automotive sensory data fusion and aggregation: VI-DAS has proposed a multi-core, multi-CPU hardware architecture capable of connecting multiple units to handle the computational load required by the perception, understanding and act activities.
Additionally, VI-DAS has produced the VCD metadata format, which is currently being standardised (an illustrative example of this kind of scene metadata is sketched after this list).
- Driver State Monitoring: VI-DAS has proposed personalised driving and driver models based on non-intrusive parameters and flexible model building; the models are pre-trained and then fine-tuned with each individual driver’s data (see the sketch after this list). This approach helps to truly understand the individual state of the driver.
- Driver modelling and risk evaluation: VI-DAS takes steps towards the correct scene analysis of critical situations through the categorisation of databases of accidents, near-crash cases, difficult driving situations and driving errors; the prediction of the dynamic evolution of the traffic using v-SLAM technology and object tracking; the estimation of the consequences in terms of risk and utility by quantifying the cost of an expected event and the associated damage (see the sketch after this list); and the evaluation and selection of the most appropriate behavioural choice, leading to corresponding driver support actions or notifications.
- Confidence estimation to support risk estimation: VI-DAS proposes advances in confidence estimation techniques for next-generation real-time artificial intelligence, specifically deep learning (a generic illustration is sketched after this list). VI-DAS has also explored an advanced simulation-in-the-loop concept, in which the LDM has been used as a basis for simulating the expected multi-modal sensory inputs.
- Developing, verifying and validating safety SW: new testing methods and testing automation for (semi-)autonomous vehicles. Two software tools have mainly been used: RTMaps, created by Intempora, and PreScan, created by TASS. These tools have evolved within the project, with new functionalities developed to support the demanding requirements of testing and integration. Furthermore, an innovative Human-Centred Method has been implemented, considering the analysis of accidents and driving errors in the design of the VI-DAS prototypes.
- Efficient, customizable and optimized HMI: on-the-fly allocation of the HMI channel to be used in the holistic environment. The major advance in this field is the development of a personalised, cognitive-aware modality allocation system based on driving models and the scene understanding obtained from the sensors (see the sketch after this list). By including the driver’s information-processing characteristics, personal driving modes and situational context, automatic adaptive multimodal HMI systems have been generated to improve safety and comfort.
- Connected component security: The EVRA method has been enhanced by applying the inter-system communication-centric STRIDE-per-element threat analysis methodology.
- Insurance and legal innovation: VI-DAS has embedded insurance considerations within the technical outcomes of the project. This allowed the project to estimate both the auto liability reduction and the product liability increase for the VI-DAS car.
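To make the sensory data fusion and aggregation point more concrete, the fragment below sketches the kind of frame-indexed scene metadata a format such as VCD captures. It is a simplified, hypothetical example in Python; the field names are illustrative and do not reproduce the actual VCD schema.

# Simplified, hypothetical example of frame-indexed scene metadata in the
# spirit of VCD; field names are illustrative, not the normative schema.
import json

scene_description = {
    "metadata": {"schema": "illustrative-only", "camera": "front_rgb"},
    "objects": {
        "0": {
            "type": "Pedestrian",
            "frame_intervals": [{"frame_start": 120, "frame_end": 164}],
        },
        "1": {
            "type": "Vehicle",
            "frame_intervals": [{"frame_start": 0, "frame_end": 500}],
        },
    },
    "actions": {
        "0": {
            "type": "DriverGlanceOffRoad",
            "frame_intervals": [{"frame_start": 140, "frame_end": 152}],
        },
    },
}

print(json.dumps(scene_description, indent=2))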
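For the Driver State Monitoring point, the following minimal sketch illustrates the pre-trained-then-personalised idea: a generic driver-state classifier whose shared layers are frozen while its final layer is fine-tuned on one driver’s data. The network shape, feature set and hyperparameters are hypothetical, not the project’s actual models.

# Minimal sketch of "pre-trained, then fine-tuned per driver": a generic
# driver-state classifier whose backbone is frozen and whose head is
# re-trained on one driver's data. Features and hyperparameters are illustrative.
import torch
import torch.nn as nn


def build_generic_model(n_features: int = 8, n_states: int = 3) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(n_features, 32), nn.ReLU(),   # shared "backbone"
        nn.Linear(32, n_states),                # per-driver "head"
    )


def personalise(model: nn.Sequential,
                driver_x: torch.Tensor,
                driver_y: torch.Tensor,
                epochs: int = 20) -> nn.Sequential:
    # Freeze the shared backbone; adapt only the final layer to this driver.
    for p in model[0].parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model[-1].parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(driver_x), driver_y)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Synthetic stand-in for one driver's labelled monitoring data.
    x = torch.randn(64, 8)          # e.g. blink rate, gaze, head pose features
    y = torch.randint(0, 3, (64,))  # e.g. attentive / distracted / drowsy
    model = build_generic_model()   # would normally load pre-trained weights
    personalise(model, x, y)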
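For the driver modelling and risk evaluation point, the sketch below illustrates risk quantified as expected cost: each predicted event carries a probability and a damage cost, and the behavioural option with the lowest expected cost is selected. The events, numbers and option names are purely illustrative.

# Hypothetical sketch: risk as expected cost. Each candidate behaviour is
# scored by summing probability x damage cost over the events it could lead
# to, and the lowest-risk behaviour is selected. Numbers are illustrative.
from typing import Dict, List, Tuple

# (probability of event, cost of the associated damage) per predicted event
Event = Tuple[float, float]


def expected_risk(events: List[Event]) -> float:
    return sum(p * cost for p, cost in events)


def select_behaviour(options: Dict[str, List[Event]]) -> str:
    return min(options, key=lambda name: expected_risk(options[name]))


if __name__ == "__main__":
    options = {
        "keep_lane": [(0.05, 100.0), (0.01, 1000.0)],  # minor and severe crash
        "brake":     [(0.02, 100.0), (0.002, 1000.0)],
        "hand_back": [(0.10, 50.0)],                   # take-over discomfort
    }
    print(select_behaviour(options))  # -> "brake" (lowest expected cost here)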
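For the confidence estimation point, the sketch below shows one standard way of attaching a confidence score to a deep network’s output, Monte Carlo dropout. It is a generic illustration under that assumption, not necessarily the technique adopted in VI-DAS.

# Illustration of a standard confidence-estimation technique for deep
# networks (Monte Carlo dropout), shown as a generic example only.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    def __init__(self, n_features: int = 16, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_confidence(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes; the spread of the predictions
    serves as an (inverse) confidence signal."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    mean_probs = probs.mean(dim=0)                    # averaged prediction
    std_probs = probs.std(dim=0)                      # per-class disagreement
    confidence = 1.0 - std_probs.max(dim=-1).values   # crude scalar confidence
    return mean_probs, confidence


if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(5, 16)
    mean_probs, conf = mc_dropout_confidence(model, x)
    print(conf)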
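For the efficient, customizable and optimized HMI point, the toy sketch below illustrates rule-based modality allocation: given driver state and scene context, it picks the HMI channel (visual, auditory or haptic) least likely to overload the driver. The rules and thresholds are hypothetical illustrations only.

# Toy sketch of cognitive-aware HMI modality allocation: pick the channel
# least likely to overload the driver given driver state and scene context.
# Rules and thresholds are hypothetical illustrations only.
def allocate_modality(driver_state: dict, scene: dict) -> str:
    gaze_on_road = driver_state.get("gaze_on_road", True)
    drowsiness = driver_state.get("drowsiness", 0.0)    # 0..1
    cabin_noise = scene.get("cabin_noise_db", 50.0)

    if drowsiness > 0.6:
        return "haptic"            # strongest channel for a drowsy driver
    if not gaze_on_road:
        return "auditory" if cabin_noise < 70.0 else "haptic"
    return "visual"                # eyes already on the road / display area


if __name__ == "__main__":
    print(allocate_modality({"gaze_on_road": False, "drowsiness": 0.2},
                            {"cabin_noise_db": 75.0}))  # -> "haptic"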