
Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance

Periodic Reporting for period 1 - ULTRACEPT (Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance)

Reporting period: 2018-12-01 to 2020-11-30

Although still in their early stages, autonomous vehicles have demonstrated huge potential in shaping future lifestyles. However, to be accepted by ordinary users, autonomous vehicles have a critical issue to solve: being trustworthy at collision detection. Autonomous vehicles that experience accidents once every few months or years would be unacceptable to the general public. In the real world, human-driven vehicles collide every second, and more than 1.3 million people are killed in road accidents every year. The current approaches to vehicle collision detection, such as vehicle-to-vehicle communication, radar, laser-based Lidar, and GPS, are far from acceptable in terms of reliability, cost, energy consumption, and size. For example, radar is too sensitive to metallic material; Lidar is too expensive and does not work well on absorbing or reflective surfaces; GPS-based methods struggle in cities with tall buildings; vehicle-to-vehicle communication cannot detect pedestrians or other unconnected objects; segmentation-based vision methods demand too much computing power to be miniaturized; and normal vision sensors cannot cope with fog, rain, or dim night-time conditions. To save lives and make autonomous vehicles safe enough to serve human society, a new type of trustworthy, robust, low-cost, and low-energy vehicle collision detection and avoidance system is needed.
The ULTRACEPT consortium proposes an innovative solution based on brain-inspired, multi-layered, multi-modal information processing for trustworthy vehicle collision detection. By connecting multidisciplinary teams from different countries through staff exchange and collaboration, it exploits the low-cost spatio-temporal and parallel computing capacity of bio-inspired visual neural systems, combined with multi-modal data inputs, to extract potential collision cues under complex weather and lighting conditions.
Multi-disciplinary research has been carried out by the consortium according to plan during the first period. The key challenge in the modelling work is to combine new findings from neurobiologists with existing knowledge, continuing the exploration of collision-detection neural systems and their underlying mechanisms for mobile intelligent machines such as autonomous vehicles. Researchers in the consortium have proposed a new model of the lobula giant movement detector (LGMD) which significantly improves performance when challenged with both translating and looming visual cues. This model has been disseminated at an international conference and shared within the project consortium via secondments, workshops, and networking events. The modelling work has also been extended to bio-inspired locomotion modelled on fish, in collaboration with experienced and early-stage researchers from partners in Asia and Europe. Secondments between partners have consolidated the collaboration between modellers, system-integration researchers, and neurobiologists.
To enhance collision detection by integrating multiple visual neural systems, researchers from the consortium proposed a novel LGMD model and directionally selective visual neural system models with separated ON/OFF channels. For the integration and coordination of multiple visual neural systems in real-time systems, researchers have begun integrating several visual neural networks, including e-LGMD1, LGMD2, and DS (a directionally sensitive neural network), into one autonomous robotic system for verification. This work was published in IEEE Access in 2020.
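To give a rough intuition for this kind of processing, the sketch below shows a deliberately simplified, LGMD-style collision detector with separated ON/OFF channels. It is not the consortium's published model; the structure, parameter values, and function names are illustrative only, and the published work includes further mechanisms beyond this minimal pipeline.

```python
import numpy as np

def box_blur(x):
    """3x3 box blur via shifts; stands in for the spatial spreading of lateral inhibition."""
    acc = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return acc / 9.0

def lgmd_step(prev_frame, curr_frame, w_inhib=0.6, threshold=0.02):
    """One time step of a toy LGMD-style detector with separated ON/OFF channels.

    prev_frame, curr_frame: grayscale frames as float arrays scaled to [0, 1].
    Returns (membrane_potential, spike); spike=True signals a looming cue.
    """
    # Photoreceptor layer: luminance change between consecutive frames.
    diff = curr_frame - prev_frame

    # Split the change into ON (brightening) and OFF (darkening) channels.
    on_channel = np.maximum(diff, 0.0)
    off_channel = np.maximum(-diff, 0.0)

    # Lateral inhibition: excitation minus spatially spread inhibition, per channel.
    on_out = np.maximum(on_channel - w_inhib * box_blur(on_channel), 0.0)
    off_out = np.maximum(off_channel - w_inhib * box_blur(off_channel), 0.0)

    # Summation: expanding (looming) edges excite many cells at once, so pooled
    # activity grows rapidly for approaching objects but stays low for translation.
    membrane_potential = (on_out + off_out).mean()
    return membrane_potential, membrane_potential > threshold
```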
On the system for long-distance hazard perception, the consortium proposed new small target movement detector (STMD) models. The proposed models can detect small targets only a few pixels in size, and are a first step towards recognising distant approaching objects that may develop into hazards within seconds. The collaborators in Germany have proposed complex-valued neural networks as white-box models for real-valued classification problems. The proposed models can select the most important features from the complete feature space through a self-organizing modelling process. They also proposed a hybrid classification framework based on clustering. These models and methods contributed to both hazard perception and text and road-marking recognition.
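As a rough illustration of how targets only a few pixels in size can be emphasised, the toy sketch below combines a centre-surround spatial filter (which suppresses extended structures and keeps small features) with a simple temporal difference. This is only an assumption-laden illustration of the general idea, not the consortium's STMD models, which are built on dedicated neural pathways rather than this hand-built filtering.

```python
import numpy as np

def small_target_motion_map(prev_frame, curr_frame, surround=5):
    """Toy cue map for small moving targets: centre-surround contrast gated by frame change."""
    def box(x, k):
        # (2k+1) x (2k+1) box average built from shifts.
        acc = np.zeros_like(x)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
        return acc / float((2 * k + 1) ** 2)

    # Centre-surround: small bright or dark spots survive, large regions cancel out.
    centre_surround = np.abs(curr_frame - box(curr_frame, surround))

    # Temporal change between consecutive frames isolates features that are moving.
    temporal = np.abs(curr_frame - prev_frame)

    # High values mark small features that are also moving, i.e. candidate distant hazards.
    return centre_surround * temporal
```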
In order to capture other modalities beyond normal colour vision data, researchers compared thermal sensors from major suppliers and identified the right type of thermal image sensor for data acquisition. An early-stage researcher has been working on pre-processing algorithms for the thermal camera to enhance the contrast of the thermal map. Early research on bio-inspired neural system models for processing thermal images for collision detection has been completed, and demonstrated that the LGMD works well with temperature-map-based image sensors.
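One common way to enhance the contrast of a raw thermal map, whose useful temperature range often occupies only a narrow band of the sensor's output range, is histogram equalization. The sketch below is a minimal, self-contained example of that generic technique; it is not the pre-processing algorithm developed in the project, and the function name and bin count are illustrative.

```python
import numpy as np

def equalize_thermal(thermal, bins=1024):
    """Histogram-equalize a raw thermal map so its narrow temperature range is
    spread across [0, 1], boosting contrast before collision-detection models."""
    flat = thermal.astype(np.float64).ravel()
    hist, edges = np.histogram(flat, bins=bins)

    # Cumulative distribution of pixel intensities, normalized to [0, 1].
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-12)

    # Map each pixel through the CDF of the bin it falls into.
    idx = np.clip(np.digitize(flat, edges[1:-1]), 0, bins - 1)
    return cdf[idx].reshape(thermal.shape)
```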
Researchers on secondments to partner universities and SMEs explored another modality, sound, to enhance driving safety. Their sound analysis for road-condition recognition has been disseminated at an international conference.
In collaboration with consortium partners from universities in China and the EU, a road collision database has been created and published for open access on GitHub. It will be maintained by the consortium and extended with more scenarios over time, based on feedback from users.
As a spin-off from the consortium's research and development activities, researchers from a German partner university have been focusing on how to implement collision avoidance in robotic scenarios. They have presented a human-robot collaboration pipeline that generates efficient, collision-free robot trajectories by predicting the early motion trajectory and intended target of the human arm, with optimization of the robot's path. The results show that the generated robot trajectories are safe and efficient for completing the whole task together with a human.
In summary, in this reporting period, despite the difficulties arising from the pandemic, the consortium has completed 9 deliverables; organized 3 joint workshops and 1 training seminar; completed 139 researcher-months of secondments, with another 36 months in progress; published 10 journal articles and 14 conference papers; and achieved 3 milestones as planned.
As detailed in the published report, talks, and research papers uploaded to the project website, new research outputs beyond the current state of the art have been proposed and verified for improving driving safety by detecting collisions early and accurately. In addition, the consortium has created a publicly available collision scene database to aid the development of algorithms. Consortium partners have also secured funding from local funding bodies to support the staff exchange and collaboration in the coming months. The SMEs involved are exploring the potential market for collision-detection visual systems, which may lead to a new consortium on intelligent vehicle sensor systems for mass production.