
Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance

Periodic Reporting for period 1 - ULTRACEPT (Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance)

Reporting period: 2018-12-01 to 2020-11-30

Although still in their early stages, autonomous vehicles have demonstrated huge potential to shape future lifestyles. To be accepted by ordinary users, however, autonomous vehicles must solve a critical issue: being trustworthy at collision detection. No one wants an autonomous car that is doomed to a collision once every few months or years. In the real world, collisions happen every second, and more than 1.3 million people are killed in road accidents every year. Current approaches to vehicle collision detection, such as vehicle-to-vehicle communication, radar, laser-based Lidar, and GPS, are far from acceptable in terms of reliability, cost, energy consumption, and size. For example, radar is too sensitive to metallic material; Lidar is too expensive and does not work well on absorbing or reflective surfaces; GPS-based methods struggle in cities with tall buildings; vehicle-to-vehicle communication cannot detect pedestrians or other unconnected objects; segmentation-based vision methods demand too much computing power to be miniaturised; and normal vision sensors cannot cope with fog, rain, or dim environments at night. To save lives and make autonomous vehicles safer for human society, a new type of trustworthy, robust, low-cost, and low-energy vehicle collision detection and avoidance system is needed. This consortium proposes an innovative solution based on brain-inspired, multi-layered, multi-modal information processing for trustworthy vehicle collision detection. It exploits the low-cost spatio-temporal and parallel computing capacity of bio-inspired visual neural systems, combined with multi-modal data inputs, to extract potential collision cues under complex weather and lighting conditions.
Multi-disciplinary research has been carried out by the consortium according to plan during the first period. The key challenge in the modelling work is to combine new findings from neurobiologists with existing knowledge. This work is a continuing exploration of collision detection neural systems and the underlying mechanisms that could be applied to mobile intelligent machines such as mobile robots or autonomous vehicles. Researchers in the consortium have proposed a new model of the lobula giant movement detector (LGMD) with significantly improved performance when challenged with both translating and looming visual cues. This model has been disseminated at conferences and shared within the project consortium via secondments, workshops, and networking events. The modelling work has also been extended to fish-inspired locomotion and optimization, in collaboration with experienced and early-stage researchers (ESRs) from partners in Asia and Europe. Secondments between partners have consolidated the collaboration between modellers, system-integration researchers, and neurobiologists. The modelling work also benefited from the 2019 kick-off workshop in Guangzhou, China, where partner representatives took part in presentations and discussions.
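For readers unfamiliar with LGMD networks, the sketch below shows the classic four-layer LGMD scheme (photoreceptor, excitation/inhibition, summation, and a single spiking cell) in NumPy. It is a generic textbook illustration, not the consortium's improved model; the inhibition weight, kernel, and threshold are illustrative assumptions.

```python
# Minimal sketch of a classic four-layer LGMD-style looming detector.
# NOT the consortium's improved model; W_I, INH_KERNEL, and THRESHOLD
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

W_I = 0.6          # assumed lateral-inhibition strength
THRESHOLD = 0.85   # assumed spiking threshold on the membrane potential
INH_KERNEL = np.array([[0.125, 0.25, 0.125],   # spreads inhibition to the
                       [0.25,  0.00, 0.25],    # eight neighbouring cells
                       [0.125, 0.25, 0.125]])

def lgmd_step(frame, prev_frame, prev_p):
    """One time step; frames are 2-D float arrays in [0, 1].

    prev_p is the P-layer output of the previous step (zeros initially),
    providing the one-frame delay of the inhibitory pathway.
    """
    p = np.abs(frame - prev_frame)                    # P layer: luminance change
    i = convolve(prev_p, INH_KERNEL, mode="nearest")  # I layer: delayed, spread inhibition
    s = np.maximum(p - W_I * i, 0.0)                  # S layer: excitation minus inhibition
    k = s.mean()                                      # summed input to the LGMD cell
    membrane = 1.0 / (1.0 + np.exp(-k / 0.05))        # sigmoid membrane potential
    return membrane > THRESHOLD, p                    # spike flag + state for next call
```

An expanding (looming) edge excites many cells at once before inhibition can catch up, so the summed excitation, and hence the membrane potential, rises sharply just before collision.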
To enhance collision detection by integrating multiple visual neural systems, researchers from the consortium proposed an ON/OFF-channel-separated LGMD model and ON/OFF-based directional selective visual neural system models. For the integration and coordination of multiple visual neural systems in real-time systems, researchers have begun integrating multiple visual neural networks, including e-LGMD1, LGMD2, and DS, into one autonomous robotic system. This work was published in IEEE Access.
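For illustration, the ON/OFF separation common to such models can be sketched as a half-wave rectification of the temporal luminance change into brightening (ON) and darkening (OFF) streams; the exact pathway processing in the consortium's models is not reproduced here.

```python
# Hedged sketch of the generic biological ON/OFF channel split, not the
# consortium's exact formulation.
import numpy as np

def split_on_off(frame, prev_frame):
    """Half-wave rectify the frame difference into ON/OFF channels."""
    delta = frame.astype(np.float64) - prev_frame.astype(np.float64)
    on_channel = np.maximum(delta, 0.0)     # luminance increase (brightening)
    off_channel = np.maximum(-delta, 0.0)   # luminance decrease (darkening)
    return on_channel, off_channel
```

In the literature, LGMD2 is reported to respond mainly to darker-than-background looming objects, which is why a separated, OFF-dominated pathway suits that neuron in particular.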
For long-distance hazard perception, the consortium proposed new small target movement detector (STMD) models. The proposed models can detect targets only a few pixels in size in captured video sequences. The STMD models are the critical first step towards knowing which distant moving objects could develop into hazards within just a few seconds. The collaborators have also proposed complex-valued neural networks for real-valued classification problems as a white-box model. These models can select the most important features from the complete feature space through a self-organising modelling process. They have also proposed a hybrid classification framework based on clustering. These models and methods contributed to both hazard perception and text and road-marking recognition.
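The size selectivity at the heart of STMD models can be illustrated with a simple centre-surround motion filter: motion energy is spatially band-pass filtered so that responses peak for targets a few pixels across while larger moving objects are suppressed. The filter scales and structure below are assumptions, not the consortium's models.

```python
# Illustrative centre-surround (difference-of-Gaussians) size selectivity,
# the core idea behind STMD models; sigma values are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def stmd_response(frame, prev_frame, sigma_center=1.0, sigma_surround=4.0):
    motion = np.abs(frame - prev_frame)                 # crude motion energy
    center = gaussian_filter(motion, sigma_center)      # small-target scale
    surround = gaussian_filter(motion, sigma_surround)  # large-object scale
    return np.maximum(center - surround, 0.0)           # peaks at small targets
```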
After comparing thermal sensors from major suppliers, researchers identified the right type of thermal image sensor for data acquisition. An ESR has been working on pre-processing algorithms for the thermal camera to enhance the contrast of the thermal map. Early research on bio-inspired neural system models for processing thermal images for collision detection has been completed, demonstrating that the LGMD works well with temperature-map-based image sensors.
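As one plausible example of this kind of pre-processing, a percentile stretch can map the informative part of a raw thermal map onto an 8-bit range while clipping outliers. The percentile values below are assumptions, not the ESR's actual algorithm.

```python
# Assumed contrast-enhancement step for raw thermal maps (percentile stretch);
# the 2nd/98th percentiles are illustrative choices.
import numpy as np

def stretch_thermal(thermal, lo_pct=2, hi_pct=98):
    lo, hi = np.percentile(thermal, [lo_pct, hi_pct])
    stretched = np.clip((thermal - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)  # 8-bit image, e.g. as LGMD input
```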
Researchers on secondments to partner universities and SMEs explored another input modality, sound, to enhance driving safety. Their sound analysis for road condition recognition has been presented at an international conference and submitted to a journal.
In collaboration with consortium partners from universities in China and the EU, a road collision database has been created and published open access on GitHub. The consortium will maintain it and add more scenarios over time based on feedback from users.
Researchers from a German partner university have been focusing on how to implement collision avoidance in robotic scenarios. They have presented a human-robot collaboration pipeline that generates efficient, collision-free robot trajectories based on predictions of the human arm and hand trajectory and intended target at the early stage of motion. An optimization-based trajectory generation algorithm ensures the safety of the human while the robot collaborates with them. Human limbs and robot links are modelled as capsule-shaped collision objects. The proposed system has been tested in a human-robot pick-and-place task on a platform consisting of a robot arm and a motion tracking system. The results show that the pipeline can accurately predict the human trajectory and estimate the target position intended by the human, while the generated robot trajectory is safe and efficient for completing the whole task together with the human.
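The capsule representation makes the underlying proximity test cheap: two capsules collide exactly when the closest distance between their axis segments falls below the sum of their radii. The sketch below uses the standard closed-form clamped segment-segment distance; it is a generic illustration, not the consortium's implementation.

```python
# Generic capsule-capsule proximity test (standard segment-segment distance,
# e.g. as in Ericson's Real-Time Collision Detection); illustrative only.
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Closest distance between 3-D segments p1-q1 and p2-q2."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a < 1e-12 and e < 1e-12:                  # both segments degenerate to points
        return np.linalg.norm(r)
    if a < 1e-12:                                # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e < 1e-12:                            # second segment is a point
            s, t = np.clip(-c / a, 0.0, 1.0), 0.0
        else:
            b = d1 @ d2
            denom = a * e - b * b                # zero when segments are parallel
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:                          # re-clamp t, then recompute s
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def capsules_collide(p1, q1, r1, p2, q2, r2):
    """True when capsule (p1, q1, radius r1) intersects capsule (p2, q2, r2)."""
    return segment_distance(p1, q1, p2, q2) <= r1 + r2
```

Because each limb or link reduces to one segment and one radius, the whole human-robot pair can be checked with a handful of such tests per control cycle, which is what makes the representation attractive inside an optimization loop.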
Building on the research outcomes achieved so far, the consortium will continue to overcome the challenges caused by the pandemic and to work towards trustworthy collision detection systems through enhanced collaboration and knowledge exchange activities.
As detailed in the published report, talks, and research papers, new research outputs beyond the current state of the art have been proposed and verified, with the overall aim of improving road safety by detecting collisions early and accurately. In addition, the consortium created a publicly available collision scene database that researchers and development companies can use to test the robustness of their products for hazardous collision detection. Consortium partner GZHU has also secured NSFC funding, led by the PI, Prof Jigen Peng, focusing on the fundamental mathematical theories and principles underlying small moving target detection. The SMEs involved are exploring the potential market for collision detection visual systems, though this work is still at a preliminary stage.