Spatial-temporal information processing for collision detection in dynamic environments

Periodic Reporting for period 2 - STEP2DYNA (Spatial-temporal information processing for collision detection in dynamic environments)

Reporting period: 2018-07-01 to 2021-12-31

In the real world, collisions happen every second, often resulting in serious accidents and fatalities. For example, more than 3560 people die in vehicle collisions every day worldwide. Autonomous unmanned aerial vehicles (UAVs) have demonstrated great potential in serving human society, for example in delivering goods to households and in precision farming, but are restricted by their lack of collision detection capability. Current approaches to collision detection, such as radar, laser-based lidar, and GPS, are far from acceptable in terms of reliability, energy consumption, and size.
The STEP2DYNA consortium proposed an innovative bio-inspired solution for collision detection in dynamic environments at low cost and low energy consumption. The methodologies employed by the consortium take advantage of the low-cost spatial-temporal and parallel computing capacity of visual neural systems, realising it as a compact vision module designed specifically for collision detection in dynamic environments. Multidisciplinary teams across Europe, Asia, and South America have carried out neurophysiological experiments, computational modelling of biological systems, circuit and embedded system design, and robotics and UAV experiments to verify the proposed collision detection sensor system under various conditions. 25 journal papers and 39 conference papers have been published during the project, with more being prepared for submission and publication. The database (e.g. for collision detection) has been uploaded to GitHub for free public access, along with the published papers.
These research outcomes are the result of close collaboration between the consortium partners, supported by this project through secondments, workshops, and training seminars. The publications span neurobiology, neural system modelling, electronic hardware design, robotics, and UAVs. The research teams in Europe have significantly strengthened their capacity with newly acquired skills through the project activities while transferring knowledge to partners. Through this project, the partners have built strong expertise in this exciting multidisciplinary area, and the European SME has gained a leading position from which to exploit the market potential further after the project.
The first work package of the project focused on modelling, testing, and comparing visual neural systems. Researchers have pioneered a novel collision-selective visual neural network inspired by a specific group of neurons, the Lobula Giant Movement Detectors (LGMDs), in the juvenile locust.
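The core idea behind LGMD-style models is that an object on a collision course excites many photoreceptors at once as its edges expand, while lateral inhibition suppresses slower, non-looming motion. A minimal sketch of this excitation/inhibition/summation pipeline is shown below; it is a simplified illustration, not the consortium's published model, and the layer weights, sigmoid scale, and spike threshold are illustrative values only.

```python
import numpy as np

def lgmd_step(prev_frame, frame, w_i=0.6, scale=0.02, spike_thresh=0.7):
    """One time step of a simplified LGMD-style collision detector.

    prev_frame, frame: 2-D grayscale arrays with values in [0, 1].
    Returns (membrane_potential, spike); a spike suggests a looming,
    collision-like stimulus.
    """
    # P layer: luminance change between consecutive frames
    p = np.abs(frame.astype(float) - prev_frame.astype(float))

    # I layer: lateral inhibition, approximated here by the mean of the
    # 4-connected neighbours of the excitation
    inh = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
           np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)) / 4.0

    # S layer: excitation minus weighted inhibition, rectified
    s = np.maximum(p - w_i * inh, 0.0)

    # LGMD cell: normalised summed excitation squashed by a sigmoid;
    # an expanding edge excites many pixels at once and drives it high
    k = s.sum() / s.size
    potential = 1.0 / (1.0 + np.exp(-k / scale))
    return potential, bool(potential > spike_thresh)
```

A static scene yields a potential of exactly 0.5 (no net excitation), while a large, rapidly expanding edge pushes the potential towards 1; in practice the weights and threshold would be tuned against real looming stimuli.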
In terms of model comparison and selection, researchers have published several papers on new methods for constructing LGMD models and other bio-plausible models, such as directional selective neuron models and an angular velocity detecting model for estimating image motion velocity, drawing on the latest discoveries about the neural circuits of the Drosophila visual system.
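Directional selective neuron models of this family are classically built on the Hassenstein–Reichardt elementary motion detector, which correlates a delayed signal from one photoreceptor with the undelayed signal from its neighbour. A minimal 1-D sketch follows; it illustrates the general principle only and is not the consortium's published model.

```python
import numpy as np

def hr_correlator(signal, delay=1):
    """1-D Hassenstein-Reichardt elementary motion detector.

    signal: array of shape (time, space), e.g. luminance along a row
    of photoreceptors over time. Returns the mean directional output:
    positive for rightward motion, negative for leftward motion.
    """
    # Delay each photoreceptor's signal by `delay` time steps
    delayed = np.roll(signal, delay, axis=0)
    delayed[:delay] = 0.0  # discard samples wrapped around by roll

    # Correlate the delayed signal with the undelayed neighbour on
    # each side; the two half-detectors are subtracted to give a
    # signed, direction-selective response
    rightward = delayed[:, :-1] * signal[:, 1:]
    leftward = delayed[:, 1:] * signal[:, :-1]
    return float((rightward - leftward).mean())
```

Feeding the detector a bright spot stepping rightward produces a positive output, and the mirrored stimulus produces a negative one, which is the hallmark of direction selectivity.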
The second work package focused on multiple neural systems for enhancing the reliability and robustness of the system. Researchers have proposed and published a directional-selective neural network for small target detection in cluttered backgrounds. The proposed neural network can not only detect small target motion but also specify the direction of that motion in cluttered backgrounds. Extensive experiments showed that the proposed neural network can reliably detect small targets against cluttered backgrounds.
Work package 3 relates to the identification and selection of the best model for chip realisation. This involves identifying and comparing specialised neural models for collision detection and investigating design factors such as feasibility, suitability, and production costs. The research outputs on the hardware and the embedded vision module, together with robotics experiments, have been published, and the module's reliability has been verified at compact size and lower cost.
Researchers have identified the Lobula Giant Movement Detector (LGMD1) as the most reliable model and have implemented it on a UAV, demonstrating the model's preliminary ability for collision avoidance, as published in conference papers and a recent article in IEEE Transactions on Neural Networks and Learning Systems.
Progress has been made with the chip design, which is a key piece of work. The work has included measurement of the JESD204B chip and design of the RILCM chips for the injection mechanism analysis. The JESD204B chip design aims to widen the lane count of the data communication for the VLSI chip.
The final technical work package looked at equipping robots with mini visual sensors. Experiments have been carried out with micro swarm robots fitted with the vision module, and UAV flights for collision-free navigation with the developed vision module onboard verified the effectiveness of the mini visual sensor. Specifically for the demonstrator system, researchers have implemented the LGMD1 on a UAV, demonstrating its preliminary ability for collision avoidance, as described in a conference paper. This has involved building a multi-sensor quadcopter platform, selecting the hardware and designing the software, carrying out flight experiments, and writing a paper on the application of the LGMD on the quadcopter. The quadcopter chosen for the demo system is a multi-sensor platform that provides posture, height, direction, and position information, along with a command interface for the LGMD vision-based collision detector.
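One way the detector's spike output might be coupled to such a command interface is through a small debouncing layer, so that a single noisy spike does not trigger an avoidance manoeuvre. The sketch below is purely hypothetical glue code under that assumption; the class, method, and command names are invented for illustration and do not reflect the consortium's flight software.

```python
from dataclasses import dataclass

@dataclass
class AvoidanceController:
    """Hypothetical glue between an LGMD-style detector and a
    quadcopter command interface (all names are illustrative)."""
    confirm_frames: int = 3  # consecutive spikes required before acting
    _streak: int = 0

    def update(self, spike: bool) -> str:
        # Count consecutive collision spikes; any non-spike frame
        # resets the streak, suppressing isolated false positives.
        self._streak = self._streak + 1 if spike else 0
        if self._streak >= self.confirm_frames:
            return "TURN_AWAY"
        return "HOLD_COURSE"
```

Fed the spike sequence [True, False, True, True, True, True], the controller holds course through the first four frames and only commands a turn once three consecutive spikes have accumulated.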
The modelling work is ongoing, as new discoveries about looming-sensitive neurons, such as the LGMDs in locusts and the MLG1 neurons in crabs, have been made by neurobiologists, shedding light on new structures to be considered in the modelling work. Further modelling work has been designed to integrate the collision detecting model with a model of insect navigation, in order to implement the insect navigation mechanism on the Colias robot and test the performance and properties of the biological strategy in the real world. This has led to a new Colias robotic platform to facilitate navigation research with micro mobile robots.
In regard to the model for chip realisation, researchers have turned the system design into a fully functioning embedded vision module in which a micro camera and its post-processing chip have been integrated into one system. The vision module will be redesigned with a more specialised chip to further reduce its size and energy cost and to increase its capability in low-light conditions.
The team will also look at the possibility of further combining the functionalities of LGMD1 and LGMD2 neural computations to form a hybrid visual neural network, and will investigate methods for integrating direction- and collision-selective neural models that share similarly separated ON/OFF pathways to handle more complex visual motion challenges.
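The ON/OFF pathway separation mentioned above is commonly modelled as half-wave rectification of the luminance change: brightening feeds one channel and darkening the other, which is why LGMD2-type models can respond selectively to dark looming objects. A minimal sketch of the split, with illustrative names:

```python
import numpy as np

def on_off_split(prev_frame, frame):
    """Split luminance change into parallel ON (brightening) and
    OFF (darkening) channels via half-wave rectification.

    prev_frame, frame: 2-D grayscale arrays. Returns (on, off),
    both non-negative arrays of the same shape.
    """
    diff = frame.astype(float) - prev_frame.astype(float)
    on = np.maximum(diff, 0.0)    # ON pathway: luminance increase
    off = np.maximum(-diff, 0.0)  # OFF pathway: luminance decrease
    return on, off
```

Downstream models can then weight the two channels differently, e.g. emphasising the OFF channel to favour dark objects expanding against a bright sky.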
The research collaboration between partners has generated a positive impact on society during the project. For example, the University of Lincoln team demonstrated the Colias robots at the University Open Days in 2017, and the University of Newcastle organised a public lecture for primary school students. In the future, similar events will be organised to reach wider audiences.

Related documents