
Spatial-temporal information processing for collision detection in dynamic environments

Periodic Reporting for period 1 - STEP2DYNA (Spatial-temporal information processing for collision detection in dynamic environments)

Reporting period: 2016-07-01 to 2018-06-30

In the real world, collisions happen every second, often resulting in serious accidents and fatalities: more than 3,560 people die in vehicle collisions every day worldwide. In another sector, autonomous unmanned aerial vehicles (UAVs) have demonstrated great potential in serving human society, for example in delivering goods to households and in precision farming, but their use is restricted by a lack of collision detection capability. Current approaches to collision detection, such as radar, laser-based lidar and GPS, fall short in terms of reliability, energy consumption and size. A new type of low-cost, low-energy, miniaturised collision detection sensor is urgently needed, not only to save millions of lives but also to make autonomous UAVs and robots safe enough to serve human society.

The STEP2DYNA consortium proposes an innovative bio-inspired solution for collision detection in dynamic environments. It takes advantage of the low-cost spatial-temporal and parallel computing capacity of visual neural systems and realises them in chips designed specifically for collision detection in dynamic environments. Realising visual neural systems in chips demands multidisciplinary expertise in biological system modelling, computer vision, chip design and robotics, a breadth of expertise rarely found within a single institution. Moreover, the market potential of the collision detection system cannot be well exploited without a dedicated industry partner.

Therefore, this consortium is designed to bring together neurobiologists, neural system modellers, chip designers, and robotics researchers and engineers from Europe and East Asia, complementing each other's research strengths via staff secondments and jointly organised workshops and conferences. Through this project, the partners will build strong expertise in this exciting multidisciplinary area, and the European SME will be well positioned as a market leader in collision detection sensors.
The first work package of the project focuses on modelling, testing and comparing visual neural systems. Researchers have pioneered a novel collision-selective visual neural network inspired by a specific group of lobula giant movement detector (LGMD) neurons in the juvenile locust.
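As a rough illustration of how an LGMD-style network operates, the sketch below implements the classic layered structure (photoreceptor, inhibition and summation layers feeding a single output cell whose membrane potential spikes under looming stimuli). The layer arrangement follows published LGMD models; the kernel sizes, weights and threshold here are illustrative assumptions, not the project's parameters.

```python
import numpy as np

def lgmd_step(prev_frame, curr_frame, prev_excitation, w_i=0.4, threshold=0.88):
    """One step of a minimal LGMD-style collision detector (illustrative only).

    prev_frame/curr_frame: consecutive greyscale frames (2-D arrays).
    prev_excitation: the P-layer output from the previous step, used as the
    one-frame-delayed input to the inhibition layer.
    """
    # P layer: luminance change between consecutive frames
    p = np.abs(curr_frame.astype(float) - prev_frame.astype(float))

    # I layer: lateral inhibition = spatially blurred, one-frame-delayed excitation
    kernel = np.ones((3, 3)) / 9.0
    pad = np.pad(prev_excitation, 1, mode="edge")
    h, w = prev_excitation.shape
    i = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            i += kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]

    # S layer: excitation minus weighted inhibition, rectified
    s = np.maximum(p - w_i * i, 0.0)

    # LGMD cell: normalised summation squashed by a sigmoid; spike on threshold
    k = s.sum() / s.size
    membrane = 1.0 / (1.0 + np.exp(-k))
    spike = membrane > threshold
    return membrane, spike, p  # p becomes next step's delayed excitation
```

A looming object produces rapidly expanding edges, so the excitation outruns the delayed, blurred inhibition and the membrane potential rises towards the spiking threshold; slow or uniform motion is largely cancelled by the inhibition layer.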

For model comparison and selection, researchers have studied suitable models and proposed a biologically plausible model, the angular velocity detecting model, which estimates image motion velocity using the latest discoveries about neural circuits in the Drosophila visual system.
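The summary does not specify the internals of the angular velocity detecting model. As background, the sketch below shows the related Hassenstein-Reichardt elementary motion detector, the classic correlation-based circuit from insect vision on which such velocity-sensitive models build: each photoreceptor's delayed signal is correlated with its neighbour, giving a response whose sign encodes direction. This is a generic textbook illustration, not the project's model.

```python
import numpy as np

def reichardt_output(signal, delay=1):
    """Hassenstein-Reichardt-style correlator over a row of photoreceptors.

    signal: 2-D array of shape (time, photoreceptor index).
    Returns a scalar that is positive for rightward motion and negative
    for leftward motion (illustrative sketch, not the project's model).
    """
    # delay line: each photoreceptor's signal shifted by `delay` time steps
    delayed = np.roll(signal, delay, axis=0)
    delayed[:delay] = 0.0
    # correlate each receptor's delayed signal with its spatial neighbour,
    # in both directions, and subtract the two half-detectors
    rightward = signal[:, 1:] * delayed[:, :-1]
    leftward = signal[:, :-1] * delayed[:, 1:]
    return float((rightward - leftward).sum())
```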

The second work package focuses on the multiple neural systems needed to enhance the reliability and robustness of the overall system. Researchers have proposed a direction-selective neural network for small target detection against cluttered backgrounds. The proposed network can not only detect small target motion but also specify its direction, and extensive experiments showed that it does so reliably in cluttered scenes.
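To give a feel for the core difficulty, the sketch below shows one generic ingredient of small-target motion detection: temporal differencing followed by surround suppression, so that small isolated moving features survive while extended moving background regions cancel themselves out. This is an illustrative simplification, not the project's direction-selective network, and the kernel size and suppression weight are arbitrary assumptions.

```python
import numpy as np

def small_target_response(prev_frame, curr_frame, target_size=3):
    """Illustrative small-target motion map: temporal difference with
    size-selective surround suppression (not the project's model)."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))

    # surround suppression: subtract the local mean of the motion signal,
    # so broad moving regions cancel while small isolated targets survive
    k = 2 * target_size + 1
    pad = np.pad(diff, k // 2, mode="edge")
    h, w = diff.shape
    local_mean = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            local_mean += pad[dy:dy + h, dx:dx + w]
    local_mean /= k * k

    return np.maximum(diff - 2.0 * local_mean, 0.0)
```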

Work package 3 relates to the identification and selection of the best model for chip realisation. This involves identifying and comparing specialised neural models for collision detection, and investigating design factors such as feasibility, suitability and production costs. The identified neural systems will be tested under more realistic electronic noise after the main chip structure has been identified, and finally the VLSI (very-large-scale integration) chip will be further designed and tested.

Researchers have identified the lobula giant movement detector (LGMD1) as the most reliable model, and have implemented it on a UAV, demonstrating the model's preliminary collision avoidance capability.

Progress has been made on the chip design, a key piece of the work. This has included measurement of the JESD204B chip and design of the RILCM chips for injection mechanism analysis. The JESD204B chip design aims to increase the number of data communication lanes for the VLSI chip.

The final technical work package looks at equipping robots with visual sensors. The preliminary visual neural system will be selected and implemented on a robotic platform, together with a motor control system for collision detection and avoidance, and then implemented on a UAV platform.

Researchers have implemented the lobula giant movement detector (LGMD1) on a UAV, demonstrating its preliminary collision avoidance capability. This involved building a multi-sensor quadcopter platform, selecting and designing the hardware and software, carrying out flight experiments and writing a paper on the application of the LGMD on the quadcopter. The quadcopter is a multi-sensor platform that provides posture, height, direction and position information, along with a command interface for the LGMD vision-based collision detector.
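The report states only that the quadcopter exposes a command interface for the LGMD detector, not what that interface looks like. As a purely hypothetical sketch of the coupling, the function below maps a normalised LGMD output and its spike flag to a forward-speed/yaw-rate command; all names and values are assumptions for illustration.

```python
def avoidance_command(lgmd_output, spike, cruise_speed=0.5, max_yaw_rate=1.0):
    """Hypothetical mapping from an LGMD reading to a flight command.

    lgmd_output: looming response assumed normalised to [0, 1].
    Returns (forward_speed, yaw_rate); the interface and the values are
    illustrative assumptions, not the project's actual command API.
    """
    if spike:
        # imminent collision signalled: stop forward motion and turn away
        return 0.0, max_yaw_rate
    # otherwise slow down proportionally as the looming response grows
    return cruise_speed * (1.0 - lgmd_output), 0.0
```

The design point such a mapping illustrates is graceful degradation: the vehicle sheds speed as the looming response rises, so that by the time the detector spikes there is enough margin to execute the evasive turn.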
The modelling work is ongoing. Vision is a necessary cue for humans to detect collisions, but the principle linking vision to collision detection is still unknown. The basic component of vision is gaze, which can be measured by eye gaze tracking; however, current 3D gaze tracking in real environments is limited in accuracy. Researchers aim to develop a highly accurate eye tracker to estimate human gaze by employing more accurate projective models.

Further modelling work will research a collision detection model and integrate it into a model of insect navigation, implementing the insect navigation mechanism on the Colias robot and testing the performance and properties of the biological strategy in the real world.

Regarding the model for chip realisation, collaboration is ongoing between the invertebrate visual neural system modellers and the chip designers. This work will develop in the second half of the project, with key partners coming together in workshops to discuss further developments in this area.

As part of the final work package, the team proposes first to combine the functionalities of the LGMD1 and LGMD2 neural computations into a hybrid visual neural network, and secondly to investigate integrating direction-selective and collision-selective neural models with similarly separated ON/OFF pathways to handle more complex visual motion challenges.

The research collaboration between partners has generated a positive impact on society in the first phase of the project. For example, the team at the University of Lincoln demonstrated the Colias robots at the University Open Days in 2017, to which prospective undergraduate students had been invited. The University of Newcastle, assisted by Early Stage Researchers from the University of Lincoln, organised a public lecture on insect vision systems for a large group of primary school students, using the Colias robots as demonstration tools.

In addition to the planned research activities, the consortium will also consider new modelling and realisation activities based on the outcomes of the first phase, to exploit the potential of biologically plausible visual neural systems for various real-world applications.