
Real-time understanding of dexterous deformable object manipulation with bio-inspired hybrid hardware architectures


Robots of tomorrow with intelligent visual capabilities

The ability to perceive and understand the dynamics of the real world is critical for the next generation of robots. An EU initiative has explored visual attention, a capability essential for most robotic tasks.

Digital Economy

Robots need a way to adaptively select the relevant information in a given scene for further processing. They require prior common-sense knowledge about where a target is likely to be found, along with an idea of its size, shape, colour or texture. In short, robots need attention mechanisms to determine which parts of the sensory array to process: attention selects the most relevant information from multi-sensory inputs so that a target search can be carried out efficiently. The EU-funded REAL-TIME ASOC (Real-time understanding of dexterous deformable object manipulation with bio-inspired hybrid hardware architectures) project focused on developing new mechanisms for visual attention.

REAL-TIME ASOC employed a specialised camera called a dynamic vision sensor (DVS), which is well suited to robotic applications that must operate in real time with short latencies. Rather than recording conventional frames, the DVS captures everything that changes in the scene at a temporal resolution of microseconds, equivalent to roughly 600 000 frames per second, and reduces the amount of data by discarding a scene's static areas.

Project partners began by using the DVS to extract contours and boundary ownership from event information alone. Since events are triggered only at significant luminance changes, most events occur at the boundaries of objects, so detecting these contours is a key step towards further processing. The team introduced an approach that identifies the location of contours and their border ownership using features representing motion, timing, texture and spatial orientation. The contour detection and boundary assignment were then demonstrated in a proto-segmentation of the scene.

Scientists also developed algorithms to estimate image motion from asynchronous event-based information, and implemented the computation of visual attention on a field-programmable gate array (FPGA). Lastly, they produced a dataset that provides both frame-free event data and classic image, motion and depth data, making it possible to evaluate different event-based methods and compare them with conventional frame-based computer vision.

REAL-TIME ASOC demonstrated how tomorrow's robots will visually select and process images much as humans do.
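To make the event-based representation concrete, the sketch below shows how DVS-style events, each carrying only a pixel position, a microsecond timestamp and a polarity, can be accumulated into a decaying "time surface" whose active pixels cluster at moving contours. This is a minimal illustration under assumed conventions (the event tuple layout, sensor size, decay constant and threshold are all assumptions), not the project's actual pipeline.

```python
# Minimal sketch of DVS-style event processing: events carry only
# (x, y, timestamp, polarity), so static parts of the scene produce
# no data at all. The layout and parameters here are assumptions
# for illustration, not the project's actual code.
import numpy as np

WIDTH, HEIGHT = 128, 128          # classic DVS128 resolution

def time_surface(events, t_now, tau=0.05):
    """Exponentially decayed map of the most recent event per pixel.

    events: iterable of (x, y, t, polarity), timestamps in seconds.
    Pixels that saw no recent event decay towards zero, so the surface
    is dominated by whatever moved last, i.e. by moving contours.
    """
    last_t = np.full((HEIGHT, WIDTH), -np.inf)
    for x, y, t, _p in events:
        last_t[int(y), int(x)] = t
    return np.exp((last_t - t_now) / tau)

def contour_candidates(surface, thresh=0.5):
    """Pixels with strong recent activity: rough proxies for object
    boundaries, since luminance changes (hence events) cluster there."""
    return np.argwhere(surface > thresh)

# Tiny synthetic example: a short edge sweeping through the view
# around t = 0.1 s; every event lands on a boundary pixel.
events = [(40 + i % 3, 20 + i, 0.1 + 1e-4 * i, 1) for i in range(80)]
surf = time_surface(events, t_now=0.11)
print(len(contour_candidates(surf)), "candidate contour pixels")
```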
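Image motion can likewise be read straight out of event timestamps. One standard technique from the event-based vision literature, local plane fitting (not necessarily the project's published algorithm), fits a plane t = a*x + b*y + c to the timestamps of nearby events; the timestamp gradient (a, b) is the inverse of the edge velocity, so v = (a, b) / (a^2 + b^2). A hedged sketch:

```python
# Event-based optical flow by local plane fitting: fit t = a*x + b*y + c
# to event timestamps in a small neighbourhood; the gradient (a, b) of
# the fitted plane encodes inverse velocity. Illustrative only.
import numpy as np

def flow_from_events(neigh):
    """neigh: (N, 3) array of (x, y, t) events around one pixel.
    Returns (vx, vy) in pixels/second, or None if the fit is flat."""
    A = np.column_stack([neigh[:, 0], neigh[:, 1], np.ones(len(neigh))])
    (a, b, _c), *_ = np.linalg.lstsq(A, neigh[:, 2], rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:                 # no timestamp gradient: no motion
        return None
    return a / g2, b / g2          # velocity along the time gradient

# Synthetic edge moving at +100 px/s in x: events obey t = x / 100.
rng = np.random.default_rng(1)
xs = rng.uniform(0, 5, 30)
ys = rng.uniform(0, 5, 30)
ts = xs / 100.0 + rng.normal(0.0, 1e-5, 30)
print(flow_from_events(np.column_stack([xs, ys, ts])))  # ~ (100, 0)
```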

Keywords

Robots, REAL-TIME ASOC, object manipulation, hybrid hardware architectures, visual attention
