Real-time understanding of dexterous deformable object manipulation with bio-inspired hybrid hardware architectures

Final Report Summary - FASTDEFORM (Real-time understanding of dexterous deformable object manipulation with bio-inspired hybrid hardware architectures)

This project aimed to close the perception-action loop in dexterous manipulation scenarios involving rigid and non-rigid objects with arbitrary physical characteristics. This required removing the sensory/semantic communication bottleneck that exists between low-level vision, such as dense motion and depth perception, on the one hand, and abstract scene understanding and reasoning on the other.

We have achieved this through the development of novel methods for building and maintaining a detailed 3D scene representation, the complexity of which is dynamically adapted based on sensory information extracted from the scene in real time. All these methods were specifically designed to exploit the massive parallelism provided by Graphics Processing Units (GPUs) and realized on a real-time hybrid platform. As in biological vision systems, the sensory/semantic bottleneck is removed using visual attention methods that focus communication on scene items of interest, at an abstraction level supported by the sensory and prior information.
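To make the idea of attention-driven adaptation concrete, the following is a minimal sketch (not the project's actual implementation) of how a fixed representation budget could be distributed across scene items in proportion to an attention score, so that salient items receive a more detailed 3D representation. The names `SceneItem` and `allocate_detail`, and the proportional weighting scheme, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneItem:
    name: str
    saliency: float  # attention score in [0, 1], e.g. from motion/depth cues
    detail: int = 0  # assigned representation detail (e.g. mesh vertex count)

def allocate_detail(items, budget, min_detail=10):
    """Distribute a total detail budget proportionally to saliency,
    guaranteeing every item a coarse minimum representation."""
    reserved = min_detail * len(items)       # coarse floor for all items
    spare = max(budget - reserved, 0)        # detail left to distribute
    total = sum(it.saliency for it in items) or 1.0
    for it in items:
        it.detail = min_detail + int(spare * it.saliency / total)
    return items

items = [SceneItem("manipulated object", 0.8),
         SceneItem("hand", 0.15),
         SceneItem("background clutter", 0.05)]
allocate_detail(items, budget=1000)
for it in items:
    print(it.name, it.detail)
```

In this sketch, the attended object receives most of the budget while background items keep only a coarse model, mirroring how an attention mechanism can keep the overall representation tractable in real time.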

On a number of benchmark datasets specifically designed for this project (and made publicly available), the system has been shown to outperform state-of-the-art methods. Highly robust, real-time operation has been demonstrated on (1) the manipulation of a large number of rigid objects with accurate tracking through occlusions and dynamics, (2) visual servoing guided by accurate joint multi-object and manipulator tracking, robust to inaccurate calibration and camera motion, and (3) complex origami-style articulated cardboard manipulation with frequent occlusions. A number of videos demonstrating the real-time system in all these scenarios are available at: www.youtube.com/user/karlpauwels

This project has shown that a tight integration and real-time interaction between visual simulation and visual perception, both realized on the same hardware, can significantly enhance state-of-the-art scene understanding in terms of speed, robustness, and accuracy.

The results of this project can provide robotic systems with the ability to perceive and understand complex manipulations of non-rigid objects in closed perception/action loops. This can enable such systems to operate outside the highly constrained industrial settings to which they are currently restricted, opening up a wide variety of applications in manufacturing, medicine, elderly care, and general domestic settings.

For more results, publications, and contact details see www.karlpauwels.com