
Final Report Summary - LIMOMAN (Developmental Learning of Internal Models for Robotic Manipulation based on Motor Primitives and Multisensory Integration)

LIMOMAN (developmental Learning of Internal MOdels for robotic MANipulation based on motor primitives and multisensory integration) addresses the key problem of improving the dexterous manipulation abilities of current robots, focusing in particular on three aspects borrowed from the human motor control system: internal models, developmental learning and multisensory integration.

Specifically, we propose the concept of "probabilistic, contextual and hierarchical" internal models, which goes beyond existing architectures by incorporating i) adaptability (through incremental probabilistic learning), ii) flexibility (as different contexts can be represented) and iii) scalability (thanks to the hierarchical organization).
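As a concrete illustration of the contextual aspect, the following minimal Python sketch (with made-up numbers and names, not the project's actual code) shows how Bayesian "responsibilities" can arbitrate between context-specific forward models, in the spirit of modular architectures such as MOSAIC:

    import numpy as np

    def gaussian_likelihood(obs, pred, sigma=0.1):
        # Likelihood of the observation under one model's prediction.
        return np.exp(-0.5 * np.sum((obs - pred) ** 2) / sigma ** 2)

    # Predictions of two hypothetical context models ("light" vs "heavy" object).
    pred_light = np.array([0.50, 0.10])
    pred_heavy = np.array([0.30, 0.05])
    observed = np.array([0.32, 0.06])

    prior = np.array([0.5, 0.5])  # equal prior belief over the two contexts
    lik = np.array([gaussian_likelihood(observed, pred_light),
                    gaussian_likelihood(observed, pred_heavy)])
    posterior = prior * lik / np.sum(prior * lik)
    print(posterior)  # most weight goes to the "heavy object" model

The model with the highest responsibility best explains the current sensory evidence, so its predictions are trusted for control in that context.
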
Moreover, we investigate hand synergy approaches to encode the motor complexity of the robot hand with compact representations, and we exploit the motion primitives framework to facilitate learning by demonstration.
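For instance, hand synergies are commonly extracted as the principal components of recorded joint angles; the toy Python sketch below (synthetic data and illustrative dimensions) reduces 9-joint hand postures to three synergy coordinates:

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_joints = 200, 9                 # e.g. 9 controlled hand joints
    latent = rng.standard_normal((n_samples, 3))
    mixing = rng.standard_normal((3, n_joints))
    grasps = latent @ mixing                     # synthetic grasp postures

    mean = grasps.mean(axis=0)
    _, _, Vt = np.linalg.svd(grasps - mean, full_matrices=False)
    synergies = Vt[:3]                           # top 3 synergies (directions in joint space)

    coords = (grasps[0] - mean) @ synergies.T    # 3 coordinates instead of 9 joint angles
    posture = mean + coords @ synergies          # posture reconstructed from synergies
    print(np.allclose(posture, grasps[0]))       # True: the toy data has rank 3
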
Furthermore, robust control of manipulation requires rich sensory feedback: vision, proprioception, tactile and force sensing. We develop new concepts for soft 3D tactile sensors and explore Bayesian techniques to integrate the different sensory modalities.
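As a minimal example of such Bayesian integration (with made-up numbers), two noisy Gaussian estimates of an object position, one visual and one tactile, can be fused by precision weighting, which is the maximum-likelihood combination under Gaussian noise:

    # Visual and tactile position estimates (metres) with their variances.
    mu_vision, var_vision = 0.32, 0.04
    mu_touch, var_touch = 0.28, 0.01

    # Precision-weighted fusion: the more reliable cue gets the larger weight.
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
    mu_fused = w_vision * mu_vision + (1 - w_vision) * mu_touch
    var_fused = 1 / (1 / var_vision + 1 / var_touch)

    print(mu_fused, var_fused)  # ~0.288 m with variance 0.008: pulled toward
                                # touch, and more certain than either cue alone
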
We demonstrate our solutions by engaging the iCub humanoid robot in a complex task (object manipulation) with several requirements: the robot acts on common unmodeled objects (adaptability and robustness), in different contexts (flexibility) and at different levels of complexity (scalability).

Indeed, we showed that different probabilistic techniques can be successfully used to learn robot internal models that account for different sensorimotor capabilities, considering both dynamic and kinematic aspects, and that such models can be used to formulate predictions that improve movement control, action planning and state estimation. We explored strategies to efficiently combine different sensory channels (visual, proprioceptive, force and tactile) and to encode both the robot movements and the sensorimotor structures with compact representations.
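As one illustrative instance of such probabilistic learning (a sketch under toy assumptions, not the project's actual models), a forward model mapping motor commands to expected sensory feedback can be learned with Gaussian process regression, whose predictive mean supports control and planning and whose predictive uncertainty supports state estimation:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    commands = rng.uniform(-1, 1, size=(40, 1))   # toy 1-D motor commands
    feedback = np.sin(2 * commands[:, 0]) + 0.05 * rng.standard_normal(40)

    model = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=0.05 ** 2)
    model.fit(commands, feedback)

    # Predict the sensory consequence of a new command, with uncertainty.
    mean, std = model.predict(np.array([[0.3]]), return_std=True)
    print(mean, std)
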
We also investigated the concept of affordances, leading to computational models of visual perception that extract the most important information from the stream of visual data and make predictions about the effects of possible actions; these predictions can be used for action planning, also in the context of tool use.
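A toy affordance model (hypothetical objects, actions and probabilities) can be written as a conditional distribution over effects given object properties and actions, which a planner queries to pick the action most likely to produce a desired effect:

    # P(effect | object shape, action), with made-up probabilities.
    P_effect = {
        ("sphere", "push"): {"rolls": 0.80, "slides": 0.15, "stays": 0.05},
        ("box", "push"): {"rolls": 0.05, "slides": 0.75, "stays": 0.20},
        ("sphere", "grasp"): {"lifted": 0.70, "slips": 0.30},
        ("box", "grasp"): {"lifted": 0.90, "slips": 0.10},
    }

    def best_action(shape, desired_effect, actions=("push", "grasp")):
        # Pick the action whose predicted effect distribution favours the goal.
        return max(actions, key=lambda a: P_effect[(shape, a)].get(desired_effect, 0.0))

    print(best_action("sphere", "rolls"))  # push
    print(best_action("box", "lifted"))    # grasp
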
We implemented most of our work on the iCub humanoid robot, deploying several software modules that are integrated into a complex architecture that is publicly available and completely open source (https://github.com/robotology).
Moreover, we proposed novel technology for tactile sensing, focusing on features fundamental to object manipulation, such as intrinsic compliance and high sensitivity, informed by our experience with robot interaction in real, unstructured environments.
Overall, the work conducted during the two years has resulted in 17 scientific publications (2 of them still under review) and a number of works in preparation.
Although important results have been achieved, the project leaves a number of challenges for future investigation, one of the most interesting being how to effectively combine human demonstrations with autonomous robot learning to eventually achieve high performance in complex manipulation actions, such as fine grasping and in-hand manipulation.
Webpage: http://limoman-project.blogspot.pt/

Contact

Alexandre Bernardino (Assistant Professor)
Tel.: +351218418293
Fax: +351218418291