Control of contact interactions for robots acting in the world

Periodic Reporting for period 4 - CONT-ACT (Control of contact interactions for robots acting in the world)

Reporting period: 2019-12-01 to 2020-11-30

What are the algorithmic principles that would allow a robot to run across rocky terrain, lift a couch while reaching for an object that rolled under it, or manipulate a screwdriver while balancing on top of a ladder? By trying to answer these questions in CONT-ACT, we aim to understand the fundamental principles of robot locomotion and manipulation and to endow robots with the robustness and adaptability necessary to act efficiently and autonomously in unknown and changing environments. This is a necessary step towards a new technological age: ubiquitous robots capable of helping humans in countless tasks.

The dynamic interaction of a robot with its environment through intermittent physical contacts is central to any locomotion or manipulation task. Indeed, in order to walk or to manipulate an object, a robot needs to constantly physically interact with the environment and surrounding objects. Our approach to motion generation and control in CONT-ACT gives a central place to contact interactions. Our main hypothesis is that this focus will allow us to develop more adaptive and robust planning and control algorithms for locomotion and manipulation. The project is divided into three main objectives: 1) the development of a hierarchical receding horizon control architecture for multi-contact behaviors, 2) the development of algorithms to learn representations for motion generation through multi-modal sensing (e.g. force and touch sensing) and 3) the development of controllers based on multi-modal sensory information through optimal control and reinforcement learning.
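The receding horizon idea behind Objective 1 can be illustrated on a toy problem. The sketch below is not the project's whole-body controller; it is a minimal 1-D point-mass example, with an assumed quadratic cost and a small discrete action set, showing the core loop: optimize a short action sequence at every step, apply only its first action, then re-plan from the new state.

```python
import itertools

def step(state, a, dt=0.1):
    # point-mass dynamics: position integrates velocity, velocity integrates acceleration
    x, v = state
    return (x + v * dt, v + a * dt)

def rollout_cost(state, actions, goal):
    # cost of a candidate action sequence: distance to goal, speed and effort penalties
    # (the weights 0.1 and 0.01 are arbitrary choices for this illustration)
    cost = 0.0
    for a in actions:
        state = step(state, a)
        x, v = state
        cost += (x - goal) ** 2 + 0.1 * v ** 2 + 0.01 * a ** 2
    return cost

def receding_horizon_action(state, goal, horizon=5, actions=(-1.0, 0.0, 1.0)):
    # enumerate every short action sequence, keep the cheapest,
    # and return only its first action -- the "receding horizon" principle
    best = min(itertools.product(actions, repeat=horizon),
               key=lambda seq: rollout_cost(state, seq, goal))
    return best[0]

# closed loop: re-plan from the current state at every control step
state, goal = (0.0, 0.0), 1.0
for _ in range(60):
    state = step(state, receding_horizon_action(state, goal))
```

A real multi-contact controller replaces the exhaustive enumeration with structured numerical optimization over whole-body dynamics and contact forces, but the re-planning loop has the same shape.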
We have conceived new algorithms to plan whole-body multi-contact behaviors for legged robots in real time (Objective 1). These results make it possible to plan complicated motions, for example a humanoid climbing stairs, walking over stepping stones or using its hands and legs to climb over an obstacle. An important part of our work consisted in studying the theoretical foundations of multi-contact optimization, which resulted in the design of algorithms significantly faster than the state of the art.
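A central constraint in any multi-contact optimization is that each contact force must lie inside its Coulomb friction cone, otherwise the foot or hand slips. The following is a generic feasibility check of that constraint, given only as an illustration of the kind of condition such optimizers enforce, not as the project's actual solver:

```python
import math

def in_friction_cone(force, normal, mu):
    # Coulomb friction: a contact force is feasible when it pushes into the
    # surface (non-negative normal component) and its tangential component
    # does not exceed mu times the normal component
    fn = sum(fi * ni for fi, ni in zip(force, normal))          # normal part
    ft = math.sqrt(sum((fi - fn * ni) ** 2                      # tangential part
                       for fi, ni in zip(force, normal)))
    return fn >= 0.0 and ft <= mu * fn
```

In a full planner this inequality appears as a (often linearized) constraint on every contact force at every time step of the optimized motion.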

In parallel to this work, we have studied how multi-modal sensory information (e.g. force sensors, visual perception or inertial measurement units) could be used to 1) learn dynamic models of robots and their environment, enabling a robot to predict the consequences of its actions (Objective 2), and 2) learn how to control and improve the behavior of robots directly through trial and error and the learned models (Objective 3). We have designed new algorithms to efficiently learn dynamic models and control policies and to optimally fuse multi-modal sensory information. We have also studied how uncertainty in the knowledge of contact locations changes the optimal way of making contact with an object or the environment. As a result, robots are able to make very gentle contact, which increases the safety and robustness of the interaction.
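One standard way to fuse independent sensory estimates, sketched here as a generic textbook example rather than the project's specific algorithm, is inverse-variance weighting: each sensor's estimate is weighted by its precision, yielding the minimum-variance combined estimate for independent Gaussian measurements.

```python
def fuse(estimates):
    # estimates: list of (mean, variance) pairs from independent sensors,
    # e.g. a contact position seen by vision and felt through force sensing.
    # Inverse-variance weighting: more precise sensors get larger weights.
    weights = [1.0 / var for _, var in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)   # fused variance is smaller than any input's
    return mean, var
```

The fused variance is always below the best individual sensor's, which is why combining a noisy camera with a noisy force sensor can still localize a contact more precisely than either alone.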

This project initiated an open-source effort called the Open Dynamics Robot Initiative, a collaboration between the Max-Planck Institute for Intelligent Systems (Germany), New York University (USA) and the Laboratory for Analysis and Architecture of Systems - CNRS (France). The initiative has created a series of low-cost yet high-performance robots, including quadrupeds, bipeds and a three-finger manipulation system. These robots enable laboratories to share algorithms and replicate research results quickly, reliably and at lower cost, thus lowering the barrier to entry. All the hardware designs and software needed to build these robots are freely available, and several copies of the robots have been built in laboratories across the world.

All our algorithms have been extensively evaluated on manipulators, quadrupeds and bipeds, demonstrating the generality of our approach across many types of behaviors. The algorithms have been open-sourced and are freely accessible to anyone.
Solo8 jumping - Credits W. Scheible