
Developmental Context-Driven Robot Learning

Final Report Summary - DECORO (Developmental Context-Driven Robot Learning)

(see attached report for formatted summary with figures)

This project aimed to achieve experimentally verifiable advances in real-time robot learning with a high-dimensional sensory context on a non-trivial robot platform. To achieve this, special emphasis was placed on better understanding the relationships between the “state” used in a learning algorithm, the embodiment of the robot itself, and the robot’s sensory context. An architecture for performing real-time learning with high-dimensional contexts was developed in C++, extending previous work. The learning uses a predictive Hebbian learning rule between population encodings of the actuators and time-delayed high-dimensional sensory input. One of the main findings so far has been that allowing the robot to take a larger set of sensory stimuli into account helps it overcome both sensory noise and forgetting when performing similar tasks (tasks that overlap in some of the sensory channels). See Fig. 1 for examples of the experiment scenarios used, with the simulated iCub robot learning to push objects to different target locations in different contexts.
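
To illustrate the mechanism, a minimal sketch of such a predictive Hebbian rule between two population codes is given below, in C++ like the project architecture itself. The neuron count, tuning-curve width, learning rate, delay and the toy task in main() are illustrative assumptions, not the values used in DECORO.

    // Minimal sketch of a predictive Hebbian rule between two population
    // codes. All parameters (N, SIGMA, ETA, DELAY) and the toy task in
    // main() are illustrative assumptions, not values from DECORO.
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <deque>

    constexpr int N = 32;            // neurons per population
    constexpr double SIGMA = 0.08;   // tuning-curve width (normalised units)
    constexpr double ETA = 0.01;     // Hebbian learning rate
    constexpr std::size_t DELAY = 5; // sensory delay in time steps

    using Population = std::array<double, N>;

    // Encode a scalar in [0, 1] as a population with Gaussian tuning curves.
    Population encode(double value) {
        Population p{};
        for (int i = 0; i < N; ++i) {
            double pref = static_cast<double>(i) / (N - 1); // preferred value
            double d = value - pref;
            p[i] = std::exp(-d * d / (2.0 * SIGMA * SIGMA));
        }
        return p;
    }

    struct PredictiveHebbian {
        std::array<Population, N> w{};  // sensory -> actuator weights
        std::deque<Population> history; // buffer of past sensory activations

        // Associate the current actuator state with the sensory context
        // observed DELAY steps earlier (what makes the rule predictive).
        void update(const Population& sensory, const Population& actuator) {
            history.push_back(sensory);
            if (history.size() <= DELAY) return;
            const Population delayed = history.front();
            history.pop_front();
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    w[i][j] += ETA * actuator[i] * delayed[j]; // co-activity
        }

        // Predict the actuator population from a sensory context alone.
        Population predict(const Population& sensory) const {
            Population out{};
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    out[i] += w[i][j] * sensory[j];
            return out;
        }
    };

    int main() {
        PredictiveHebbian net;
        // Toy association: sensory context x is followed by command 1 - x,
        // held constant for longer than the delay in each episode.
        for (int episode = 0; episode < 200; ++episode) {
            double x = (episode % 10) / 9.0;
            for (int t = 0; t < 20; ++t) net.update(encode(x), encode(1.0 - x));
        }
        Population out = net.predict(encode(0.25)); // should peak near 0.75
        (void)out;
        return 0;
    }

The delay buffer is what makes the rule predictive: the weights come to associate the sensory context observed a few steps earlier with the actuator command that followed, so a similar context presented later evokes a graded actuator population that can be decoded into a command.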

The learning did show weaknesses when the robot’s execution strayed too far from what had been learned; the context-driven approach is therefore likely best used to complement a more goal-oriented behaviour. At the same time, one of the research opportunities outlined in the project was to explicitly take into account the physical embodiment of the agent, and its interaction with its environment, when attempting to make a robot learn skills. Some of the computation required to control an embodied agent can be offloaded to the physical body, for example by designing for passive stability and adaptation. However, simulated robots and environments lack the inherent richness of the real world, which also makes exploring context-driven behaviours more challenging. An example is the tactile data from a hand exploring objects. A soft tactile skin is very hard to emulate in simulation, making the data much more discrete than the real equivalent: the simulated hand touches an object in only a small number of locations, activating only a small set of sensors at a time, compared to the gradually varying and redundant signals we experience when touching something. Such physical interactions can also cause physics engines like ODE, commonly used in robotics simulators, to become unstable.

On the other hand, it is hard to prevent damage to a physical robot during interaction with real-world objects, unless the interactions are pre-scripted with safety checks. Unplanned interactions with developing models and skills seem integral to the type of sensorimotor coordination that is the goal of this project, but such physical interaction and exploration with a still-developing ‘mind’ requires a robust body. To address this, the construction of a soft and robust embodiment was integrated into the project. The result was an open-source and printable 7+3 DOF robot arm, the GummiArm (see Fig. 2). It combines structural components printable on hobby-grade 3D printers with rubbery tendons in an agonist-antagonist configuration. This enables easy replication, robustness even to fast impacts, a repair cycle of minutes when something does break, and adjustable stiffness and damping when required. The arm is able to survive impacts and physical exploration with objects that are not modelled, thanks to its passively compliant Variable Stiffness Actuators (VSAs).
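
The stiffness modulation exploits the stiffening elasticity of the rubbery tendons: pulling both tendons harder leaves the joint’s equilibrium unchanged but raises its stiffness. The sketch below illustrates this with an assumed quadratic spring law; both the law and the constants are illustrative, not measured GummiArm tendon characteristics.

    // Why co-contraction modulates stiffness in an agonist-antagonist
    // joint: with stiffening tendons (here f = K * stretch^2, an assumed
    // spring law), raising both pre-stretches leaves the equilibrium
    // unchanged but increases -dtau/dtheta. Constants are illustrative.
    #include <cstdio>

    constexpr double K = 2.0e5; // tendon stiffness coefficient [N/m^2]
    constexpr double R = 0.02;  // joint pulley radius [m]

    // Tension in one quadratic tendon given its stretch (slack if <= 0).
    double tension(double stretch) {
        return stretch > 0.0 ? K * stretch * stretch : 0.0;
    }

    // Net joint torque at angle theta, given the pre-stretch each motor
    // applies to its tendon (co-contraction raises both pre-stretches).
    double torque(double theta, double preAgo, double preAnt) {
        double fAgo = tension(preAgo - R * theta); // agonist unloads
        double fAnt = tension(preAnt + R * theta); // antagonist loads up
        return R * (fAgo - fAnt);
    }

    // Numerical joint stiffness -dtau/dtheta around a posture.
    double stiffness(double theta, double pre) {
        const double h = 1e-5;
        return -(torque(theta + h, pre, pre) - torque(theta - h, pre, pre)) / (2.0 * h);
    }

    int main() {
        const double pres[] = {0.005, 0.010, 0.020}; // co-contraction levels [m]
        for (double pre : pres)
            std::printf("pre-stretch %.3f m -> stiffness %.2f Nm/rad\n",
                        pre, stiffness(0.0, pre));
        return 0; // stiffness doubles as the pre-stretch doubles
    }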

The arm makes use of simple internal models that relate muscle lengths to joint angles and stiffness. These internal models are learned through a self-calibration procedure and are used both to perform fast, targeted movements and to detect collisions. In particular, by dynamically co-contracting during movements, the arm can reduce the end-point oscillations that typically occur when moving fast. With the arm complete, the work has shifted to exploiting this embodiment for developing context-driven behaviours. Building on the neural network framework used with the iCub simulator and the first internal models on the GummiArm, ongoing work focuses on learning and activating internal models that are adapted to the context.
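
As an illustration of how such a self-calibrated internal model can also serve collision detection, the sketch below fits a simple (assumed linear) mapping from the agonist-antagonist motor-position difference to the expected joint angle, and flags contact when the measured angle deviates from the prediction. The model form and the error threshold are illustrative assumptions, not the GummiArm’s actual procedure.

    // Sketch: a self-calibrated internal model doubling as a collision
    // detector. The linear mapping from the agonist-antagonist
    // motor-position difference to joint angle, and the 0.05 rad error
    // threshold, are assumptions for illustration only.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct JointModel {
        double slope = 0.0, offset = 0.0;

        // Self-calibration: least-squares fit of joint angles measured
        // during free (contact-less) motion against the motor-position
        // difference driving the joint.
        void calibrate(const std::vector<double>& motorDiff,
                       const std::vector<double>& jointAngle) {
            double n = static_cast<double>(motorDiff.size());
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (std::size_t i = 0; i < motorDiff.size(); ++i) {
                sx += motorDiff[i]; sy += jointAngle[i];
                sxx += motorDiff[i] * motorDiff[i];
                sxy += motorDiff[i] * jointAngle[i];
            }
            slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            offset = (sy - slope * sx) / n;
        }

        double predict(double motorDiff) const { return slope * motorDiff + offset; }

        // Contact pushes the compliant joint away from where the model
        // expects it; a large prediction error therefore signals a collision.
        bool collision(double motorDiff, double measuredAngle,
                       double threshold = 0.05 /* rad, assumed */) const {
            return std::fabs(measuredAngle - predict(motorDiff)) > threshold;
        }
    };

    int main() {
        JointModel model;
        model.calibrate({-1.0, 0.0, 1.0, 2.0},  // motor differences [rad]
                        {-0.5, 0.0, 0.5, 1.0}); // measured joint angles [rad]
        // Measured angle far from the predicted 0.5 rad: contact detected.
        return model.collision(1.0, 0.2) ? 0 : 1;
    }

Because the joints are passively compliant, an unexpected contact shows up as a discrepancy between where the motors say the joint should be and where it actually is, which is what makes this form of proprioceptive collision detection possible without force sensors.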

The impact of the project includes progress towards building a community researching the role of a soft embodiment in developing robots, with the software and hardware made openly available: http://mstoelen.github.io/GummiArm/
The GummiArm is, for example, being replicated at three universities across Europe. Another impact is the development of soft embodiment and context-sensitivity for robots performing picking tasks in agriculture. A spin-out company, Fieldwork Robotics Ltd, was established by the Researcher to further explore this avenue, with funding made available by the University of Plymouth. Finally, the project outcomes helped the Researcher achieve a permanent academic position at the University of Plymouth in the UK.

Read more about the project here: http://www.tech.plym.ac.uk/SoCCE/CRNS/decoro/
For enquiries, please contact Dr Martin F. Stoelen: martin.stoelen@plymouth.ac.uk