CORDIS - EU research results

Action understanding in human and robot dyadic interaction

Periodic Reporting for period 1 - ACTICIPATE (Action understanding in human and robot dyadic interaction)

Reporting period: 2017-06-01 to 2018-08-31

ACTICIPATE addressed the design of behaviour for robots that will share workspaces and co-work with humans in the future. The behaviour modelling relied on human experiments to learn the important cues in dyadic human-human interaction. This allowed us to develop a behaviour model that controls the upper-body motion of a humanoid robot in dynamic environments during reaching and manipulation tasks and, at the same time, understands, predicts and anticipates the actions of a human co-worker, as needed in manufacturing, assistive and service robotics, and domestic applications.

The dyadic interaction scenarios call for the following capabilities, which are tackled in ACTICIPATE: (i) a motion generation mechanism that allows the robot to perform legible movements, easily understandable to humans; (ii) a framework to coordinate the movements of the head, eyes and arm in a way similar to human movements, and to model the action/movement coupling between co-workers in dyadic interaction tasks; and (iii) the ability to understand and anticipate human actions, based on a common motor system/model.
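
To make the first two capabilities concrete, the following minimal Python sketch (not the project's actual controller) generates a reaching movement with a minimum-jerk profile and schedules the gaze to fixate the target shortly before arm-movement onset, which is the kind of eye-head-arm coordination observed in humans. The movement duration, the gaze lead time and the example positions are illustrative assumptions.

    import numpy as np

    def minimum_jerk(start, goal, duration, dt=0.01):
        """Minimum-jerk hand trajectory between two 3D points (smooth, human-like profile)."""
        t = np.arange(0.0, duration + dt, dt)
        s = t / duration
        profile = 10 * s**3 - 15 * s**4 + 6 * s**5          # classic minimum-jerk position profile
        return t, np.asarray(start) + np.outer(profile, np.asarray(goal) - np.asarray(start))

    def coordinated_reach(start, goal, duration=1.2, gaze_lead=0.3):
        """Schedule gaze and hand so the gaze fixates the target before arm-movement onset.
        duration and gaze_lead (seconds) are illustrative assumptions, not measured values."""
        t, hand = minimum_jerk(start, goal, duration)
        commands = [(-gaze_lead, "gaze", np.asarray(goal, float))]       # look at the target first
        commands += [(ti, "hand", xi) for ti, xi in zip(t, hand)]        # then drive the hand
        return commands

    if __name__ == "__main__":
        for ti, channel, value in coordinated_reach([0.0, 0.2, 0.1], [0.4, -0.1, 0.3])[:5]:
            print(f"t={ti:+.2f}s  {channel}: {np.round(value, 3)}")

Early gaze shifts and smooth velocity profiles of this kind are exactly the cues a human observer can use to read the robot's intention before its hand arrives, which is what makes a movement legible.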

There are numerous reasons for the development of such robots, and equally numerous capabilities that future robots still need to acquire before cohabitation with humans becomes a reality. Among them, a human-like nonverbal interpersonal interaction model holds great potential to pave the way for the successful deployment of robot co-workers in dynamic manufacturing environments. The interaction model has a two-fold use: it decodes the observed motions of others and, at the same time, plans the robot's actions and coordinates the motion execution of the eyes, head and arms.
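As a minimal illustration of this two-fold use, the sketch below (an assumption-laden simplification, not the ACTICIPATE model itself) scores a set of candidate goals against a partially observed hand trajectory using the same simple motion model that could be used to generate the robot's own reach. The candidate goal positions, the example observation and the noise parameter are hypothetical.

    import numpy as np

    def predicted_direction(start, goal):
        """Unit direction a simple motion model predicts when reaching from start towards goal."""
        d = np.asarray(goal, float) - np.asarray(start, float)
        return d / (np.linalg.norm(d) + 1e-9)

    def infer_goal(observed_hand, candidate_goals, noise=0.2):
        """Posterior over candidate goals given a partial hand trajectory (N x 3 array).
        The same motion model that could generate the robot's own reach is used here to
        score how well each goal explains the observed movement; 'noise' is an assumed value."""
        obs = np.asarray(observed_hand, float)
        observed_dir = predicted_direction(obs[0], obs[-1])
        log_post = np.array([
            -0.5 * (np.linalg.norm(observed_dir - predicted_direction(obs[0], g)) / noise) ** 2
            for g in candidate_goals
        ])
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    if __name__ == "__main__":
        goals = {"place on table": [0.4, 0.3, 0.0], "hand over": [0.4, -0.3, 0.2]}
        partial = [[0.0, 0.0, 0.1], [0.1, -0.07, 0.12], [0.18, -0.14, 0.15]]     # early part of a reach
        for name, p in zip(goals, infer_goal(partial, list(goals.values()))):
            print(f"{name}: {p:.2f}")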
We prepared and conducted three experiments in which we recorded human motion and gaze data.
Experiment 1 was conceived to investigate the process of human action recognition and anticipation. We recorded motion data from a human picking up and placing objects on a table. These recordings were shown to participants with varying temporal cuts relative to the initial moment, and the participants were asked to recognize the intended action. The aim was to understand which key events in the movement (gaze/head orientation shifts, torso and arm movements) trigger the action anticipation capacity; the sketch below illustrates the kind of analysis this enables.
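The analysis behind such temporal cuts can be illustrated with a small sketch: an observer model (here a simple nearest-direction rule on synthetic trajectories, purely illustrative and not the experimental protocol itself) is shown increasingly long prefixes of each movement, and recognition accuracy is reported per cut fraction. The goal positions, noise level and cut fractions are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_reach(goal, n=50, noise=0.02):
        """Noisy straight-line stand-in for a recorded reaching movement towards 'goal'."""
        s = np.linspace(0.0, 1.0, n)[:, None]
        return s * np.asarray(goal, float) + noise * rng.standard_normal((n, 3))

    def recognise(prefix, goals):
        """Pick the goal whose direction best matches the observed prefix (nearest-direction rule)."""
        d = prefix[-1] - prefix[0]
        d = d / (np.linalg.norm(d) + 1e-9)
        scores = [d @ (np.asarray(g, float) / np.linalg.norm(g)) for g in goals]
        return int(np.argmax(scores))

    goals = [[0.4, 0.3, 0.0], [0.4, -0.3, 0.2], [0.1, 0.0, 0.4]]
    trials = [(synthetic_reach(g), i) for i, g in enumerate(goals) for _ in range(20)]

    for cut in (0.2, 0.4, 0.6, 0.8):                     # fraction of the movement shown to the observer
        correct = [recognise(traj[:max(2, int(cut * len(traj)))], goals) == label for traj, label in trials]
        print(f"cut at {int(cut * 100)}% of the movement: accuracy {np.mean(correct):.2f}")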
Experiment 2 involved the execution of an assembly task with multiple pick-and-place and pick-and-handover operations in a dyadic scenario. We identified the relevant physical properties of a co-worker's motion that influence the partner's behaviour.
Experiment 3 involved the execution of an assembly task, again in a dyadic scenario, with multiple pick-and-place and pick-and-handover operations, but with modulated object affordances. The aim of this experiment was to understand how object affordances modulate human behaviour, and whether human behaviour can reveal information about the manipulated object.
The data from the first two experiments have been processed and the datasets are publicly available. We used the datasets to understand the physical properties responsible for mutual coupling and the set of actions humans use to anticipate each other's motion in a dyadic scenario. The data were used to model upper-body motion generation and the coupling between co-workers in a dyadic scenario; a behavioural sketch of such coupling is given below. The proposed methodologies were verified on the iCub platform.
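The following minimal sketch shows what such a coupling model can look like at the behavioural level; the action labels, the handover location and the decision rules are illustrative assumptions, not the controller verified on iCub.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class RobotCommand:
        gaze_target: Tuple[float, float, float]            # where the eyes/head should look
        hand_target: Optional[Tuple[float, float, float]]  # where the arm should reach (None = hold)

    HANDOVER_POINT = (0.35, 0.0, 0.25)                     # assumed shared handover location

    def coupled_response(anticipated_action: str,
                         partner_hand: Tuple[float, float, float]) -> RobotCommand:
        """Couple the robot's gaze/arm behaviour to the anticipated action of the co-worker.
        The action labels, the handover point and the rules are illustrative assumptions."""
        if anticipated_action == "hand over":
            # Meet the partner: look at the handover point and start reaching towards it early.
            return RobotCommand(gaze_target=HANDOVER_POINT, hand_target=HANDOVER_POINT)
        if anticipated_action == "place on table":
            # No joint action needed: keep monitoring the partner's hand, hold the arm still.
            return RobotCommand(gaze_target=partner_hand, hand_target=None)
        # Ambiguous: keep watching the partner until the anticipation becomes confident.
        return RobotCommand(gaze_target=partner_hand, hand_target=None)

    print(coupled_response("hand over", (0.2, -0.1, 0.15)))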
The methodologies, experimental results and robotic implementations of ACTICIPATE showed how a robot can simultaneously execute coordinated movements, engage in non-verbal communication and anticipate the actions of its co-workers.
The results have potential impact in the areas of cognitive robotics, humanoids, vision, learning and control. Improving the capability of humanoid robots to operate and manipulate in shared environments, interacting with humans and adapting tightly to perception, will also have an impact in the manufacturing, pharmaceutical and electronics industries, where flexibility and ease of reconfiguration are key factors.
A human handing over an object to a robot equipped with the interpersonal interaction model