
Social Interaction Characteristics for Socially Accepted Robots

Final Report Summary - SICSAR (Social Interaction Characteristics for Socially Accepted Robots)

The main goal of our project was to make an original contribution to human-robot interaction (HRI) research aimed at improving the behavioral aspects of the social acceptance of robots through a highly interdisciplinary approach. More specifically, the overall research objective was to create, for robotic agents, a set of social behaviors that facilitate their interaction, coordination and cooperation with human agents. To realize this, we planned to extract these social behaviors from human-human dyadic conversations, with a focus on both the individual and inter-individual aspects of these interactive dynamics.
Based on evolutionary studies of human-human cooperation, the interaction behaviors we focused on were blinking, idle movements, conversational gaze and manual gestures. In order to adapt these behaviors to the robotic platform used for the project, we conducted several series of human-human interaction experiments. During these experiments, participants interacted with one another in naturalistic settings. We fully recorded their interactive movements and interaction dynamics at both the individual and inter-individual level. Based on these data and on data collected from a dedicated literature review, we generated a library of gestures and movements for the iCub robot, which can be coordinated into social behaviors with a view to generating positive interactive dynamics with humans.
The project was shaped by the idea that, in order for a robot to be accepted as a “social partner” by its human interlocutors, it is important to take into consideration the cultural context in which these interlocutors are embedded. Following this approach, we conducted our interaction experiments in two different cultural contexts – Italy and Japan – with the long-term goal of enabling iCub to socially interact in an appropriate – i.e. culturally acceptable – way.
After collecting and analyzing the data, we implemented several behavioral modules that served as a behavioral baseline for the interaction between the human and the robot.
The first module to be implemented was for idle movements. This module makes the robot move its head, eyes, arms, hands and torso. These micro-movements make the robot appear to be paying attention to its environment, and signal to humans in close physical proximity that it is turned on and ready to interact. When a potential human interlocutor approaches, the robot switches into a conversational mode and changes its movements and behaviors accordingly. These small idle movements improve the robot's lifelike appearance and signal to the human that it is ready to be approached.
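The report does not include implementation details for this module; the following minimal Python sketch only illustrates the general idea of an idle-movement loop with a switch to conversational mode. All robot-interface calls (human_detected, move_relative) and parameter values are illustrative assumptions, not the project's actual API.

    import random
    import time

    class IdleBehaviorSketch:
        """Illustrative idle-movement loop: small random micro-movements
        around the rest pose until a human approaches, then a mode switch."""

        def __init__(self, robot):
            self.robot = robot    # hypothetical interface to the robot
            self.mode = "idle"

        def step(self):
            if self.robot.human_detected():    # assumed person/proximity sensor
                self.mode = "conversation"     # hand over to conversational modules
                return
            # Pick a body part and apply a small offset around its rest pose.
            part = random.choice(["head", "eyes", "torso", "left_arm", "right_arm"])
            offset = [random.uniform(-2.0, 2.0) for _ in range(3)]  # degrees
            self.robot.move_relative(part, offset)                  # assumed API
            time.sleep(random.uniform(1.0, 4.0))  # irregular pacing looks lifelike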
To generate realistic, human-like behavior for the robot, a naturalistic blinking module was implemented. This module is based on human physiological data. The architecture for this module was published in a workshop paper at the tenth ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015). We created different blinking patterns for the robot, ranging from static, mechanical blinking to physiological blinking combined with small head movements. We recorded videos with these different blinking patterns in a conversational context. These videos were uploaded to a YouTube channel specifically created for the project – “Social iCub” – and used for an online study, in which we tested the influence of the different blinking patterns on user perception of the robot. The results showed that participants rated the robot as “more intelligent” in the condition in which it expressed a naturalistic blinking pattern while talking meaningfully. The findings of this study were presented in a conference paper at the eighth International Conference on Social Robotics (ICSR 2016).
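The report does not specify how the physiological data were turned into blink timing. As one plausible reading, spontaneous human blinks in conversation occur irregularly, roughly every three to five seconds, which can be approximated with a skewed inter-blink-interval distribution. The sketch below assumes a log-normal distribution; the distribution, its parameters and the robot calls are illustrative only.

    import math
    import random
    import time

    def blink_loop(robot, mean_interval=4.0, sigma=0.6):
        # Choose mu so that the log-normal mean equals mean_interval.
        mu = math.log(mean_interval) - sigma ** 2 / 2
        while True:
            time.sleep(random.lognormvariate(mu, sigma))  # irregular inter-blink gap
            robot.close_eyelids()  # assumed robot API
            time.sleep(0.1)        # a blink lasts on the order of 100-150 ms
            robot.open_eyelids()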
Another central behavior we implemented on the basis of evolutionary studies of human-human cooperation is conversational gaze. During conversations, humans do not stare at one another, but shift their gaze between different parts of their interlocutor's face and the environment. The implementation of this gaze controller was based on data from extensive, previously conducted human-human interaction experiments. The resulting gaze controller module enables the robot to switch its gaze between the eyes and the mouth of the human interlocutor. The architecture of the gaze controller was published in a late-breaking report at the tenth ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015). The timings and probabilities of the gaze shifts were computed with Partially Observable Markov Decision Processes fitted to the data from the human-human experiments. During the project we also published an edited book on the topic of gaze in human-robot communication.
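The actual controller described above is POMDP-based and fitted to the human-human data. As a simplified stand-in, a first-order Markov chain over gaze targets already captures the basic switching behavior; the transition probabilities and dwell times below are illustrative placeholders, not the project's fitted values.

    import random
    import time

    # Simplified stand-in for the POMDP-based controller: a first-order
    # Markov chain over gaze targets with random dwell times.
    TRANSITIONS = {
        "eyes":        {"mouth": 0.5, "environment": 0.2, "eyes": 0.3},
        "mouth":       {"eyes": 0.6, "environment": 0.2, "mouth": 0.2},
        "environment": {"eyes": 0.7, "mouth": 0.2, "environment": 0.1},
    }

    def gaze_loop(robot, target="eyes"):
        while True:
            robot.look_at(target)                     # assumed robot API
            time.sleep(random.uniform(0.5, 2.5))      # dwell before the next shift
            nxt, weights = zip(*TRANSITIONS[target].items())
            target = random.choices(nxt, weights)[0]  # sample the next gaze target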
In order to generate functioning conversational gaze behavior, the robot needed to be able to detect the different facial features of the human interlocutor. We achieved this by implementing a face detection module, which enables the robot to detect the face and facial features, such as the eyes and mouth, of the human interlocutor. The results of this implementation were presented at the ninth International Workshop on Human-Friendly Robotics 2016.
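The report does not name the detector used. Purely as an illustration, the same face-and-feature detection can be sketched with OpenCV's stock Haar cascades:

    import cv2

    # Illustrative face and eye detection with OpenCV Haar cascades; the
    # project's actual detection pipeline is not specified in the report.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def detect_features(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        results = []
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]             # search for eyes inside the face
            eyes = eye_cascade.detectMultiScale(roi)
            results.append({"face": (x, y, w, h),
                            "eyes": [(x + ex, y + ey, ew, eh)
                                     for (ex, ey, ew, eh) in eyes]})
        return results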
As a next step, we improved the naturalistic impression generated by the gaze controller by combining it with the blinking module. This combination enables the robot to shift its gaze and blink at the same time during a conversation, which is essential for its naturalistic appearance.
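One simple way to realize such a coupling, sketched below, exploits the fact that humans frequently blink while shifting their gaze; the coupling rule and the probability used are assumptions, not the project's implementation.

    import random

    def shift_gaze_with_blink(robot, target, blink_probability=0.3):
        # Occasionally tie a blink to the gaze shift so the two behaviors
        # co-occur, as they often do in humans.
        blink = random.random() < blink_probability
        if blink:
            robot.close_eyelids()  # assumed robot API, as in the sketches above
        robot.look_at(target)      # perform the gaze shift
        if blink:
            robot.open_eyelids()   # reopen once the shift is under way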
Another topic we worked on was a library of manual conversational gestures. Since humans gesture constantly while speaking, and are very sensitive to others' gesturing, it stands to reason that robots should also exhibit gestures while conversing with humans. The frequency of use and the meaning of manual conversational gestures depend highly on the cultural context. Given that the same robot platforms (e.g. Pepper) are already being distributed in different countries, we decided to conduct comparative research on this topic in Japan and in Italy, in collaboration with the Asada Lab at Osaka University. A first result of this collaboration was published in a workshop paper at the Sixth Joint IEEE International Conference on Development and Learning and Epigenetic Robotics 2016. In order to collect the necessary data for the culture-sensitive gesture libraries, we conducted behavioral experiments in Japan and in Italy. These experiments involved pairs of naïve participants having a free conversation, and pairs of actors freely acting out different scenarios. The behaviors extracted from these conversations are the basis of the gestures developed for the robot.
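The report does not describe how the extracted gestures are stored. One natural organization, sketched below with placeholder entries, keys the library first by cultural context and then by conversational function:

    import random

    # Placeholder structure for a culture-sensitive gesture library; the
    # gesture names and functions are illustrative, not project data.
    GESTURE_LIBRARY = {
        "italy": {"emphasis": ["beat_both_hands", "open_palm_sweep"],
                  "listing":  ["finger_count"]},
        "japan": {"emphasis": ["small_nod", "single_hand_beat"],
                  "listing":  ["finger_fold_count"]},
    }

    def pick_gesture(culture, function):
        # Sample one of the gestures recorded for this culture and function.
        return random.choice(GESTURE_LIBRARY[culture][function])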
The know-how developed during the project was applied to the design process of the head and face of R1, a new service robot produced by the iCub Facility at IIT. This process involved iterative evaluations in order to find the optimal solution for the user interface consistent with the overall design of this new service robot. The results of the design process were published in a conference paper at the eighth International Conference on Social Robotics (ICSR 2016).
Another important topic we developed during the project, strongly related to the behavioral acceptance of robots, is “artificial empathy”. This work involved theoretical analyses and debate on the possibilities, limits and ways of equipping robots with the ability to express emotional states and to induce them in their human interlocutors. We developed this topic in collaboration with two research teams, respectively active in the domains of philosophy of emotions (Prof. Dumouchel, Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University, Kyoto, Japan) and philosophy of science and technology (Prof. Luisa Damiano, Epistemology of the Sciences of the Artificial Research Group, University of Messina, Italy, and CERCO, University of Bergamo, Italy). This collaboration led us to extend the focus of the inquiry to the ethical issues involved in artificial empathy research. This kind of inquiry, developed in domains such as cognitive, affective and developmental robotics, is gaining momentum due to the prospect of deploying robots in healthcare and home companion contexts. The results of this work include three publications in the International Journal of Social Robotics and the organization of three workshops at international conferences.
The relevance of the work done in this project also manifests itself in high public interest. The project was featured in a German newspaper article, and the researcher was invited to give talks about his work at different universities, e.g. Yale University (USA), Radboud University (NL) and Pontificia Universita San Tommaso d'Aquino (Italy). The related disciplinary contexts ranged from social robotics and HRI to medical rehabilitation (e.g. Seminario Villa Beretta, Costa Masnaga, Italy), and from (evolutionary) anthropology (Ritsumeikan University, Kyoto, Japan) to psychology and philosophy of science (e.g. University of Bergamo, Italy; Epistemology of the Sciences of the Artificial Research Group, University of Messina, Italy). Additionally, the researcher organized and participated in various outreach activities, which included presenting the project to the general public and to school classes (e.g. Festival della Scienza 2015, Genoa, Italy; Caffe Scientifico 2015, Genoa, Italy).
Another outcome of this project is a master's thesis on the topic of trust in social robots, co-supervised with Radboud University (Nijmegen, NL).
The improvement of robots' social interaction abilities is a first step toward higher social acceptance of this technology in close physical and psychological proximity to humans, and, we believe, it is necessary for the imminent generation of “mixed human-robot ecologies”. The integration of social robots as aids in healthcare and as home companions will happen in the near future. Building functioning artificial agents will not be enough: it will be crucial to equip these agents with the means to competently engage in social interactions with humans. The work and results of this project are intended to contribute to paving the way in this direction.