
Socially Pertinent Robots in Gerontological Healthcare

Periodic Reporting for period 1 - SPRING (Socially Pertinent Robots in Gerontological Healthcare)

Reporting period: 2020-01-01 to 2021-11-30

In the past five years, social robots have been introduced into public spaces such as museums, airports, shopping malls, banks, company showrooms, hospitals and retirement homes, to mention a few examples. In addition to classical robotic skills such as navigation and grasping and manipulating objects (physical interactions), social robots must be able to communicate with people in the most natural way (cognitive interactions).
What if robots could take on the repetitive tasks involved in receiving the public? Forms of artificial intelligence capable of interacting with humans already exist, but these mediation tools have only rudimentary capabilities, or require remote control by an engineer. While there are “butler” robots that can provide the weather forecast or give geographical directions, they cannot autonomously execute complex social tasks, such as escorting users around a building. To carry out such tasks, a social robot must be capable of perceiving and distinguishing the signals emitted by different speakers, understanding these signals and identifying which are addressed to the robot, and then reacting accordingly. This is a daunting challenge, because it requires numerous perceptual abilities and a capacity for machine learning to support autonomous decision-making. SPRING's overall objective is to answer this challenge.
But how do we enable a robot to identify, from a set of conversations, which request is addressed to it; to understand that it is being asked where a person may sit; to look around and find a vacant seat; to determine a path to accompany the speaker to their seat while avoiding other patients and staff on the premises; and then to perceive whether offering distraction in the form of conversation would be appropriate? There are numerous technological difficulties and hurdles to overcome to accomplish this type of complex task. With regard to movement, the RobotLearn team opted for a reinforcement learning approach. To determine its speed, approach angle and other movement parameters, the robot is trained by a system that compares the action actually taken with the optimal action and attributes “rewards” for successful outcomes; a minimal sketch of such a reward-based training loop is given below. This training phase enables the robot to encounter a wide variety of possible situations fully autonomously, without human intervention to correct its trajectories. Once placed in real conditions, ARI continues to learn and to identify the optimal action for each situation. This opens up the possibility of its use in a hospital setting, which is the aim of the second phase of SPRING, due to start in 2022: to validate the use of the robot in a hospital and to assess its impact on users and their habits, as well as its acceptability. Entrusting even simple social tasks to a robot is nevertheless far from innocuous and raises numerous ethical and organisational issues, which are also addressed within the project.
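As an illustration, the following is a minimal, self-contained Python sketch of a reward-based training loop for a single approach decision (a bandit-style simplification). The state/action discretisation, the reward function and all names are illustrative assumptions, not SPRING's actual implementation.

import random

ANGLES = [-30, -15, 0, 15, 30]      # candidate approach angles (degrees) -- assumed discretisation
SPEEDS = [0.2, 0.5, 0.8]            # candidate speeds (m/s) -- assumed discretisation
ACTIONS = [(a, s) for a in ANGLES for s in SPEEDS]

def simulated_reward(angle, speed):
    """Toy stand-in for the environment: rewards approaching a person
    head-on (angle near 0) at a moderate, socially comfortable speed."""
    return -abs(angle) / 30.0 - abs(speed - 0.5) + random.gauss(0, 0.1)

# Tabular value estimates for a single (stateless) decision, updated with
# the standard incremental rule Q <- Q + alpha * (reward - Q).
q_values = {action: 0.0 for action in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(5000):
    if random.random() < epsilon:               # explore a random action
        action = random.choice(ACTIONS)
    else:                                       # exploit the best estimate so far
        action = max(q_values, key=q_values.get)
    reward = simulated_reward(*action)
    q_values[action] += alpha * (reward - q_values[action])

best_angle, best_speed = max(q_values, key=q_values.get)
print(f"learned approach: angle={best_angle} deg, speed={best_speed} m/s")

The same reward-driven update principle, scaled up to richer state representations, is what allows training to proceed without a human correcting each trajectory.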
Progress has been achieved towards all of SPRING's overall and specific objectives in this first period, although work was heavily impacted by the COVID-19 crisis.
Below is a brief overview of the progress achieved on each of SPRING's specific objectives, together with their estimated completion level by November 2021.
(i) Overall objective: to develop Socially Assistive Robots with the capacity of performing multi-person interactions and open-domain dialogue.
--> The overall integration efforts that have started are the first step towards achieving this objective. For the moment, interactions are limited to a couple of people and dialogue is contextual.
(ii) Scientific objective: to develop a novel concept of socially-aware robots, and to conceive innovative methods and algorithms for computer vision, audio processing, sensor-based control, and spoken dialog systems.
--> The current state of progress is individual testing, in realistic conditions, of the advanced features developed by each partner (navigation, audio signal tracking and cleaning, visual environmental awareness and audio-visual signal fusion, human behaviour understanding, dialogue); the first integration of these elements is planned for early 2022.
(iii) Technological objective: to create and launch a brand new generation of robots that adapt to the needs of the users.
--> The robotic platform produced, customised and delivered to all partners in 2021 is a low-cost, highly flexible (ROS-based) platform well suited to this project, and lays the basis for a new generation of social robots.
(iv) Experimental objective: to validate the technology in a hospital and to assess its acceptability by patients and medical staff.
--> Preparation work for this objective has been performed: the server and data-exchange protocol are in place, anthropological and ethical studies by APHP have given insight into how to perform the experiments, use cases have been defined accordingly, and a 3D map of the experiment venue has been acquired for robot navigation. Following the integration of the first software modules in early 2022, and depending on the evolution of pandemic-induced restrictions, experiments in the gerontology hospital will take place from 2022 onwards.
Progress beyond the state of the art has been achieved under the following SPRING objectives:
- To perform self-localisation and tracking in cluttered and populated spaces
The self-localisation module is functional and precise in empty environments as well as in environments with a few humans. Image-based place recognition is achieved in realistic (simulated) environments, including a 3D model of the hospital venue. Future work will push towards relevant, actual (non-simulated) environments.
- To build single- and multiple-person descriptions as well as representations of their interaction
The visual detection, localisation and tracking module is active, currently under test, and temporally consistent; audio tracking, diarisation and enhancement in adverse environments are available; dynamic scenarios and audio-visual fusion are still work in progress.
- To augment the 3D geometric maps with semantic information
We are able to generate maps augmented with specialised, hospital-related semantic information. The link with the dialogue functions is ongoing, but the process is ready. Further work is needed to refine the specialised semantic information and to use it in dialogue; a minimal sketch of such a semantic map is given after this list.
- To quantify the users’ levels of acceptance of social robots
Automated analysis of human behaviour is in progress; the level achieved is the understanding of simple emotions based on facial expressions and body poses. Linking this to acceptance levels will require separate acceptance studies after the experiments at the hospital (future work).
- To endow robots with the necessary skills to engage/disengage and participate in conversations
Progress on this objective is planned for the coming years.
- Empower robots with skills needed for situated interactions
Simple situated interactions based on a geometric representation have been achieved; social representations are ongoing work; semantic and behavioural representations are planned for the coming years.
- Online learning of active perception strategies
Preliminary progress allows the robot to be positioned so as to improve automatic speech recognition performance. Pending validation, integration and use in relevant situations will be tested in the coming years.
- Demonstrate the pertinence of the project’s scientific and technological developments
Demonstration efforts will start in the upcoming years.
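To make the semantic-map objective above more concrete, here is a minimal Python sketch, assuming a metric map whose cells carry hospital-related labels. The labels, coordinates and the nearest_free query are illustrative assumptions, not the project's actual map format.

from dataclasses import dataclass
from math import hypot

@dataclass
class SemanticCell:
    x: float          # metric coordinates in the geometric map frame (assumed)
    y: float
    label: str        # hospital-related semantic category (assumed vocabulary)
    occupied: bool    # e.g. whether a seat is currently taken

# Toy map with a handful of annotated locations.
semantic_map = [
    SemanticCell(2.0, 1.5, "seat", occupied=True),
    SemanticCell(2.5, 1.5, "seat", occupied=False),
    SemanticCell(0.0, 0.0, "reception desk", occupied=False),
]

def nearest_free(label: str, rx: float, ry: float):
    """Return the closest unoccupied cell with the requested label,
    e.g. to answer 'where can I sit?' and then plan a path to that spot."""
    candidates = [c for c in semantic_map if c.label == label and not c.occupied]
    return min(candidates, key=lambda c: hypot(c.x - rx, c.y - ry), default=None)

seat = nearest_free("seat", rx=1.0, ry=1.0)
if seat:
    print(f"Nearest free seat at ({seat.x}, {seat.y})")

This kind of structure illustrates how semantic annotations layered on a geometric map can feed the dialogue functions mentioned above.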
SPRING-ARI robot at Inria (C) Inria