BabyRobot Report Summary

Project ID: 687831
Funded under: H2020-EU.2.1.1.

Periodic Reporting for period 2 - BabyRobot (Child-Robot Communication and Collaboration: Edutainment, Behavioural Modelling and Cognitive Development in Typically Developing and Autistic Spectrum Children)

Reporting period: 2017-01-01 to 2017-12-31

Summary of the context and overall objectives of the project

The crowning achievement of human communication is our unique ability to share intentionality, and to create and execute joint plans. Following this paradigm, we model human-robot communication as a three-step process: sharing attention, establishing common ground and forming shared goals. Prerequisites for successful communication are the ability to decode the cognitive state of the people around us (intention-reading) and to build trust. Our main goal is to create robots that analyze and track human behavior over time in the context of their surroundings (situationally), using audio-visual monitoring, in order to establish common ground and intention-reading capabilities.

In BabyRobot we focus on typically developing and autistic spectrum children as the user population. Children have unique communication skills, are quick and adaptive learners, and are eager to embrace new robotic technologies. This is especially relevant for special education, where social skills develop late or never fully develop without intervention or therapy. Our second goal is therefore to define, implement and evaluate child-robot interaction application scenarios for developing specific socio-affective, communication and collaboration skills in typically developing and autistic spectrum children. Our aim is to support, not supplant, the therapist or educator, working hand-in-hand to create a low-risk environment for learning and cognitive development.

Breakthroughs in core robotic technologies are needed to support this research, mainly in the areas of motion planning and control in constrained spaces, gestural kinematics, and sensorimotor learning and adaptation. Our third goal is to push beyond the state of the art in core robotic technologies to support natural human-robot interaction and collaboration for consumer, edutainment and healthcare applications. BabyRobot's ambition is to create robots that can establish communication protocols and form collaboration plans on the fly, which is expected to have impact beyond the consumer and healthcare application markets addressed in this project.
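As an illustration only, the three-step model above (sharing attention, establishing common ground, forming shared goals) can be sketched as a minimal interaction state machine. All names below are hypothetical and do not correspond to the project's actual software:

```python
from enum import Enum, auto

class Phase(Enum):
    SHARING_ATTENTION = auto()
    COMMON_GROUND = auto()
    SHARED_GOALS = auto()
    COLLABORATING = auto()

# Hypothetical ordering of the three-step communication model:
# each phase must succeed before the interaction advances.
NEXT_PHASE = {
    Phase.SHARING_ATTENTION: Phase.COMMON_GROUND,
    Phase.COMMON_GROUND: Phase.SHARED_GOALS,
    Phase.SHARED_GOALS: Phase.COLLABORATING,
}

def advance(phase: Phase, step_succeeded: bool) -> Phase:
    """Move to the next phase on success; otherwise stay and retry the current one."""
    if step_succeeded and phase in NEXT_PHASE:
        return NEXT_PHASE[phase]
    return phase
```

The sketch only captures the ordering constraint stated in the text: collaboration presupposes shared goals, which presuppose common ground, which presupposes joint attention.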

The BabyRobot project is a truly interdisciplinary effort bringing together experts from diverse areas working towards common goals. At the core of the proposed research agenda is cognitive robotics, i.e., designing and building robots that have adaptive communication and collaboration skills much like humans do. For this purpose, we build the cognitive and communicative capabilities of robots layer by layer, in direct analogy to human cognition, as shown in Figure 1. Machine learning algorithms for speech and gesture recognition, speech understanding, socio-affective state recognition, planning and discourse modeling are some of the relevant component technologies for building such capabilities. In addition, in order to negotiate the semantics of the communicated information and the environment between the human and the robot, three more tools are needed: joint attentional mechanisms, establishing common ground, and sharing goals and intentions. When one or more of these tools are missing or underperforming, communication, collaboration and learning are ineffectual for humans and machines alike. Children with autism spectrum disorder (ASD) are a prime example: the core recognition and understanding functions are intact, but joint attention, common grounding and shared intentionality mechanisms are compromised, leading to poor performance in language learning, communication and collaboration tasks. This makes the ASD population a natural choice for this line of research, with child and robot learning and enhancing their communication capabilities hand-in-hand.

The BabyRobot project is centered around three use cases that will help identify, ground, develop and evaluate the relevant set of technologies, as well as their application to child-robot interaction scenarios, specifically: 1) natural child-robot interaction scenarios that showcase joint attention, common ground and shared intentionality capabilities; 2) child-robot collaborative game scenarios; and 3) child-robot interactive games addressing specific cognitive skills in children with ASD.

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

Figure 2 depicts the modular architectural components of BabyRobot, namely: a) the audio-visual processing and behavior tracking module, b) the core robotic functionality module, c) the communication module and d) the pedagogical module.

During the first two years of the project, work has focused on researching, developing, implementing and gradually integrating parts of the above core architectural modules, and in particular:

- Multi-person localization and tracking from audio-visual input
- Gesture and speech recognition
- Social, cognitive, and affective state tracking from audio-visual data
- Common grounding by employing conceptual networks
- Motion planning and control for the synthesis of robot gestures
- On-line adaptation and learning to optimize control policies on the fly
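The last item, on-line optimization of control policies, can be illustrated by the simplest form of incremental reinforcement learning. This is a generic sketch of the idea, not the project's implementation; the action names are invented for the example:

```python
import random

def update_q(q, action, reward, alpha=0.1):
    """Incremental value update: nudge the action's estimate toward the observed reward."""
    q[action] = q[action] + alpha * (reward - q[action])

def select_action(q, epsilon=0.1):
    """Epsilon-greedy selection: mostly exploit the best-valued action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(q))
    return max(q, key=q.get)

# Example: a robot choosing between two (hypothetical) gestures,
# updating its preference from an observed engagement signal.
q = {"wave": 0.0, "point": 0.0}
update_q(q, "wave", reward=1.0)
```

The same update-and-select loop, run during the interaction itself, is what allows a policy to be refined "on the fly" rather than trained once offline.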

Furthermore, work has progressed in the following directions:

1) Application and pilot testing of all three use case scenarios, together with the respective data collection protocols and procedures. These include:
a) joint attention and turn taking (use case 1) modules;
b) child-robot collaborative games (use case 2) scenarios, including testing with ASD children;
c) use case 3 scenarios, addressing specific cognitive skills on ASD children through a series of child-robot interactive games, including work towards semi-autonomous child-robot interaction.

2) Establishment of the general BabyRobot architecture under a modular, extendable and reproducible framework, as depicted in Figure 3.

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

In summary, during the first two years of BabyRobot, contributions beyond the state of the art have been made in the following major areas:
1) Joint attention modeling in human-robot interaction
2) Conceptual networks used for creating common ground as well as semantic representations for human-robot interactive systems
3) Robust situated algorithms for behavioral analysis using multimodal low- and mid-level cues (including paralinguistic cues motivated by social signal processing)
4) Intention reading and action recognition incorporating multimodal low-, mid- and high-level cues
5) Adaptation and reinforcement learning, for core robotics as well as communication and collaboration functionality.
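To make the idea in item 2 concrete, a conceptual network can be thought of as a weighted graph of associations between concepts, from which the most strongly related concepts are retrieved when grounding a word. The following is a toy sketch under that assumption; class and method names are hypothetical, not the project's actual model:

```python
from collections import defaultdict

class ConceptualNetwork:
    """Toy semantic association graph: concepts are nodes, weighted
    edges encode association strength between them."""

    def __init__(self):
        self.edges = defaultdict(dict)  # concept -> {neighbour: weight}

    def associate(self, a, b, weight=1.0):
        """Add a symmetric association between two concepts."""
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def neighbours(self, concept, top_k=3):
        """Most strongly associated concepts, e.g. candidates when grounding a word."""
        ranked = sorted(self.edges[concept].items(), key=lambda kv: -kv[1])
        return [c for c, _ in ranked[:top_k]]
```

Querying such a network for a word's strongest associates is one simple way a robot and a child could converge on a shared meaning for an ambiguous referent.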

In the first two-year period, contributions towards technology development, as well as towards identifying the wider socio-economic implications of BabyRobot technologies, include:
1. Validated, through the proposed use cases and with input from educators and psychologists, that the studied human-robot interaction scenarios are relevant for ASD and typically developing (TD) children learning communicative and collaborative skills.
2. Initiated investigation of the exploitation potential of BabyRobot technologies in edutainment applications in schools and special education.
