Child-Robot Communication and Collaboration: Edutainment, Behavioural Modelling and Cognitive Development in Typically Developing and Autistic Spectrum Children

Periodic Reporting for period 3 - BabyRobot (Child-Robot Communication and Collaboration: Edutainment, Behavioural Modelling and Cognitive Development in Typically Developing and Autistic Spectrum Children)

Reporting period: 2018-01-01 to 2018-12-31

The crowning achievement of human communication is our unique ability to share intentionality and to create and execute joint plans. Following this paradigm, we model human-robot communication as a three-step process: sharing attention, establishing common ground and forming shared goals. Prerequisites for successful communication are being able to decode the cognitive state of the people around us (intention-reading) and to build trust. Our main goal is to create robots that analyze and track human behavior over time, in the context of their surroundings, using audio-visual monitoring in order to establish common-ground and intention-reading capabilities. In BabyRobot we focus on typically developing children and children on the autism spectrum. Children have unique communication skills, are quick and adaptive learners, and are eager to embrace new robotic technologies. This is especially relevant for special education, where social skills develop late or not at all without intervention or therapy. Our second goal is therefore to define, implement and evaluate child-robot interaction scenarios for developing specific socio-affective, communication and collaboration skills in typically developing and autistic spectrum children. Our aim is to support, not supplant, the therapist or educator, working hand in hand to create a low-risk environment for learning and cognitive development. Breakthroughs in core robotic technologies are needed to support this research, mainly in the areas of motion planning and control in constrained spaces, gestural kinematics, and sensorimotor learning and adaptation. Our third goal is to push beyond the state of the art in core robotic technologies in order to support natural human-robot interaction and collaboration for consumer, edutainment and healthcare applications.
The ambition of BabyRobot is to create robots that can establish communication protocols and form collaboration plans on the fly, which is expected to have impact beyond the consumer and healthcare application markets addressed in this project.
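The three-step communication process can be pictured as a simple ordered progression in which a failure at any step sends the interaction back to re-establishing attention. The sketch below is purely illustrative; the phase names and fallback rule are our shorthand, not part of the project codebase.

```python
from enum import Enum, auto

class Phase(Enum):
    """Phases of the three-step human-robot communication model."""
    SHARED_ATTENTION = auto()
    COMMON_GROUND = auto()
    SHARED_GOALS = auto()

# Ordered progression: each phase is a prerequisite for the next.
ORDER = [Phase.SHARED_ATTENTION, Phase.COMMON_GROUND, Phase.SHARED_GOALS]

def advance(phase: Phase, success: bool) -> Phase:
    """Move to the next phase on success; fall back to re-establishing
    shared attention on failure (e.g. the child disengages)."""
    if not success:
        return Phase.SHARED_ATTENTION
    i = ORDER.index(phase)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```

A dialogue manager built on this idea would call `advance` after each exchange, so the robot only attempts goal formation once attention and common ground have been secured.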

The BabyRobot project is a truly interdisciplinary effort bringing together experts from diverse areas to work towards common goals. At the core of the proposed research agenda is cognitive robotics, i.e. designing and building robots that have adaptive communication and collaboration skills much like humans do. For this purpose, we build the cognitive and communicative capabilities of robots layer by layer, in direct analogy to human cognition, as shown in Figure 1. Machine learning algorithms for speech and gesture recognition, speech understanding, socio-affective state recognition, planning and discourse modeling are some of the component technologies relevant to building such capabilities. In addition, in order to negotiate the semantics of the communicated information and of the environment between the human and the robot, three more tools are needed: joint attentional mechanisms, establishing common ground, and sharing goals and intentions. When one or more of these tools is missing or underperforming, communication, collaboration and learning are ineffectual for humans and machines alike. Children with autism spectrum disorder (ASD) are a prime example: the core recognition and understanding functions are intact, but the joint attention, common grounding and shared intentionality mechanisms are compromised, leading to poor performance in language learning, communication and collaboration tasks. This makes the ASD population a natural choice for this line of research, with child and robot learning and enhancing their communication capabilities hand in hand.

The BabyRobot project is centered on three use cases that serve to identify, ground, develop and evaluate the relevant set of technologies, as well as their application to child-robot interaction scenarios, specifically: 1) natural child-robot interaction scenarios that showcase the joint attention, common ground and shared intentionality modules, 2) communication skill development and learning via tactile and language games, and 3) collaboration skill development and learning via dyadic and triadic interaction, using the robot as a mediator. The target populations are typically developing (TD) and ASD children, ages 6-10 years, interacting in their native language. Use cases 2 and 3 involve longitudinal studies in collaboration with educators (and, for ASD children, therapists) in order to formally evaluate and measure progress in the child's communication and collaboration skills. The following languages are addressed in BabyRobot: English, Danish, Swedish and Greek.
Figure 2 depicts the modular architectural framework and core components of BabyRobot, namely: a) the audio-visual processing and behavior tracking module, b) the core robotic functionality module, c) the communication module and d) the pedagogical module.

Work has focused on researching, developing, implementing and gradually integrating parts of the above modules, in particular:

- Multi-person localization and tracking from audio-visual input
- Gesture and speech recognition
- Social, cognitive and affective state tracking from audio-visual data
- Common grounding by employing conceptual networks
- On-line adaptation and learning to optimize robot control policies on the fly based on monitoring of human behavior
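The common-grounding idea above can be illustrated with a toy conceptual network: concepts linked by weighted associations, where the overlap between two concepts' neighborhoods serves as a crude proxy for shared semantic ground. The concepts, weights and function names below are invented for this sketch and do not reflect the project's actual networks.

```python
# Toy conceptual network: weighted associations between concepts.
# The concepts and weights are illustrative, not project data.
EDGES = {
    ("ball", "play"): 0.9,
    ("ball", "round"): 0.7,
    ("play", "game"): 0.8,
    ("game", "rules"): 0.6,
}

def neighbors(concept):
    """Return concepts associated with `concept`, with weights,
    treating the association as symmetric."""
    out = {}
    for (a, b), w in EDGES.items():
        if a == concept:
            out[b] = w
        elif b == concept:
            out[a] = w
    return out

def common_ground(c1, c2):
    """Concepts associated with both inputs -- a crude proxy for the
    semantic overlap used to establish common ground."""
    n1, n2 = neighbors(c1), neighbors(c2)
    return {c: min(n1[c], n2[c]) for c in n1.keys() & n2.keys()}
```

For example, `common_ground("ball", "game")` picks out "play" as the shared association; a real system would of course operate over far larger, learned networks.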

Furthermore, significant results have been achieved in the following directions:

1) Application and pilot testing of all three use case scenarios, together with the respective data collection protocols and procedures.
2) Establishment of the general BabyRobot architecture under a modular, extendable and reproducible framework, as depicted in Figure 3.
3) Optimizing and releasing the parts of the codebase that are technology-ready.
4) Establishing a common research approach and assessment tools to systematically study a variety of targeted social and cognitive skills in children with ASD.
In summary, contributions beyond the state of the art have been made in the following major areas:
1) Joint attention modeling in human-robot interaction
2) Conceptual networks used for creating common ground as well as semantic representations for human-robot interactive systems
3) Robust situated algorithms for behavioral analysis using multimodal low- and mid-level cues (including paralinguistic cues motivated by social signal processing)
4) Intention reading and action recognition incorporating multimodal low-, mid- and high-level cues
5) On-line adaptation and reinforcement learning algorithms for optimal decision-making in human-robot collaboration settings
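To make the on-line adaptation idea concrete, one of the simplest forms of such decision-making is an epsilon-greedy bandit that picks robot actions and updates its value estimates from an observed reward, such as a child-engagement score. This is a minimal sketch under that assumption, not the project's actual algorithm; the action names and parameters are invented.

```python
import random

class OnlineBandit:
    """Epsilon-greedy bandit: a minimal example of on-line adaptation,
    choosing among robot actions and updating value estimates from
    observed rewards (e.g. a child-engagement score)."""

    def __init__(self, actions, epsilon=0.1, step=0.2):
        self.q = {a: 0.0 for a in actions}  # value estimate per action
        self.epsilon = epsilon              # exploration probability
        self.step = step                    # learning rate

    def choose(self):
        """Explore with probability epsilon, otherwise exploit."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Incremental estimate: move q towards the observed reward.
        self.q[action] += self.step * (reward - self.q[action])
```

In an interaction loop the robot would call `choose()`, observe the child's response, and feed a scalar engagement score back through `update()`, adapting its policy on the fly as the project's use cases require.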

Contributions towards technology development, as well as towards identifying wider socio-economic implications of BabyRobot technologies, include:
1. Validating, via the proposed use cases, that the studied human-robot interaction scenarios are relevant to ASD and TD children's learning of communicative and collaborative skills.
2. Investigating the exploitation potential of BabyRobot technologies in different edutainment applications in schools and special education.
3. Conducting a series of intervention studies, involving children with ASD, after carefully designing specific study scenarios and protocols.
The results are promising, demonstrating the potential of these technologies and opening up a rich variety of possibilities for applying the framework in other real-world settings.