CORDIS - EU research results

Intentional stance for social attunement

Periodic Reporting for period 4 - InStance (Intentional stance for social attunement)

Reporting period: 2021-11-01 to 2022-12-31

1. The main problem addressed by InStance

The InStance project focused on the question of whether, and under what conditions, people adopt the Intentional Stance towards robots, and what this means for social attunement in human-robot interaction.

The Intentional Stance is a concept introduced by the philosopher Daniel Dennett, who proposed that we adopt the Intentional Stance toward others when we predict and explain their behaviour with reference to mental states, such as beliefs, desires or intentions. For example, when I see my friend extending her arm with a glass of water in my direction, I assume that she intends to hand me that glass, because she believes that I am thirsty and she wants to ease my thirst. The terms “intend”, “believe” or “want” refer to mental states, and the assumption is that by referring to mental states, I can explain someone else’s behaviour. For non-intentional systems (such as artefacts), however, we adopt the design stance, assuming that the system has been designed to behave in a particular way. For example, when a coffee machine stops pouring coffee, we understand that this happens not because the machine wants to be mean to the user, but because it was designed to fill a mug with only a certain quantity of liquid.

In this context, it is intriguing to understand whether people adopt the intentional or the design stance towards humanoid robots that look like humans and perhaps behave in a human-like way.

Adopting the Intentional Stance is crucial not only for explaining others’ behaviour but presumably also for social attunement. That is, when I adopt the Intentional Stance and see somebody pointing, I direct my attention to where they are pointing, and we establish a joint focus of attention, thereby becoming socially attuned. On the contrary, if I see that a machine’s artificial arm is pointing somewhere, I am less inclined to attend to that location, as I do not believe that the machine wants to show me something, i.e. there is no intentional communicative content in the gesture.

2. Overall objectives
The objectives of InStance were defined through a research agenda composed of four work packages, each focusing on a different factor that might influence the likelihood of adoption of the Intentional Stance towards robots. We identified the following factors: (i) human-like subtle behaviours in a robot; (ii) social signals which carry communicative intentions, such as eye contact with the human observer; (iii) cultural differences affecting the likelihood of attributing mental life to other beings; (iv) familiarity with robots.

3. Importance for society
InStance has important societal implications. Humanoid robots promise to become our future assistants, taking on not only dirty and dangerous jobs, such as search and rescue in disaster recovery, but also mundane tasks in daily life.
However, for robots to work efficiently with humans, it is of utmost importance that they are well “attuned” to how humans work and operate. InStance has explored the factors that influence social attunement as a function of the Intentional Stance.

Understanding the factors affecting adoption of the Intentional Stance towards robots might also be important for designing robots for elderly care. Think of an elderly person whose robot reminds her to take medication at a specific time of the day. If the elderly person adopts the Intentional Stance towards their robot, the robot’s advice is taken as the result of a good intention or a willingness to help, rather than just a preprogrammed “alarm clock”, which is much more tempting to ignore.

Finally, eliciting the adoption of the Intentional Stance might be crucial for educational robotics. Robots assisting in educational activities might certainly be more efficient when children see them as intentional agents, as opposed to mere mundane artefacts. Attribution of intentionality might increase long-term engagement in educational activities and might prevent children from abandoning the “mechanical toy” after a few interactions.
I am concluding the InStance project with several important results. First, we developed a novel tool that measures the degree to which a person adopts the Intentional Stance towards a robot. Further, using this tool, we showed that it is possible to distinguish, based on neural activity, individuals who are likely to adopt the Intentional Stance towards robots from those who treat a humanoid robot as a mechanical artefact. This is a striking result because it suggests that such a high-level concept can be related to specific brain activity.
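As a purely illustrative sketch (not the project's actual instrument), the Python snippet below shows one way such a measure could be scored, assuming each item is a slider between a mechanistic (0) and a mentalistic (100) description of a robot's behaviour; the function names, scale and threshold are assumptions introduced here for illustration only.

from statistics import mean

def intentional_stance_score(item_ratings):
    # Average slider position across items; higher values indicate more
    # mentalistic (Intentional Stance-like) interpretations of the robot.
    return mean(item_ratings)

def classify_stance(score, threshold=50.0):
    # Crude split of respondents around the assumed scale midpoint.
    return "intentional stance" if score > threshold else "design stance"

ratings = [72, 65, 80, 55, 60]  # one participant's ratings on five hypothetical items
score = intentional_stance_score(ratings)
print(score, classify_stance(score))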

Regarding the project's specific objectives (see above), we have learned that:
(1) Human-range temporal variability, predictability and human-like reactiveness to environmental events are good cues of humanness in robot behaviour (a rough illustration of such timing variability is sketched after this list).
(2) Eye contact with a robot affects social attunement at both the behavioural and the neural level.
(3) A robot's human-like behaviour and eye contact have a similar impact on the adoption of the Intentional Stance and on social attunement in samples from Asian and European cultures.
(4) Long-term exposure to robots (or technology in general), as in the case of roboticists, reduces the likelihood of adopting the Intentional Stance towards robots, while extensive experience with reasoning about others’ mental states, as in the case of psychotherapists, increases it. These differences can also be observed at the neural level.
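As a rough, hypothetical illustration of point (1), the sketch below contrasts a machine-like fixed response delay with human-range variable delays drawn from a log-normal distribution, a common way to approximate human reaction times; the parameters and function names are assumptions, not InStance's actual robot controller.

import random

def machine_like_delay():
    # Perfectly regular delay (in seconds) before the robot reacts to an event.
    return 0.5

def human_like_delay(mu=-0.9, sigma=0.3):
    # Variable delay sampled from a log-normal distribution (median around 0.4 s),
    # mimicking human-range temporal variability in reaction times.
    return random.lognormvariate(mu, sigma)

print([round(machine_like_delay(), 3) for _ in range(5)])
print([round(human_like_delay(), 3) for _ in range(5)])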

In general, during the InStance project's lifetime, we conducted more than 80 experiments and tested more than 2,800 participants in our human-robot interaction studies. This body of work has been reported in 95 publications, including 3 papers in the prestigious journal Science Robotics. Our results have been disseminated not only to the scientific community but also to the general public, through presentations at various public events, such as the Science Festival or the National Geographic Festival in Rome, or Ars Electronica in Linz. In terms of exploitation, our methodological approach led us to develop robot-assisted training protocols for children diagnosed with autism spectrum disorder, which yielded excellent results in improving the children's social cognition mechanisms.
InStance has offered a novel methodological approach: using human-robot interaction to study human social cognition. It has been the first research program to operationalize the philosophical concept of the Intentional Stance by quantifying it with objective behavioural and neural measures (rather than relying on subjective reports). Finally, it has provided a cutting-edge approach in robotics by using neuroscience methods to develop objective measures of the quality of human-robot interaction.

Overall, this project has substantially advanced the state of the art, as there has been no other empirical program that systematically examined the factors contributing to the adoption of the Intentional Stance towards robots. We have shown that both categories of factors, those related to the robot's behaviour and those linked to the human observer (cultural embedding, type of education), are crucial for the likelihood of adopting the Intentional Stance. InStance has offered a unique and interdisciplinary contribution to the fields of social and cognitive neuroscience, engineering (social robotics in particular), and philosophy.
Figure captions:
Experimental setup: participant seated with iCub in the cabin while the experimenter monitors the EEG signal.
Illustration of InStance methods: combining the interaction protocol with cognitive neuroscience methods.
A nonverbal Turing test: participants judge whether the robot is programmed or teleoperated.
Example experiment: manipulating the contingency of robot behaviour by tracking the human's eye movements.