
investigating Human Shared PErception with Robots

Periodic Reporting for period 2 - wHiSPER (investigating Human Shared PErception with Robots)

Reporting period: 2020-09-01 to 2022-02-28

Perception is a complex process in which prior knowledge exerts a fundamental influence over what we see. Incorporating previous experience, or priors, into the current percept helps the brain cope with the uncertainty resulting from sensory and neural noise and ambiguity. As a consequence, the very same physical stimulus can be perceived differently by two observers holding different internal priors, which can be formed rapidly through brief sensory experience; this phenomenon might be exacerbated in the elderly, who suffer from reduced sensory acuity. Although interest in the effect of recent experience on visual perception has grown considerably, nothing is known about how perceptual inference shapes visual perception during social interaction. This is a crucial question, because during interaction the brain faces two potentially conflicting goals: maximizing individual perceptual stability by relying on internal priors, or maximizing perceptual alignment with the partner to facilitate coordination, by limiting that reliance. Perceptual alignment underlies everyday joint activities, such as passing an object, and becomes even more important in contexts with high coordination demands, such as sports, dance and music, where temporal and spatial precision in perceiving external stimuli and the partner’s actions is fundamental for task achievement.
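As a concrete illustration of this inference mechanism, a minimal Bayesian formulation (a standard textbook account, not the specific model developed in the project) assumes a Gaussian prior over the stimulus with mean \mu_p and variance \sigma_p^2, and a noisy sensory measurement m with variance \sigma_m^2; the resulting percept is the posterior mean

\hat{s} = \frac{\sigma_m^{2}\,\mu_p + \sigma_p^{2}\,m}{\sigma_p^{2} + \sigma_m^{2}}

a reliability-weighted average that is pulled toward the prior mean whenever sensory noise is large, so two observers with different priors \mu_p can genuinely perceive the same stimulus differently.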

wHiSPER studies, for the first time, how basic mechanisms of perceptual inference are modified during interaction, by moving the investigation from an individual, passive approach to an interactive, shared context where two agents dynamically influence each other. To allow scrupulous and systematic control, wHiSPER uses a humanoid robot as the interactive agent, serving as an investigation probe whose perceptual and motor decisions are fully under the experimenter’s control.
One of the crucial limits to the study of perception during interaction has so far been the impossibility of maintaining rigorous control over the stimulation while allowing direct involvement in a dynamic exchange with another agent. The robotic platform makes it possible to port the stimuli used in perceptual investigations to the domain of online collaboration, bringing controllability and repeatability to an embodied and interactive context. The robot becomes either the stimulus presenter or the co-actor in experiments with increasing levels of interactivity, and complements the more traditional screen-based investigations adopted for baseline experiments.
In summary, wHiSPER exploits rigorous psychophysical methods, Bayesian modeling and humanoid technologies to provide the first comprehensive account of how visual perception of spatial and temporal world properties changes during interaction with both humans and robots.
The objective of the wHiSPER project is to understand how perception changes during interaction and to enable robots to establish a shared perception with their human partners. In this reporting period we pursued these goals by conducting experiments to determine how being involved in an interactive context influences perceptual inference (Objective 1). As a result, we demonstrated for the first time that visual perception of space changes during interaction with a robotic agent, as long as the robot is perceived as a social being. These results were presented as a full paper at the main conference in the field of human-robot interaction (ACM/IEEE HRI 2020), receiving an honourable mention. Another study investigated whether the belief that stimuli are generated by another agent modifies the way they are processed and used in a learning task. The results, published at the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), showed an increase in engagement in the exercise due to the presence of a humanoid robot acting as stimulus generator. Further analyses of the differences in participants’ learning strategies and affective responses during the social interaction were published at the International Conference on Social Robotics (ICSR 2020) and at the Italian Robotics and Intelligent Machines Conference (I-RIM 2020), obtaining the Best Paper Award at both venues.

We have also investigated action perception with different agents, both humans and robots (Objective 2). In particular, we assessed how the style of an action – i.e. the way a movement is performed, such as gently or rudely – impacts the observers’ perception of, and response to, the action. The results confirm that our motor models act as priors also for action style, influencing the processing of others’ motion properties, such as the estimation of movement duration (Lombardi et al. 2021, Front. Hum. Neurosci.). Interestingly, this generalizes to the observation of robot motions, as long as the motion is designed to replicate the relevant spatio-temporal regularities of biological movements (Di Cesare et al. 2021, Sci. Rep.). Moreover, this communication also appears to be effective through other modalities, such as touch (Rizzolatti et al. 2021, Sci. Rep.) and audio (Di Cesare et al. 2021, Cereb. Cort.).

To advance our understanding of whether and how individual perception aligns to others’ priors (Objective 3), we developed a bio-inspired model of time perception for the robot iCub. In the model, the estimation of the duration of the current time interval is influenced by the history of the previous stimuli (the prior, as seen for space in Mazzola et al. HRI 2020). By manipulating the model parameters during a temporal reproduction experiment, the robot could be set either to emulate participants’ duration estimation or to produce significantly different estimates based on a different prior. This allowed us to quantify whether, in an HRI experiment, participants can adapt their perception and the timing of their actions to those of a robotic partner exhibiting a different perception. The research has already led to the completion of a Master’s thesis in Bioengineering. Furthermore, we investigated how humans adapt to other agents (human or not) who exhibit different perceptual spatial responses. In particular, we investigated whether social influence can modify participants’ responses toward the (different) perception exhibited by the partner, and whether this phenomenon is modulated by reciprocity, a pervasive social norm sustaining human cooperation (Zonca et al. 2021, Sci. Rep.). Our first aim is to understand whether and how the consideration that others show towards our judgments shapes our willingness to consider others’ opinions, and to explore whether typically social norms, such as reciprocity, also intervene when interacting with humanoid robots. By exploring these themes, we hope to shed light on the mechanisms regulating learning and advice taking in human-robot interaction, in particular when human and robot perceptions differ.
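To illustrate the kind of mechanism involved, the following is a minimal Python sketch (illustrative only; class name, parameter names and values are assumptions, not those of the iCub model) of a central-tendency duration estimator, in which each percept blends the current noisy measurement with a prior built from the history of previous stimuli, so that agents exposed to different histories reproduce the same interval differently.

```python
import random

class DurationEstimator:
    """Illustrative central-tendency model of interval timing (hypothetical sketch).

    Each new estimate blends the current noisy measurement with a prior built
    from the history of previously experienced durations, weighted by their
    relative reliabilities (inverse variances).
    """

    def __init__(self, prior_mean=0.6, prior_var=0.04, sensory_var=0.01, prior_gain=0.1):
        self.prior_mean = prior_mean    # running estimate of the "typical" duration (s)
        self.prior_var = prior_var      # uncertainty of the prior
        self.sensory_var = sensory_var  # variance of the sensory measurement noise
        self.prior_gain = prior_gain    # how quickly the prior tracks new stimuli

    def estimate(self, true_duration):
        # Noisy internal measurement of the presented interval.
        measurement = random.gauss(true_duration, self.sensory_var ** 0.5)
        # Reliability-weighted average of prior and measurement (posterior mean).
        w_prior = self.sensory_var / (self.sensory_var + self.prior_var)
        percept = w_prior * self.prior_mean + (1.0 - w_prior) * measurement
        # Update the prior with the new experience (stimulus history).
        self.prior_mean += self.prior_gain * (measurement - self.prior_mean)
        return percept

# Two "agents" with different stimulus histories hold different priors,
# so they perceive and reproduce the very same probe interval differently.
short_history_agent = DurationEstimator(prior_mean=0.4)
long_history_agent = DurationEstimator(prior_mean=1.0)
probe = 0.7  # seconds
print(short_history_agent.estimate(probe), long_history_agent.estimate(probe))
```

Manipulating the prior parameters in such a model is what allows the robot's duration estimates to be set either close to, or systematically different from, those of the human participants.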

We then developed novel abilities for the iCub to endow it with the competences necessary to investigate how the robot adapts to the human partner, a fundamental prerequisite for assessing how shared perception with a robot can be enabled (Objective 4). In particular, we developed algorithms to read facial expressions (Barros et al. SN Comp. Sci. 2020) and cognitive load (Pasquali et al. HRI 2021). Furthermore, we designed simple architectures to address social adaptation abilities based on this type of input, with a particular interest in endowing the robot with an internal motivation system that allows for autonomous decision-making and personalization of the interaction (Tanevska et al. Front. Rob. AI 2020). We also explored alternative pathways to establish shared perception with the robot. In particular, we studied how to include social aspects in the learning strategies of artificial agents based on reinforcement learning (RL). To advance this research, we created an environment that is sufficiently controllable to enable testing of RL agents, while supporting entertaining social interactions and a shared space with human participants. To this end, we designed a competitive game that humans and virtual agents can play together (Barros et al. 2021 HRI comp., Barros et al. ICPR 2020). We explored different RL-based agents, assessing the impact of incorporating different models of human perception on game choices (see the sketch below). In particular, we modeled the impact of winning chances as a modulator of player strategy (as mood; Barros et al. ICDL 2020), the impact of introducing rivalry (Barros et al. Pre-registration Workshop, NeurIPS 2020), and we evaluated the possibility of introducing personalized learning (Barros et al. 2021 Front. Rob. AI).
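As an illustration of how an estimate of winning chances could modulate an RL agent's play, the following is a minimal, hypothetical sketch (not the architecture used in the cited papers): a tabular Q-learning agent whose exploration rate is biased by the estimated chance of winning, standing in for the mood-like modulation of player strategy described above. All names and parameters are assumptions for illustration.

```python
import random
from collections import defaultdict

class MoodModulatedQLearner:
    """Hypothetical sketch: tabular Q-learning whose exploration rate is
    modulated by an estimate of the current chance of winning ("mood")."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, base_epsilon=0.2):
        self.q = defaultdict(float)   # Q-values indexed by (state, action)
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.base_epsilon = base_epsilon

    def act(self, state, winning_chance):
        # Low estimated winning chance -> more exploration (riskier play).
        epsilon = self.base_epsilon * (1.5 - winning_chance)
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the bootstrapped target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Toy usage: one decision and update in a simplified competitive card game.
agent = MoodModulatedQLearner(actions=["play_high", "play_low", "discard"])
chosen = agent.act(state="hand_7", winning_chance=0.3)
agent.update("hand_7", chosen, reward=1.0, next_state="hand_5")
```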
This project is unique and innovative as it develops a radically new approach to the study of perception, questioning the assumption that interaction involves the same perceptual processes that support individual perception. This breakthrough is possible thanks to a novel methodology, based on the use of humanoid robots as interactive stimulators, which guarantees full control over the dynamic evolution of the interaction. For the first time, to investigate online perception, we employ robotic stimuli whose behavior and perception depend on the current behavior of participants, also emulating human perceptual inference. The project will deliver a new quantitative methodology to investigate visual perception during interaction and will pave the way to designing a new generation of adaptive technologies.
The challenges tackled in this project are of foundational importance. Only by knowing whether and how the very basic input to our system, such as perception, changes as a function of the interactive context will it be possible to understand interaction and social cognition itself. Additionally, the findings can be generalized to perception mediated by different modalities, such as touch or audition, as the model of perceptual inference adopted in wHiSPER has been shown to apply to other senses as well. wHiSPER will also provide a novel understanding of human-robot interaction, with an innovative way of measuring how the interaction with different agents affects individual perception.
This project will lay the foundations for making autonomous technology adaptive to each user’s perceptual needs. Indeed, the findings of wHiSPER will allow technology to adapt to the user at a completely new level. Novel robots will be able to predict potential distortions in the perception of their partners and either adapt to them – i.e. by tailoring their actions to complement those of the human in time and space, e.g. when passing an object – or correct them, e.g. by systematically warning the partner of the inaccuracy and correcting the misperception.