Periodic Reporting for period 2 - wHiSPER (investigating Human Shared PErception with Robots)
Reporting period: 2020-09-01 to 2022-02-28
wHiSPER studies, for the first time, how basic mechanisms of perceptual inference are modified during interaction, by moving the investigation from an individual, passive approach to an interactive shared context, where two agents dynamically influence each other. To allow for scrupulous and systematic control, wHiSPER uses a humanoid robot as an interactive agent, serving as an investigation probe whose perceptual and motor decisions are fully under the experimenter’s control.
One of the crucial limits to the study of perception during interaction has so far been the impossibility of maintaining rigorous control over the stimulation while allowing for direct involvement in a dynamic exchange with another agent. The robotic platform makes it possible to port the stimuli used in perceptual investigations to the domain of online collaboration, bringing controllability and repeatability to an embodied and interactive context. The robot becomes either the stimulus presenter or the co-actor in experiments with increasing levels of interactivity, and complements more traditional screen-based investigations, adopted for baseline experiments.
In summary, wHiSPER exploits rigorous psychophysical methods, Bayesian modeling and humanoid technologies to provide the first comprehensive account of how visual perception of spatial and temporal world properties changes during interaction with both humans and robots.
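As a minimal illustration of the Bayesian framework underlying this account, the sketch below models a percept as a precision-weighted fusion of a noisy sensory measurement with a prior expectation. The Gaussian assumptions, function name and numerical values are illustrative only and are not taken from the project’s models.

```python
def bayesian_estimate(measurement, sigma_m, prior_mean, sigma_p):
    """Posterior mean of a Gaussian prior combined with a Gaussian likelihood.

    The estimate is a precision-weighted average: the noisier the sensory
    measurement (larger sigma_m), the more the percept is drawn to the prior.
    """
    w = sigma_p**2 / (sigma_p**2 + sigma_m**2)  # weight given to the measurement
    return w * measurement + (1 - w) * prior_mean

# A 600 ms interval judged with noisy timing, prior centered on 500 ms:
print(bayesian_estimate(600.0, sigma_m=100.0, prior_mean=500.0, sigma_p=80.0))
# -> ~539 ms: the estimate regresses toward the prior (central tendency)
```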
We have also investigated action perception with different agents, both humans and robots (Objective 2). In particular, we have assessed how the style of an action – i.e. the way a movement is performed, such as gently or rudely – impacts the observers’ perception of and response to the action. The results confirm that our motor models act as priors also for action style, influencing the processing of others’ motion properties, such as the estimation of movement duration (Lombardi et al. 2021 Front. Hum. Neurosci.). Interestingly, this generalizes also to the observation of robot motions, as long as the motion is designed to replicate the relevant spatio-temporal regularities of biological movements (Di Cesare et al. 2021 Sci. Rep.). Moreover, this communication appears to be effective also through other modalities, such as touch (Rizzolatti et al. 2021 Sci. Rep.) and audition (Di Cesare et al. 2021 Cereb. Cort.).
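To make the idea of style acting as a prior concrete, the toy example below biases a duration judgment toward a style-dependent expectation. The prior values and weight are hypothetical and only illustrate the direction of the effect, not the measurements reported in the cited papers.

```python
# Hypothetical style-dependent priors over movement duration (ms):
# a "gentle" motor model expects slower movements than a "rude" one.
style_priors = {"gentle": 900.0, "rude": 450.0}

def style_biased_duration(observed_ms, style, w_measurement=0.6):
    """Blend the observed duration with the expectation set by the style."""
    prior = style_priors[style]
    return w_measurement * observed_ms + (1 - w_measurement) * prior

for style in style_priors:
    print(style, style_biased_duration(700.0, style))
# The same 700 ms movement is judged longer when perceived as gentle
# (780 ms) than when perceived as rude (600 ms).
```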
To advance our understanding of whether and how individual perception aligns to others’ priors (Objective 3), we developed a bio-inspired model of time perception for the robot iCub. In the model, the estimation of the duration of the current time interval is influenced by the history of the previous stimuli (the prior, as seen for space in Mazzola et al. HRI 2020). By manipulating the model parameters during a temporal reproduction experiment, the robot could either be set to emulate participants’ duration estimation or to produce significantly different estimates based on a different prior. This allowed us to quantify whether, in an HRI experiment, participants can adapt their perception and the timing of their actions to those of a robotic partner exhibiting a different perception. The research has already led to the completion of a Master’s thesis in Bioengineering. Furthermore, we investigated how humans adapt to other agents (human or not) who exhibit different perceptual spatial responses. In particular, we investigated whether social influence can modify participants’ responses toward the (different) perception exhibited by the partner and whether this phenomenon is modulated by reciprocity, a pervasive social norm sustaining human cooperation (Zonca et al. 2021 Sci. Rep.). Our first aim is to understand whether and how the consideration that others show towards our judgments shapes our willingness to consider others’ opinions, and to explore whether typical social norms, such as reciprocity, also come into play when interacting with humanoid robots. By exploring these themes, we hope to shed light on the mechanisms regulating learning and advice taking in human-robot interaction, in particular when human and robot perceptions differ.
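A minimal sketch of such a history-based time-perception model is given below, assuming a running-average prior over past stimuli and a single weight controlling regression toward it; tuning these two knobs lets a simulated agent either mimic human central tendency or produce deliberately shifted estimates. The class, parameter names and values are hypothetical and do not reproduce the iCub implementation.

```python
import numpy as np

class IntervalEstimator:
    """Illustrative history-based model of interval duration estimation.

    The prior is an exponential running average of past stimuli;
    `prior_weight` sets how strongly the current estimate regresses
    toward that prior.
    """

    def __init__(self, prior_mean=500.0, learning_rate=0.2, prior_weight=0.4):
        self.prior_mean = prior_mean
        self.learning_rate = learning_rate
        self.prior_weight = prior_weight

    def estimate(self, stimulus_ms, noise_sd=50.0):
        # Noisy sensory measurement of the presented interval
        measurement = stimulus_ms + np.random.normal(0.0, noise_sd)
        estimate = ((1 - self.prior_weight) * measurement
                    + self.prior_weight * self.prior_mean)
        # Update the prior with the history of presented stimuli
        self.prior_mean += self.learning_rate * (measurement - self.prior_mean)
        return estimate

robot = IntervalEstimator(prior_weight=0.7)  # strong regression to the prior
for t in [400, 800, 600, 1000]:
    print(round(robot.estimate(t)))
```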
We then developed novel abilities for the iCub to endow it with the competences necessary to investigate how the robot adapts to the human partner, a fundamental prerequisite for enabling shared perception with a robot (Objective 4). In particular, we developed algorithms to read facial expressions (Barros et al. SN Comp. Sci. 2020) and cognitive load (Pasquali et al. HRI 2021). Furthermore, we designed simple architectures to address social adaptation abilities based on this type of input, with a particular interest in endowing the robot with an internal motivation system to allow for autonomous decision-making and personalization of the interaction (Tanevska et al. Front. Rob. AI 2020). We also explored alternative pathways to establish shared perception with the robot. In particular, we studied how to include social aspects in the learning strategies of artificial agents based on reinforcement learning (RL). To advance this research, we created an environment sufficiently controlled to enable testing of RL agents while still supporting entertaining social interactions and a shared space with human participants. To do so, we designed a competitive game where humans and virtual agents could play together (Barros et al. 2021 HRI comp., Barros et al. ICPR 2020). We explored different RL-based agents, assessing the impact of incorporating different models of human perception on game choices. In particular, we modeled the impact of winning chances as a modulator of player strategy (as mood, Barros et al. ICDL 2020), examined the impact of introducing rivalry (Barros et al. Pre-registration Workshop, NeurIPS 2020), and evaluated the possibility of introducing personalized learning (Barros et al. 2021 Front. Rob. AI).
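As a rough sketch of how a mood-like signal tracking winning chances could modulate an RL agent’s strategy, the toy agent below scales its exploration rate with a running estimate of its win probability. The class and all parameters are hypothetical and do not reproduce the agents in the cited papers.

```python
import random

class MoodModulatedAgent:
    """Toy competitive-game agent: a tabular Q-learner whose exploration
    rate is modulated by a 'mood' signal tracking recent winning chances.
    Losing streaks raise exploration; winning streaks favor exploitation."""

    def __init__(self, n_actions=3, alpha=0.1, base_epsilon=0.2):
        self.q = [0.0] * n_actions      # action-value estimates
        self.alpha = alpha              # learning rate
        self.base_epsilon = base_epsilon
        self.mood = 0.5                 # running estimate of win probability

    def act(self):
        epsilon = self.base_epsilon * (1.5 - self.mood)  # low mood -> explore
        if random.random() < epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])
        self.mood += 0.1 * ((1.0 if reward > 0 else 0.0) - self.mood)

agent = MoodModulatedAgent()
for _ in range(100):
    a = agent.act()
    agent.update(a, reward=random.choice([1, -1]))  # stand-in for game outcomes
```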
The challenges tackled in this project are of foundational importance. Only by knowing whether and how the very basic input to our system, such as perception, changes as a function of the interactive context will it be possible to understand interaction and social cognition itself. Additionally, the findings can be generalized to perception mediated by different modalities, such as touch or audition, as the model of perceptual inference adopted in wHiSPER has been shown to apply also to other senses. wHiSPER will also provide a novel understanding of human-robot interaction, with an innovative way of measuring how the interaction with different agents affects individual perception.
This project will lay the foundations to make autonomous technology adaptive to each user’s perceptual needs. Indeed, the findings of wHiSPER will allow technology to become adaptive to the user at a completely new level. Novel robots will be able to predict potential distortions in the perception of their partners and either adapt to them – e.g. by tailoring their actions to complement those of the human in time and space, as in a passage – or correct them, e.g. by systematically warning the user of the inaccuracy and correcting the misperception.