investigating Human Shared PErception with Robots

Periodic Reporting for period 1 - wHiSPER (investigating Human Shared PErception with Robots)

Reporting period: 2019-03-01 to 2020-08-31

Perception is a complex process in which prior knowledge exerts a fundamental influence over what we see. Incorporating previous experience, or priors, into the current percept helps the brain cope with the uncertainty resulting from sensory and neural noise and ambiguity. As a consequence, the very same physical stimulus can be perceived differently by two observers holding different internal priors, which can form rapidly through short sensory experience; this phenomenon might be exacerbated in the elderly, who suffer from reduced sensory acuity. Although interest has recently grown in the effect of recent experience on visual perception, nothing is known about how perceptual inference shapes visual perception during social interaction. Yet this is a crucial question, as during interaction the brain faces two potentially conflicting goals: maximizing individual perceptual stability by relying on internal priors, or maximizing perceptual alignment with the partner to facilitate coordination, by limiting that reliance. Perceptual alignment is at the basis of all everyday joint activities, such as passing an object, and it becomes even more important in contexts with high coordination demands, such as sports, dance, and music, where temporal and spatial precision in perceiving external stimuli and the partner's actions is fundamental for task achievement.
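The Bayesian-observer account behind this reasoning can be made concrete with a small sketch. Assuming Gaussian priors and sensory likelihoods (a standard textbook simplification, not the project's actual model), the optimal percept is a precision-weighted average of prior and observation, so two observers with different priors report different percepts of the very same stimulus:

```python
import math

def bayes_estimate(prior_mean, prior_sd, obs, obs_sd):
    """Fuse a Gaussian prior with a noisy observation.

    Returns the posterior mean and standard deviation, i.e. the
    precision-weighted average that a Bayesian observer would report.
    """
    wp = 1.0 / prior_sd ** 2   # precision of the prior
    wo = 1.0 / obs_sd ** 2     # precision of the observation
    post_mean = (wp * prior_mean + wo * obs) / (wp + wo)
    post_sd = math.sqrt(1.0 / (wp + wo))
    return post_mean, post_sd

# Two observers receive the same 500 ms stimulus but hold different
# priors: each percept is pulled toward that observer's prior mean
# (the classic "central tendency" effect).
percept_a, _ = bayes_estimate(prior_mean=400, prior_sd=50, obs=500, obs_sd=50)
percept_b, _ = bayes_estimate(prior_mean=600, prior_sd=50, obs=500, obs_sd=50)
```

With equal prior and sensory precision, each percept lands halfway between the observer's prior mean and the physical stimulus, which is why reduced sensory acuity (a wider likelihood) increases the pull of the prior.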

wHiSPER studies, for the first time, how basic mechanisms of perceptual inference are modified during interaction, moving the investigation from an individual, passive approach to an interactive, shared context in which two agents dynamically influence each other. To allow for scrupulous and systematic control, wHiSPER uses a humanoid robot as the interactive agent, serving as an investigation probe whose perceptual and motor decisions are fully under the experimenter's control.
One of the crucial limits to the study of perception during interaction has so far been the impossibility of maintaining rigorous control over the stimulation while allowing for direct involvement in a dynamic exchange with another agent. The robotic platform makes it possible to port the stimuli used in perceptual investigations to the domain of online collaboration, bringing controllability and repeatability to an embodied, interactive context. The robot acts either as stimulus presenter or as co-actor in experiments with increasing levels of interactivity, complementing the more traditional screen-based investigations adopted for baseline experiments.
In summary, wHiSPER exploits rigorous psychophysical methods, Bayesian modeling, and humanoid technologies to provide the first comprehensive account of how the visual perception of spatial and temporal properties of the world changes during interaction with both humans and robots.
The objective of the wHiSPER project is to understand how perception changes during interaction and to enable robots to establish a shared perception with their human partners. In this reporting period we have pursued these goals by conducting experiments to determine how being involved in an interactive context influences perceptual inference (Objective 1). As a result, we demonstrated for the first time that visual perception of space changes during interaction with a robotic agent, provided the agent is perceived as a social being. These results were presented as a full paper at the main conference in the field of human-robot interaction (HRI 2020, with published ACM proceedings), receiving an honourable mention [1]. A further study investigated whether the belief that stimuli are generated by another agent modifies the way they are processed and used in a learning task. The results, published at the IEEE International Conference on Robot and Human Interactive Communication, showed an increase in task engagement due to the presence of a humanoid robot acting as stimulus generator, and a trend toward a change in stimulus processing [2]. We have also investigated action perception with different agents, both humans and robots (Objective 2). In particular, we assessed whether the style of an action, i.e. the way a movement is performed (e.g. gently or rudely), has an impact on observers' perception of, and response to, the action. The results, presented at a workshop of the IEEE International Conference on Robotics and Automation and as a Springer book chapter, indicate that our motor models act as priors also for action style, influencing the processing of others' motions [3]. Lastly, we have started exploring novel paradigms to enable robot adaptation to the human partner, a fundamental prerequisite of Objective 4.
Different learning and adaptive approaches have been proposed, tested in virtual settings, and presented at international conferences and workshops (e.g. [4]).
• [1] Mazzola C., Aroyo A. M., Rea F., Sciutti A., Interacting with a Social Robot Affects Visual Perception of Space. HRI 2020, ACM/IEEE International Conference on Human-Robot Interaction.
• [2] Belgiovine G., Rea F., Zenzeri J., Sciutti A., A Humanoid Social Agent Embodying Physical Assistance Enhances Motor Training Experience. RO-MAN 2020, IEEE International Conference on Robot and Human Interactive Communication.
• [3] Vannucci F., Di Cesare G., Rea F., Sandini G., Sciutti A., Expressive Handovers: Neural and Behavioral Effects of Different Attitudes in Humanoid Actions. IEEE International Conference on Robotics and Automation, Workshop on Human-Robot Handovers.
• [4] Tanevska A., Rea F., Sandini G., Canamero L., Sciutti A., Eager to Learn vs. Quick to Complain? How a Socially Adaptive Robot Architecture Performs with Different Robot Personalities. IEEE International Conference on Systems, Man, and Cybernetics.
This project is unique and innovative in that it develops a radically new approach to the study of perception, questioning the assumption that interaction involves the same perceptual processes that support individual perception. This breakthrough is possible thanks to a novel methodology, based on the use of humanoid robots as interactive stimulators, which guarantees full control of the dynamic evolution of the interaction. For the first time, online perception is investigated with robotic stimuli whose behavior and perception depend on the current behavior of the participants, also emulating human perceptual inference. The project will deliver a new quantitative methodology for investigating visual perception during interaction and will pave the way to designing a new generation of adaptive technologies.
The challenges tackled in this project are of foundational importance. Only by knowing whether and how the most basic input to our cognitive system, perception, changes as a function of the interactive context will it be possible to understand interaction and social cognition itself. Additionally, the findings can be generalized to perception mediated by other modalities, such as touch or audition, as the model of perceptual inference adopted in wHiSPER has been shown to apply to different senses as well. wHiSPER will also provide a novel understanding of human-robot interaction, with an innovative way of measuring how interaction with different agents affects individual perception.
This project will lay the foundations for making autonomous technology adaptive to each user's perceptual needs. Indeed, the findings of wHiSPER will allow technology to adapt to the user at a completely new level. Novel robots will be able to predict potential distortions in their partners' perception and either adapt to them (i.e. by tailoring their actions to complement those of the human in time and space, e.g. when passing an object) or correct them, e.g. by systematically warning the partner of the inaccuracy and correcting the misperception.