CORDIS - EU research results

Moments in Time in Immersive Virtual Environments

Periodic Reporting for period 4 - MoTIVE (Moments in Time in Immersive Virtual Environments)

Reporting period: 2022-07-01 to 2023-12-31

The past can be revisited through photographs, videos and re-enactment from memory, but virtual reality (VR) affords a more immersive experience. This project aimed to deepen our understanding of the factors surrounding the concept of ‘presence’ in virtual reality. The theory behind presence was significantly expanded through this project, backed by experimental data.

To explore this, the project reconstructed a Dire Straits concert ('Sultans of Swing': https://youtu.be/bOSWaKT88j4) and Massiel's winning Eurovision performance (https://youtu.be/dGcmRNYFtt8). This required: creating realistic virtual bodies of the performers from photos and videos, building responsive virtual audiences, and overcoming problems of using VR such as simulator sickness.

Moreover, we could go beyond reality where participants could be onstage as part of the band. The project explored how the sense of owning the virtual body and its actions (agency) would influence behaviour. Additionally, 'VR United' software was developed to enable multiple users to share a VR space and engage in social interaction even across different continents.

VR's increasing consumer availability underlines the need to understand its various implications, including ethical ones, an issue considered during the project that led to a new spin-off project funded by the Spanish Government. VR United has commercial potential and is, for example, now being explored as a tool to combat loneliness among older adults. Most notably, the project aims to help older people relive their past in reconstructed events using an enhanced method of ‘reminiscence therapy’. Initial results, based on pilot studies, suggest these VR experiences can alleviate some negative aspects of ageing.
Two Dire Straits concert performances were recreated to study audience responses. Surprisingly, the surrounding virtual audience, rather than the band, was the most impactful factor. Sentiment analysis of participant essays revealed that some participants, in particular women, became concerned that virtual men in the crowd around them would approach them. Others became concerned because they thought that members of the virtual crowd were ‘staring’ at them. Thus, although there was high presence, there was in some cases low sentiment. This would never have been discovered by traditional methods that rely on questionnaires.

The project developed a novel method (Adaptive Multimodal Matching – A3M) to find optimal VR settings for presence in real-time based on participant choices, rather than post-experience questionnaires. A3M, a Reinforcement Learning (AI) model, successfully determined optimal configurations for presence. An experiment in conjunction with Facebook was used to determine optimal configurations for a virtual TED talk.
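The A3M method itself is a Reinforcement Learning model whose internals are not detailed in this report. Purely as an illustration of the underlying idea, the sketch below uses a simple epsilon-greedy bandit that learns a preferred configuration from binary participant choices; the configuration names and all parameters are invented, not the project's actual implementation.

```python
import random

random.seed(0)  # reproducible demo

# Invented configuration names for illustration only.
CONFIGS = ["low_fidelity_body", "high_fidelity_body", "spatial_audio", "full_audience"]

class ConfigBandit:
    """Epsilon-greedy bandit: explore occasionally, otherwise exploit the
    configuration with the highest running preference estimate."""

    def __init__(self, configs, epsilon=0.1):
        self.configs = list(configs)
        self.epsilon = epsilon
        self.counts = {c: 0 for c in self.configs}   # times each config was tried
        self.values = {c: 0.0 for c in self.configs} # running mean choice score

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.configs)       # explore
        return max(self.configs, key=lambda c: self.values[c])  # exploit

    def update(self, config, chosen):
        # 'chosen' is 1 if the participant preferred this config, else 0.
        self.counts[config] += 1
        self.values[config] += (chosen - self.values[config]) / self.counts[config]

# Simulated session: this hypothetical participant always prefers 'full_audience'.
bandit = ConfigBandit(CONFIGS)
for _ in range(200):
    c = bandit.select()
    bandit.update(c, 1 if c == "full_audience" else 0)
print(max(bandit.configs, key=lambda c: bandit.values[c]))
```

In this toy setting the bandit converges on the consistently preferred configuration; the real A3M model operates on richer multimodal choices made during the experience itself.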

VR reminiscence therapy was explored, with older adults reliving a salient Eurovision performance from their youth. While analysis is pending, pilot results suggest these immersive experiences can alleviate negative aspects of ageing.

The project also demonstrated how VR can be a potent tool to address biases and encourage pro-social behaviour. We extended earlier findings that embodying people in virtual bodies of a different race can reduce their implicit biases against people of that race. Here we found an increase in implicit bias when the embodiment occurred in a stressful scenario, challenging the notion of VR as an automatic empathy machine. Continuing this line, our ‘Golden Rule Embodiment Paradigm’, in which participants witness their own negative actions or acquiescent inaction from the embodied perspective of a victim, was successful in reducing bias and encouraging helpful interventions in a VR scenario in a United States police department, a project in collaboration with Google Jigsaw (https://youtu.be/tb9QAUkZWic). This method is now widely used by our spin-off company kiin.tech to address workplace discrimination.

Several technical advances were made: a system to create realistic virtual bodies from photos, a solution for natural walking movement in VR despite space constraints, and the QuickVR programming library. The VR United software allows remote users to interact in VR, each represented by a virtual body that looks like themselves. It facilitated journalistic interviews (https://www.youtube.com/watch?v=1dACicAYdYg https://youtu.be/njVlI8409fs) and a conference panel discussion including a virtual Albert Einstein driven by ChatGPT (https://youtu.be/qkN1F3QAhp8).

We went beyond the original project specification by studying ‘change blindness’, a phenomenon whereby people can be oblivious to major changes in their environment. We found that in VR people were surprisingly oblivious to gradual changes in the appearance of themselves and others (https://youtu.be/XPkUIjBKqUU). This concept was applied to a single-session therapy to help people overcome their fear of public speaking, compared with a traditional five-session exposure therapy.

We developed a new technique that allows participants to walk through a VR scenario even though the physical space in which they are located is much smaller than the virtual space in which they can walk.
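The report does not detail the technique, but a common family of approaches in the VR literature is ‘redirected walking’, which applies small translation or rotation gains so that a limited physical path maps onto a larger virtual one. The minimal sketch below illustrates only a translation gain, with an invented gain value; it is not the project's method.

```python
# Hedged illustration of a translation gain for redirected walking.
# The gain value (1.5) is an arbitrary choice for demonstration.
def virtual_step(physical_pos, prev_physical_pos, virtual_pos, gain=1.5):
    """Map a physical step to a virtual step scaled by a translation gain."""
    dx = physical_pos[0] - prev_physical_pos[0]
    dy = physical_pos[1] - prev_physical_pos[1]
    return (virtual_pos[0] + gain * dx, virtual_pos[1] + gain * dy)

# A 5 m physical walk covers 7.5 m virtually with gain 1.5.
pos = (0.0, 0.0)
for i in range(5):
    pos = virtual_step((i + 1, 0.0), (i, 0.0), pos)
print(pos)  # (7.5, 0.0)
```

In practice such gains must stay below perceptual detection thresholds, which is what makes compressing a large virtual space into a small room non-trivial.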
Finally, we found that gradually brightening the environment from dark to light allows faster movement through it without causing simulator sickness. This is very important because it offers a way forward for people to use VR without feeling the symptoms of simulator sickness.
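As a rough sketch of the idea, the snippet below ramps scene brightness linearly from dark to full over a fixed interval; the ramp duration and linear profile are assumptions for illustration, not the project's actual parameters.

```python
# Illustrative dark-to-light ramp; ramp_seconds is an invented parameter.
def brightness_at(t, ramp_seconds=30.0):
    """Linear brightness in [0, 1] as a function of seconds since scene start."""
    f = min(max(t / ramp_seconds, 0.0), 1.0)  # clamp progress to [0, 1]
    return f

print(brightness_at(0))   # 0.0 (fully dark at the start)
print(brightness_at(15))  # 0.5 (halfway through the ramp)
print(brightness_at(60))  # 1.0 (full brightness after the ramp)
```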
Sentiment analysis has provided a completely new approach to understanding how people respond to VR scenarios. Our publications on this topic (2021, 2023) have gained significant traction, demonstrating the method's value.
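The project's published analyses used far richer NLP than can be shown here; the sketch below illustrates only the basic principle with a tiny, invented word lexicon and simple tokenisation, and is not the method from the papers.

```python
import re

# Invented mini-lexicon for illustration only.
POSITIVE = {"enjoyed", "amazing", "present", "immersive", "fun"}
NEGATIVE = {"concerned", "staring", "uncomfortable", "anxious", "afraid"}

def sentiment_score(essay: str) -> float:
    """Return a score in [-1, 1]; negative values flag negative sentiment."""
    words = re.findall(r"[a-z']+", essay.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

essay = ("The concert felt immersive and I enjoyed the band, "
         "but I was concerned that people in the crowd were staring at me.")
print(sentiment_score(essay))  # 2 positive vs 2 negative words -> 0.0
```

Even this toy example shows how an essay can mix high-presence language with negative feelings about the virtual crowd, which is exactly the dissociation the project observed.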
We introduced the AI-based version of our previous 3M technique for optimising VR experiences, leading to further use of this method (at least 7 external papers) and continued use in our lab (6 studies). This approach offers a more objective way to assess factors that impact presence and other responses, without the need for questionnaires.

We found a strong inverse relationship between the entropy (level of disorder) of eye movements and the sense of presence in VR. This suggests a new ‘familiarity’ dimension, where internal world models play a significant role in how participants experience VR. The connection of presence with eye movement behaviour demonstrates that presence is a genuine psychophysiological phenomenon and not just a conceptual construction of researchers.
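To make the entropy measure concrete, the sketch below computes the Shannon entropy of gaze fixations binned into a coarse spatial grid; the grid size and the fixation format are assumptions for illustration, not the project's analysis pipeline. Lower entropy corresponds to more ordered gaze, which the project found to accompany a higher sense of presence.

```python
import math
from collections import Counter

def gaze_entropy(fixations, grid=4):
    """Shannon entropy (bits) of fixations binned into a grid x grid layout.
    fixations: iterable of (x, y) pairs normalised to [0, 1)."""
    cells = Counter((int(x * grid), int(y * grid)) for x, y in fixations)
    n = sum(cells.values())
    probs = [c / n for c in cells.values()]
    # max(0.0, ...) clamps the -0.0 artifact of the single-cell case.
    return max(0.0, -sum(p * math.log2(p) for p in probs))

# Ordered gaze: all fixations fall in one cell -> entropy 0.
print(gaze_entropy([(0.1, 0.1)] * 10))  # 0.0
# Scattered gaze: four cells visited equally often -> 2 bits.
print(gaze_entropy([(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]))  # 2.0
```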

The many uses of VR United and its components (look-alike self-representations) provide an important step forward in the exploitation of VR in the developing idea of the metaverse.

Our work on change blindness in VR demonstrated how people can miss dramatic changes, even to their own bodies and those of others, and even when directly observed. This has far-reaching potential applications that we are pursuing.

The 'Golden Rule Embodiment Paradigm' is a novel tool for promoting pro-social behaviour. Its success shows a potential to shape attitudes and actions.

We have discovered a new way to mitigate simulator sickness by manipulating environmental brightness, improving accessibility and comfort for VR users.

An extended version of this report can be found at https://www.motiverc.org/extended-final-report/
The performance begins
A person embodied in a virtual body for the cognitive embodiment experiment
The agent-based model audience attending a concert
On stage in a virtual Dire Straits performance
From behind the stage