
An end-to-end system for the production and delivery of photorealistic social immersive virtual reality experiences

Periodic Reporting for period 2 - VRTogether (An end-to-end system for the production and delivery of photorealistic social immersive virtual reality experiences)

Reporting period: 2018-11-01 to 2020-12-31

The widespread adoption of smartphones, tablets, laptops and, more recently, head-mounted displays has transformed the classical television or cinema consumer experience, which was essentially social, into an individual one. Broadband streaming and new distribution models have also changed consumption habits: content is increasingly consumed on demand, and live events - music, sports, live TV shows - are one of the rare cases where people still gather together to watch TV. The arrival of head-mounted displays (HMDs) on the consumer market is likely to introduce further isolation: people wearing Virtual Reality (VR) goggles can feel as if they are in a different place, and neither see nor hear their physical surroundings. However, isolation is not a necessary consequence of new media formats: people can feel as if they are in another place together with others.

The grand promise of Virtual Reality is that of a medium that makes you feel present in a place where you are not, where a plot unfolds in which you can take part, navigate freely, and interact openly with any element, including virtual characters. In this project, the aim is to radically improve the experience by innovating in how media formats are used (i.e. how audio, video and graphics are captured, delivered and rendered in users’ homes), demonstrating a significant improvement in the feeling of being there together and in the photorealistic quality of the content.

VR-Together has produced two platforms and a set of tools to offer photorealistic immersive Virtual Reality (VR) content that can be experienced together with others while apart. The main objective of the project has been to research and develop advanced VR social experiences through the orchestration of innovative media formats. The production and delivery of such experiences, and the underlying technology that enables them, have been demonstrated over the three years and three months during which the project has been active, addressing five specific objectives:

OBJ1. Develop and integrate new media formats that deliver high quality photo-realistic content and create a strong feeling of co-presence in coherently integrated experiences.
OBJ2. Adapt the existing production pipeline to capture and encode multiple media formats and integrate them with state-of-the-art post-production tools.
OBJ3. Re-Design the distribution chain so such innovative content format can be orchestrated and delivered in a scalable manner.
OBJ4. Develop appropriate Quality of Experience (QoE) metrics and evaluation methods to quantify the quality of these new social VR experiences.
OBJ5. Maximize the impact VR-Together can have on content creators, producers, distributors, tooling companies, service providers and the general audience.
The feeling of togetherness in VR environments has become one of the main challenges for the video technology research community. VR-Together has succeeded in tackling this challenge, creating one of the very first VR experiences in which holoportation is seamlessly integrated into remote multi-user, multi-format VR experiences.

Thanks to the state-of-the-art technology developed within VR-Together for the real-time capture, compression and transmission of volumetric video, participants on our platforms can feel as if remotely located friends and family were actually sharing the same physical environment.

Over three years, we have explored and developed real-time volumetric capture systems; real-time, low-latency transmission pipelines for such data volumes; two platforms, for Unity and web environments, that can orchestrate multiple users in multiple interactive sessions under different conditions and using heterogeneous user representation formats; and open datasets that include valuable capture data and 3D environments, among other things.
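To illustrate what a low-latency volumetric transmission pipeline involves, the sketch below packetizes one captured point-cloud frame with a sequence number and capture timestamp before applying fast compression. The frame layout, function names and use of zlib are illustrative assumptions, not the project's actual wire format.

```python
import struct
import time
import zlib

# Hypothetical frame layout for a low-latency volumetric stream:
# a fixed header (sequence number, capture timestamp, payload size)
# followed by a compressed point-cloud payload.
HEADER = struct.Struct("!IdI")  # seq (uint32), timestamp (double), size (uint32)

def packetize(seq: int, points: bytes) -> bytes:
    """Compress one captured point-cloud frame and prepend a header."""
    payload = zlib.compress(points, level=1)  # fastest setting: favor latency
    return HEADER.pack(seq, time.time(), len(payload)) + payload

def depacketize(packet: bytes) -> tuple[int, float, bytes]:
    """Recover the sequence number, capture time and raw point data."""
    seq, ts, size = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + size]
    return seq, ts, zlib.decompress(payload)
```

A receiver can then drop any frame whose sequence number is older than the last one rendered, trading completeness for latency, which is the usual choice for live volumetric video.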

Hundreds of end users have been involved in the project through three pilots in which specific use-case scenarios were evaluated. Professionals and stakeholders have also contributed to the project through dedicated industry events and workshops, in which requirements, results and future steps were presented and gathered.

The VR-Together value proposition is real-time, realistic interaction in multi-user environments, together with all the necessary tools to make a deeper experience of immersion and togetherness possible.
VR-Together has gone beyond the state of the art in Social VR, providing novel research and innovation outputs that overcome existing challenges in both technology and user experience. In particular, the project has contributed two variants of full-fledged Social VR platforms, built on off-the-shelf hardware and standards-compliant technology: a lightweight web-based platform, and a native platform that supports extra interactivity features and different end-user representation formats (i.e. Time-Varying Meshes, Point Clouds, 3D avatars, 2D windowed ingests from webcam video streams, and ghost users with and without audio communication). These platforms are made up of a set of innovative and modular technological components, such as:

· Volumetric capture systems, based on single and multiple (RGB-D) sensors.

· End-to-end low-latency pipelines for the integration of live 2D and volumetric streams, including encoding and distribution solutions.

· Orchestration components for session management.

· Multi-Point Control Unit (MCU) components for an optimized in-cloud processing of Point Clouds and RGB+D streams.

· Web and native media clients supporting the integration of heterogeneous media formats for end-user representation and the virtual scenario, and a set of interactivity features (basic voice control, teleporting, interaction with objects, event handling…).
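The orchestration components above manage which users are in which session and which representation format each one streams. The minimal sketch below models that bookkeeping; the class, format names and methods are assumptions for illustration, not the project's actual orchestration API.

```python
from dataclasses import dataclass, field

# Illustrative set of end-user representation formats, loosely following
# the ones listed above (point clouds, meshes, avatars, webcam windows,
# ghost users).
FORMATS = {"pointcloud", "mesh", "avatar", "webcam", "ghost"}

@dataclass
class Session:
    """One interactive session tracked by the orchestrator."""
    session_id: str
    users: dict = field(default_factory=dict)  # user_id -> representation format

    def join(self, user_id: str, fmt: str) -> None:
        if fmt not in FORMATS:
            raise ValueError(f"unsupported representation: {fmt}")
        self.users[user_id] = fmt

    def leave(self, user_id: str) -> None:
        self.users.pop(user_id, None)

    def peers_of(self, user_id: str) -> dict:
        """Streams a client must subscribe to: every user but itself."""
        return {u: f for u, f in self.users.items() if u != user_id}
```

In a real deployment the orchestrator would also negotiate transports and hand point-cloud streams to an MCU for in-cloud processing; this sketch only captures the session-membership logic.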

Beyond the technological components, the project has contributed professionally produced content scenarios and assets, as well as new evaluation metrics. This has resulted in open-source and licensed software, open-science datasets, evaluation methodologies and resources, and recommendations on how to provide Social VR experiences. These outputs are reflected in up to 40 publications in high-impact conferences (e.g. ACM CHI, ACM MM, IEEE Virtual Reality) and journals (e.g. IEEE Access, Virtual Reality…), and in standardization contributions (e.g. MPEG, ITU, W3C…).

Overall, unlike most existing Social VR platforms, which rely on synthetic avatars or require more expensive and complex setups for realistic representations (such as Microsoft Holoportation), VR-Together has pioneered the enabling and demonstration of distributed multi-party Social VR experiences with realistic end-user representations, including self-representations, in a cost-effective and modular manner. The project has thus significantly paved the way towards the adoption of this promising medium in a set of verticals (entertainment, education, corporate meetings, health…) that will propel distant human communication and interaction to the next level.