
COoperative Real-Time EXperiences with EXtended reality

Periodic Reporting for period 1 - CORTEX2 (COoperative Real-Time EXperiences with EXtended reality)

Reporting period: 2022-09-01 to 2024-02-29

The mission of CORTEX² “COoperative Real-Time EXperiences with EXtended reality” is to democratize access to the remote collaboration offered by next-generation XR experiences across a wide range of industries and SMEs.
Telepresence has gained popularity in recent years with the growing interest in remote work and home office, and the use of teleconferencing tools has become mainstream in many companies. However, the new digital era offers more than the mere exchange of audio and video streams for collaboration. We are currently witnessing the emergence of extended reality (XR) in both its Augmented Reality (AR) and Virtual Reality (VR) variants, and concepts such as digital twins of factories and production sites have gained traction. However, their practical implementation necessitates the digitalisation, calibration, storage and preparation of existing assets, putting these tools out of reach for many small and medium-sized enterprises.
In the CORTEX² project, we are laying the basis for future extended collaborative telepresence that allows remote cooperation in virtually all industrial and business sectors, for productive work as well as for education and training. Our idea merges classical video-conferencing with extended reality: real assets such as objects, machines or entire environments can be digitalized and shared with distant users for teamwork in a continuous real-virtual space.
In essence, the CORTEX² framework allows the creation of shared working experiences between multiple distant users in different operating modes. In the Virtual Reality mode, participants can create virtual meeting rooms in which each user is represented by a virtual avatar. Participants can also appear as video-based holograms in the virtual rooms, with an option to anonymise their appearance using an AI-based video appearance generator while keeping their original facial expressions. Participants can further exchange documents, 3D objects and other assets, and are accompanied by an AI-powered meeting assistant with extended capabilities such as natural speech interaction, meeting summarization and translation.
In the Augmented Reality mode, participants can share their immediate surroundings through a simplified digitalization process that produces a textured 3D model of their environment. Distant users use this model to identify, select and point to specific areas, which are then highlighted in the original user's view using Augmented Reality techniques (e.g. virtual arrows and highlights).
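The core geometric step behind this kind of remote pointing can be illustrated with a minimal sketch (this is not the CORTEX2 API; all names are hypothetical): a 3D point selected by the distant user on the shared model, once transformed into the local user's camera frame, is projected with a standard pinhole model to the pixel where the AR highlight is drawn.

```python
# Illustrative sketch (hypothetical, not the CORTEX2 codebase): projecting a
# remotely selected 3D point into the local user's camera view so an AR
# highlight (e.g. a virtual arrow) can be rendered at the right pixel.
from dataclasses import dataclass

@dataclass
class Camera:
    fx: float  # focal length in pixels, x axis
    fy: float  # focal length in pixels, y axis
    cx: float  # principal point, x
    cy: float  # principal point, y

def project(point_cam, cam):
    """Pinhole projection of a camera-space 3D point to pixel coordinates.

    Returns None when the point lies behind the camera, in which case
    no highlight should be drawn in the current frame.
    """
    x, y, z = point_cam
    if z <= 0:
        return None
    u = cam.fx * x / z + cam.cx
    v = cam.fy * y / z + cam.cy
    return (u, v)

cam = Camera(fx=800, fy=800, cx=640, cy=360)
# A point the distant user selected on the shared 3D model, assumed already
# transformed into the local camera's coordinate frame.
pixel = project((0.5, -0.25, 2.0), cam)
print(pixel)  # pixel location where the AR annotation is overlaid
```

In a real system the selected point would additionally be re-projected every frame as the local user moves, using the pose estimated by the tracking pipeline.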
To make the experience more immersive, rich contextual IoT information is integrated into the video streams and rendered as AR annotations on top of the displayed objects and persons. Data gathered from a multitude of heterogeneous IoT devices is ingested, aggregated, processed and prepared, ultimately generating layers of insightful information related to smart assets from various vertical domains. For this purpose, a versatile IoT platform is being developed that collects data from connected devices and sensors and brings it into a unified, IoT-protocol-agnostic view, allowing the seamless management of IoT information and its custom “shaping” into layers of aggregated IoT information.
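The ingestion-and-shaping idea can be sketched as follows. This is a hypothetical illustration, not the CORTEX2 platform: the protocol names, payload fields and aggregation rule (a simple mean per asset and metric) are all assumptions chosen to show how heterogeneous payloads can be normalized into one schema and then grouped into information layers.

```python
# Hypothetical sketch of protocol-agnostic IoT ingestion: normalize payloads
# from different protocols into one unified reading schema, then "shape" the
# readings into aggregated per-asset layers. Field names are illustrative.
from collections import defaultdict
from statistics import mean

def normalize(raw):
    """Map a heterogeneous device payload onto one unified reading schema."""
    if raw.get("proto") == "mqtt":
        return {"asset": raw["topic"].split("/")[-1],
                "metric": raw["name"], "value": raw["val"]}
    if raw.get("proto") == "opcua":
        return {"asset": raw["node"], "metric": raw["browse_name"],
                "value": raw["value"]}
    raise ValueError("unknown protocol")

def aggregate(readings):
    """Group unified readings into per-asset/per-metric layers (mean here)."""
    layers = defaultdict(list)
    for r in readings:
        layers[(r["asset"], r["metric"])].append(r["value"])
    return {key: mean(values) for key, values in layers.items()}

raw = [
    {"proto": "mqtt", "topic": "plant/line1/press", "name": "temp", "val": 71.0},
    {"proto": "opcua", "node": "press", "browse_name": "temp", "value": 73.0},
]
print(aggregate(normalize(r) for r in raw))  # {('press', 'temp'): 72.0}
```

The resulting layers are what would then be rendered as AR annotations on top of the corresponding assets in the video stream.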
In addition to these activities, CORTEX2 will invest a total of EUR 4 million in two open calls aimed at recruiting tech startups/SMEs to co-develop CORTEX2; engaging new use cases from different domains to demonstrate CORTEX2 replication through specific integration paths; and assessing and validating the social impact associated with XR technology adoption in internal and external use cases.
CORTEX2 is a framework for collaborative telepresence that provides an architecture for building collaborative, extended applications. We have produced an abstract model for AR/VR collaborative environments and shared XR spaces, as well as for AR/VR behavior and intention. This abstract model has been successfully instantiated in three use cases: Industrial Remote Cooperation, VR Remote Training and VR Business Meeting, each of which demonstrates the use of the CORTEX2 framework by following our architecture. The platform is based on the videoconferencing software Rainbow from the partner Alcatel-Lucent Enterprise and is now ready for third-party partners to use and co-develop.

Within work package 3, the CORTEX2 consortium has developed a number of extended services to use in conferences. These include a server-based 3D reconstruction service from RGB images, a 3D reconstruction pipeline for RGBD video streams based on Simultaneous Localization and Mapping (SLAM), a real-time face reenactment method for creating alternative appearances in videos, a hand gesture recognition module, a virtual assistant for business meetings with multiple features (question answering, summarization of documents), and IoT integration. In terms of ethical, legal and social aspects, we have analysed the use cases, provided insights for the formulation of concrete legal and ethical requirements, and conducted a validation study on the use of virtual avatars.
The work in CORTEX2 led to the development of three demonstrators corresponding to the use cases. The first demonstrator shows CORTEX2 in use for remote technical support using AR. The second allows a single trainer to supervise multiple trainees in VR, with interaction between the trainees. The third showcases a virtual business meeting with participation through different means (avatar, video), including assistance services such as question answering, summarization and transcription of the meeting.
The core idea of CORTEX2 goes beyond state-of-the-art videoconferencing systems by providing extended functionalities.
Unlike classical videoconferencing systems, we transmit not only video and audio streams but also 3D data between the partners, allowing digitalized reality to be shared between the participants.
This in turn allows for the application of real-time augmented reality between the participants where one user can augment the view of other users, and the sharing of virtual elements (objects, text, other contents) in 3D.
In addition, our integration of IoT objects allows for seamless use of IoT devices within AR and VR spaces during the cooperation.

Our algorithm for 3D reconstruction from an RGBD camera, called ActiveSLAM, outperforms the state of the art in various benchmarks, including the precision of 3D reconstruction (publication submitted to ECCV 2024).
Our algorithm for face reenactment outperforms the state of the art in terms of realism and 3D effects (publication submitted to BMVC 2024).
Remote maintenance with CORTEX2
Remote technical support with CORTEX2