Periodic Reporting for period 1 - CORTEX2 (COoperative Real-Time EXperiences with EXtended reality)
Reporting period: 2022-09-01 to 2024-02-29
Telepresence has gained popularity in recent years with increasing interest in remote work and home office, and the use of teleconferencing tools has become mainstream in many companies. However, the new digital era offers more than the exchange of audio and video streams for collaboration. We are currently witnessing the emergence of extended reality (XR) in both its Augmented Reality (AR) and Virtual Reality (VR) variants, and concepts such as digital twins for factories and production sites have gained traction. However, their practical implementation necessitates the digitalisation, calibration, storage and preparation of existing assets, putting these tools out of reach for many small and medium enterprises.
In the CORTEX² project, we are laying the foundations for future extended collaborative telepresence, enabling remote cooperation in virtually all industrial and business sectors, both for productive work and for education and training. Our idea merges classical video-conferencing with extended reality: real assets such as objects, machines or environments can be digitalized and shared with distant users for teamwork in a continuous real-virtual space.
In essence, the CORTEX² framework allows the creation of shared working experiences between multiple distant users in different operating modes. In the Virtual Reality mode, participants can create virtual meeting rooms where each user is represented by a virtual avatar. Participants can also appear as video-based holograms in the virtual rooms, with an option to anonymise their appearance using an AI-based video appearance generator while keeping their original facial expressions. Participants can exchange documents, 3D objects and other assets, and are accompanied by an AI-powered meeting assistant with extended capabilities such as natural speech interaction, meeting summarization and translation.
In the Augmented Reality mode, participants can share their immediate surroundings through a simplified digitalization process, which results in a textured 3D model of their environment. Distant users use this model to identify, select and point to specific areas, which are then highlighted in the original user's view using Augmented Reality techniques (virtual arrows, virtual highlights).
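To make this mechanism concrete, the sketch below shows how a point selected by a remote user on the shared 3D model could be re-projected into the local user's camera view for highlighting. It is a minimal illustration assuming a standard pinhole camera model; the function and variable names are ours, not the CORTEX2 API.

```python
# Minimal sketch: re-project a remote user's selection on the shared 3D
# model into the local user's AR view. Pinhole-camera assumption; names
# are illustrative, not the actual CORTEX2 interfaces.
import numpy as np

def project_to_view(point_world, pose_world_to_cam, intrinsics):
    """Project a 3D point (world frame) into pixel coordinates."""
    p = pose_world_to_cam @ np.append(point_world, 1.0)  # 4x4 extrinsics
    if p[2] <= 0:  # point is behind the camera: nothing to draw
        return None
    uv = intrinsics @ (p[:3] / p[2])  # pinhole projection
    return uv[:2]

# Example: a remote participant selects a point on the shared mesh;
# the local client anchors a virtual arrow at the projected pixel.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity pose, for illustration only
pixel = project_to_view(np.array([0.2, -0.1, 2.5]), T, K)
print(pixel)  # -> [704. 328.]: draw the AR highlight here
```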
To make the experience more immersive, rich contextual IoT information is integrated into video streams and rendered as AR annotations on top of displayed objects and persons. Data gathered from a multitude of heterogeneous IoT devices is ingested, aggregated, processed and prepared, ultimately generating layers of insightful information related to smart assets from various vertical domains. To this end, a versatile IoT Platform is being developed that collects data from connected devices and sensors and brings them into a unified, IoT-protocol-agnostic view, allowing the seamless management of IoT information and its custom “shaping” into layers of aggregated IoT information.
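The following sketch illustrates the protocol-agnostic normalisation idea described above: readings arriving over different transports (here MQTT and HTTP, as examples) are mapped to one unified record that downstream components can aggregate into AR layers. The record fields, topic scheme and adapter functions are assumptions for illustration, not the platform's actual schema.

```python
# Hedged sketch of protocol-agnostic IoT ingestion: heterogeneous inputs
# are normalised to a single unified record. All names are illustrative.
from dataclasses import dataclass
from typing import Any

@dataclass
class UnifiedReading:
    device_id: str
    metric: str
    value: float
    unit: str
    timestamp: float  # Unix epoch seconds

def from_mqtt(topic: str, payload: dict[str, Any]) -> UnifiedReading:
    # Hypothetical topic scheme: "factory/<device_id>/<metric>"
    _, device_id, metric = topic.split("/")
    return UnifiedReading(device_id, metric,
                          float(payload["v"]), payload.get("u", ""),
                          float(payload["ts"]))

def from_http(body: dict[str, Any]) -> UnifiedReading:
    return UnifiedReading(body["device"], body["sensor"],
                          float(body["value"]), body.get("unit", ""),
                          float(body["time"]))

# Both transports yield the same shape, ready to be grouped into
# per-asset "layers" and rendered as AR annotations.
r1 = from_mqtt("factory/press-01/temperature",
               {"v": 81.4, "u": "C", "ts": 1709200000})
r2 = from_http({"device": "press-01", "sensor": "vibration",
                "value": 0.03, "unit": "g", "time": 1709200001})
print(r1, r2, sep="\n")
```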
In addition to the project activities, CORTEX² will invest a total of 4 million euros in two open calls aimed at recruiting tech startups/SMEs to co-develop CORTEX2; engaging new use cases from different domains to demonstrate CORTEX2 replication through specific integration paths; and assessing and validating the social impact associated with XR technology adoption in internal and external use cases.
The work in CORTEX2 led to the development of three demonstrators corresponding to the use cases. The first demonstrator shows CORTEX2 in use for remote technical support using AR. The second allows a single trainer to supervise multiple trainees in VR, with interaction between the trainees. The third showcases a virtual business meeting with participation through different means (avatar, video) and assistance services such as question answering, summarization and transcription of the meeting.
Unlike classical videoconferencing systems, we transmit not only video and audio streams but also 3D data between the partners, allowing digitalized reality to be shared between the participants.
This in turn allows for real-time augmented reality between the participants, where one user can augment the view of other users, and for the sharing of virtual elements (objects, text, other content) in 3D.
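As a rough illustration of what travels beyond audio and video, the sketch below serializes one shared-scene update (a 3D highlight) as a lightweight message that every peer applies to its local copy of the shared scene. This is an assumed message shape for illustration, not the actual CORTEX2 wire format.

```python
# Illustrative sketch (not the real CORTEX2 protocol): a data channel
# carries serialized 3D scene updates alongside the A/V streams.
import json, time

def make_annotation_msg(sender: str, anchor_xyz: tuple, label: str) -> str:
    """Serialize one shared-scene update (a 3D highlight) as JSON."""
    return json.dumps({
        "type": "scene/annotation",      # hypothetical message type
        "sender": sender,
        "anchor": list(anchor_xyz),      # position in the shared world frame
        "label": label,
        "ts": time.time(),
    })

msg = make_annotation_msg("user-42", (0.2, -0.1, 2.5), "check this valve")
update = json.loads(msg)                 # each peer applies it locally
print(update["type"], update["anchor"])
```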
In addition, our integration of IoT objects allows for the seamless use of IoT devices within AR and VR spaces during cooperation.
Our algorithm for 3D reconstruction from an RGB-D camera, called ActiveSLAM, outperforms the state of the art on various benchmarks, including the precision of 3D reconstruction (publication submitted to ECCV 2024).
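For readers unfamiliar with such benchmarks, the sketch below shows the kind of precision metric commonly used to evaluate 3D reconstruction: for each reconstructed point, the distance to the nearest ground-truth point, with precision defined as the fraction below a threshold. It illustrates the evaluation concept only; the exact protocol of the submitted paper may differ.

```python
# Sketch of a common reconstruction-precision metric: fraction of
# reconstructed points within tau metres of the ground-truth surface.
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_precision(recon: np.ndarray, gt: np.ndarray,
                             tau: float = 0.05) -> float:
    """recon, gt: (N,3) and (M,3) point clouds; tau: threshold in metres."""
    dists, _ = cKDTree(gt).query(recon)   # nearest ground-truth point
    return float(np.mean(dists < tau))

# Synthetic example: a noisy copy of the ground-truth cloud.
rng = np.random.default_rng(0)
gt = rng.uniform(size=(1000, 3))
recon = gt + rng.normal(scale=0.01, size=gt.shape)
print(f"precision@0.05m: {reconstruction_precision(recon, gt):.3f}")
```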
Our algorithm for face reenactment outperforms the state of the art in terms of realism and 3D effects (publication submitted to BMVC 2024).