CORDIS - EU research results

DIgital DYnaMic and respOnsible twinS for XR

Periodic Reporting for period 1 - DIDYMOS-XR (DIgital DYnaMic and respOnsible twinS for XR)

Reporting period: 2023-01-01 to 2024-06-30

DIDYMOS-XR will research and develop robust and scalable methods for 3D scene reconstruction from heterogeneous cameras and sensor data (e.g. lidar), integrating data captured at different times and under different environmental conditions and creating accurate maps of a scene.
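Integrating scans captured at different times requires aligning them into a common coordinate frame. As a toy illustration of this registration step (not the project's actual pipeline), the following numpy sketch implements a minimal brute-force ICP: it alternates nearest-neighbour matching with a least-squares (Kabsch/SVD) rigid fit. All names and the toy data are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Brute-force ICP: alternate nearest-neighbour matching and rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point in cur (O(N*M); fine for toy clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # recover the single composite transform from the original source cloud
    return best_rigid_transform(src, cur)

# toy data: a random cloud and a slightly rotated, shifted copy of it
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, size=(200, 3))
angle = np.deg2rad(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
shifted = cloud @ R_true.T + np.array([0.05, -0.02, 0.03])
R, t = icp(cloud, shifted)
print(np.allclose(cloud @ R.T + t, shifted, atol=1e-3))
```

Real systems use spatial indices (k-d trees) and robust variants of ICP; this sketch only shows the core alternation of matching and alignment.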
The digital transformation and the availability of more diverse and cost-effective methods for 3D capture have led to the creation of digitised representations of parts of our public spaces, machinery and processes, commonly referred to as digital twins. In order to realise advanced virtual reality (VR) and augmented reality (AR) applications for cityscapes and industrial environments, continuously updated digital twins are a crucial requirement. However, today’s digital representations only cover small parts of the real world, and most of them represent the static state at the time of capture.
In cases where dynamic elements of the digital twins are connected to sensors, these connections have to be mostly handcrafted. The vision of DIDYMOS-XR is to enable advanced, more realistic and more dynamic extended reality (XR) applications, enabled by artificial intelligence (AI).
The project addresses two application domains: automated logistics in industrial processes, and city environments.
During the first half of the project, work focused on defining use cases in the application domains, capturing data for algorithmic development, and developing methods needed to implement the defined use cases.
The following use cases were identified at the beginning of the project to be implemented through pilots:
• Generating maps for navigation of autonomous mobile robots in manufacturing environments.
• Providing tourists with the possibility to plan their trip using a VR representation of the city they want to visit.
• AR application to guide city tourists, taking into account current visitor numbers at sights or the weather situation, even enabling tourists to peek into buildings/museums outside of opening hours.
• Using digital twins for city planning, e.g. assessing the effect on traffic when changing road layouts, or the effect of deployment of streetlights.
• Supporting city maintenance by using vehicle-based sensors (the same used to collect information on changes in the environment) to automatically detect issues around the city.
• Updating the geometry of parts of a digital twin based on recent sensor data, as a base application for all other use cases.
Data were captured or generated through a number of approaches (and where needed anonymised):
• A vehicle equipped with synchronised cameras, lidar and GPS
• Static cameras also providing depth information
• Autonomous mobile robots
• Software simulating vehicle-based sensors (with the advantage of also providing training data for AI methods)
Furthermore, drone scans of the village of Etteln, Germany (represented by FIWARE as one of the user partners), as well as a 3D model of the Voitor, are available for researching methods and developing algorithms.
In the first half of the project, the partners published results beyond the state of the art on the following topics:
• Cooperative saliency-based pothole detection and AR rendering for increased situational awareness
• Increasing the interpretability and performance for self-supervised point cloud transformers (a component of current deep neural networks)
• Creating maps of environments used in robot navigation, combining automatic steps with human intervention through Augmented Reality
• Automatically estimating differences in exposure and compensating for these differences, which increases the reliability of mapping algorithms
• Detection of changes in point clouds for city scenes
• Compression methods for point-clouds taking the importance of different parts of the point cloud into account
• Assessing a user's sense of safety in public using an Augmented Reality application
• Distributed Learning for increasing the resolution of LIDAR scans
• Investigating the influence of ambient noise on the user experience in Virtual Reality
• Impact of spatial auditory navigation on user experience during augmented outdoor navigation tasks
• Investigating the impact of virtual element misalignment in collaborative Augmented Reality experiences
• Approaches to simulate shadows and occlusions in real-time for AR applications
• The impact of social environment and interaction focus on user experience and social acceptability of an Augmented Reality game
• Solutions for volumetric video reconstruction and communications enabling groundbreaking interactive and immersive social Virtual Reality experiences
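One of the results listed above concerns estimating and compensating exposure differences between captures, which matters when fusing images taken under different lighting. The following sketch illustrates the general idea only, under the simplifying assumption of a single global linear gain between two overlapping views; the variable names and toy data are illustrative, not the project's method.

```python
import numpy as np

# Toy "overlapping region" seen by two captures with different exposure:
# image_b is image_a scaled by an unknown gain, plus sensor noise.
rng = np.random.default_rng(1)
image_a = rng.uniform(0.1, 0.9, size=(64, 64))
true_gain = 1.4
image_b = np.clip(true_gain * image_a + rng.normal(0, 0.01, image_a.shape), 0, 2)

# Closed-form least-squares estimate of the gain g minimising ||g*a - b||^2
g = (image_a * image_b).sum() / (image_a * image_a).sum()

# Compensate image_b back to image_a's exposure level
compensated = image_b / g
print(round(g, 2))  # → 1.4
```

A real pipeline would estimate per-channel or spatially varying response curves and handle saturated pixels; the point here is only that exposure differences can be estimated from overlapping content and divided out before mapping.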
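Another listed result is detecting changes in point clouds of city scenes, which is what keeps a digital twin up to date. A toy illustration of one common formulation, assuming a simple voxel-occupancy comparison between an old and a new scan (the functions and data are hypothetical, not the project's algorithm):

```python
import numpy as np

def occupied_voxels(points, voxel=0.5):
    """Set of integer voxel indices occupied by a point cloud."""
    return set(map(tuple, np.floor(points / voxel).astype(int)))

def changed_regions(old_cloud, new_cloud, voxel=0.5):
    """Voxels that appeared or disappeared between two scans."""
    old_v = occupied_voxels(old_cloud, voxel)
    new_v = occupied_voxels(new_cloud, voxel)
    return new_v - old_v, old_v - new_v   # appeared, disappeared

# toy scene: a ground plane of points, plus a "new object" in the second scan
rng = np.random.default_rng(2)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          np.zeros(500)])
obj = np.array([[5.25, 5.25, 1.25]])   # one point above the ground plane
appeared, disappeared = changed_regions(ground, np.vstack([ground, obj]))
print(sorted(appeared), len(disappeared))  # → [(10, 10, 2)] 0
```

Occupancy differencing like this is sensitive to registration errors and scan noise, which is why robust change detection (as researched in the project) is a non-trivial problem.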
Figure captions:
• HSLAM approach for high-accuracy localisation (track and position of vehicle in red)
• One example of a persona developed for the use cases in the project
• Experimental rendering using the SMERF approach
• Point cloud compression: uncompressed (left), compressed to 2.785% (right)
• AR app allowing users to peek into a building
• Virtual Reality application showing a simulation of rain
• User interface for management of issues in a city; on the right, images of issues are displayed
• Sensor-equipped car for collection of development data
• Fusing data from multiple agents (vehicles) to overcome noisy data or fill in occluded areas
• Screenshot of the real-time renderer with option menu
• Object dynamicity estimation: objects are classified by the probability that they move
• Interacting with a voxel grid to create a map for robot navigation
• Object priming: models (cars) are reconstructed using shape priors and inserted in their corresponding positions