
Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling

Periodic Reporting for period 2 - INVICTUS (Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling)

Reporting period: 2021-10-01 to 2022-12-31

INVICTUS aims to deliver innovative authoring tools for the creation of a new generation of high-fidelity avatars and for the integration of these avatars in interactive and non-interactive narratives (movies, games, XR immersive productions). The consortium proposes to develop and exploit the full potential of volumetric motion capture technologies, which capture the appearance and motion of actors simultaneously using RGB cameras to create volumetric avatars, and to rely on these technologies to design narratives using novel collaborative VR authoring tools.
The project focuses on three research axes:
- Improving pipelines to perform motion and appearance captures of characters with a significant increase in fidelity.
- Proposing innovative editing tools for volumetric appearances and motions, such as transferring shapes, performing stylization, or adapting and transferring motions.
- Proposing innovative authoring tools that build on interactive VR technologies to immerse storytellers in virtual representations of their narratives, where they can edit sets, layouts and animated characters.
In terms of outputs, the INVICTUS project opens opportunities in the EU market for more compelling, immersive and personalized visual experiences at the crossroads of film and game entertainment, reducing the cost of content creation, improving the fidelity of characters and boosting creativity. For more information, see the project promotional movie: https://www.youtube.com/watch?v=SDdgm92hPRI.
During the first Reporting Period, the consortium:
- Focused on improving the different stages of the volumetric capture pipeline to enhance both the visual quality of textures and the volumetric mesh reconstruction. Different machine-learning approaches were investigated and successfully applied. Volumetric capture sessions were performed in preparation for the evaluations and made available to the consortium.
- Developed authoring tools to assist designers in the capture and design of volumetric characters, improving visual quality and adapting the mesh to the intended context. Facial style transfer techniques were proposed to enhance creativity. Motion adaptation techniques were also explored to adapt the animations of volumetric characters to scene constraints.
- Delivered the open-source tool VRTist, which provides VR authoring capabilities for the design of linear and non-linear narratives. The tool includes scene layout features, lighting features, camera control features (including framing and focus), animation features, and shot editing.
- Identified two use cases to be implemented, against which the tools and technologies will be evaluated in the second reporting period.

During the second Reporting Period, the consortium:
- Explored new modalities for volumetric reconstruction to improve both the quality and the realism of the outputs by relying more strongly on machine learning techniques, which also improved the robustness of the methods. Test data was acquired in the volumetric capture studio, dedicated specifically to the planned experiments and research. Moreover, new methods for the generation of animatable avatars from volumetric video data were developed, including pre-processing of the data. In particular, generative models were trained on the volumetric video data. Finally, neural implicit representations were tested as an alternative representation (see the illustrative sketch after this list).
- Completed the creation pipeline with additional functionalities (hair, eyes), both for modelling and animation (speech and body motion). An innovative new model for the eye region was developed. Different AI-based algorithms were also designed and tested, in particular a deep-learning method for the synthesis of gestures from speech, a retargeting method for facial expressions, and a semantic facial feature extraction and deep generative reconstruction method for face animation. In addition, the model used during the project (Morgan) has been proposed as a new reference model for humanoid avatars within the MPEG standard.
- Delivered an updated version of the VRTist authoring tool that enables, first, the intuitive editing of complex character motions using manipulatable motion trails and, second, the interactive manipulation of volumetric-captured characters using deep-learning generative models. The tool also integrates artist-driven rigs, together with numerous features that exploit VR capabilities to improve artists' productivity and creativity.
- Relied on the technologies designed and implemented during the project to conduct a number of experiments assessing their applicability and usability in the use cases. The technologies were also demonstrated at both professional shows and general-audience events. All experiments laid the groundwork for an assessment of the project KPIs, which were successfully reached.
- Identified seven Key Exploitable Results (KERs) which will be exploited as the basis for further education and research, products, services, licensing/transfer or open-source software. Two results are already available to the public:
a) VRTist: an open-source VR design tool dedicated to the creation of 3D environments and animations for rapidly prototyping scenes and shots (https://github.com/Irisa-Invictus/VRTist).
b) Volu: an iOS application that allows anyone with a mobile device to capture, play and share volograms (https://www.volograms.com/volu).
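To illustrate the neural implicit representations mentioned above, the following is a minimal, hypothetical sketch: a coordinate-based MLP that maps a 3D query point to an occupancy value and a colour, from which a surface can later be extracted (e.g. with marching cubes). This is not the INVICTUS implementation; the class name, network size and output layout are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only, not the project's method): a neural
# implicit representation of a character as a coordinate-based MLP that
# maps a 3D point to an occupancy probability and an RGB colour.
import torch
import torch.nn as nn

class ImplicitSurface(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 occupancy logit + 3 colour channels
        )

    def forward(self, xyz: torch.Tensor):
        out = self.net(xyz)
        occupancy = torch.sigmoid(out[..., :1])  # probability the point lies inside the surface
        colour = torch.sigmoid(out[..., 1:])     # RGB in [0, 1]
        return occupancy, colour

# Querying a dense grid of points yields a field from which a mesh can be
# extracted, as an alternative to storing an explicit reconstructed mesh.
model = ImplicitSurface()
points = torch.rand(1024, 3) * 2 - 1  # random query points in [-1, 1]^3
occ, rgb = model(points)
print(occ.shape, rgb.shape)           # torch.Size([1024, 1]) torch.Size([1024, 3])
```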
The creative experiences that the INVICTUS project designed through its use cases, together with the evaluation of our creative tools, demonstrate the relevance of the proposed technologies, their readiness level, and their reception by both creative artists and end consumers of these applications. The reported Key Performance Indicator (KPI) values also demonstrate the increased uptake of volumetric technologies, through aspects such as ongoing usage in the creative industries and investment in volumetric-related companies. As a result, the project has met the KPIs identified initially and has clearly pushed the boundaries of what is possible with volumetric technologies, as shown by the scientific publications in relevant venues, the technological demonstrations at leading events, and the potential impact of the work in many industries. INVICTUS has positively impacted creativity and productivity in media production, as well as the immersion and quality of XR media experiences, by:
- Reducing the cost of producing and using realistic 3D representations and motions of avatars.
- Increasing productivity and creativity through more automated tools that enable quicker iterations from idea to realization.
- Creating more compelling user experiences through high-fidelity avatars in appearance and motion, and better integration of these avatars in narratives.