CORDIS - Results of EU-supported research

Ignite the Immersive Media Sector by Enabling New Narrative Visions

Periodic Reporting for period 1 - TransMIXR (Ignite the Immersive Media Sector by Enabling New Narrative Visions)

Reporting period: 2022-10-01 to 2024-03-31

The TRANSMIXR project aims to reduce barriers to, and accelerate the adoption of, human-centric XR and AI technologies within the creative and cultural sectors (CCS). Specifically, within the domains of news media and broadcasting, performing arts, and cultural heritage, we are advancing state-of-the-art technologies, via a human-centric approach, to support media production, delivery, and consumption.

Project objectives:
Introduce holistic workflows, formats and practices that enable the creation, delivery and consumption of diverse immersive storytelling experiences.

Develop a deep understanding of multimodal media content that can be used to facilitate the creation of complex narratives.

Design and develop a distributed XR environment that enables geographically distributed teams to create immersive and interactive experiences.

Design and deliver immersive experiences that convey complex narratives, foster cultural participation and collaboration, and facilitate active engagement.

Bring the TRANSMIXR vision to the market and ensure that media organisations, creative companies and heritage organisations have the capacity to implement it and deliver significant impact to their target audiences.

Forge trans-sectoral synergies and demonstrate how immersive media experiences could be transferred to new domains to contribute to societal, economic and environmental well-being.
Over the first reporting period, the team has embraced the TRANSMIXR human-centric approach. Through multidisciplinary efforts, significant progress has been made in defining user and technical requirements. The project has engaged 360 stakeholders via requirements and design workshops, surveys, and interviews, as well as by soliciting feedback on proof-of-concept demonstrations both internal and external to the project. Furthermore, novel production workflows have been identified, resulting in guidelines for volumetric video capture, the use of AI techniques for creating 3D content, and the use of social XR pipelines to support enhanced communication and collaboration when creating experiences.

The TRANSMIXR multimodal media content ingestion pipeline is complete. The multimodal analysis components that support 360° and volumetric video formats, as well as the multimodal summarisation components, are well progressed. These tools support a deeper understanding of multimodal media content, which can be used to facilitate the creation of complex narratives.

With respect to the development of creation toolsets for the new production workflows, significant progress has again been made in the first reporting period. The toolset includes: three volumetric capture systems and one motion capture system, each with unique characteristics; AI-based tools for the creation of static and dynamic 3D content; and a template-based XR experience creation system that enables non-technical CCS professionals to adapt XR experiences for end users. An open-source volumetric video pipeline has been developed to facilitate communication between customers (e.g. cultural heritage institutions) and producers of XR experiences. Extensive work has also been dedicated to monitoring and extending formats, standards, and metrics based on the needs of the TRANSMIXR use cases.

In terms of end-user experiences, a suite of assets and design guidelines has been created to ensure consistency across TRANSMIXR experiences. Moreover, a number of user experience implementations for the MVPs of WP5 have been delivered, including two versions of a novel distributed control room for broadcasters, a template for simplifying the production process for curators of cultural heritage institutions, and the initial concept of a new performance based on a non-player character supporting interaction and conversation (using an LLM). Finally, a range of user studies (including the creation of datasets) with TRANSMIXR technologies and related content formats (social XR, point cloud, AR, VR) was undertaken.
To date, and per use case, MVPs that include co-creation workflows and immersive media experiences have been realised through dedicated pilot and evaluation teams.
To ensure that TRANSMIXR’s results complement the existing landscape of digital tools in similar domains, and to create opportunities for new ecosystems of social XR solutions to emerge, it was decided to deliver a dynamic set of interoperable services instead of a monolithic platform. These tools are designed to support immersive co-creation and media experiences, and can be deployed flexibly according to the requirements of diverse use cases beyond those at the focus of TRANSMIXR.

The project has been following the living-labs methodology to ensure an inclusive, iterative, and transparent development process. The requirements-gathering process and design workshops produced essential input for the design of the use cases, and also served as a way to engage professional stakeholders and citizens from diverse backgrounds in discussions about the possibilities of immersive storytelling. To build a community of practitioners interested in the project’s solutions, all use cases have already started disseminating their pilot concepts at various industry events. Interested stakeholders will be kept engaged as the project progresses and invited to participate across a range of activities - from design workshops to evaluation and capacity-building activities - to ensure that they are fully equipped with the right tools and knowledge.

The project has already made significant scientific progress. For example, the outputs of the volumetric capture workshop, which compared four different volumetric video and motion capture systems from partners CERTH, CWI and TCD/HSLU, will inform and support the adoption of XR technology (specifically related to volumetric capture) within the CCS. This experimental evaluation compared the systems in terms of sensor technologies, capture spaces, types of movement, and the number of people in the scene. The outputs of these efforts will inform the CCS about the system requirements, set-up time, quality levels, and resource and power consumption of these types of systems. An analysis of the experiences of the performers, technicians, and artistic director when working with these systems across the different configurations will also provide valuable insight for the CCS into the utility of these systems when creating novel XR content formats. Furthermore, an evaluation of the captures made with each system will inform the CCS about the end-user perceptual quality achievable with these systems. A novel format for volumetric video developed in the project, VVglTF, has already been applied in a follow-up project (VEMAR - Volumetric Video for Enhanced Museum Experiences in AR).

In the fields of multimodal content understanding and data-driven XR content creation, TRANSMIXR has contributed to making video content more accessible to traditional text search, even at the shot level, and has extended annotation tools to extract semantic information from 360° and volumetric video and to use this in video summarisation. This will impact the XR content creation domain by making appropriate video material accessible for inclusion in immersive scenes; and, as the capabilities of generative AI grow, video may also serve as an input for the generation of photorealistic 3D scenes or objects.
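The shot-level text search mentioned above can be illustrated with a minimal retrieval sketch: each video shot is represented by an embedding vector, and a text query embedding is ranked against them by cosine similarity. This is an illustrative sketch only, not the project's actual pipeline; the shot identifiers and random vectors below are placeholder assumptions, where a real system would obtain embeddings from a multimodal encoder (e.g. a CLIP-style model).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two vectors; assumes non-zero norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_shots(query_embedding, shot_embeddings):
    # Return (shot_id, score) pairs sorted best-first by similarity.
    scored = [(shot_id, cosine_similarity(query_embedding, emb))
              for shot_id, emb in shot_embeddings.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Placeholder embeddings: in practice, both shots and the text query
# would be embedded by the same multimodal model into a shared space.
rng = np.random.default_rng(0)
shots = {f"shot_{i:03d}": rng.normal(size=512) for i in range(5)}

# Simulate a query that is semantically close to shot_002 by adding
# a small perturbation to that shot's embedding.
query = shots["shot_002"] + 0.1 * rng.normal(size=512)

ranking = rank_shots(query, shots)
print(ranking[0][0])  # the best-matching shot id
```

Ranking by cosine similarity in a shared embedding space is what lets plain text queries retrieve individual shots without manual keyword annotation.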