
Voice driven interaction in XR spaces

Project description

XR experiences combining vision and sound

Extended reality (XR) technologies are on the verge of dominating the human-computer interaction (HCI) landscape, overtaking traditional approaches. Two other fields experiencing a similar boom are natural language processing (NLP) and computer vision (CV), mainly due to the emergence of data-driven methods in machine learning (ML) and artificial intelligence (AI). VOXReality aspires to fuse these parallel fields to design and develop AI models that integrate language as a core interaction medium, together with visual understanding. The focus is on producing pre-trained XR models that entangle the spatial and semantic knowledge of XR and NLP systems. This could kick-start a new era of applications built around a holistic understanding of users' goals, away from devices and controllers.

Objective

VOXReality is an ambitious project whose goal is to facilitate and exploit the convergence of two important technologies: natural language processing (NLP) and computer vision (CV). Both are experiencing huge performance gains due to the emergence of data-driven methods, specifically machine learning (ML) and artificial intelligence (AI). On the one hand, CV and ML are driving the extended reality (XR) revolution beyond what was previously possible; on the other, speech-based interfaces and text-based content understanding are revolutionising human-machine and human-human interaction. VOXReality will take an economical approach to combining the two, pursuing the integration of language- and vision-based AI models with either unidirectional or bidirectional exchanges between the two modalities. Vision systems drive both augmented reality (AR) and virtual reality (VR), while language understanding adds a natural way for humans to interact with the back-ends of XR systems or to create multimodal XR experiences combining vision and sound.

The results of the project will be twofold: 1) a set of pretrained next-generation XR models that combine language and vision AI at various levels and in various ways, enabling richer, more natural immersive experiences expected to boost XR adoption; and 2) a set of applications using these models to demonstrate innovations in various sectors.

The above technologies will be validated through three use cases:
1) Personal Assistants: an emerging type of digital technology that supports humans in their daily tasks, with core functionalities related to human-to-machine interaction.
2) Virtual Conferences: events hosted and run entirely online, typically on a virtual conferencing platform that sets up a shared virtual environment, allowing attendees to view or participate from anywhere in the world.
3) Theaters: where VOXReality will combine language translation, audiovisual user associations and AR visual effects (VFX) triggered by predetermined speech.
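
To make the notion of unidirectional versus bidirectional exchanges between the two modalities concrete, the minimal sketch below shows a generic cross-attention fusion block in PyTorch. All names, dimensions and design choices are illustrative assumptions for exposition only; they do not describe VOXReality's actual pretrained XR models.

# Illustrative sketch only: a generic bidirectional vision-language fusion block
# built from cross-attention. Module names, dimensions and structure are
# assumptions, not VOXReality's architecture.
import torch
import torch.nn as nn


class BidirectionalFusion(nn.Module):
    """Exchanges information between vision tokens and language tokens
    in both directions via cross-attention (hypothetical example)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Language queries attend to vision features (vision -> language).
        self.lang_from_vision = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Vision queries attend to language features (language -> vision).
        self.vision_from_lang = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_lang = nn.LayerNorm(dim)
        self.norm_vision = nn.LayerNorm(dim)

    def forward(self, vision_tokens: torch.Tensor, lang_tokens: torch.Tensor):
        # A unidirectional exchange would use only one of these two calls.
        lang_upd, _ = self.lang_from_vision(lang_tokens, vision_tokens, vision_tokens)
        vis_upd, _ = self.vision_from_lang(vision_tokens, lang_tokens, lang_tokens)
        return (self.norm_vision(vision_tokens + vis_upd),
                self.norm_lang(lang_tokens + lang_upd))


if __name__ == "__main__":
    # Toy shapes: one scene with 50 visual tokens and a 12-token utterance.
    fusion = BidirectionalFusion()
    vision = torch.randn(1, 50, 256)    # e.g. features from a scene encoder
    language = torch.randn(1, 12, 256)  # e.g. embeddings of a spoken command
    v_out, l_out = fusion(vision, language)
    print(v_out.shape, l_out.shape)     # (1, 50, 256) and (1, 12, 256)

Dropping either of the two attention calls yields the unidirectional variant mentioned in the objective above.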

Coordinator

MAGGIOLI SPA
Net EU contribution
€ 1 483 750,00
Address
VIA DEL CARPINO 8
47822 Santarcangelo di Romagna
Italy

Region
Nord-Est > Emilia-Romagna > Rimini
Activity type
Private for-profit entities (excluding Higher or Secondary Education Establishments)
Total cost
€ 1 483 750,00

Participants (9)