
Voice driven interaction in XR spaces

Project description

XR experiences combining vision and sound

Extended reality (XR) technologies are poised to dominate human-machine interaction (HMI), supplanting traditional approaches. Two other fields experiencing a similar boom are natural language processing (NLP) and computer vision (CV), mainly owing to the emergence of data-driven methods in machine learning (ML) and artificial intelligence (AI). VOXreality aims to merge these parallel fields to design and develop AI models that integrate language as the primary means of interaction, alongside visual understanding. The focus is on producing pretrained XR models that incorporate the spatial and semantic knowledge of XR and NLP systems. This could usher in a new era of applications built around a holistic understanding of users' goals, moving away from devices and controllers.

Objective

VOXReality is an ambitious project whose goal is to facilitate and exploit the convergence of two important technologies: natural language processing (NLP) and computer vision (CV). Both technologies are experiencing a huge performance increase due to the emergence of data-driven methods, specifically machine learning (ML) and artificial intelligence (AI). On the one hand, CV/ML is driving the extended reality (XR) revolution beyond what was previously possible; on the other, speech-based interfaces and text-based content understanding are revolutionising human-machine and human-human interaction. VOXReality will employ an economical approach to combining the two, pursuing the integration of language- and vision-based AI models with either unidirectional or bidirectional exchanges between the two modalities. Vision systems drive both AR and VR, while language understanding adds a natural way for humans to interact with the back ends of XR systems or to create multimodal XR experiences combining vision and sound.

The results of the project will be twofold: 1) a set of pretrained next-generation XR models combining language and vision AI at various levels and in various ways, enabling richer, more natural immersive experiences that are expected to boost XR adoption; and 2) a set of applications using these models to demonstrate innovations in various sectors.

The above technologies will be validated through three use cases: 1) Personal Assistants, an emerging type of digital technology that supports humans in their daily tasks, with core functionalities related to human-to-machine interaction; 2) Virtual Conferences, hosted and run entirely online, typically using a virtual conferencing platform that sets up a shared virtual environment allowing attendees to view or participate from anywhere in the world; 3) Theaters, where VOXReality will combine language translation, audiovisual user associations and AR VFX triggered by predetermined speech.
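As an illustration only, the sketch below shows what a minimal unidirectional vision-to-language exchange of the kind described above could look like, using the openly available Hugging Face transformers library. The model names, the helper function and the overall flow are assumptions chosen for the example; they are not VOXReality components or deliverables.

```python
# Minimal sketch (assumption, not a VOXReality deliverable): a unidirectional
# exchange in which a vision model describes the scene and a language model
# answers a user's spoken or typed question grounded in that description.
from transformers import pipeline

# Vision side: caption an image of the user's surroundings.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Language side: answer questions using the caption as context.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def describe_and_answer(image_path: str, user_question: str) -> str:
    """Caption the scene, then answer the user's question from that caption."""
    caption = captioner(image_path)[0]["generated_text"]
    answer = qa(question=user_question, context=caption)
    return answer["answer"]

if __name__ == "__main__":
    # Hypothetical inputs for illustration.
    print(describe_and_answer("scene.jpg", "What is in front of me?"))
```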

Coordinator

MAGGIOLI SPA
Net EU contribution
€ 1 483 750,00
Address
VIA DEL CARPINO 8
47822 Santarcangelo Di Romagna
Italy


Region
Nord-Est Emilia-Romagna Rimini
Activity type
Private for-profit entities (excluding Higher or Secondary Education Establishments)
Total cost
€ 1 483 750,00

Participants (9)