CORDIS - EU research results

CAROUSEL+

Periodic Reporting for period 2 - CAROUSEL (CAROUSEL+)

Reporting period: 2022-07-01 to 2023-12-31

Embodied online dancing and partying with digital characters

With internet experience nowadays being mostly passive, disconnected and disembodied, many individuals are isolated instead of brought together. The EU-funded CAROUSEL project aims to overcome the isolation of online users by allowing them to dance and party together even if they are physically separated. Through imaginative combinations of artificial intelligence and immersive interaction technologies, including the creation of digital characters capable of interactive dancing, CAROUSEL will enable internet users to feel each other’s presence, touch and movement while dancing together. These developments will help overcome isolation and loneliness and bring improvements to human health, work and wellbeing. Moreover, they will lay the foundations for as yet unimagined forms of online communication and expression.

Objective

The uniqueness of the CAROUSEL value proposition resides in its capacity to enable meaningful, visceral group social interaction, tangibly sharing imagination and physical movement in virtual and hybrid reality settings, going beyond existing digital communication technologies such as massive online game environments, peer-to-peer connections and social media networks. CAROUSEL technology will allow people, in small or larger groups, to feel good and have fun together. CAROUSEL will pave the way for the emergence of social-physical behaviour in digital characters that better understand human body language and are capable of interacting autonomously with a group of people. The project will lay new technical and scientific foundations for social and physical real-world AI by inventing, examining, simulating, developing, testing and validating human-digital interaction scenarios.

The dream of the project is to create a sustainable innovation ecosystem of young talents around a new research path: designing AI-enabled digital characters that understand social body language and can interact with humans in a social and physical way. CAROUSEL+ wants to lay the technical foundation for “socially enabled” digital characters. The innovation perspective of this new branch, “Real-World Social and Physical AI”, is tremendous, as it can be applied in many areas where physical interaction must happen in a meaningful social way. In particular, digital characters could in future act as physical trainers, dancers, entertainers, actors, co-workers, health assistants, guides, educators, spectators, physical therapists and companions. Beyond social, entertainment, health and educational applications, CAROUSEL’s learning and understanding of body language and group dynamics could be deployed in many other areas, including security, peace-making, emergency handling and autonomous driving.
In the first period the main results were:

- Definition of dance use cases, technical requirements and testing criteria for the first experimentation cycle.

- First prototype of fault-tolerant haptic/aural synchronised broadcasting.

- Validation of the social interaction experiences and tools of the first experimentation cycle.

- Increased public awareness of the project through the project website, academic publications, online Cafés, workshops at SIGGRAPH and IEEE conferences, and collaboration with other EIC Pathfinder sister projects.

In the second period the main results were:

- Updates on use cases and technical requirements, as well as testing criteria for the second experimentation cycle.

- Investigation and first prototypical implementations of deep learning models for modelling user interaction in sophisticated couple dancing and for integrating intrinsic motivation and curiosity into the AI character.

- Established the foundational platform for practical experimentation and deployment of collaborative XR solutions by releasing DanceGraph, an open-source low-latency dance sensing and actuation framework, together with the DanceMark telemetry validation framework. This was showcased in a live event at SIGGRAPH 2023 with remote avatar image reconstructions of participants between Los Angeles and Edinburgh.
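The DanceMark framework itself is not detailed in this summary, but the kind of end-to-end latency telemetry it validates can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, not DanceMark's actual API:

```python
import statistics

class LatencyTelemetry:
    """Collect end-to-end latency samples between the moment a pose is
    sensed on one site and the moment it is actuated on the remote site."""

    def __init__(self):
        self.samples_ms = []

    def record(self, sent_ts, received_ts):
        """Timestamps in seconds; store the one-way delay in milliseconds."""
        self.samples_ms.append((received_ts - sent_ts) * 1000.0)

    def summary(self):
        """Report median, 95th-percentile and worst-case latency."""
        s = sorted(self.samples_ms)
        return {
            "p50_ms": statistics.median(s),
            "p95_ms": s[int(0.95 * (len(s) - 1))],
            "max_ms": s[-1],
        }

# Synthetic timestamps standing in for sensor/actuator clock readings.
telemetry = LatencyTelemetry()
for sent, received in [(0.0, 0.010), (1.0, 1.020), (2.0, 2.030),
                       (3.0, 3.040), (4.0, 4.050)]:
    telemetry.record(sent, received)
print(telemetry.summary())
```

In a real deployment the percentile summary would be streamed alongside the dance session so that latency spikes can be correlated with perceived loss of synchrony.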

- Progressed AI prediction inference towards shared synchronised online dance presence with a DanceGraph pose-signal transformer component introducing a patterned-motion stacked-attention model.
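The pose-signal transformer is not specified in detail in this summary. As a rough illustration of the underlying idea, the sketch below predicts the next pose frame from a window of recent frames using two stacked self-attention layers; the random weights, function names and dimensions are assumptions for illustration, not the project's actual component:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a window of pose frames."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def predict_next_pose(pose_window, layers):
    """Run stacked attention layers over a (T, D) window of pose features
    and read the last time step as the predicted next pose."""
    h = pose_window
    for w_q, w_k, w_v in layers:
        h = h + self_attention(h, w_q, w_k, w_v)  # residual connection
    return h[-1]

rng = np.random.default_rng(0)
D = 8                                      # toy pose dimension (e.g. flattened joint angles)
layers = [tuple(rng.standard_normal((D, D)) * 0.1 for _ in range(3))
          for _ in range(2)]               # two stacked attention layers
window = rng.standard_normal((16, D))      # last 16 pose frames
pred = predict_next_pose(window, layers)
print(pred.shape)
```

Predicting a short horizon of future poses in this way lets a remote partner's avatar be rendered ahead of the network delay, which is what shared synchronised presence requires.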

- Developed DanceGraph haptics consumer and adapter components for remote partner-dance contact actuation through Meta Quest controllers, with ideation of teacher-assisted dance direction and potential motion steering using vibration push/pull responses.
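A push/pull steering response of the kind described can be sketched as a simple mapping from positional offset to controller vibration amplitude. This is an illustrative sketch only; the function and its parameters are hypothetical, and real actuation would go through the headset runtime's haptics API:

```python
def steering_vibration(student_pos, target_pos, max_amp=1.0, gain=2.0):
    """Map the horizontal offset between the student's hand and the
    teacher's target position to push/pull vibration amplitudes
    (0..max_amp) for the left and right controllers."""
    offset = target_pos - student_pos   # positive: move right, negative: move left
    amp = min(max_amp, abs(offset) * gain)
    if offset > 0:
        return {"left": 0.0, "right": amp}   # pull towards the right
    return {"left": amp, "right": 0.0}       # pull towards the left

# Student's hand is 0.3 m left of the teacher's target position.
print(steering_vibration(0.1, 0.4))
```

The gain and clamp keep the cue proportional to the error while avoiding saturating the motors, so the dancer feels a gentle pull rather than an abrupt buzz.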

- Presented conversational networked AI agents to an audience of more than 5000, with broadcast audio and visual expression driven by live large language model continuation responses, and further developed emotional and expressive avatars towards engaging dance-partner AI experiences, including MoodFlow.

- Delivered and further developed open data environments, including the Plant World and Buenos Aires Milonga tango dance plaza scenes, optimised for real-time XR presentation with illumination environments that apply seamlessly to avatar lighting. Further progressed generative AI realisation of scalable immersive dance spaces populated with large groups.

- Experimental tests validating the social interaction experiences and tools with test users in the second experimentation cycle, based on qualitative and quantitative metrics and on perceptual user experience.

- Increased public awareness of the project via the website, social networks, academic publications, presence at trade fairs, the publication of press articles in the general and specialised press, and the organisation of technical workshops. Extended stakeholder contacts to the medical sector and maintained the collaboration with sister projects.
CAROUSEL advances the state of the art in human-centred AI research with respect to real-world (physical and social) AI. The outcome of CAROUSEL will be novel AI agents capable of interacting directly with humans through dancing and of producing new creative body movements and autotelic behaviour in real time. This ambition will be tackled by using robust methods to synthesise natural agent movements and intrinsic motivation to achieve autotelic behaviour, as well as by predicting and evaluating socio-physical factors of real-world humans, such as mood, enjoyment, engagement, exertion, fatigue, and the correctness and difficulty of movements, aiming for meaningful and entertaining interactions.

We are therefore investigating AI to control digital characters in dance use cases. The goal is to create intrinsically motivated characters that can interact naturally with a single human as well as with groups of humans. For this purpose, we are developing specialised machine learning models for the synthesis and understanding of human movements in groups. Diverse and motivating opportunities for physical and social activity are important for health, psychology and creativity. With its focus on the physical movements and feelings of humans, CAROUSEL is developing beyond-state-of-the-art methods, tools and techniques to advance real-world (physical and social) AI.