Periodic Reporting for period 1 - SHARESPACE (Embodied Social Experiences in Hybrid Shared Spaces)
Reporting period: 2023-01-01 to 2024-06-30
A new approach to IMU-based body tracking, called the "Joint Tracker", has been developed. The IMUs were integrated with visual upper-body tracking to limit the number of additional sensors. Calibration is limited to a single step, and methods for auto-calibration have been investigated. A novel approach for 3D metric hand tracking has been developed and tested. AI-based cognitive architectures (CAs) for virtual humans have been developed to improve motor coordination in a group. The communication platform architecture for Sharespace, designed for real-world use and built on Pixel Streaming, has been integrated into the Enterprise Rainbow environment. In addition, a streaming animation system has been developed to allow for the animation of virtual characters, and audio with lip animation has been implemented. The multi-focal near-eye augmented reality display has been further developed to mechanically incorporate a proprietary ultra-high-refresh-rate eye-tracking subsystem. The first hardware integration has been completed and both systems have been mated.
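The combination of IMU data with visual upper-body tracking can be illustrated with a minimal sketch. The complementary filter below is an assumption for illustration only, not the project's actual Joint Tracker algorithm: it shows the general idea of trusting gyroscope integration over short timescales while using a drift-free visual estimate to correct long-term bias.

```python
def fuse_orientation(gyro_rate, vision_angle, prev_angle, dt, alpha=0.98):
    """Complementary filter for one joint angle: integrate the gyro rate
    for short-term accuracy, blend in the vision estimate to cancel drift."""
    gyro_angle = prev_angle + gyro_rate * dt
    return alpha * gyro_angle + (1 - alpha) * vision_angle

# Simulation: the true joint angle ramps at 0.5 rad/s; the gyro carries a
# constant bias, while the vision measurement is drift-free.
angle = 0.0
true_angle = 0.0
dt = 0.01
for step in range(1000):
    true_rate = 0.5                      # rad/s
    true_angle += true_rate * dt
    gyro_rate = true_rate + 0.05         # biased gyro reading
    vision_angle = true_angle            # drift-free visual measurement
    angle = fuse_orientation(gyro_rate, vision_angle, angle, dt)

print(abs(angle - true_angle))  # small steady-state error despite the gyro bias
```

Pure gyro integration would accumulate 0.5 rad of drift over this 10-second run; the visual correction keeps the error bounded.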
Significant groundwork has been laid for the ethical and socio-cultural evaluation of SHS. An ethical framework that directly addresses the influence of three dimensions (demographic, psychological and sociocultural) on SHS experiences is being developed in the form of a series of policy briefs.
The three pilots have been planned. The first trial with experts in the Health scenario took place at the Barcelona Hospital in June 2024 and was successfully completed. The results will be presented at the 2024 World Pain Conference in Amsterdam on August 7, 2024. The Sports demonstrator is ready and will be shown at Club France during the Olympic Games from 26 July to 13 August 2024. For the Art scenario, artists have been recruited through an open call and have developed concepts for Sharespace performances in the context of the Ars Electronica Festival from 5 to 8 September 2024.
1. Fully ego-centric visual-inertial body sensor network: This sensor network enables full-body kinematic estimation and visual hand kinematic reconstruction. Potential impacts include advances in motion capture technology and applications in virtual reality (VR) and augmented reality (AR).
2. Spatial XR display with 6DoF, eye and hand tracking: This wearable XR display integrates eye-tracking and forward-looking hemispheric cameras to provide six degrees of freedom (6DoF) pose tracking. Potential impacts include improved user experience in XR environments, as it is fully see-through and auto-calibrates (using eye tracking). Further work on hardware miniaturisation is required. Potential commercialisation and market relevance will be further explored in the second phase of the project. Both partners involved in this technology are commercial entities and have the full infrastructure for commercialisation at an international level.
3. Motion adaptation between speaker and avatar: Facial animation is driven by audio input, allowing realistic motion adaptation between the speaker and his or her avatar. This innovation is essential for virtual communication and telepresence.
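A common baseline for audio-driven facial animation maps short-term audio energy to mouth opening. The sketch below is an illustrative assumption (the function name, gain and frame size are invented here), not the project's actual animation pipeline; it shows only the simplest form of the audio-to-motion mapping.

```python
import math

def jaw_opening(samples, gain=4.0):
    """Map one frame of audio samples to a normalised jaw-opening
    value in [0, 1] using short-term RMS energy."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(1.0, gain * rms)

# A 10 ms frame at 16 kHz of a 220 Hz tone: a loud frame should open
# the mouth wider than a quiet one.
quiet = [0.01 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(160)]
loud  = [0.30 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(160)]
print(jaw_opening(quiet), jaw_opening(loud))
```

Production systems replace this energy heuristic with learned mappings from audio features to visemes or blendshape weights, but the frame-by-frame structure is the same.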
4. XR multi-user, multi-service communications platform: The existing OpenRainbow end-to-end communications platform has been enhanced with XR capabilities to support multi-user interactions and services. This platform could have a significant impact on remote collaboration and communication.
5. Scene digitisation with spherical depth and RGB cameras: A 360-degree spherical camera captures both colour (RGB) and depth information, while a camera rig enables large-scene capture and photo-realistic neural rendering. This innovation has the potential to transform digital content creation and immersive experiences. Further research and technology development is required to improve certain functionalities. Market entry is being considered.
6. Cognitive architecture: Control-based methods will be used to drive virtual avatars with increasing levels of autonomy, thereby enhancing their cognitive capabilities. This innovation could lead to more intelligent and responsive virtual agents. Further research, internationalisation and supportive regulatory frameworks are needed to ensure uptake and success.
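The idea of driving avatars with increasing levels of autonomy can be sketched as a simple control loop. The sketch below is a hypothetical illustration, assuming a scalar position, a proportional gain `kp`, and an `autonomy` parameter in [0, 1]; none of these reflect the project's actual cognitive architecture.

```python
def avatar_step(pos, user_target, auto_target, autonomy, kp=0.5):
    """One control step: drive the avatar toward a blend of the user's
    commanded target and an autonomous goal. autonomy = 0 means fully
    user-driven; autonomy = 1 means fully autonomous."""
    target = (1 - autonomy) * user_target + autonomy * auto_target
    return pos + kp * (target - pos)   # proportional tracking

# With autonomy at 0.5, the avatar settles midway between the user's
# goal (1.0) and the autonomous goal (3.0).
pos = 0.0
for _ in range(50):
    pos = avatar_step(pos, user_target=1.0, auto_target=3.0, autonomy=0.5)
print(pos)  # converges to 2.0
```

Raising the autonomy level shifts the equilibrium toward the agent's own goal, which is one way a shared-control scheme can hand behaviour over to the cognitive architecture gradually.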