CORDIS - EU research results

REliable & eXplAinable Swarm Intelligence for People with Reduced mObility

Periodic Reporting for period 2 - REXASI-PRO (REliable & eXplAinable Swarm Intelligence for People with Reduced mObility)

Reporting period: 2024-04-01 to 2025-09-30

AI is being adopted rapidly across many domains, driven by advances in computational architectures, algorithms, and data availability. Despite this progress, challenges persist, notably in transparency and trustworthiness, because many AI models remain opaque. Guidelines such as those from the European Commission address these concerns by emphasizing human oversight, technical robustness, privacy, and fairness; however, current solutions for safety-critical AI applications still face foundational hurdles. REXASI-PRO aims to overcome these challenges by integrating ethics, explainability, and decision science into a reliable and secure AI framework. The project develops Trustworthy Artificial Swarm Intelligence that ensures safety, security, and ethical compliance. Tailored to autonomous vehicles that assist people with reduced mobility, the framework targets a seamless, door-to-door travel experience.
The project aimed to develop trustworthy AI systems through human-centric solutions, focusing on multi-agent navigation in crowded areas. During the second reporting period, REXASI-PRO concentrated on integrating its four principal technological components (wheelchair, aerial robots, wall-mounted cameras, orchestrator) through two sequential phases: component-level testing followed by validation of inter-component interaction. Parallel test campaigns in Madrid and Genoa evaluated aerial-robot collaboration and wheelchair navigation in both autonomous and orchestrator-assisted modes. Substantial effort went into optimizing the DNN-LNA neural network and developing a Model Predictive Path Integral (MPPI) algorithm for improved navigation performance, complemented by synthetic dataset generation, vulnerability analysis, and energy-efficiency improvements. Final testing involved 52 external participants (16 trained operators and 36 environmental agents) and yielded quantitative metrics on attractiveness, efficiency, novelty, intelligence, and social acceptability.

Beyond the technological implementation, the project established a trustworthy-AI framework aligned with the European Commission pillars (safety, security, ethics, explainability, reliability, verification, validation) through four categories of contribution: requirements-elicitation methodologies, including the Alpha SaaS platform; cyclical AI-lifecycle development frameworks; runtime-verification techniques for object-detection and speech-to-text models; and sustainability optimization through dataset dimensionality reduction and knowledge distillation, with negligible performance degradation.
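The project's own MPPI implementation is not reproduced in this summary; as an illustration of the general technique, the sketch below applies MPPI to a toy 2D point robot steering toward a goal past an obstacle. All parameter values, the dynamics, and the cost function are illustrative assumptions, not the project's code.

```python
import numpy as np

def mppi_step(x0, U, dynamics, cost, K=256, lam=1.0, sigma=0.5):
    """One MPPI update: sample K perturbed control sequences, roll them
    out through the dynamics, and reweight the nominal sequence U by
    exponentiated trajectory cost."""
    rng = np.random.default_rng(0)       # fixed seed keeps the demo deterministic
    H = U.shape[0]                       # planning horizon
    eps = rng.normal(0.0, sigma, size=(K, *U.shape))
    S = np.zeros(K)                      # total cost of each sampled trajectory
    for k in range(K):
        x = x0.copy()
        for t in range(H):
            x = dynamics(x, U[t] + eps[k, t])
            S[k] += cost(x)
    S -= S.min()                         # stabilise the softmax
    w = np.exp(-S / lam)
    w /= w.sum()
    return U + np.einsum("k,kto->to", w, eps)   # cost-weighted control update

# Toy 2D point robot: state = position, control = clipped velocity command.
dt = 0.1
dynamics = lambda x, u: x + dt * np.clip(u, -1.0, 1.0)
goal = np.array([2.0, 0.0])
obstacle = np.array([1.0, 0.4])

def cost(x):
    penalty = 10.0 if np.linalg.norm(x - obstacle) < 0.3 else 0.0
    return np.linalg.norm(x - goal) + penalty

x = np.array([0.0, 0.0])
U = np.zeros((20, 2))                    # nominal controls over the horizon
for _ in range(60):                      # receding-horizon loop
    U = mppi_step(x, U, dynamics, cost)
    x = dynamics(x, U[0])                # execute only the first control
    U = np.roll(U, -1, axis=0)           # warm-start the next iteration
    U[-1] = 0.0
print(np.linalg.norm(x - goal))          # remaining distance to the goal
```

The key design point MPPI shares with the project's navigation use case is that it needs no gradients of the cost: arbitrary penalties (obstacles, social-comfort terms) can be added to `cost` without changing the solver.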
All beneficiaries jointly advanced trustworthy AI and robotics for safe, socially aware autonomous mobility and cyber-physical systems, with emphasis on explainability, verification, energy efficiency and cybersecurity, in line with the EU AI Act and CPS safety norms. Their work spans new tools and algorithms, deployment on real platforms such as autonomous wheelchairs, and methods for rigorous assurance of AI behaviour.

SUPSI progressed social navigation and trustworthy robotics through three lines of work. It created navground, navground-learning and a VR testbed to standardise, benchmark and human-in-the-loop test multi-robot navigation in shared spaces. It also developed tailored indoor wheelchair navigation algorithms (smooth path planning with obstacle avoidance, narrow-passage handling and hazard anticipation) and trained policies for collaborative, communicative behaviour between wheelchairs and humans. In parallel, SUPSI designed a Dynamic Bayesian Network framework to fuse heterogeneous sensor data, including data from neighbouring robots, improving robustness and explainability over deep networks and classical filters; the framework was validated in laboratory and public VR demonstrations.

CNR and AITEK strengthened the trustworthiness and safety of AI modules. CNR performed explainable, reliable verification and validation of AI components such as the wheelchair neural controller and video analytics, identifying statistically safe operating conditions and characterising performance for expert review. AITEK first assessed the safety of object-detection models and coordinated safety analyses, then increased reliability using conformal prediction and statistical image-feature analysis, and finally deployed the improved models to an operational test environment to validate the full REXASI-PRO system.

SPXL, King's College London and the University of Seville focused on orchestration, formal assurance and topology-based methods.
SPXL developed an orchestrator for autonomous robot fleets that integrates real-time data to enhance robustness, safety and energy efficiency, delivered an open-source ROS2 multi-sensor people tracker, applied conformal prediction to speech-to-text, and adapted the Carla simulator to wheelchair scenarios. KCL produced first-of-their-kind results in trustworthy AI, including a certification framework for generative planners, verification of unbounded temporal-logic specifications in multi-agent AI, conformal-prediction techniques for stochastic systems, reliable off-policy prediction with probabilistic guarantees, and advances in adversarially robust conformal prediction, now also used for LLM monitoring and aligned with AI-safety requirements such as those in the EU AI Act. The University of Seville created eight dataset-reduction methods for tabular data, a Python package and a topology-based representativeness metric, extended these to image datasets to cut data volume and energy use while preserving accuracy, and introduced geometric and topological tools to interpret and improve fleet behaviour, detect collisions and deadlocks, and support safer navigation strategies.

DFKI and VRS advanced smart wheelchairs and CPS cybersecurity. DFKI trained deep neural networks for socially aware autonomous wheelchair navigation, generating key insights despite some performance and generalisation limits relative to initial expectations, and extended a 2D safety layer to a 3D camera-based layer on lightweight hardware, achieving TRL 5 in populated indoor tests.
VRS developed a SaaS platform for CPS cybersecurity compliance that supports multi-standard assessments, including AI-Act-related and CPS-specific requirements, enabling collaborative, sustainable and innovation-oriented cybersecurity management across stakeholders.

HSOL developed a reliable autonomous indoor exploration system using aerial robots, achieving accurate real-time mapping and efficient multi-robot collaboration, and attracting strong interest from the energy and safety sectors for applications in facility inspection and emergency operations.
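Conformal prediction recurs throughout the partners' reliability work (object detection, speech-to-text, stochastic systems). The summary does not give their exact procedures; as a generic illustration, the sketch below shows standard split conformal prediction for a classifier on synthetic softmax outputs, using the common `1 - p(true label)` nonconformity score. All data and parameters here are simulated assumptions.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_set(probs, qhat):
    """All labels whose nonconformity score (1 - p) falls within the threshold."""
    return np.where(1.0 - probs <= qhat)[0]

rng = np.random.default_rng(0)
n_cal, n_classes = 500, 5
# Simulated softmax outputs and true labels for a held-out calibration split.
logits = rng.normal(size=(n_cal, n_classes))
true = rng.integers(0, n_classes, size=n_cal)
logits[np.arange(n_cal), true] += 2.0        # make the model usually right
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Nonconformity score: 1 - probability assigned to the true label.
cal_scores = 1.0 - probs[np.arange(n_cal), true]
qhat = conformal_quantile(cal_scores, alpha=0.1)

# Prediction set for one example: every label the model cannot rule out
# at the 90% coverage level.
print(prediction_set(probs[0], qhat))
```

The appeal for safety assurance is that the coverage guarantee (the true label lies in the set with probability at least `1 - alpha`) holds regardless of how well the underlying model is calibrated, which is why it pairs naturally with black-box perception modules.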