All beneficiaries jointly advanced trustworthy AI and robotics for safe, socially aware autonomous mobility and cyber‑physical systems, with emphasis on explainability, verification, energy efficiency and cybersecurity, in line with the EU AI Act and CPS safety norms. Their work spans new tools and algorithms, deployment on real platforms such as autonomous wheelchairs, and methods for rigorous assurance of AI behaviour.

SUPSI progressed social navigation and trustworthy robotics through three lines of work. It created navground, navground‑learning and a VR testbed to standardise and benchmark multi‑robot navigation in shared spaces and to test it with humans in the loop. It also developed tailored indoor wheelchair navigation algorithms (smooth path planning with obstacle avoidance, narrow‑passage handling and hazard anticipation) and trained policies for collaborative, communicative behaviour between wheelchairs and humans. In parallel, SUPSI designed a Dynamic Bayesian Network framework to fuse heterogeneous sensor data, including data from neighbouring robots; the framework improves robustness and explainability over deep networks and classical filters and was validated in laboratory and public VR demonstrations.

CNR and AITEK strengthened the trustworthiness and safety of AI modules. CNR performed explainable, reliable verification and validation of AI components such as the wheelchair neural controller and video analytics, identifying statistically safe operating conditions and characterising performance for expert review. AITEK first assessed the safety of object‑detection models and coordinated safety analyses, then increased reliability using conformal prediction and statistical image‑feature analysis, and finally deployed the improved models to an operational test environment to validate the full REXASI‑PRO system.

SPXL, King's College London and the University of Seville focused on orchestration, formal assurance and topology‑based methods.
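Conformal prediction recurs across partners (AITEK's detection models, SPXL's speech‑to‑text, KCL's theoretical work). As an illustration only, the sketch below shows generic split conformal prediction for regression on toy data; the least‑squares predictor, the synthetic dataset and the 90% coverage target are assumptions for the example, not the project's actual models or pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise (stand-in for any model's task).
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.1, 500)

# Split into a fitting set and a held-out calibration set.
x_fit, y_fit = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]

# A deliberately simple point predictor (least-squares slope through origin).
slope = np.sum(x_fit * y_fit) / np.sum(x_fit ** 2)
predict = lambda x_new: slope * x_new

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Score quantile for 90% coverage, with the finite-sample correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input: point prediction +/- q.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

The coverage guarantee (at least 1 − alpha of future points fall inside the interval) holds for any underlying predictor, which is what makes the technique attractive for wrapping black‑box AI components.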
SPXL developed an orchestrator for autonomous robot fleets that integrates real‑time data to enhance robustness, safety and energy efficiency, delivered an open‑source ROS2 multi‑sensor people tracker, applied conformal prediction to speech‑to‑text, and adapted the Carla simulator to wheelchair scenarios.

KCL produced first‑of‑their‑kind results in trustworthy AI, including a certification framework for generative planners, verification of unbounded temporal‑logic specifications in multi‑agent AI, conformal‑prediction techniques for stochastic systems, reliable off‑policy prediction with probabilistic guarantees, and advances in adversarially robust conformal prediction, now also used for LLM monitoring and aligned with AI‑safety requirements such as those in the EU AI Act.

The University of Seville created eight dataset‑reduction methods for tabular data, together with a Python package and a topology‑based representativeness metric, and extended the approach to image datasets to cut data volume and energy use while preserving accuracy. It also introduced geometric and topological tools to interpret and improve fleet behaviour, detect collisions and deadlocks, and support safer navigation strategies.

DFKI and VRS advanced smart wheelchairs and CPS cybersecurity. DFKI trained deep neural networks for socially aware autonomous wheelchair navigation, generating key insights despite some performance and generalisation limits relative to initial expectations, and extended a 2D safety layer to a 3D camera‑based layer on lightweight hardware, achieving TRL 5 in populated indoor tests.
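To make the dataset‑reduction idea concrete: the sketch below uses greedy max‑min (farthest‑point) selection, a standard generic technique chosen here purely for illustration; it is not one of Seville's eight methods, and the function name and toy data are invented for the example. The goal it illustrates is the same: keep a small subset of rows that still represents the geometry of the full table.

```python
import numpy as np

def farthest_point_reduce(X, k, seed=0):
    """Greedy max-min selection: pick k rows so that every remaining row
    lies close to some selected row (a simple representativeness criterion)."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    # Distance from every row to the nearest selected row so far.
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # the row farthest from the current sample
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)

# Toy tabular data: two well-separated clusters of 3-feature rows.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (200, 3)), rng.normal(5, 0.5, (200, 3))])
idx = farthest_point_reduce(X, k=20)  # indices of the reduced dataset
```

Because each new point is the one farthest from the current sample, the reduced set spreads over every region of the data, so both clusters are represented even at a 20x reduction.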
VRS developed a SaaS platform for CPS cybersecurity compliance that supports multi‑standard assessments, including AI‑Act‑related and CPS‑specific requirements, enabling collaborative, sustainable and innovation‑oriented cybersecurity management across stakeholders.

HSOL developed a reliable autonomous indoor exploration system using aerial robots, achieving accurate real‑time mapping and efficient multi‑robot collaboration, and attracting strong interest from the energy and safety sectors for applications in facility inspection and emergency operations.