Periodic Reporting for period 1 - FAITH (Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains)
Reporting period: 2024-01-01 to 2025-06-30
The FAITH project ("Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains") was established to address the lack of systematic, cross-sectoral methods for assessing the trustworthiness of AI systems. Operating within the European Union's strategic vision for a human-centric and trustworthy AI ecosystem, FAITH brings together a multi-disciplinary consortium to define, measure, and operationalize trust in AI systems across seven critical domains. By developing a unified, cross-sectoral framework and running large-scale pilots, FAITH aims to provide the tools, methodologies, and evidence needed to support the adoption of trustworthy AI in diverse real-world settings.
The project will contribute to enhancing public confidence in AI technologies, supporting compliance with emerging regulations such as the EU AI Act, and providing policy-relevant evidence to guide future standardization and legislative initiatives. The expected impact is significant: safer, more reliable, and more widely accepted AI solutions that realize the benefits of digital transformation while addressing ethical, legal, and societal challenges.
To operationalize this framework, a suite of digital tools and infrastructure was developed and integrated. TrustGuard was created as the system-level orchestrator for trust modeling and risk profiling, allowing stakeholders to track and interpret trust indicators. TrustSense was developed to assess the maturity and readiness of the key teams involved in AI development and operations. The AI Model Hub was also implemented, incorporating both the AI Model Passport and the Data Passport, which ensure traceability, auditability, and compliance with established data standards. Together, these tools provide a robust infrastructure for deploying the FAITH AI Trustworthiness Assessment Framework (AI_TAF) across domains while ensuring adaptability and transparency.
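As a concrete illustration of the kind of traceability metadata such passports capture, the sketch below models a minimal AI Model Passport and Data Passport in Python. The field names and structure are illustrative assumptions made for this summary, not the actual AI Model Hub schema.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the actual FAITH AI Model Hub schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataPassport:
    """Provenance record for a dataset used to train or evaluate a model."""
    dataset_id: str
    source: str                      # where the data originated
    license: str                     # usage terms
    collection_date: str             # ISO 8601 date
    preprocessing_steps: List[str] = field(default_factory=list)


@dataclass
class ModelPassport:
    """Traceability record accompanying a deployed AI model."""
    model_id: str
    version: str
    training_data: List[DataPassport]
    intended_use: str
    known_limitations: List[str] = field(default_factory=list)


if __name__ == "__main__":
    data = DataPassport(
        dataset_id="ds-001",
        source="pilot sensor logs",
        license="CC-BY-4.0",
        collection_date="2024-05-01",
        preprocessing_steps=["deduplication", "anonymization"],
    )
    passport = ModelPassport(
        model_id="wastewater-anomaly-detector",
        version="0.1.0",
        training_data=[data],
        intended_use="anomaly detection in industrial wastewater telemetry",
        known_limitations=["not validated outside the pilot site"],
    )
    print(passport.model_id, "trained on", len(passport.training_data), "dataset(s)")
```

Nesting the data provenance records inside the model record means an auditor can walk from a deployed model back to every dataset that shaped it, which is the traceability property the passports are meant to guarantee.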
The methodology and supporting tools underwent validation and refinement through workshops and dry-run exercises involving stakeholders from all seven large-scale pilots (LSPs). Feedback from these activities was used to refine requirements and improve usability, ensuring that the tools met the practical needs of diverse user groups.
The project successfully prepared and initiated seven pilots, each targeting a different critical domain: media, transportation, education, robotics/underwater drones, industrial processes/wastewater management, healthcare, and active ageing. These pilots serve as real-world testbeds for the FAITH framework and digital tools; initial activities included setting up experimental infrastructure, deploying AI systems, and integrating trustworthiness assessment mechanisms.
Knowledge and requirements were systematically gathered from each pilot, allowing the FAITH consortium to analyze cross-domain insights and further refine the methodology. This process supported the scalability and adaptability of the overall FAITH ecosystem, enabling it to address the unique challenges and requirements of each sector.
Key achievements during the period include the release of the first operational FAITH AI_TAF with detailed trustworthiness metrics, the development and integration of essential digital tools such as TrustGuard, TrustSense, and the AI Model Hub, and the successful design and launch of seven pilots across critical domains. The project also established a robust technical infrastructure for data and model standardization, promoting transparency, interoperability, and adherence to FAIR principles. All technical deliverables and milestones were met on schedule, with no major deviations or unresolved risks reported during the reporting period.
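To illustrate what detailed trustworthiness metrics can look like in practice, the sketch below aggregates per-dimension scores into a single weighted indicator. The dimensions, weights, and aggregation rule are illustrative assumptions for this summary, not the actual AI_TAF metric definitions.

```python
# Illustrative sketch only: the dimensions, weights, and aggregation rule
# are assumptions for demonstration, not the actual FAITH AI_TAF metrics.
from typing import Dict


def composite_trust_score(scores: Dict[str, float],
                          weights: Dict[str, float]) -> float:
    """Aggregate per-dimension trustworthiness scores (each in [0, 1])
    into a single weighted indicator."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight


if __name__ == "__main__":
    # Example dimensions loosely inspired by common trustworthy-AI criteria.
    scores = {"robustness": 0.82, "transparency": 0.70,
              "fairness": 0.91, "accountability": 0.76}
    weights = {"robustness": 2.0, "transparency": 1.0,
               "fairness": 1.5, "accountability": 1.0}
    print(f"composite trust score: {composite_trust_score(scores, weights):.2f}")
```

A weighted aggregation of this kind makes the relative importance of each dimension explicit and auditable, which matters when a composite score feeds a risk profile such as the one TrustGuard maintains.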
A key innovation lies in the participatory co-creation process, engaging end-users, domain experts, and affected communities in the design and validation of trustworthy AI solutions. This approach helps identify and address domain-specific risks, biases, and adoption barriers, ensuring that AI systems are robust, explainable, and aligned with real-world needs and expectations.
The large-scale pilots in media, healthcare, transportation, education, industrial processes, drones, and active ageing serve as testbeds for demonstrating the practical application and scalability of the framework. These pilots will generate policy-relevant evidence, inform standardization activities, and provide a blueprint for replication in other sectors.
To capitalize on these advances, FAITH is developing tailored exploitation strategies, including commercialization pathways, open-source releases, and contributions to EU and international standardization. The project is also preparing policy recommendations to support the forthcoming AI regulatory landscape and foster a trusted European AI ecosystem.