CORDIS - EU research results

Human-centered Trustworthiness Optimisation in Hybrid Decision Support

Periodic Reporting for period 1 - THEMIS 5.0 (Human-centered Trustworthiness Optimisation in Hybrid Decision Support)

Reporting period: 2023-10-01 to 2025-03-31

AI is rapidly transforming decision-making across key sectors like healthcare, transport, media, and public services. From diagnosing diseases to managing port traffic or detecting disinformation, AI enhances efficiency and predictive capabilities. Yet, as these systems become more complex and autonomous, the question arises: can we trust them?

To gain public trust, AI must be not just powerful, but also trustworthy—aligned with ethical principles, legal standards, and societal values. It must be fair, transparent, accountable, and understandable. Organisations need tools to assess and improve these traits in AI systems.

Today, few tools exist to evaluate AI trustworthiness or help non-experts understand AI decisions. There is also limited support for integrating ethical, legal, and technical elements into AI design in a structured way. This is the gap that THEMIS 5.0 aims to fill.

THEMIS 5.0 envisions AI that is trustworthy by design—built to reflect human values and continuously improved through interaction with users. At its core is a Trustworthiness Optimization Platform that supports hybrid human-AI decision-making by incorporating user feedback and AI-driven optimization methods. It aligns with the EU’s ethical standards for trustworthy AI.

To support explainable, ethical decision-making, THEMIS 5.0 applies virtue ethics, deontological ethics, and utilitarianism—allowing systems to reflect users’ moral preferences. The project supports the EU’s push for human-centric AI, as seen in the Ethics Guidelines for Trustworthy AI, the EU AI Act, and the Digital Strategy.

THEMIS integrates social sciences and humanities (SSH) into its technical design. Through participatory workshops across Europe, it gathers input from professionals and citizens to ensure the platform reflects real-world values and expectations.

The expected impacts of THEMIS 5.0 are both broad and far-reaching:
At the user level, individuals—whether professionals or citizens—gain tools to understand and shape how AI affects their lives, increasing transparency, trust, and fairness.
At the organisational level, public and private actors can assess, improve, and demonstrate the trustworthiness of their AI systems, helping meet legal and regulatory requirements.
At the societal level, THEMIS supports ethical AI adoption, reinforcing Europe’s leadership in responsible digital transformation.

THEMIS 5.0 also focuses on delivering real impact in three high-stakes sectors:
Healthcare: The platform will increase the transparency and reliability of AI-driven diagnostic tools, supporting better decisions and improving patient outcomes.
Media: By evaluating the fairness and performance of AI systems used to moderate content and detect disinformation, THEMIS 5.0 helps safeguard democratic discourse.
Port Management: The project will optimise AI-based decision-support tools used for operational logistics, helping reduce delays and improve efficiency—particularly in the accurate prediction of vessel arrival times.

From the start, THEMIS 5.0 prioritised user involvement. Over 270 participants across 8 countries contributed through co-creation activities—helping define what makes AI feel trustworthy in practice. These insights shaped the THEMIS framework, highlighting transparency, safety, and ethical behaviour as essential.

Workshops identified user needs, ethical concerns, and specific risks linked to each application area, while living labs tested chatbot prototypes and gathered feedback. This work produced realistic user personas representing different attitudes and expectations toward AI, which now guide system design. To turn these insights into a working system, THEMIS 5.0 developed a socio-technical framework for evaluating and improving AI trustworthiness, including a Trustworthiness Optimization Process that evaluates AI systems throughout their lifecycle, alongside a structured method for capturing information on AI use, data, risks, and methods to ensure compliance.

The project introduced an Ethical Compass Methodology, enabling AI systems to adapt to individual ethical preferences. Scientific methods were also developed to assess accuracy, robustness, and fairness—core components of trustworthy AI. These are implemented in AI models that consider user feedback, legal frameworks, and real-world risks.
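The idea of adapting to individual ethical preferences can be illustrated with a small sketch. The weighting scheme, framework names as dictionary keys, and the `ethical_compass` function below are illustrative assumptions, not the project's actual Ethical Compass Methodology:

```python
# Illustrative sketch only (not THEMIS 5.0's actual method): score a candidate
# decision under three ethical lenses, then aggregate using a user's stated
# preference weights, so the same decision ranks differently per user.

def ethical_compass(scores: dict[str, float], preferences: dict[str, float]) -> float:
    """Combine per-framework scores (0..1) using normalised user weights."""
    total = sum(preferences.values())
    return sum(scores[f] * w / total for f, w in preferences.items())

# A user who leans deontological: rule compliance dominates the aggregate.
scores = {"virtue": 0.8, "deontological": 0.4, "utilitarian": 0.9}
prefs  = {"virtue": 1.0, "deontological": 3.0, "utilitarian": 1.0}
print(round(ethical_compass(scores, prefs), 2))  # 0.58
```

Under these toy weights, a decision that scores well on utilitarian grounds but poorly on rule compliance is penalised for this user, which is the kind of per-user adaptation the methodology describes.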

THEMIS 5.0 designed several modular software components to support its trust optimization ecosystem: (a) the Persona Analyser, which matches users to behavioural and ethical profiles; (b) the Trustworthiness Assessor, User Profiler, and Optimization Suggester, which evaluate and improve AI system performance; (c) the Trustworthy AI Models Engine, which updates and redeploys AI models; and (d) the Super Decision Engine, which supports trade-offs among ethical, legal, and technical AI choices. Built from a mix of co-created and open-source datasets, these components form the platform's backbone.
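How such components might hand results to one another can be sketched in a few lines. All class names, fields, and thresholds below are hypothetical stand-ins for the components named above, not the project's real interfaces:

```python
from dataclasses import dataclass

# Hypothetical sketch of how components like the Persona Analyser and
# Optimization Suggester could compose; names and logic are illustrative.

@dataclass
class Assessment:
    fairness: float       # 0..1 scores as a Trustworthiness Assessor might emit
    robustness: float
    transparency: float

class PersonaAnalyser:
    def profile(self, answers: dict) -> str:
        # Match a user to a behavioural/ethical persona from questionnaire answers.
        return "cautious-expert" if answers.get("risk_tolerance", 1.0) < 0.5 else "pragmatist"

class OptimizationSuggester:
    def suggest(self, a: Assessment) -> list[str]:
        # Flag the weakest trustworthiness dimensions first (ascending score).
        scores = {"fairness": a.fairness, "robustness": a.robustness,
                  "transparency": a.transparency}
        return [dim for dim, s in sorted(scores.items(), key=lambda kv: kv[1]) if s < 0.7]

persona = PersonaAnalyser().profile({"risk_tolerance": 0.3})
suggestions = OptimizationSuggester().suggest(Assessment(0.6, 0.9, 0.65))
```

In this toy run the suggester flags fairness and transparency for improvement while leaving robustness alone, mirroring the assess-then-optimise flow described above.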

The project has already produced several important outputs to support upcoming real-world piloting and broader use across the European AI ecosystem:
- A conceptual architecture of the THEMIS platform and its services.
- Functional prototypes of core AI modules.
- Benchmark datasets for privacy-aware and fair training.
- Tools for legal and ethical self-assessment.

THEMIS 5.0 advances the state of the art by developing AI systems that place trust, transparency, and human values at their core.

Human-Centric Trustworthiness Ecosystem: THEMIS built an AI platform that explains its decisions through a conversational agent. Using Rasa and reinforcement learning, the system learns from human input and adapts over time, aligning AI operations with human values and context and improving transparency and trust in AI-supported decisions.
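The adapt-from-feedback loop described above can be sketched as a simple bandit: the agent tries explanation styles, users rate them, and ratings shift future choices. This is a generic epsilon-greedy sketch under assumed style names and simulated feedback, not Rasa's actual training pipeline:

```python
import random

# Minimal sketch of learning from human feedback: an epsilon-greedy bandit
# over explanation styles. Style names and rewards are illustrative only.

styles = ["concise", "detailed", "visual"]
value = {s: 0.0 for s in styles}   # running mean reward per style
count = {s: 0 for s in styles}

def choose(eps: float = 0.1) -> str:
    if random.random() < eps:
        return random.choice(styles)            # explore a random style
    return max(styles, key=lambda s: value[s])  # exploit the best-rated style

def update(style: str, reward: float) -> None:
    # Incremental mean: the agent drifts toward styles users rate highly.
    count[style] += 1
    value[style] += (reward - value[style]) / count[style]

# Initialise each style with one observation, then adapt online.
for s in styles:
    update(s, 1.0 if s == "detailed" else 0.2)
for _ in range(200):
    chosen = choose()
    # Simulated feedback: users in this toy setting prefer "detailed" answers.
    update(chosen, 1.0 if chosen == "detailed" else 0.2)
```

After the loop the "detailed" style dominates, illustrating how repeated human ratings steer the agent's behaviour over time.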

Risk Profiling and Decision Intelligence: The project expanded human risk modeling via an attacker profile and linked AI decisions to real-world performance using Decision Intelligence. This means the system evaluates not just technical accuracy, but also the real-world effects of AI decisions on organizational goals and human stakeholders. By combining human-factor risk profiling with decision impact analysis, THEMIS delivers a more holistic trustworthiness evaluation framework.
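The Decision Intelligence idea of weighing real-world effects, not just technical accuracy, can be illustrated with an expected-value sketch. The function, the figures, and the berth-scheduling numbers are invented for illustration; only the vessel-arrival use case comes from the project description:

```python
# Illustrative only: combine a model's confidence with the estimated
# real-world stakes of acting on it, rather than judging accuracy alone.

def decision_score(confidence: float, benefit: float, cost_if_wrong: float) -> float:
    """Expected net value of acting on an AI recommendation."""
    return confidence * benefit - (1 - confidence) * cost_if_wrong

# A 90%-confident vessel-arrival prediction: acting early saves 100 units
# (berth scheduling), but a wrong call costs 500 units in rescheduling.
print(decision_score(0.9, 100, 500))
```

The expected net value here is about 40, so acting is still worthwhile; with the same accuracy but a rescheduling cost of 1000 it would be negative, which is exactly the kind of impact-aware trade-off the framework aims to surface.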

Co-Creation and Human Factors Insights: With 188 participants across 8 countries, THEMIS co-designed trustworthiness solutions aligned with real-world values. These insights shaped modules for fairness, robustness, transparency, and user-AI interaction. Overall, the scientific output includes new methodologies for human-centered AI evaluation, reflecting the understanding that perceptions of trustworthiness can vary and must be incorporated into AI system design.

Legal and Ethical Assessment Framework: THEMIS produced a template and guidance for assessing legal and ethical risks, helping organisations meet upcoming EU AI regulations. Its tools enable users to assess trustworthiness independently, translating ethical principles into practical evaluation. Such a framework can be directly useful for organizations aiming to meet EU regulations and ethical guidelines when deploying AI.

The consortium has published 10 peer-reviewed scientific papers, sharing results that guide future research and highlight THEMIS’s interdisciplinary contributions to trustworthy AI.