Transparent, Reliable and Unbiased Smart Tool for AI

Periodic Reporting for period 3 - TRUST-AI (Transparent, Reliable and Unbiased Smart Tool for AI)

Reporting period: 2024-04-01 to 2025-03-31

Artificial intelligence is single-handedly changing decision-making at different levels and sectors in often unpredictable and uncontrolled ways. Due to their black-box nature, existing models are difficult to interpret and hence trust. Explainable AI is an emergent field, but, to ensure no loss of predictive power, many of the proposed approaches just build local explanators on top of powerful black-box models. To change this paradigm and create an equally powerful, yet fully explainable model, we need to be able to learn its structure. However, searching for both structure and parameters is extremely challenging. Moreover, there is the risk that the necessary variables and operators are not provided to the algorithm, which leads to more complex and less general models.
It is clear that state-of-the-art, yet practical, real-world solutions cannot come only from the computer science world. Our approach therefore consists in involving human intelligence in the discovery process, resulting in AI and humans working in concert to find better solutions (i.e. models that are effective, comprehensible and generalisable). This is made possible by employing ‘explainable-by-design’ symbolic models and learning algorithms, and by adopting a human-centric, ‘guided empirical’ learning process that integrates cognition, machine learning and human-machine interaction, ultimately resulting in a Transparent, Reliable and Unbiased Smart Tool (TRUST).
This project has designed TRUST, which is both a framework concept and a software platform. The TRUST concept is based on an iterative collaboration between model developers and domain experts, involved in a learning loop with genetic programming algorithms. Multiple tools assist humans in this loop, such as counterfactual explanations, what-if analysis and graphical visualisation. These tools are materialised in the TRUST platform, a modular open-source application that combines state-of-the-art algorithms with customisable user interfaces.
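Conceptually, the loop alternates automated GP search with expert review of the candidate models. A minimal sketch of that cycle, assuming hypothetical `evolve` and `review` callbacks (the names are illustrative and are not the TRUST platform API):

```python
# Illustrative human-in-the-loop symbolic learning cycle.
# `evolve` and `review` are hypothetical callbacks, not the TRUST platform API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SymbolicModel:
    expression: str   # human-readable formula, e.g. "x1 * exp(-c0 * t) + c1"
    fitness: float    # predictive accuracy on the training data

def learning_loop(
    evolve: Callable[[List[SymbolicModel]], List[SymbolicModel]],   # one GP generation
    review: Callable[[List[SymbolicModel]], List[SymbolicModel]],   # expert edits/filters models
    generations_per_round: int = 20,
    rounds: int = 5,
) -> List[SymbolicModel]:
    """Alternate automated GP search with expert feedback on candidate models."""
    population: List[SymbolicModel] = []   # evolve() seeds it on the first call
    for _ in range(rounds):
        for _ in range(generations_per_round):
            population = evolve(population)
        population = review(population)    # counterfactuals/what-if views inform this step
    return population
```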
The TRUST concept has been tested in three main use cases, including predictive and prescriptive problems, in the healthcare, retail and energy sectors. The use cases have shown promising results and have guided the design and customisation of the TRUST platform to a broad range of applications.
The TRUST concept and platform were developed in stages. First, a simple version with basic functionality was developed (in WP1) to start gathering feedback while more advanced functionality was being specified. The platform was designed to be as flexible and modular as possible, able to run any genetic programming algorithm written in any programming language. The interface is intuitive and customisable and, for the first time, allows users to adjust and handcraft symbolic models, which is the foundation of our human-guided paradigm.
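Handcrafting a symbolic model amounts to editing its expression tree directly. A minimal sketch of such a tree, assuming a simple node representation (illustrative only, not the platform's internal format):

```python
# Minimal symbolic expression tree that a user could inspect and adjust by hand.
# This is an illustrative representation, not the TRUST platform's internal format.
import math
from typing import Dict

class Node:
    def __init__(self, op: str, *children: "Node", value=None):
        self.op, self.children, self.value = op, children, value

    def evaluate(self, variables: Dict[str, float]) -> float:
        if self.op == "const":
            return self.value
        if self.op == "var":
            return variables[self.value]
        args = [child.evaluate(variables) for child in self.children]
        ops = {"add": lambda a, b: a + b,
               "mul": lambda a, b: a * b,
               "exp": lambda a: math.exp(a)}
        return ops[self.op](*args)

# y = 2.0 * exp(-0.5 * x); an expert can swap operators or retune constants directly.
model = Node("mul",
             Node("const", value=2.0),
             Node("exp", Node("mul", Node("const", value=-0.5), Node("var", value="x"))))
print(model.evaluate({"x": 1.0}))  # ≈ 1.213
```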
In parallel, the three main modules were developed. Studies were first conducted, together with the use cases, to understand user needs (WP2). Human heuristics were also formalised for each of those use cases, and a novel counterfactual search algorithm (CoDiCE) was developed (WP3). Finally, multiple advancements were made in the genetic programming algorithms (MSGP and GP-GOMEA) to maximise both their accuracy and explainability (WP4).
The three use cases were addressed by separate teams, but in close collaboration with multiple partners. The problems have been approached in innovative ways, which include but are not limited to GP algorithms, and have produced promising results. In the Healthcare use case (WP5), TRUST was used in a clinical validation study in which clinicians interacted with tumor growth models through a dedicated user interface, leading to strong positive feedback on the tool’s utility and interpretability. The Online Retail use case (WP6) focused on a prescriptive pricing problem, validating the generated symbolic policies through user feedback sessions; the GP-based approach was shown to outperform current heuristics in balancing revenue and operational efficiency. In the Energy use case (WP7), symbolic expressions were increasingly recognised by stakeholders for their potential in energy diagnostics and planning, and integration efforts were aligned with expert recommendations.
The use cases have generated important insights into TRUST’s design (e.g. learning multiple models that complement each other, providing customisable dashboards, and API-level model integration). These requirements were developed and fully integrated into the TRUST platform, which is available in the project’s GitLab repository (https://gitlab.inesctec.pt/trust-ai/framework).
Several communication and dissemination channels were used, including the project website, social media, and the creation of a Zenodo community. TRUST-AI was featured in top-tier scientific conferences, public outreach initiatives, and AI-focused industry events. All these efforts will continue in the future, as new research is published and new milestones are achieved. Indeed, Tazi and Apintech have expressed interest in exploiting TRUST commercially, and additional innovations are being considered for future exploitation, including: Explainability Assistant (a natural language interface for AI models), Tumor Assistant (a graphical user interface for tracking and predicting growth of slow-growing tumors), and UTIL-AI (an explainable demand response controller).
The project has generated major scientific contributions at both fundamental and application levels.
GP-GOMEA was enhanced with a constant optimisation algorithm (AMaLGaM) to tune constants efficiently and effectively, outperforming backpropagation in some cases. It was also extended with multi-tree and function class capabilities to support richer expression structures. MSGP was advanced with an algorithm that tunes the constants once a suitable tree has been found, making it competitive with, or superior to, traditional machine learning methods and established GP-based methods without penalising interpretability. It was also augmented by applying the well-known boosting paradigm.
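Independently of the specific optimisers used, the constant-tuning step can be pictured as follows: once the search has fixed a tree structure, its numeric constants are fitted to the data with a continuous optimiser. A minimal sketch using SciPy least squares (purely illustrative; the project relies on AMaLGaM within GP-GOMEA and a dedicated tuner within MSGP):

```python
# Fitting the constants of a fixed symbolic structure y = c0 * exp(c1 * x) + c2.
# SciPy's least_squares stands in here for the project's constant optimisers
# (AMaLGaM in GP-GOMEA, a post-search tuner in MSGP); it is only an illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = 1.5 * np.exp(-0.8 * x) + 0.3 + rng.normal(scale=0.01, size=x.size)

def residuals(c):
    c0, c1, c2 = c
    return c0 * np.exp(c1 * x) + c2 - y

fit = least_squares(residuals, x0=[1.0, -1.0, 0.0])
print(fit.x)  # constants close to (1.5, -0.8, 0.3)
```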
The CoDiCE algorithm for counterfactual explanation proved to be superior to (or at least competitive with) the well-known DiCE algorithm on several metrics, including validity, diffusion and coherence. CoDiCE is now usable through the Counterfactuals tab and has been applied in experiments and user studies to support model interpretability.
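At its core, a counterfactual explanation is a minimally changed input that flips the model’s prediction while remaining plausible. A naive sketch of such a search via random perturbation is shown below (illustrative only; it is not the CoDiCE or DiCE algorithm, which use richer objectives and search strategies):

```python
# Naive counterfactual search: find a nearby point whose predicted class differs.
# Illustrative only; CoDiCE/DiCE optimise more refined criteria (validity,
# proximity, diversity, coherence) with dedicated search strategies.
import numpy as np

def find_counterfactual(predict, x, target, n_samples=5000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        candidate = x + rng.normal(scale=scale, size=x.shape)
        if predict(candidate) == target:            # validity: the class flips
            dist = np.linalg.norm(candidate - x)    # proximity: stay close to x
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

# Toy model: class 1 iff the feature sum exceeds a threshold.
predict = lambda v: int(v.sum() > 3.0)
x = np.array([1.0, 1.0])                # currently classified as 0
print(find_counterfactual(predict, x, target=1))
```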
The use cases have also resulted in important scientific contributions, and have shown potential for socio-economic impact. Despite some challenges, which are specific to each use case, the symbolic models and decision support tools developed can indeed be used to inform medical decisions (clinicians were highly interested in the possibility of using the tool clinically), guide delivery pricing strategies (despite practical challenges, results are promising), and support energy system transparency (with further developments already taking place).
The TRUST platform, thanks to its modular architecture, can be reused, adapted, or extended in a wide range of domains and projects. Innovative individual modules such as counterfactual explanation algorithms, multi-tree symbolic learning algorithms, and domain-specific user interfaces can be integrated into other software environments, increasing the long-term utility and reach of the project’s innovations.