
Transparent, Reliable and Unbiased Smart Tool for AI

Periodic Reporting for period 1 - TRUST-AI (Transparent, Reliable and Unbiased Smart Tool for AI)

Reporting period: 2020-10-01 to 2022-09-30

Artificial intelligence is changing decision-making at many levels and in many sectors, often in unpredictable and uncontrolled ways. Due to their black-box nature, existing models are difficult to interpret and hence to trust. Explainable AI is an emergent field but, to avoid any loss of predictive power, many of the proposed approaches simply build local explanators on top of powerful black-box models. To change this paradigm and create an equally powerful, yet fully explainable, model, we need to be able to learn its structure. However, searching for both structure and parameters is extremely challenging. Moreover, there is a risk that the necessary variables and operators are not provided to the algorithm, which leads to more complex and less general models.
It is clear that state-of-the-art, yet practical, real-world solutions cannot come from the computer science world alone. Our approach therefore involves human intelligence in the discovery process, so that AI and humans work in concert to find better solutions, i.e. models that are effective, comprehensible and generalisable. This is made possible by employing ‘explainable-by-design’ symbolic models and learning algorithms, and by adopting a human-centric, ‘guided empirical’ learning process that integrates cognition, machine learning and human-machine interaction, ultimately resulting in a Transparent, Reliable and Unbiased Smart Tool.
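To make the idea of an ‘explainable-by-design’ symbolic model concrete, the following minimal Python sketch (illustrative only; the variable names, coefficients and formulas are invented, not taken from the project) shows why such a model can be read, audited and hand-edited by a domain expert, which is what makes the human-guided step possible:

# A candidate model proposed by a symbolic learner, e.g. for an
# energy-demand problem (names and coefficients are illustrative).
def learned_model(temperature, occupancy):
    return 2.1 * occupancy + 0.8 * max(18.0 - temperature, 0.0)

# Because the model is a readable formula, an expert can inject domain
# knowledge directly, e.g. capping the occupancy term: the human-guided step.
def expert_adjusted_model(temperature, occupancy):
    return 2.1 * min(occupancy, 50) + 0.8 * max(18.0 - temperature, 0.0)

print(learned_model(12.0, 30))          # 67.8
print(expert_adjusted_model(12.0, 70))  # 109.8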
This project aims to design TRUST, ensure that it can tackle both predictive and prescriptive problems, and create an innovation ecosystem around it, in which academia and companies can further exploit it, independently or in collaboration. The proposed ‘human-guided symbolic learning’ should become the go-to paradigm for a wide range of sectors where human agency and accountability are essential. These include healthcare, retail, energy, banking, insurance and public administration, of which the first three are explored in this project.
In this first period, all work packages were active and progressed in parallel, covering both the scientific work (WP1-WP4) and the use cases (WP5-WP7).
In WP1, a first prototype of the framework was designed, developed and evaluated by multiple users. The original goal was to make the framework as flexible as possible, allowing modularity, i.e. the ability to easily plug in different components. The final result has exceeded expectations: the framework can run not only different algorithms, but virtually any algorithm written in any programming language. In addition, the interface is intuitive and customizable and, for the first time, allows users to adjust and handcraft symbolic models, which is the foundation of our human-guided paradigm.
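One common way to achieve this kind of language-agnostic plugging is to run each learner as a child process behind a simple data contract. The sketch below is a hypothetical illustration of that pattern, not the project's actual interface; the function name run_learner, the JSON fields and the wrapper path are all assumptions made for the example:

import json
import subprocess

def run_learner(command, dataset_path):
    """Launch an external algorithm (any language) and read back its result."""
    request = json.dumps({"dataset": dataset_path, "task": "symbolic_regression"})
    result = subprocess.run(command, input=request, capture_output=True,
                            text=True, check=True)
    # Hypothetical reply, e.g. {"model": "0.8*x1 + sin(x2)", "error": 0.07}
    return json.loads(result.stdout)

# model = run_learner(["./gp_gomea_wrapper"], "data/train.csv")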
The three main modules that compose the framework are being extended in WP2-WP4. Several studies were first conducted together with the use cases to understand user needs, and human heuristics were formalized for each of those use cases. Finally, multiple advancements are being made to the genetic programming (GP) algorithms themselves, to maximize both their accuracy and their explainability (e.g. by minimizing model size). These include enhancements to MSGP and GP-GOMEA, as well as extensions such as optimizing constants and considering multiple objectives. These algorithms are also being thoroughly benchmarked against state-of-the-art black-box models, such as neural networks, both on the use cases and on other, more competitive problems (e.g. reinforcement learning).
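The trade-off between accuracy and model size mentioned above can be illustrated with a minimal multi-objective sketch (ours, not the project's code): each GP individual is scored on both its prediction error and the size of its expression tree, and Pareto dominance decides which candidates survive.

def dominates(a, b):
    """True if individual a = (error, size) is no worse than b on both
    objectives and strictly better on at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

# Illustrative population of (prediction error, expression-tree size) pairs.
population = [(0.10, 25), (0.12, 7), (0.10, 30), (0.35, 3)]
pareto_front = [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]
print(pareto_front)  # [(0.1, 25), (0.12, 7), (0.35, 3)]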
The three use cases are being explored by separate teams, but in close collaboration among multiple partners. The diverse set of problems is being approached in innovative ways, which include but are not limited to GP algorithms, and is producing promising results. Additionally, the use cases are generating interesting spillovers to the framework as a whole and to specific modules. For instance, UC1 has prompted ideas for new extensions of the GP algorithms, including learning a complete function class (instead of a single function) and learning multiple functions that complement each other. Another example is model composability, i.e. the ability to use the output of one model as the input of another.
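The composability idea admits a very small sketch; the formulas and variable names below are invented for illustration, and the point is simply that the output of one learned symbolic model becomes an input of another:

def demand_model(price, season):
    """First learned model: forecast demand (illustrative formula)."""
    return 100.0 - 4.0 * price + 10.0 * season

def stock_model(demand, lead_time_days):
    """Second model consumes the first model's output (illustrative formula)."""
    return 1.2 * demand * (lead_time_days / 7.0)

print(stock_model(demand_model(price=5.0, season=1), lead_time_days=14))  # 216.0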
The progress achieved so far, namely on the TRUST-AI modules, is pushing the knowledge frontier. The multiple extensions of the GP algorithms are producing improved results, not least because they address key issues identified in practical problems. The interfaces also involve important innovations, which are essential to promote the human-guided paradigm.
Each use case is producing interesting and promising results in its own application sector. The improvements in the performance and explainability of the AI algorithms have a clear socio-economic impact, as they are applied to the treatment of paraganglioma, the logistics operations of online retail, and the energy consumption of buildings and countries.
Given the flexibility and modularity of the framework, either the framework itself or its individual modules, including the backend, can be used in a variety of projects, integrating or communicating with other well-established algorithms, libraries or packages. This can have a substantial impact on both research and practice, well beyond the learning paradigm and the use cases explored in this project.