CORDIS - EU research results

Transparent, Reliable and Unbiased Smart Tool for AI

Project description

Developing solutions through artificial intelligence

Because existing artificial intelligence (AI) models are black boxes, they are difficult to interpret and therefore to trust. Practical, real-world solutions to this problem cannot come from computer science alone. The EU-funded TRUST-AI project brings human intelligence into the discovery process. It will employ 'explainable-by-design' symbolic models and learning algorithms, and adopt a human-centric, 'guided empirical' learning process that integrates cognitive aspects. The project will develop TRUST, a trustworthy and collaborative AI platform, ensure that it is suited to tackling predictive and prescriptive problems, and then create an innovation ecosystem in which academia and companies can work either independently or together.

Objective

Artificial intelligence is single-handedly changing decision-making at different levels and sectors in often unpredictable and uncontrolled ways. Due to their black-box nature, existing models are difficult to interpret, and hence trust. Explainable AI is an emergent field, but, to ensure no loss of predictive power, many of the proposed approaches just build local explanators on top of powerful black-box models. To change this paradigm and create an equally powerful, yet fully explainable model, we need to be able to learn its structure. However, searching for both structure and parameters is extremely challenging. Moreover, there is the risk that the necessary variables and operators are not provided to the algorithm, which leads to more complex and less general models.
It is clear that state-of-the-art, yet practical, real-world solutions cannot come only from the computer science world. Our approach therefore consists in involving human intelligence in the discovery process, resulting in AI and humans working in concert to find better solutions (i.e. models that are effective, comprehensible and generalisable). This is made possible by employing ‘explainable-by-design’ symbolic models and learning algorithms, and by adopting a human-centric, ‘guided empirical’ learning process that integrates cognition, machine learning and human-machine interaction, ultimately resulting in a Transparent, Reliable and Unbiased Smart Tool.
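The idea of learning both the structure and the parameters of an explainable symbolic model, guided by a human-supplied vocabulary of variables and operators, can be illustrated with a minimal sketch. This is not the project's TRUST platform; it is a toy symbolic-regression search over expression trees, with all names and choices (operator set, search budget) hypothetical:

```python
import operator
import random

# Human-supplied vocabulary: the "guided" part of guided empirical learning.
# If a needed operator or variable is missing here, the search can only
# produce more complex, less general surrogates -- the risk noted above.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
VARS = ["x"]
CONSTS = [1.0, 2.0, 3.0]

def random_expr(depth=2):
    """Sample a random expression tree (nested tuples) over the vocabulary."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + CONSTS)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, env):
    """Evaluate a tree given variable bindings, e.g. env={'x': 2.0}."""
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, env), evaluate(right, env))
    if isinstance(expr, str):
        return env[expr]
    return expr  # numeric constant

def to_str(expr):
    """Render the model as readable infix text -- explainable by design."""
    if isinstance(expr, tuple):
        return f"({to_str(expr[1])} {expr[0]} {to_str(expr[2])})"
    return str(expr)

def fit(xs, ys, trials=5000, seed=0):
    """Random search over structures AND parameters; keep the best tree."""
    random.seed(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        cand = random_expr(depth=2)
        err = sum((evaluate(cand, {"x": x}) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]          # hidden target: 2x + 1
model, err = fit(xs, ys)
print(to_str(model), err)              # the fitted model is human-readable
```

Unlike a post-hoc explanator bolted onto a black box, the output here is itself the model: a formula a domain expert can read, critique, and correct, which is the point of the human-in-the-loop paradigm described above.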
This proposal aims to design TRUST, ensure its adequacy to tackle predictive and prescriptive problems, and create an innovation ecosystem around it, whereby academia and companies can further exploit it, independently or in collaboration. The proposed ‘human-guided symbolic learning’ should be the next ‘go-to paradigm’ for a wide range of sectors, where human agency / accountability is essential. These include healthcare, retail, energy, banking, insurance and public administration (of which the first three are explored in this project).

Call for proposals

H2020-FETPROACT-2019-2020


Sub-call

H2020-EIC-FETPROACT-2019

Coordinator

INESC TEC - INSTITUTO DE ENGENHARIA DE SISTEMAS E COMPUTADORES, TECNOLOGIA E CIENCIA
Net EU contribution
€ 898 750,00
Address
RUA DR ROBERTO FRIAS CAMPUS DA FEUP
4200-465 Porto
Portugal


Region
Continente > Norte > Área Metropolitana do Porto
Activity type
Research Organisations
Total cost
€ 898 750,00

Participants (6)