CORDIS - EU research results

Argumentation-based Deep Interactive EXplanations (ADIX)

Project description

A radical rethinking of explainable artificial intelligence

Artificial intelligence offers new possibilities and a wide range of opportunities across many sectors and fields, including healthcare and the practice of law. The EU-funded ADIX project will make artificial intelligence and its algorithms more explainable and transparent to people from different backgrounds. The overall aim is to empower both scientists and lay people to embrace artificial intelligence and the benefits of machine learning. ADIX will lead to a radical rethinking of explainable AI for a society supported by this technology. To this end, the project will define a novel scientific paradigm of deep, interactive explanations based on computational argumentation, which can be deployed alongside a variety of data-centric AI methods to provide supporting justifications.

Objective

Today’s AI landscape is permeated by plentiful data and dominated by powerful methods with the potential to impact a wide range of human sectors, including healthcare and the practice of law. Yet this potential is hindered by the opacity of most data-centric AI methods, and it is widely acknowledged that AI cannot fully benefit society without addressing its widespread inability to explain its outputs, which causes human mistrust and doubts about its regulatory and ethical compliance. Extensive research efforts are currently devoted to explainable AI, but they mostly focus on engineering shallow, static explanations that provide little transparency on how the explained outputs are obtained and limited opportunities for human insight. ADIX aims to define a novel scientific paradigm of deep, interactive explanations that can be deployed alongside a variety of data-centric AI methods to explain their outputs by providing justifications in their support. These justifications can be progressively questioned by humans, and the outputs of the AI methods refined as a result of human feedback, within explanatory exchanges between humans and machines. This ambitious paradigm will be realised using computational argumentation as the underpinning, unifying theoretical foundation: I will define argumentative abstractions of the inner workings of a variety of data-centric AI methods, from which various explanation types providing argumentative grounds for outputs can be drawn; generate explanatory exchanges between humans and machines from interaction patterns instantiated on the argumentative abstractions and explanation types; and develop argumentative wrappers from human feedback. The novel paradigm will be theoretically defined and informed, and tested by experiments and empirical evaluation; it will lead to a radical rethinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society.
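As a rough illustration of the computational-argumentation foundation the objective refers to (a minimal sketch, not the project's own methods or code), the example below builds a Dung-style abstract argumentation framework, computes its grounded extension, and returns a simple justification for one argument: which arguments attack it and which accepted arguments defend it. All names (grounded_extension, explain) and the toy healthcare scenario are illustrative assumptions.

```python
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all counter-attacked
    by already-accepted arguments (iterated characteristic function)."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # a is acceptable if every attacker is attacked by an accepted argument
            if all(any((d, b) in attacks for d in accepted) for b in attackers):
                accepted.add(a)
                changed = True
    return accepted

def explain(argument, arguments, attacks):
    """Return a simple argumentative justification for an argument:
    its acceptance status, its attackers, and the accepted defenders."""
    ext = grounded_extension(arguments, attacks)
    attackers = {x for (x, y) in attacks if y == argument}
    defenders = {d for d in ext for b in attackers if (d, b) in attacks}
    status = "accepted" if argument in ext else "not accepted"
    return {"argument": argument, "status": status,
            "attacked_by": attackers, "defended_by": defenders}

if __name__ == "__main__":
    # Toy scenario: "treatment T is suitable" (a) is attacked by
    # "patient may be allergic" (b), which is countered by
    # "allergy test was negative" (c).
    arguments = {"a", "b", "c"}
    attacks = {("b", "a"), ("c", "b")}
    print(explain("a", arguments, attacks))
```

Running the sketch reports that "a" is accepted because its only attacker "b" is itself defeated by "c"; in the paradigm described above, such an argumentative ground would be the starting point that a human could then question interactively, rather than a one-shot static explanation.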

Funding scheme

ERC-ADG - Advanced Grant

Host institution

IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Net EU contribution
€ 2 500 000.00
Address
SOUTH KENSINGTON CAMPUS EXHIBITION ROAD
SW7 2AZ LONDON
United Kingdom


Region
London Inner London — West Westminster
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 500 001.25

Beneficiaries (1)