Argumentation-based Deep Interactive EXplanations (ADIX)

Project description

A radical re-thinking of explainable AI

Artificial intelligence (AI) offers a wide range of opportunities in many sectors and fields, including healthcare and the practice of law. The EU-funded ADIX project will make AI and its algorithms more explainable and transparent to people from different backgrounds. The overall aim is to empower both scientists and non-scientists to embrace AI and benefit from machine learning. ADIX will result in a radical re-thinking of explainable AI for an AI-supported society. To this end, the project will define a novel scientific paradigm of deep, interactive explanations, based on computational argumentation, that can be deployed alongside a variety of data-centric AI methods to provide justifications in support of their outputs.

Objective

Today’s AI landscape is permeated by plentiful data and dominated by powerful methods with the potential to impact a wide range of human sectors, including healthcare and the practice of law. Yet this potential is hindered by the opacity of most data-centric AI methods, and it is widely acknowledged that AI cannot fully benefit society without addressing its widespread inability to explain its outputs, which causes human mistrust and doubts regarding its regulatory and ethical compliance. Extensive research efforts are currently devoted to explainable AI, but they are mostly focused on engineering shallow, static explanations that provide little transparency on how the explained outputs are obtained and limited opportunities for human insight.

ADIX aims to define a novel scientific paradigm of deep, interactive explanations that can be deployed alongside a variety of data-centric AI methods to explain their outputs by providing justifications in their support. These justifications can be progressively questioned by humans, and the outputs of the AI methods refined as a result of human feedback, within explanatory exchanges between humans and machines. This ambitious paradigm will be realised using computational argumentation as the underpinning, unifying theoretical foundation: I will (i) define argumentative abstractions of the inner workings of a variety of data-centric AI methods, from which various explanation types providing argumentative grounds for outputs can be drawn; (ii) generate explanatory exchanges between humans and machines from interaction patterns instantiated on the argumentative abstractions and explanation types; and (iii) develop argumentative wrappers from human feedback. The novel paradigm will be theoretically defined, informed and tested by experiments and empirical evaluation, and it will lead to a radical re-thinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society.
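To give a concrete flavour of the kind of formalism the paradigm builds on, the sketch below illustrates Dung-style abstract argumentation, a standard formalism in computational argumentation rather than ADIX's own machinery. Under the assumption of a hypothetical loan-approval scenario with illustrative argument names, it computes the grounded extension, i.e. the set of arguments that withstand all attacks; such a set could serve as an argumentative justification for an AI output, which a human could then question by challenging individual arguments or attacks.

```python
# Minimal sketch of a Dung-style abstract argumentation framework, assuming a
# hypothetical loan-approval scenario; the argument names and attack relation
# are illustrative only and are not taken from ADIX itself.
from typing import Dict, Set, Tuple


def grounded_extension(arguments: Set[str], attacks: Set[Tuple[str, str]]) -> Set[str]:
    """Return the grounded extension: the least set of arguments containing every
    argument all of whose attackers are defeated by arguments already in the set."""
    attackers: Dict[str, Set[str]] = {a: set() for a in arguments}
    for source, target in attacks:
        attackers[target].add(source)

    accepted: Set[str] = set()
    while True:
        # Arguments defeated by something currently accepted.
        defeated = {a for a in arguments if attackers[a] & accepted}
        # Arguments all of whose attackers are defeated (unattacked ones included).
        new_accepted = {a for a in arguments if attackers[a] <= defeated}
        if new_accepted == accepted:
            return accepted
        accepted = new_accepted


# Hypothetical argumentative abstraction of a classifier's "approve the loan" output.
arguments = {"approve", "low_income", "stable_job"}
attacks = {
    ("low_income", "approve"),     # low income argues against approval
    ("stable_job", "low_income"),  # stable employment counters the income concern
}

justified = grounded_extension(arguments, attacks)
print(sorted(justified))  # ['approve', 'stable_job'] -> "approve" is argumentatively justified
```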

Host institution

IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Net EU contribution
€ 2 500 000,00
Address
SOUTH KENSINGTON CAMPUS EXHIBITION ROAD
SW7 2AZ LONDON
United Kingdom

Region
London > Inner London — West > Westminster
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 500 001,25

Beneficiaries (1)