
RRR-XAI: Right for the Right Reason eXplainable Artificial Intelligence

Periodic Reporting for period 1 - RRR-XAI (RRR-XAI: Right for the Right Reason eXplainable Artificial Intelligence)

Reporting period: 2022-11-01 to 2024-10-31

The overall purpose of RRR-XAI was to make deep learning (DL) explainable under the right-for-the-right-reasons (RRR) philosophy, by creating explanations endorsed by the reasoning of a domain expert, using the X-NeSyL (eXplainable Neural-Symbolic Learning) methodology. To achieve this, I followed the rationale behind XAI under the RRR philosophy: first, performing analyses to understand two types of phenomena that cause trouble in deep neural networks (DNNs); second, using neural-symbolic (NeSy) computation to communicate such phenomena.
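As an illustration of what training under the RRR philosophy can look like, the following minimal sketch (PyTorch-style Python) penalizes input gradients that fall outside an expert-provided relevance mask, in the spirit of the well-known "right for the right reasons" regularizer; the model, mask and weighting term lambda_rrr are illustrative assumptions, not the project's actual implementation.

# Minimal RRR-style regularizer sketch: the classification loss is augmented
# with a penalty on input gradients falling OUTSIDE an expert-annotated
# relevance mask, pushing the network to rely on the "right reasons".
# Model, data, mask and lambda_rrr are illustrative assumptions.
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, relevant_mask, lambda_rrr=10.0):
    """x: input batch; y: labels; relevant_mask: 1 where an expert deems the
    input relevant, 0 elsewhere (same shape as x)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)  # "right answer" term
    # Gradient of the summed log-probabilities w.r.t. the input (the "reasons").
    grads = torch.autograd.grad(F.log_softmax(logits, dim=1).sum(),
                                x, create_graph=True)[0]
    # Penalize attribution mass on regions the expert marked as irrelevant.
    wrong_reason = ((1.0 - relevant_mask) * grads).pow(2).sum()
    return ce + lambda_rrr * wrong_reason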



Research and innovation objectives included:

Objective 1.1: To understand the phenomenon. Making DL explainable consists of being able to pinpoint why a DNN associates an output with a given input. Could we use inherent properties of the data to diagnose why a DNN produces an output y for a given input x? The aim is to design algorithms based on instance-quality measures of the data, as a proxy to guarantee transparency and to gain an understanding of such mechanisms. The hypothesis to test in O1.1 is that data preprocessing procedures and intrinsic aspects of the data can highlight the provenance of bias, or of the learning shortcuts often exploited by DL models, and that these can be measured with eXplainable AI (XAI) metrics.
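As a minimal sketch of the kind of instance-quality diagnosis Objective 1.1 refers to (assuming a tabular setting with numpy arrays; the k-NN label-disagreement measure and the Spearman correlation are illustrative choices, not the project's actual XAI metrics), one could relate a per-instance data-quality proxy to the model's behaviour as follows:

# Hedged sketch for Objective 1.1: a simple instance-quality measure
# (k-NN label disagreement, a proxy for noisy, biased or atypical examples)
# is correlated with the model's confidence to check whether errors trace
# back to intrinsic data issues. All choices here are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.stats import spearmanr

def knn_disagreement(X, y, k=10):
    """Fraction of each instance's k nearest neighbours carrying a different
    label; high values flag hard, mislabeled or boundary instances."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbour_labels = y[idx[:, 1:]]  # drop the instance itself
    return (neighbour_labels != y[:, None]).mean(axis=1)

def diagnose(X, y, confidence_on_true_class):
    """Correlate the data-quality proxy with model confidence: a strong
    negative correlation suggests the DNN struggles on intrinsically
    problematic instances rather than on clean ones."""
    hardness = knn_disagreement(X, y)
    rho, p_value = spearmanr(hardness, confidence_on_true_class)
    return hardness, rho, p_value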

Objective 1.2: To communicate the phenomenon. Once the root cause leading a DNN to associate an output with a given input has been identified, what if we could interrogate it, or "let the DNN talk"? The objective is to be able to convey the constraint that led the DNN to such an input-output association to different audiences, using high-level (symbolic, relational) concepts in natural language. The hypothesis here is that 1) a natural language explanation (NLE) can synthesize a formal (logical, causal or counterfactual) explanation while sacrificing little-to-no performance; and 2) this NLE can be accurate enough to correct the model's critical and unfair errors.
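As a hedged sketch of the NLE idea in Objective 1.2 (the greedy single-feature counterfactual search and the sentence template are illustrative assumptions, not the project's actual NeSy pipeline), a formal counterfactual explanation can be verbalized into natural language as follows:

# Hedged sketch for Objective 1.2: compute a simple counterfactual for a
# tabular classifier and synthesize it into a template-based natural
# language explanation (NLE). All names and choices are illustrative.
import numpy as np

def greedy_counterfactual(predict, x, target, feature_names, steps=(-1.0, 1.0)):
    """Try single-feature perturbations until the prediction flips to `target`;
    returns (feature_name, delta) or None. `predict` maps a 1-D array to a class."""
    for i, name in enumerate(feature_names):
        for delta in steps:
            x_cf = x.copy()
            x_cf[i] += delta
            if predict(x_cf) == target:
                return name, delta
    return None

def verbalize(result, predicted, target):
    """Turn the formal counterfactual into a short natural language explanation."""
    if result is None:
        return (f"The model predicts '{predicted}'; no single-feature change "
                f"of this size flips the prediction to '{target}'.")
    name, delta = result
    direction = "increased" if delta > 0 else "decreased"
    return (f"The model predicts '{predicted}' because, had '{name}' been "
            f"{direction} by {abs(delta)}, it would have predicted '{target}' instead.")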
The results achieved include the following contributions:
• A holistic vision of trustworthy AI with principles for ethical use and development.
• A philosophical reflection on AI Ethics.
• An analysis of regulatory efforts around trustworthy AI focused on the European AI Act.
• An examination of the fundamental pillars and requirements for trustworthy AI.
• A definition of responsible AI systems and the role of regulatory sandboxes.

In particular, these results are collected and summarized in the following publications:

- S Ali, T Abuhmed, S El-Sappagh, K Muhammad, JM Alonso-Moral, R Confalonieri, R Guidotti, J Del Ser, N Díaz-Rodríguez, F Herrera (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion 99, 101805 https://doi.org/10.1016/j.inffus.2023.101805
- Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation. Information Fusion, 101896. https://doi.org/10.1016/j.inffus.2023.101896
- Díaz-Rodríguez, N., Binkyté, R., Bakkali, W., Bookseller, S., Tubaro, P., Bacevičius, A., Zhioua, S., & Chatila, R. (2023). Gender and sex bias in COVID-19 epidemiological data through the lenses of causality. Information Processing & Management, 60(3), Article 103276. https://doi.org/10.1016/j.ipm.2023.103276


Additionally, the following publications complement the latter, due to their relation with responsible AI systems:

- Ontologies4SDGs: un repositorio abierto de recursos didácticos para la enseñanza de inteligencia artificial simbólica alineando objetivos docentes con los Objetivos de Desarrollo Sostenible [Ontologies4SDGs: an open repository of teaching resources for symbolic artificial intelligence education, aligning teaching objectives with the Sustainable Development Goals]. N. Díaz Rodríguez, I. J. Pérez, J. Gómez Romero, J. L. Castro Peña, I. Bloch, 2023.
- Responsible and human centric AI-based insurance advisors. G. Pisoni, N. Díaz-Rodríguez. Information Processing & Management, 60(3), 103273, 2023.
- Towards a more efficient computation of individual attribute and policy contribution for post-hoc explanation of cooperative multi-agent systems using Myerson values. G. Angelotti, N. Díaz-Rodríguez. Knowledge-Based Systems, 260, 110189, 2023.


Note: Despite the early termination of the project, the work remains in progress, as this line of research is perfectly aligned with the objectives of the PhD theses of two students I currently supervise.
Significant activities completed supporting the achievement of part of the project objectives include:
- The application for national AI Chairs from ENIA (Estrategia Nacional de Inteligencia Artificial, the Spanish Government's National AI Strategy).
- Obtaining project funding from INCIBE (Spanish Institute of Cybersecurity) to carry out trustworthy AI projects applied to cybersecurity.
- A joint publication with the recently established ADIA Lab Europe, whose headquarters are set to open in Granada, Spain.

In addition, other emerging projects aligned with the objectives of RRR-XAI have been promoted and are currently ongoing, for instance:

- Direction of the project "Explainable AI as an interface for algorithmic auditing", funded by a BBVA Foundation Leonardo Grant (from March 2023).

- Participation in the COTEC working group on generative AI (https://cotec.es/en), from spring 2023.

Other scientific dissemination activities related to the project have been carried out, such as:

- Invited talk at the IES Mariana Pineda high school in Granada to encourage careers in STEM on the International Day of Women and Girls in Science: "Potenciales, Riesgos y Desafíos de la Inteligencia Artificial" (Potentials, Risks and Challenges of Artificial Intelligence). This science dissemination initiative is organized by the Scientific Culture Unit of the Vice-Rectorate for University Extension and Heritage (Unidad de Cultura Científica, Vicerrectorado de Extensión Universitaria y Patrimonio, UGR). 1 February 2023. https://sites.google.com/view/nataliadiaz/news?authuser=0
- AIBot Experience at PTS Granada, a robotics and AI dissemination activity for school children ("El reto tecnológico para el talento del futuro", the technological challenge for future talent; https://www.aimpulsa.com/aibot-experience/).
- Organization of the Spanish conference Andaluz.IA.
- Media interviews and further science dissemination in written and video media: https://sites.google.com/view/nataliadiaz/media?authuser=0
-- For instance, an interview on the Spanish national TV channel news on 8 August 2023 (minute 9:08 to 16:26).
- More information on research activities is available in the project repository (https://github.com/NataliaDiaz/rrr-xai), on the project website (https://sites.google.com/view/nataliadiaz/projects/rrr-xai-right-for-the-right-reasons-explainable-artificial-intelligence?authuser=0), and on the personal website (https://sites.google.com/view/nataliadiaz).
Figure: Holistic vision of trustworthy AI
Figure: Pre- and post-market auditing and the role of sandboxes