CORDIS - EU research results

Interactive Natural Language Technology for Explainable Artificial Intelligence

Periodic Reporting for period 2 - NL4XAI (Interactive Natural Language Technology for Explainable Artificial Intelligence)

Reporting period: 2021-10-01 to 2024-09-30

This is the first European Training Network (ETN) on Natural Language (NL) and Explainable AI (XAI). This four-year European project brings together 21 institutions, including 13 academic institutions and 8 private companies, creating a vibrant ecosystem for collaboration.
The main goal is to train 11 creative, entrepreneurial, and innovative early-stage researchers (ESRs) who face the challenge of designing and implementing self-explanatory AI systems. ESRs benefit from a broad program of training events and opportunities, ranging from network-wide events to courses covering technical and scientific domains as well as transferable skills. Each ESR works on an individual research project at one of the network’s host organizations and takes part in network-wide training events and meetings, as well as in secondments to other beneficiaries or partners.
As a result, ESRs are prepared to design and build transparent and trustworthy AI systems that generate interactive explanations based on NL and visual tools. These explanations are intuitively understandable by everyone, including non-expert users, are validated by humans in specific use cases, and are accessible to all European citizens. Specifically, four ESRs design and develop XAI models; two ESRs enhance NL technology for XAI; two ESRs exploit argumentation technology for XAI; and three ESRs develop interactive interfaces for XAI.
It is worth noting that the development of NL4XAI systems addresses technical issues (i.e. designing explainable algorithms and human-machine interfaces) as well as ethical, legal, socio-economic, and cultural issues. Moreover, their validation complies with the EU General Data Protection Regulation (GDPR) and the EU regulation for AI (AI Act) and aligns with the seven requirements (Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity, Non-discrimination and Fairness; Societal and Environmental Well-being; Accountability) included in the EU Ethics Guidelines and Assessment List for Trustworthy AI.
In the project’s first two years, activities focused on establishing management systems, recruiting ESRs, and initiating scientific and training work. ESRs received foundational training in XAI and complementary skills and reviewed the state of the art in XAI models, Natural Language Generation (NLG) systems, argumentation technology (ARGTECH) for XAI, and interactive interfaces for XAI.
The second half of the project faced challenges, including the COVID-19 pandemic, Brexit complications, and ESR resignations, necessitating an amendment to the Grant Agreement. Through collaborative efforts and adaptability, all objectives, deliverables, and milestones were successfully met.
A total of 56 deliverables were successfully submitted. This includes 25 technical deliverables (WP1-4), which encompass the core research outputs of the project, and 31 transversal deliverables (WP5-8) related to the structured training plan, dissemination, exploitation, outreach activities, network management, and ethics requirements. Regarding the milestones, all were successfully reached, with a total of 16 milestones achieved throughout the course of the project.
A total of 30 secondments, equivalent to approximately 87 months, were conducted within the framework of the project across seven different countries (Spain, Malta, Poland, the Netherlands, France, Germany, and the United Kingdom).
A total of eight events were held in five different countries (Spain, Malta, the Netherlands, France, and Poland). Additionally, two joint events were conducted with other MSCA ITN projects, HYBRIDS and NoBias, fostering further collaboration and knowledge exchange.
Throughout the project’s duration, a total of 60 publications were produced, including 11 articles published in scientific journals, 36 in scientific conference proceedings or workshops, 4 book chapters, 5 dissertations, and 4 other publications. In addition, ESRs actively participated in multiple scientific poster sessions and various other events, further enhancing the dissemination of their research findings.
It is also worth noting that the consortium organized the first two workshops on Interactive Natural Language Technology for Explainable Artificial Intelligence (XAI@INLG2019 and XAI@INLG2020), and NL4XAI researchers also took part in organizing the first workshop on Multimodal, Affective and Interactive eXplainable AI (MAI-XAI) and the first workshop on Implementing AI Ethics through a Behavioural Lens (AIEB) at the European Conference on AI (ECAI 2024). These workshops established discussion forums on the automatic generation of interactive explanations in natural language, as humans naturally produce them, in line with ethical and legal requirements. The audience at these workshops comprised early-stage and senior researchers as well as practitioners interested in enabling the next generation of XAI systems.
NL4XAI made progress beyond the state of the art by:
•Generating and evaluating new self-explaining AI systems: ESR1 developed a transparent additive AI model, proposed a post-hoc method for explaining opaque models in terms of surrogate transparent AI models, and implemented both a model-agnostic SHAP-based metric to assess understanding of XAI models and a metric for evaluating the faithfulness of feature attribution methods. ESR2 and ESR3 implemented a technique for explaining Bayesian networks. ESR3 also produced interpretable-by-design decision trees. These techniques were validated in user studies in the medical domain. ESR4 focused on explaining logical formulas.
•Enhancing NL Technology for XAI: ESR5 and ESR6 developed benchmarks to assess the grounding capabilities of deep neural models for NL understanding and generation. They contributed to a better understanding of how knowledge can be verbalized and how to fix problems of omission and distortion in such verbalization.
•Exploiting Argumentation Technology for XAI: ESR7 and ESR8 incorporated explainability into multi-agent system design by exploring an argumentative, agent-oriented approach that exploits the rich tradition of formally modeling human communication in multi-agent systems.
•Developing Interactive Interfaces for XAI: ESR9 implemented lexical alignment to achieve personalized verbal explanations through conversational agents, contributing to a better understanding of the effects of alignment on users' comprehension and trust. ESR10 investigated how to mitigate human biases through different interaction strategies in user studies, gaining insights into the risks and benefits of interventions that guide search behavior and empower searchers. ESR11 researched AI policy and governance to offer new ethics-based perspectives on the interaction between humans and AI systems.
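The surrogate-model approach mentioned for ESR1 above can be illustrated with a minimal, generic sketch (not the project's actual implementation; all model and variable names are illustrative): an opaque model is approximated by an interpretable decision tree trained on the opaque model's own predictions, so the tree's rules serve as a human-readable explanation of the black box.

```python
# Hypothetical sketch of post-hoc explanation via a transparent surrogate model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "opaque" model whose behaviour we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the opaque model's *predictions*, not the true labels,
# so the tree mimics the black box rather than the original data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.2f}")

# Human-readable rules that act as the explanation of the opaque model.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

The key design choice is the fidelity measure: a surrogate is only a trustworthy explanation to the extent that it reproduces the opaque model's behaviour on the data of interest.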
The project's socio-economic and societal impact includes:
•Enhanced Skills and Career Development: NL4XAI provided training through workshops, secondments, and career plans, preparing ESRs for impactful careers in academia and industry.
•Strengthened Academia-Industry Collaboration: The consortium fostered knowledge transfer, enhanced doctoral training, and created opportunities for future partnerships.
•Advancement of the European Research Area: The project strengthened ties between the ERA and the European Higher Education Area, promoting excellence in training and attracting global talent.