CORDIS - EU research results

Efficient Explainable Learning on Knowledge Graphs

Project description

Improved knowledge graphs key to working with artificial intelligence data more efficiently

Explainable artificial intelligence is a set of tools and processes that allows humans to understand and interpret the predictions made by machine learning models. Knowledge graphs, combined with artificial intelligence, can improve the accuracy and trustworthiness of model outcomes. However, current knowledge graphs are limited in their ability to map complex, interconnected data at scale. Building upon recent advances in knowledge representation and artificial intelligence, the EU-funded ENEXA project will develop scalable, transparent and explainable machine learning algorithms for knowledge graphs. The focus will be on devising human-centred explainability techniques based on co-construction, where humans and machines enter into a conversation to jointly produce human-understandable explanations. To validate the proposed approaches, researchers will cover three use cases: business software services, geospatial intelligence and data-driven brand communications.

Objective

Explainable Artificial Intelligence (AI) is key to achieving a human-centred and ethical development of digital and industrial solutions. ENEXA builds upon novel and promising results in knowledge representation and machine learning to develop scalable, transparent and explainable machine learning algorithms for knowledge graphs. The project focuses on knowledge graphs because of their critical role as an enabler of new solutions across domains and industries in Europe. Some existing machine learning approaches for knowledge graphs already provide guarantees with respect to their completeness and correctness. However, they remain impossible or impractical to deploy on real-world data because of the scale, incompleteness and inconsistency of knowledge graphs in the wild. We devise approaches that maintain formal guarantees of completeness and correctness while being able to exploit different representations of knowledge graphs concurrently. With our new methods, we plan to achieve significant advances in the efficiency and scalability of machine learning, especially on knowledge graphs. A further innovation of ENEXA lies in its approach to explainability: we focus on devising human-centred explainability techniques based on the concept of co-construction, where human and machine enter into a conversation to jointly produce human-understandable explanations. Three use cases on business software services, geospatial intelligence and data-driven brand communication have been chosen to apply and validate this new approach. Given their expected growth rates, these sectors will play a major role in future European data value chains.
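To make the general idea concrete, the following is a minimal, purely illustrative sketch of explainable learning on a knowledge graph; it is not ENEXA code, and all entity names, properties and data are invented for the example. A knowledge graph is represented as subject-predicate-object triples, and a symbolic class expression (in the spirit of "Person and worksAt some University") is evaluated against it. Because the expression is itself human-readable, it can serve as an explanation, and its completeness and correctness can be checked against labelled examples.

```python
# Illustrative only: a toy knowledge graph as (subject, predicate, object) triples.
# None of this is ENEXA code; names and data are hypothetical.
triples = {
    ("alice", "worksAt", "uni_paderborn"),
    ("alice", "type", "Person"),
    ("bob", "type", "Person"),
    ("bob", "worksAt", "acme_corp"),
    ("uni_paderborn", "type", "University"),
    ("acme_corp", "type", "Company"),
}

def instances(cls):
    """Entities asserted to belong to a class."""
    return {s for (s, p, o) in triples if p == "type" and o == cls}

def related(entity, prop):
    """Objects reachable from an entity via a property."""
    return {o for (s, p, o) in triples if s == entity and p == prop}

# A human-readable candidate explanation in description-logic style:
# "Person and worksAt some University".
def explanation(entity):
    return entity in instances("Person") and any(
        t in instances("University") for t in related(entity, "worksAt")
    )

# Check the explanation against labelled examples: it should cover every
# positive example (completeness) and no negative one (correctness).
positives, negatives = {"alice"}, {"bob"}
covered = {e for e in positives | negatives if explanation(e)}
print("covered:", covered)                    # {'alice'}
print("complete:", positives <= covered)      # True
print("correct:", not (covered & negatives))  # True
```

In this toy setting the guarantees are trivial to verify; the challenge addressed by the project is obtaining such guarantees on large, incomplete and inconsistent real-world knowledge graphs, and refining the resulting explanations together with a human user.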

Coordinator

UNIVERSITAET PADERBORN
Net EU contribution
€ 1 279 625,00
Address
WARBURGER STRASSE 100
33098 Paderborn
Germany


Region
Nordrhein-Westfalen > Detmold > Paderborn
Activity type
Higher or Secondary Education Establishments
Total cost
€ 1 279 625,00

Participants (5)