Project description

Automatically predicting what's a fact and what's not

The expanded reach of the internet and media, together with impactful recent events, has made it necessary to verify facts online quickly and easily. Unfortunately, complicating factors such as the sheer volume of data mean that even machine-learning-based fact checking struggles either to perform this task efficiently or to explain how it arrives at a verdict. Enter the European Research Council-funded ExplainYourself project. Because automatic fact-checking methods often rely on opaque deep neural networks, and because existing approaches cannot produce diverse explanations geared towards users with different information needs, the project will develop explainable fact checking.

Objective

ExplainYourself proposes to study explainable automatic fact checking: the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations of how the model arrived at the prediction. Automatic fact-checking methods often use opaque deep neural network models whose inner workings cannot easily be explained. Especially for complex tasks such as automatic fact checking, this hinders greater adoption, as it is unclear to users when the models' predictions can be trusted. Existing explainable ML methods partly overcome this by reducing the task of explanation generation to highlighting the right rationale. While a good first step, this does not fully explain how an ML model arrived at a prediction. For knowledge-intensive natural language understanding (NLU) tasks such as fact checking, an ML model needs to learn complex relationships between the claim, multiple evidence documents and common-sense knowledge, in addition to retrieving the right evidence. There is currently no explainability method that aims to illuminate this highly complex process. In addition, existing approaches are unable to produce diverse explanations geared towards users with different information needs.

ExplainYourself radically departs from existing work by proposing methods for explainable fact checking that more accurately reflect how fact-checking models make decisions and that are useful to diverse groups of end users. These innovations are expected to carry over to explanation generation for other knowledge-intensive NLU tasks, such as question answering or entity linking. To achieve this, ExplainYourself builds on my pioneering work on explainable fact checking as well as my interdisciplinary expertise.
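To make the idea of rationale-based explanations mentioned above more concrete, the minimal sketch below shows how explanation generation can be reduced to highlighting the evidence sentence most relevant to a claim and attaching it to the veracity verdict. The token-overlap scorer, the fixed decision rule and all function names are illustrative assumptions for this fact sheet, not the project's actual models.

# Minimal sketch of rationale-style explainable fact checking: score candidate
# evidence sentences against a claim, highlight the top-scoring sentence as the
# rationale, and return it alongside the verdict. The heuristics below are
# placeholders for trained retrieval and classification models.

def token_overlap(claim: str, sentence: str) -> float:
    """Crude relevance score: fraction of claim tokens found in the sentence."""
    claim_tokens = set(claim.lower().split())
    sent_tokens = set(sentence.lower().split())
    return len(claim_tokens & sent_tokens) / max(len(claim_tokens), 1)

def check_claim(claim: str, evidence: list[str]) -> dict:
    """Return a veracity verdict together with the highlighted rationale."""
    scored = sorted(evidence, key=lambda s: token_overlap(claim, s), reverse=True)
    rationale = scored[0]
    # Placeholder decision rule; a real system would use a trained classifier
    # over the claim and the retrieved evidence.
    verdict = "SUPPORTED" if "not" not in rationale.lower() else "REFUTED"
    return {"claim": claim, "verdict": verdict, "rationale": rationale}

if __name__ == "__main__":
    claim = "The Eiffel Tower is located in Paris"
    evidence = [
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "Gustave Eiffel's company designed and built the tower.",
    ]
    print(check_claim(claim, evidence))

A rationale selected this way tells a user which evidence drove the verdict, but, as the objective notes, it does not capture the full reasoning over multiple documents and common-sense knowledge that the project aims to explain.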
Fields of science
natural sciences > computer and information sciences > data science > natural language processing
natural sciences > computer and information sciences > artificial intelligence > machine learning
natural sciences > computer and information sciences > artificial intelligence > computational intelligence

Keywords
Explainable AI, Automatic Fact Checking, Natural Language Generation, Personalised Machine Learning, Human-Computer Interaction

Programme(s)
HORIZON.1.1 - European Research Council (ERC) Main Programme

Topic(s)
ERC-2022-STG - ERC STARTING GRANTS

Call for proposal
ERC-2022-STG

Funding Scheme
ERC - Support for frontier research (ERC)

Coordinator
KOBENHAVNS UNIVERSITET
Net EU contribution: € 1 498 616,00
Address: Norregade 10, 1165 Kobenhavn, Denmark
Region: Danmark > Hovedstaden > Byen København
Activity type: Higher or Secondary Education Establishments
Other funding: € 0,00