CORDIS - EU research results

Modelling Text as a Living Object in Cross-Document Context

Project description

Groundbreaking framework to automatically analyse relationships between texts

Natural language processing (NLP) does not yet support the analysis of fine-grained relationships between texts, known as intertextual relationships. Closing this gap is a crucial milestone for AI: it would make it possible to analyse the origin and evolution of texts and ideas, and would enable new applications of AI to text-based collaboration, from education to business. Funded by the European Research Council, the InterText project is developing the first framework for exploring intertextuality in NLP. InterText will develop conceptual and applied models and datasets for the study of inline commentary, implicit linking and document versioning. The models will be evaluated in two case studies: academic peer review and conspiracy theory debunking.

Objective

Interpreting text in the context of other texts is hard: it requires understanding the fine-grained semantic relationships between documents, known as intertextual relationships. This capability is critical in many areas of human activity, including research, business, and journalism. Yet finding and interpreting intertextual relationships and tracing information across heterogeneous sources remains a tedious manual task. Natural language processing (NLP) does not adequately support it: mainstream NLP treats texts as static, isolated entities, and existing approaches to cross-document understanding focus on narrow use cases and lack a common theoretical foundation. Data is scarce and difficult to create, and the field lacks a principled framework for modelling intertextuality.

InterText breaks new ground by proposing the first general framework for studying intertextuality in NLP. We instantiate this framework for three types of intertextuality: inline commentary, implicit linking, and semantic versioning, producing new datasets and generalizable models for each. Rather than treating text as a flat sequence of words, we introduce a data model that naturally reflects document structure and cross-document relationships, and we use this data model to create novel, intertextuality-aware neural representations of text. Whereas prior work ignores the similarities between different types of intertextuality, we target their synergies, offering solutions that scale to a wide range of tasks and across domains. To enable modular and efficient transfer learning, we propose new document-level adapter-based architectures. We investigate the integrative properties of our framework in two case studies: academic peer review and conspiracy theory debunking. InterText thus creates a solid research platform for intertextuality-aware NLP, crucial for managing the dynamic, interconnected digital discourse of today.
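The data model and the three intertextuality types lend themselves to a graph view in which nodes are documents or sub-document spans and typed edges capture cross-document relationships. The Python sketch below illustrates one possible shape for such a structure; it is a minimal illustration under that assumption, not the project's actual implementation, and all names (IntertextualGraph, Span, RelationType, and so on) are hypothetical.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List, Tuple


    class RelationType(Enum):
        """The three intertextuality types targeted by the framework."""
        INLINE_COMMENTARY = "inline_commentary"  # e.g. a note anchored to a passage
        IMPLICIT_LINK = "implicit_link"          # e.g. an unmarked reference to another text
        SEMANTIC_VERSION = "semantic_version"    # e.g. a revised passage superseding an older one


    @dataclass
    class Span:
        """A contiguous region of a document, addressed by character offsets."""
        doc_id: str
        start: int
        end: int


    @dataclass
    class Relation:
        """A typed, directed edge between spans in (possibly different) documents."""
        source: Span
        target: Span
        rel_type: RelationType


    @dataclass
    class IntertextualGraph:
        """Documents plus the cross-document relations that connect them."""
        documents: Dict[str, str] = field(default_factory=dict)
        relations: List[Relation] = field(default_factory=list)

        def add_document(self, doc_id: str, text: str) -> None:
            self.documents[doc_id] = text

        def link(self, source: Span, target: Span, rel_type: RelationType) -> None:
            self.relations.append(Relation(source, target, rel_type))

        def text_of(self, span: Span) -> str:
            return self.documents[span.doc_id][span.start:span.end]

        def neighbours(self, doc_id: str) -> List[Tuple[RelationType, Span]]:
            """All spans related to spans of the given document."""
            return [(r.rel_type, r.target) for r in self.relations
                    if r.source.doc_id == doc_id]


    # Example: an inline comment on a draft, and a later revision of the same claim.
    graph = IntertextualGraph()
    graph.add_document("draft_v1", "The method outperforms all baselines.")
    graph.add_document("review_1", "This claim needs evidence.")
    graph.add_document("draft_v2", "The method outperforms strong baselines on two datasets.")

    claim = Span("draft_v1", 0, 37)
    graph.link(Span("review_1", 0, 25), claim, RelationType.INLINE_COMMENTARY)
    graph.link(Span("draft_v2", 0, 56), claim, RelationType.SEMANTIC_VERSION)

    for rel_type, span in graph.neighbours("review_1"):
        print(rel_type.value, "->", graph.text_of(span))

Anchoring relations at the span level rather than the document level is what would let a single structure cover inline commentary, implicit linking, and versioning alike, mirroring the synergies between intertextuality types that the project targets.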

Host institution

TECHNISCHE UNIVERSITÄT DARMSTADT
Net EU contribution
€ 2 499 721,00
Address
KAROLINENPLATZ 5
64289 Darmstadt
Germany

Region
Hessen > Darmstadt > Darmstadt, Kreisfreie Stadt
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 499 721,00
