
Modelling Text as a Living Object in Cross-Document Context

Project description

An innovative framework for automatically analysing relationships between texts

Natural language processing (NLP) does not support the analysis of fine-grained relationships between texts: intertextual relationships. This is a crucial milestone for artificial intelligence (AI), as it would make it possible to analyse the origin and evolution of texts and ideas, and would enable new AI applications in text-based collaboration, from education to business. Funded by the European Research Council, the InterText project team is developing the first framework for exploring intertextuality in NLP. The team will develop conceptual and applied models and datasets for the study of inline commentary, implicit linking, and document versioning. The models will be evaluated in two case studies involving academic peer review and conspiracy theory debunking.

Objective

Interpreting text in the context of other texts is very hard: it requires understanding the fine-grained semantic relationships between documents called intertextual relationships. This is critical in many areas of human activity, including research, business, journalism, and others. However, finding and interpreting intertextual relationships and tracing information throughout heterogeneous sources remains a tedious manual task. Natural language processing (NLP) fails to adequately support it: mainstream NLP considers texts as static, isolated entities, and existing approaches to cross-document understanding focus on narrow use cases and lack a common, theoretical foundation. Data is scarce and difficult to create, and the field lacks a principled framework for modelling intertextuality.

InterText breaks new ground by proposing the first general framework for studying intertextuality in NLP. We instantiate our framework in three intertextuality types: inline commentary, implicit linking, and semantic versioning. We produce new datasets and generalizable models for each of them. Rather than treating text as a sequence of words, we introduce a new data model that naturally reflects document structure and cross-document relationships. We use this data model to create novel, intertextuality-aware neural representations of text. While prior work ignores similarities between different types of intertextuality, we target their synergies. Thus, we offer solutions that scale to a wide range of tasks and across domains. To enable modular and efficient transfer learning, we propose new document-level adapter-based architectures. We investigate integrative properties of our framework in two case studies: academic peer review and conspiracy theory debunking. InterText creates a solid research platform for intertextuality-aware NLP crucial for managing the dynamic, interconnected digital discourse of today.
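The objective mentions a data model that reflects document structure and cross-document relationships, but the summary does not fix a concrete schema. The Python sketch below is only one plausible illustration of such a model; all class and field names (Document, Node, IntertextualLink, Span) are hypothetical and are not taken from the project.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    """A contiguous character range inside a document node."""
    start: int
    end: int

@dataclass
class Node:
    """A structural unit of a document, e.g. a section or paragraph."""
    node_id: str
    kind: str                      # e.g. "section", "paragraph"
    text: str
    children: List["Node"] = field(default_factory=list)

@dataclass
class Document:
    """A document as a tree of structural nodes plus version metadata."""
    doc_id: str
    version: int
    root: Node

@dataclass
class IntertextualLink:
    """A typed, fine-grained relationship between nodes of two documents."""
    link_type: str                 # e.g. "inline-commentary", "implicit-link", "revision"
    source_doc: str
    source_node: str
    target_doc: str
    target_node: str
    source_span: Optional[Span] = None
    target_span: Optional[Span] = None

# Hypothetical usage: an inline comment from a review anchored to a paragraph of a paper draft.
paper = Document(
    doc_id="paper-v1", version=1,
    root=Node("root", "section", "", children=[Node("p3", "paragraph", "We fine-tune ...")]),
)
review = Document(
    doc_id="review-1", version=1,
    root=Node("root", "section", "", children=[Node("c1", "paragraph", "Please clarify ...")]),
)
link = IntertextualLink(
    link_type="inline-commentary",
    source_doc=review.doc_id, source_node="c1",
    target_doc=paper.doc_id, target_node="p3",
    target_span=Span(0, 15),
)
print(link)

In this reading, the three intertextuality types named in the objective (inline commentary, implicit linking, semantic versioning) would differ only in the link_type and in whether spans or whole documents are related, which is what allows a single data model to cover all of them.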

Host institution

TECHNISCHE UNIVERSITAT DARMSTADT
Net EU contribution
€ 2 499 721,00
Address
KAROLINENPLATZ 5
64289 Darmstadt
Germany


Region
Hessen > Darmstadt > Darmstadt, Kreisfreie Stadt
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 499 721,00

Beneficiaries (1)