Periodic Reporting for period 1 - HYBRIDS (Hybrid Intelligence to monitor, promote and analyse transformations in good democracy practices)
Reporting period: 2023-01-01 to 2024-12-31
The HYBRIDS project, funded under the Marie Skłodowska-Curie Actions (MSCA), aims to build hybrid intelligence systems that integrate structured knowledge from the social sciences and humanities (SSH) into large language models (LLMs), yielding new neuro-symbolic natural language processing systems for disinformation detection. This interdisciplinary approach will enhance the capacity to detect, analyze, and mitigate online disinformation and hate speech while ensuring transparency, explainability, and adaptability to evolving digital landscapes.
The main objectives of HYBRIDS are:
• To advance AI-based public discourse analysis by incorporating human reasoning, argumentation models, and qualitative insights.
• To improve AI disinformation detection by integrating knowledge-driven methodologies from social and human sciences into LLMs.
• To train a new generation of interdisciplinary experts capable of designing and deploying hybrid AI solutions for social impact.
• To contribute to the European strategy against disinformation, supporting fact-checking initiatives, human-centered AI development, and policy-making efforts.
The main achievements during this reporting period include:
• Research & Scientific Contributions: The project has reviewed the state of the art in NLP-based disinformation detection, argument mining, and discourse analysis. It has also developed new datasets and experimental methodologies to assess political bias, rhetoric, and misinformation patterns.
• Hybrid AI Methods: Researchers have begun designing explainable AI models that integrate linguistic, argumentative, and symbolic reasoning approaches. The project has also developed computational tools to analyze polarization dynamics and conspiracy theories.
• Training & Capacity Building: HYBRIDS has successfully recruited a cohort of highly skilled Doctoral Candidates (DCs), providing them with extensive training in hybrid AI methodologies, interdisciplinary research, and ethical AI practices.
• Secondments & Knowledge Exchange: DCs have undertaken collaborative research visits to media organizations and research institutions, gaining hands-on experience with real-world disinformation challenges.
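To make the hybrid approach concrete, the following is a minimal illustrative sketch (not project code) of how a neuro-symbolic scorer might combine a neural model's probability with hand-crafted symbolic features, keeping the symbolic evidence available for explanation. All function names and feature rules here are hypothetical assumptions for illustration only.

```python
def symbolic_features(text: str) -> dict:
    """Hypothetical rule-based indicators inspired by discourse analysis."""
    lowered = text.lower()
    return {
        # Sweeping universal claims are a common rhetorical red flag.
        "absolute_claim": any(w in lowered for w in ("always", "never", "everyone")),
        # No attribution or link suggests an unsourced assertion.
        "no_source_cited": "according to" not in lowered and "http" not in lowered,
        # Emotionally loaded vocabulary (toy word list).
        "loaded_language": any(w in lowered for w in ("shocking", "outrageous")),
    }

def hybrid_score(neural_prob: float, feats: dict, weight: float = 0.1) -> float:
    """Blend a neural disinformation probability with a symbolic adjustment,
    clamped to the [0, 1] range."""
    symbolic = sum(feats.values()) * weight
    return min(1.0, max(0.0, neural_prob + symbolic))

feats = symbolic_features("Everyone knows this shocking truth!")
score = hybrid_score(0.55, feats)  # neural probability is a stand-in value
```

Because the symbolic features are explicit booleans, a system built this way can report *which* rules fired alongside the score, which is the kind of explainability the project targets; a real implementation would replace the stub probability with an LLM-based classifier and the toy rules with SSH-informed argumentation models.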
The expected impacts of HYBRIDS include:
• Greater accuracy and contextual understanding in detecting misleading narratives.
• More explainable and interpretable AI models, reducing algorithmic bias and increasing trust in AI-based decision-making.
• Cross-lingual and culturally adaptive solutions, enabling the detection of misinformation across diverse linguistic and socio-political contexts.
• Enhanced collaboration between AI researchers and SSH scholars, bridging gaps between computational and linguistic approaches in media analysis.
The project is also laying the groundwork for future technological developments, including:
• Fully open-source software and language models, and open-access datasets to support research and innovation in disinformation detection.
• AI-assisted mechanisms for monitoring political and social media discourse.
• Strategies for reducing computational costs and environmental impact in disinformation detection.