Periodic Reporting for period 2 - PRODEMINFO (Protecting the Democratic Information Space in Europe)
Reporting period: 2023-04-01 to 2024-09-30
PRODEMINFO has three objectives:
(1) to understand contemporary misinformation in Europe and how it connects to people's tacit personal and subjective ontology of truth;
(2) to develop countermessages based on inoculation theory that are sensitive to people's different ontologies (e.g. addressing authenticity rather than accuracy of political speech); and
(3) to test such countermessages at scale on social media.
PRODEMINFO pursues these objectives by combining controlled behavioural and cognitive experimentation with “big data” analyses of social media, simulation modelling of behaviour online, and text modelling of large corpora. The project is organised into three main work packages:
- WP1 explores the question “what does truth actually mean, and to whom?” using a combination of computational text modelling, social-media analysis, and behavioural experimentation to characterise the changing and diverse ontology of truth.
- WP2 explores the question “how do we reach people who do not place much weight on factual accuracy?”, combining corpus curation and behavioural experimentation to find how the ontology of truth is reflected in misinformation and to create ontologically aligned countermessages.
- WP3 explores the question “can we detect misinformation before it goes viral and inhibit its spread online?” by combining computational social-media analysis, simulation modelling, and field experimentation on social media to roll out countermeasures to misinformation online.
Analysis 1.1. A computational history of "truth"
This analysis is nearly complete; additional explorations are ongoing because the work has proven even more interesting and valuable than anticipated. To date, it has yielded one publication in PNAS Nexus (Lasser et al., 2022) and one in Nature Human Behaviour (Lasser et al., 2023).
Analysis 1.2. Truth discourse and 20th century fascism
A preliminary analysis has shown that the lead-up to fascism in 1930s Germany was associated with a dramatic shift towards intuition-based language, exactly as expected.
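For illustration of how such a shift can be quantified, the sketch below computes yearly shares of intuition-based and evidence-based dictionary terms in a corpus of dated texts. It is a minimal sketch only: the keyword lists and the (year, text) input are hypothetical placeholders, not the project's actual dictionaries or corpus.

```python
# Minimal sketch of a dictionary-based time series. The keyword lists are
# hypothetical placeholders, not the project's dictionaries.
import re
from collections import defaultdict

# Hypothetical German keyword stems for the two ontologies.
INTUITION_TERMS = {"gefühl", "instinkt", "glaube", "seele", "herz"}
EVIDENCE_TERMS = {"beweis", "fakt", "daten", "wissenschaft", "analyse"}

def dictionary_share(text: str, terms: set[str]) -> float:
    """Share of tokens in `text` that match any of the dictionary stems."""
    tokens = re.findall(r"\w+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(any(tok.startswith(t) for t in terms) for tok in tokens)
    return hits / len(tokens)

def yearly_shares(documents: list[tuple[int, str]]) -> dict[int, tuple[float, float]]:
    """Average intuition- and evidence-based shares per year."""
    by_year = defaultdict(list)
    for year, text in documents:
        by_year[year].append(
            (dictionary_share(text, INTUITION_TERMS),
             dictionary_share(text, EVIDENCE_TERMS))
        )
    return {
        year: (sum(i for i, _ in vals) / len(vals),
               sum(e for _, e in vals) / len(vals))
        for year, vals in by_year.items()
    }
```

A shift of the kind reported above would appear as the intuition-based share rising relative to the evidence-based share over successive years.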
Analysis 1.3. Truth, discourse, & democracy: towards an early-warning system
A preliminary analysis showed a mild positive correlation (r = 0.4) between some of the dimensions extracted from large-scale existing survey data and the average level of fact-speaking (as opposed to belief-speaking) in the congressional records.
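As a concrete illustration of the comparison reported above, the sketch below correlates a survey-derived dimension with the average fact-speaking share per unit of analysis. All names and numbers are hypothetical placeholders; this is not the project's actual pipeline.

```python
# Illustrative only: correlating a survey-derived dimension with the mean
# fact-speaking share per unit (e.g. per year). Data are placeholders.
import pandas as pd
from scipy.stats import pearsonr

# speeches: one row per congressional speech with a fact-speaking score in [0, 1]
speeches = pd.DataFrame({
    "unit": ["1995", "1995", "1996", "1996", "1997", "1997"],
    "fact_speaking": [0.62, 0.58, 0.55, 0.60, 0.48, 0.51],
})
# survey: one row per unit with a dimension extracted from existing survey data
survey = pd.DataFrame({
    "unit": ["1995", "1996", "1997"],
    "survey_dimension": [0.71, 0.66, 0.54],
})

mean_fact = speeches.groupby("unit", as_index=False)["fact_speaking"].mean()
merged = mean_fact.merge(survey, on="unit")
r, p = pearsonr(merged["fact_speaking"], merged["survey_dimension"])
print(f"r = {r:.2f}, p = {p:.3f}")
```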
Study 1.4. Public discourse vs. personal attitudes
We successfully executed data collection in our five target countries (Germany, Italy, Spain, UK, and Hungary). We have analysed the data sets and extracted the best set of items to compare people’s subjective ontology of truth, captured by what we call the Evidence-Intuition Scale (EIS), across countries.
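One simple way to screen a candidate item set for such cross-country comparisons is to check its internal consistency separately in each country, e.g. via Cronbach's alpha. The sketch below illustrates this with randomly generated placeholder responses; it is not necessarily the selection procedure used in the project.

```python
# Illustrative sketch: internal consistency (Cronbach's alpha) of a candidate
# EIS item set, computed separately per country. Responses are random placeholders.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# responses: one row per respondent, six candidate EIS items on a 1-7 scale
responses = pd.DataFrame(rng.integers(1, 8, size=(300, 6)),
                         columns=[f"eis_{i}" for i in range(1, 7)])
responses["country"] = rng.choice(["DE", "IT", "ES", "UK", "HU"], size=300)

candidate_items = [f"eis_{i}" for i in range(1, 7)]
for country, group in responses.groupby("country"):
    # with random placeholder data alpha will be near zero; real data differ
    print(country, round(cronbach_alpha(group[candidate_items]), 2))
```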
Study 1.5. Public discourse vs. personal tweets
This study extends Study 1.4 by linking participants’ historical Twitter discourse to their survey responses, to examine whether individual ontologies of honesty and/or truth are detectable in spontaneous social media activity. Thus far, we have found only a very modest association between people’s speech on Twitter and their responses to the EIS.
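To illustrate the linking step only (column names and data below are hypothetical placeholders, not the project's data), dictionary scores can be aggregated per participant over their tweet history, merged with their EIS responses, and the association estimated with a rank correlation:

```python
# Illustrative sketch: linking per-participant Twitter history to EIS scores.
# Column names and values are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

# tweets: one row per tweet, already scored with belief-/fact-speaking dictionaries
tweets = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 4],
    "fact_speaking": [0.4, 0.5, 0.2, 0.3, 0.7, 0.6, 0.1],
    "belief_speaking": [0.2, 0.1, 0.5, 0.4, 0.1, 0.2, 0.6],
})
# survey: one row per participant with their Evidence-Intuition Scale score
survey = pd.DataFrame({"user_id": [1, 2, 3, 4], "eis": [4.5, 2.8, 5.9, 2.1]})

per_user = tweets.groupby("user_id", as_index=False)[
    ["fact_speaking", "belief_speaking"]].mean()
linked = per_user.merge(survey, on="user_id")
rho, p = spearmanr(linked["fact_speaking"], linked["eis"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```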
Study 1.6. Truth in people's own words
For this analysis, we leveraged a collaboration with the Horizon 2020-funded JITSUVAX project, which granted us access to participants’ written contributions about their opinions on vaccination. We applied our dictionaries to these texts and administered the EIS to the same participants, finding a small-to-modest correlation between their speech in response to a prompt and their epistemic orientation.
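As a minimal sketch of the dictionary-application step (the keyword lists below are hypothetical, not the project's dictionaries), each free-text response can be reduced to a relative fact- versus belief-speaking score, which can then be correlated with the participant's EIS score as in the earlier sketches:

```python
# Illustrative sketch: scoring free-text vaccination opinions with
# belief-/fact-speaking keyword lists. The lists are hypothetical.
import re

BELIEF_TERMS = {"believe", "feel", "faith", "intuition", "gut"}
FACT_TERMS = {"evidence", "data", "study", "fact", "proof"}

def term_count(text: str, terms: set[str]) -> int:
    tokens = re.findall(r"\w+", text.lower())
    return sum(tok in terms for tok in tokens)

def fact_mindedness(text: str) -> float | None:
    """Relative weight of fact- vs belief-speaking terms (None if neither occurs)."""
    f, b = term_count(text, FACT_TERMS), term_count(text, BELIEF_TERMS)
    return f / (f + b) if (f + b) else None

response = "I feel the data are unclear, but I believe the evidence matters."
print(fact_mindedness(response))  # 0.5: two fact-term hits vs two belief-term hits
```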
Analysis 2.1. A European misinformation corpus
This analysis is dedicated to curating a European misinformation corpus through a search of fact-checking archives in the five target countries, selecting items – from both trustworthy and untrustworthy sources – for potential use as experimental stimuli. The corpus was presented at the 2023 conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), and it is a valuable resource for both quantitative and qualitative misinformation studies. In parallel, we have commenced a collaboration with the German-Austrian Digital Media Observatory (GADMO), which is working on a similar project in German-speaking countries.
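For illustration only, a minimal record structure for curated corpus items might look like the sketch below; the field names are assumptions rather than the published corpus schema.

```python
# Illustrative record structure for a curated misinformation-corpus item.
# Field names are assumptions, not the published corpus schema.
from dataclasses import dataclass
from enum import Enum

class Veracity(Enum):
    TRUSTWORTHY = "trustworthy"      # item from a trustworthy source
    UNTRUSTWORTHY = "untrustworthy"  # item flagged by a fact-checking archive

@dataclass
class CorpusItem:
    claim: str            # the claim or headline text
    language: str         # e.g. "de", "it", "es", "en", "hu"
    country: str          # target country the item was collected for
    source: str           # outlet or account that published the item
    veracity: Veracity    # trustworthy vs untrustworthy source
    factcheck_url: str | None = None  # link to the fact-check, if any

item = CorpusItem(
    claim="Example claim text",
    language="de",
    country="Germany",
    source="example-outlet.example",
    veracity=Veracity.UNTRUSTWORTHY,
)
```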
Analysis 2.2. A synthetic misinformation corpus
This analysis endeavours to create a “synthetic corpus of misinformation” for use in research in different languages. The material for this corpus will consist of synthetic stimuli (i.e. stimuli created from scratch according to various criteria but without a knowable truth value) from a large variety of studies conducted by the PRODEMINFO team and its collaborators. Items will be labelled and classified according to the type of misleading rhetoric they contain. This work is almost complete and has benefited from collaboration with Mubashir Sultan at the Max Planck Institute for Human Development in Berlin.
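As a sketch of how labelled synthetic items could be represented (the rhetoric categories below are illustrative examples, not the project's final taxonomy):

```python
# Illustrative labelling scheme for synthetic misinformation items.
# The rhetoric categories are examples, not the project's taxonomy.
from dataclasses import dataclass, field
from enum import Enum

class MisleadingRhetoric(Enum):
    EMOTIONAL_APPEAL = "emotional appeal"
    FALSE_DICHOTOMY = "false dichotomy"
    CHERRY_PICKING = "cherry picking"
    FAKE_EXPERT = "fake expert"
    CONSPIRACY = "conspiracy framing"

@dataclass
class SyntheticItem:
    text: str           # the synthetic stimulus itself
    language: str       # language the stimulus was written in
    origin_study: str   # study or collaborator the item comes from
    labels: list[MisleadingRhetoric] = field(default_factory=list)

item = SyntheticItem(
    text="Example synthetic stimulus text",
    language="en",
    origin_study="hypothetical-study-id",
    labels=[MisleadingRhetoric.EMOTIONAL_APPEAL, MisleadingRhetoric.CHERRY_PICKING],
)
```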