
Computational Propaganda: Investigating the Impact of Algorithms and Bots on Political Discourse in Europe

Periodic Reporting for period 4 - COMPROP (Computational Propaganda: Investigating the Impact of Algorithms and Bots on Political Discourse in Europe)

Reporting period: 2020-07-01 to 2020-12-31

Social media platforms have come to play major roles in shaping our politics, culture, and economic lives. Computational propaganda involves the use of algorithms, automation, and big data analytics to purposefully disseminate manipulative and misleading messages over these social media networks.

Misinformation on social media has emerged as one of the most serious threats to democratic processes. Political actors with vested interests in interfering with such processes have been attempting to manipulate public opinion. Recent years have also seen the advent of algorithmically driven content systems and bots that deliberately amplify hate speech and polarizing misinformation. Democracy itself is under assault from foreign governments and internal threats, and democratic institutions may not continue to flourish unless social data science is used to put our existing knowledge and theories about politics, public opinion, and political communication to work in their defence.

The project seeks to answer fundamental research questions: How are algorithms and automation used to manipulate public opinion during elections or political crises? What are the technological, social, and psychological mechanisms by which we can encourage political expression but discourage opinion herding or the unnatural spread of extremist, sensationalist, or conspiratorial news? What new scholarly research systems can deliver real time social science about political interference, algorithmic bias, or external threats to democracy?

In this context, the Computational Propaganda Project has been tracking important moments in public life, such as elections and referenda, and more recently the global response to Covid-19, to identify the proportions of misinformation circulating on social media. Defending the public sphere requires a better understanding of digital citizenship and modern civic engagement. Junk news, and the deliberate spread of misinformation, often generates profitable advertising revenues for technology firms and miscreants. Online hate speech, in particular misogyny and racism, is aimed at public figures from fake accounts. Personalised political advertising, as used in large-scale data-driven campaigns, delivers targeted interventions with hidden agendas. Political bots and highly automated social media accounts disrupt election campaigns and sow seeds of doubt in the minds of citizens making important decisions about their own health, such as whether to take vaccines. This project advances social data science, applies it to improve our understanding of how contemporary civic engagement operates, and pioneers the social science of fake news production and consumption.
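To make the idea of "proportions of misinformation" concrete, the following is a minimal sketch, not the project's actual pipeline, of how the share of links to junk-news sources in a sample of social media posts might be estimated. The domain list, field names, and example posts are hypothetical placeholders.

```python
# Sketch: estimating the share of junk-news links in a sample of posts.
# JUNK_DOMAINS and the post structure are illustrative assumptions.
from urllib.parse import urlparse
from collections import Counter

JUNK_DOMAINS = {"example-junk-news.com", "another-junk-site.net"}  # hypothetical list

def classify_links(posts):
    """Count shared links by source category (junk vs. other)."""
    counts = Counter()
    for post in posts:
        for url in post.get("urls", []):
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            counts["junk" if domain in JUNK_DOMAINS else "other"] += 1
    return counts

posts = [
    {"urls": ["https://example-junk-news.com/story"]},
    {"urls": ["https://www.bbc.co.uk/news/article"]},
]
counts = classify_links(posts)
total = sum(counts.values())
print(f"Junk-news share of shared links: {counts['junk'] / total:.0%}")
```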
To study the problem of misinformation and effectively disseminate our findings to voter groups, the team developed an innovative method of undertaking short studies on large volumes of data and publishing the results as data memos on our dedicated website hosted by the Oxford Internet Institute at Oxford University. These memos raised global awareness of the problem of misinformation and of the impact that the spread of political propaganda has on democratic processes in Europe and around the world. They inspired further inquiry from our global scholarly community and led to invitations to brief government-led expert committees, parliamentary and senate hearings, industry leaders, and civil society groups on our findings. We have succeeded in creating significant awareness of the problem of misinformation.

In addition to our impact and outreach activities to disseminate our research to a broad audience of policymakers, industry leaders, the public, and news organisations, we have published several academic articles that extend the state of the art in researching the use of computational methods for political benefit, contributing to the fields of political science and computational social science. The project involved early career researchers from diverse backgrounds in this scholarship, providing mentoring and career development opportunities to meet colleagues, present their own original research, and learn the skills of presenting to specialist and public audiences.

Our outputs primarily took the form of scholarly papers in peer-reviewed journals and books with major academic presses such as Oxford University Press and Yale University Press. These scholarly outputs, along with conference and workshop activities and guest editing academic journals, allowed us to engage in scientific conversation with colleagues around the world. Our work has therefore been of fundamental importance in analysing the global phenomenon of computational propaganda and political misinformation on digital platforms.
The Computational Propaganda project is the first multi-national, multi-method, multi-lingual, and multi-platform research project on the spread of misinformation on social media. We have studied misinformation in the context of important moments in public life such as elections, referenda, and political crises. During our studies in Europe, we found that computational propaganda is a global phenomenon, with bad actors in different countries regularly adopting and learning from manipulation methods that have been successfully applied in other country contexts.

This research therefore represents one of the most comprehensive studies of computational propaganda undertaken. Our effort has required the collaboration of a diverse group of researchers, including political scientists, sociologists, media scholars, and computer scientists, and is a unique multi-disciplinary research effort to scientifically study this problem using and extending state-of-the-art tools from different disciplines. The research effort has been very agile in adapting research methods to suit the different cultures of social media use that prevail in various countries.

Systematic analysis of large volumes of social media posts has required a unique combination of qualitative, comparative, quantitative, and computational methods, constantly updated as we studied a range of political events in diverse countries. For our computational analysis, the project worked with data sourced from social media platforms during the crucial last few weeks of political campaigning, making our reports definitive analyses of the sources of misinformation circulating on social media platforms. Moreover, the reports analysed suspicious high-frequency trending patterns of politically relevant hashtags and user accounts that become active only a few days before major turning points in public life.
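As a rough illustration of the kind of trending-pattern analysis described above, the sketch below flags hashtags whose posting frequency spikes in the days before an event, and accounts whose first observed post falls inside that window. This is a minimal sketch under assumed data structures, not the project's method; the event date, window, threshold, and field names are all illustrative.

```python
# Sketch: flagging hashtag spikes and suddenly active accounts before an event.
# EVENT_DAY, WINDOW, SPIKE_FACTOR, and the post fields are illustrative assumptions.
from collections import Counter
from datetime import datetime, timedelta

EVENT_DAY = datetime(2020, 12, 1)
WINDOW = timedelta(days=3)      # "only a few days before" the event
SPIKE_FACTOR = 5                # spike = 5x the baseline posting rate

def flag_spiking_hashtags(posts):
    """Compare hashtag counts in the pre-event window against the earlier baseline."""
    recent, baseline = Counter(), Counter()
    for p in posts:
        bucket = recent if EVENT_DAY - WINDOW <= p["timestamp"] < EVENT_DAY else baseline
        for tag in p["hashtags"]:
            bucket[tag] += 1
    return [tag for tag, n in recent.items()
            if n >= SPIKE_FACTOR * max(baseline.get(tag, 0), 1)]

def flag_new_accounts(posts):
    """Accounts whose first observed post falls inside the pre-event window."""
    first_seen = {}
    for p in posts:
        user = p["user"]
        first_seen[user] = min(first_seen.get(user, p["timestamp"]), p["timestamp"])
    return [u for u, ts in first_seen.items() if ts >= EVENT_DAY - WINDOW]
```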

The Computational Propaganda Project's research has been foundational to the social science of misinformation, having produced the first wave of research on how authoritarian regimes interfere in the elections of democracies using social media. Moreover, the team has used the project's findings to inform and shape policy responses in Canada, the EU, the UK, the US, and other democracies, and has been recognised by policymakers on both sides of the Atlantic as a pioneer in the field of online disinformation.