Computational Propaganda: Investigating the Impact of Algorithms and Bots on Political Discourse in Europe

Periodic Reporting for period 3 - COMPROP (Computational Propaganda: Investigating the Impact of Algorithms and Bots on Political Discourse in Europe)

Reporting period: 2019-01-01 to 2020-06-30

Computational propaganda is the use of algorithms, automation, and big data analytics to purposefully disseminate manipulative and misleading messages over social media networks. News and information shared on social media platforms have come to play major roles in shaping popular narratives on politics, culture and economics in connected communities across the world. In this context, actors with vested interests have sought to interfere with democratic processes and to manipulate public opinion for political gain. The last couple of years have also seen the advent of automated entities, or bots, that deliberately amplify misinformation and hate speech and work to promote polarisation within society. Misinformation on social media has since emerged as one of the most serious threats to democratic processes across the world. Democracy itself is under assault from foreign governments and internal threats, such that democratic institutions may not continue to flourish unless social data science is used to put our existing knowledge and theories about politics, public opinion, and political communication to work. The Computational Propaganda project seeks to answer these fundamental questions: How are algorithms and automation used to manipulate public opinion during elections or political crises? What are the technological, social, and psychological mechanisms by which we can encourage political expression but discourage opinion herding or the unnatural spread of extremist, sensationalist, or conspiratorial news? What new systems of scholarship can deliver real-time social science about political interference, algorithmic bias, or external threats to democracy?
In this context, the Computational Propaganda project tracks important moments in public life, such as elections and referenda, in order to identify the proportion of misinformation circulating on social media, particularly during election campaign periods, and to identify the groups of public pages or accounts with common interests that have been responsible for sharing this content in large volumes. Defending the public sphere requires a better understanding of digital citizenship and civic engagement in four domains: fake news, the deliberate spread of misinformation through (non-satirical) news stories with no basis in fact, often for profit through click-through advertising revenue; online hate speech, in particular misogyny and racism, aimed at public figures from specific political or religious groups, or based on ethnicity, gender, or sexuality; personalised political advertising, as used in large-scale data-driven campaigns to deliver targeted but hidden interventions; and political bots, or computational propaganda, where automated (or partly automated) social media accounts are used to manipulate opinion and disrupt election campaigns. In an increasingly connected world, these techniques have been used to sow discord and deepen fault lines within societies. It is therefore of crucial importance to scientifically study the role that political propaganda on social media platforms plays in shaping public opinion.
The overall objectives of the project are (a) to identify sources of misinformation that have been circulated widely on social media platforms during important moments in public life, (b) to identify the main actors or groups that have disseminated these sources on social media, and (c) to identify accounts on Twitter that have engaged in suspicious high-frequency tweeting.
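The published memos do not spell out the exact criterion used for objective (c) here, but the underlying idea can be illustrated with a short sketch: given a collection of (account, timestamp) pairs gathered during a campaign window, flag accounts whose average tweet rate exceeds a threshold. The threshold of 50 tweets per day and the function name are illustrative assumptions, not the project's published method.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical threshold: accounts averaging more than 50 tweets per day
# over the observation window are flagged for closer inspection.
TWEETS_PER_DAY_THRESHOLD = 50

def flag_high_frequency_accounts(tweets, threshold=TWEETS_PER_DAY_THRESHOLD):
    """tweets: iterable of (account_id, timestamp) pairs, timestamps as datetime.
    Returns a dict mapping flagged account ids to their tweets-per-day rate."""
    counts = Counter()
    first_seen, last_seen = {}, {}
    for account, ts in tweets:
        counts[account] += 1
        first_seen[account] = min(first_seen.get(account, ts), ts)
        last_seen[account] = max(last_seen.get(account, ts), ts)
    flagged = {}
    for account, n in counts.items():
        # Treat sub-day windows as one day to avoid division by zero.
        days = max((last_seen[account] - first_seen[account]).days, 1)
        rate = n / days
        if rate > threshold:
            flagged[account] = rate
    return flagged
```

In practice such a filter is only a first pass: high-frequency accounts would still need qualitative inspection before being labelled as automated.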
In order to study the problem of misinformation and effectively disseminate our findings to voter groups, the team developed an innovative method of undertaking short studies on large volumes of data and publishing the results in the form of data memos on our dedicated website, hosted by the Oxford Internet Institute at the University of Oxford. These memos have succeeded in creating significant awareness of the problem of misinformation and of the impact that the spread of political propaganda has on democratic processes in Europe and around the world. They have been covered extensively by major international news organisations, and team members have been invited to panel discussions at leading conferences and workshops, government-led expert committees, parliamentary and senate hearings, and meetings with industry leaders and civil society groups.
Our data memos have covered parliamentary elections in the UK, the Brexit referendum, and elections in France, Germany and Sweden. We have also studied the role of misinformation during the 2016 US presidential election and the 2018 US midterm elections. Beyond these countries, our work has covered major countries in Latin America, including Mexico and Brazil. In all these memos, we collected purposefully sampled public data from social media platforms and analysed it to identify sources of misinformation, the groups who spread these sources, and the Twitter accounts that engaged in high-frequency tweeting to amplify this content. We published the memos ahead of important political events in order to educate voters about the types of information they were being exposed to during the crucial period leading up to the vote.
In addition to data memos, we have published a number of working papers on political misinformation and computational propaganda. Our first wave of reports examined computational propaganda and misinformation in Germany, Poland, Ukraine and Russia, as well as in Brazil, Canada, China, Taiwan and the United States. Each case study investigated digital misinformation in domestic politics, with a special focus on the role of automated and algorithmic manipulation. Following the publication of the report, the team organised a worldwide briefing tour, holding presentations in London, Washington, DC and Palo Alto, where nearly all the major social media companies are headquartered. This significant effort brought the problem of misinformation and computational propaganda to public attention and created the foundation for our subsequent work.
Our consolidated report on these country studies identified distinct global trends in computational propaganda. In many of these countries, social media platforms play a crucial role in political participation and the dissemination of news content. The report found that these platforms have emerged as the primary media over which young people develop their political identities, particularly in countries where companies like Facebook are effectively monopoly platforms for public life. Further, in several democracies, the majority of voters use social media to share political news and information, especially during elections. Even in countries where only small proportions of the public have regular access to social media, such platforms are still fundamental infrastructure for political conversation among journalists, civil society leaders, and political elites. The report concluded that social media platforms are actively used as tools for public opinion manipulation, though in diverse ways and on different topics. Through the use of sophisticated data analytics tools, these platforms are exploited for computational propaganda, either through broad efforts at opinion manipulation or through targeted experiments on particular segments of the public, based on preferences expressed in various digital forums. In authoritarian countries, social media platforms are a primary means of social control, especially during political and security crises. The report also finds that, in every country studied, civil society groups are trying, but struggling, to protect themselves and respond to active misinformation campaigns. This is a unique multi-country case study of the use of social media for public opinion manipulation.
The team involved 12 researchers across these countries who interviewed 65 experts and analysed public posts on different social media platforms during several elections, political crises, and national security incidents. Based on this research, the team published a seminal book titled Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (Oxford University Press, 2018, ERC support acknowledged).
During the reporting period, the team also published a report titled 'Troops, Trolls and Troublemakers: A Global Inventory of Organised Social Media Manipulation', which sheds light on the global organisation of social media manipulation by government and political party actors. This is the first in a series of planned reports on the use of social media by governments to manipulate public opinion.
Cyber troops are defined as government, military or political party teams committed to manipulating public opinion over social media. The report investigates specific organisations, often created with public money, to help define and manage what is in the best interests of the public. It compared such organisations across 28 countries and prepared an inventory according to the kinds of messages, valences and communication strategies used. Further, the report catalogued organisational forms and evaluated their capacities in terms of budgets and staffing. This is the first comprehensive inventory of the major organisations behind social media manipulation. The report finds that cyber troops are a pervasive and global phenomenon: many countries employ significant numbers of people and resources to manage and manipulate public opinion online, sometimes targeting domestic audiences and sometimes targeting foreign publics. Looking across the 28 countries, the report found that while every authoritarian regime studied ran social media campaigns targeting its own population, only a few targeted foreign publics. In contrast, almost every democracy studied has organised social media campaigns that target foreign publics, while political-party-supported campaigns target domestic voters. Surprisingly, the earliest reports of government involvement in nudging public opinion involve democracies, and new innovations in political communication technologies often come from political parties and arise during high-profile elections. Over time, the primary mode of organising cyber troops has shifted from military units that experiment with manipulating public opinion over social media networks to strategic communication firms that take contracts from governments for social media campaigns.
The team has also published a detailed working paper on how governments in different countries are responding to the threat of computational propaganda. The report addresses the concern that, while the manipulation of public opinion over social media has emerged as a pressing issue, legal and policy responses designed to limit these threats to democracy remain ineffective and may even put democratic discourse online further at risk. It inventories government responses to computational propaganda around the world, debates their implications, and provides guiding principles for future policy solutions. The report identifies the core areas these countermeasures should address: the design of social media algorithms that facilitates the spread of junk news; foreign influence operations, by which hostile political actors use networked information infrastructures to orchestrate propaganda campaigns; online anonymity, which affords the easy dissemination of hate speech and misinformation by political bots and trolls; the need for advertising transparency, to ensure public oversight of data-driven campaigning that misuses personal user data; and sustainable journalism models and media literacy, which would in turn contribute to the health of the information ecosystem and to informed news consumption.
In addition to our impact and outreach activities, which disseminate our research to a broad audience of policymakers, industry leaders, the general public and newspaper organisations, we have published a number of academic articles that extend the state of the art in researching the use of computational methods for political gain, contributing to the fields of political science and computational social science. Our work has therefore been of fundamental importance in analysing the global phenomenon of computational propaganda and political misinformation on digital platforms.
The Computational Propaganda project has been among the first to systematically study the spread of misinformation on social media, in the context of important moments in public life such as elections and referenda. During the course of our studies in Europe we discovered that computational propaganda is a global phenomenon, with bad actors in different countries regularly adopting and learning from manipulation methods that have proved successful in other country contexts. This research therefore represents one of the most comprehensive studies of computational propaganda undertaken by a single research project. The effort has required the collaboration of a diverse group of researchers, including political scientists, sociologists, media scholars and computer scientists, and is a unique multi-disciplinary effort to scientifically study this problem using and extending state-of-the-art tools from different disciplines. The research has been agile in adapting methods to suit the different cultures of social media use that prevail in various countries. Systematic analysis of large volumes of social media posts has required a unique combination of qualitative and quantitative methods, constantly updated as we studied a range of political events in diverse countries. For our computational analysis, the project has worked with data sourced from social media platforms during the crucial last few weeks of political campaigning, making our reports definitive analyses of the sources of misinformation circulating on these platforms. Moreover, the reports have analysed suspicious high-frequency trending patterns of politically relevant hashtags on Twitter, which become active only a few days prior to political events.
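The reports referenced above do not publish a single burst-detection formula, but the idea of spotting hashtags that "become active only a few days prior to political events" can be sketched simply: compare each hashtag's latest daily volume against its baseline over the preceding days. The ratio and minimum-volume thresholds below are illustrative assumptions, not the project's actual parameters.

```python
def flag_bursting_hashtags(daily_counts, ratio=5.0, min_volume=100):
    """daily_counts: dict mapping hashtag -> list of daily tweet counts,
    oldest day first. Flags hashtags whose final-day volume exceeds `ratio`
    times the mean of the preceding days (thresholds are illustrative).
    Returns a dict mapping flagged hashtags to their burst ratio."""
    flagged = {}
    for tag, counts in daily_counts.items():
        # Ignore tags with too little history or too little absolute volume.
        if len(counts) < 2 or counts[-1] < min_volume:
            continue
        baseline = sum(counts[:-1]) / len(counts[:-1])
        # Guard against a zero baseline for previously dormant hashtags.
        if counts[-1] > ratio * max(baseline, 1.0):
            flagged[tag] = counts[-1] / max(baseline, 1.0)
    return flagged
```

A spike flagged this way is only a candidate for manipulation: organic events also produce bursts, so flagged hashtags would still be examined qualitatively, as the mixed-methods approach described above requires.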
The project has been among the first to examine how governments across the world use these techniques to manipulate domestic audiences and retain or secure state power. Further, our reports have highlighted how these tactics are used to silence and discredit voices critical of governments, damaging civil discourse and eroding public faith in democratic processes. The project has also developed novel methods of collaborating with and learning from a diverse group of stakeholders, including fellow academics, newspaper organisations, civil society organisations, policymakers and industry leaders, to ensure the widest possible dissemination of our research findings. The project has thus established new ways of studying computational propaganda and, in doing so, pushed the boundaries of political science and computational social science methods.
From our work, it is clear that many democratic norms, even in established democracies in Europe, are being challenged by technological innovations that allow political actors to manipulate public opinion. Given the affordances of new social media platforms and the rapid development of artificial intelligence (AI), we recognise a crucial need to drive forward innovation in political theory and to study the impact of misinformation campaigns powered by advanced AI technologies on voter groups. From our extensive study of elections in a number of democracies, we have seen that news content shared over social networks during key moments in public life can shape public opinion on important social issues, including health, immigration, technology and education. Given that it is now possible to develop targeted campaigning messages for segmented voter groups, it is important to identify communities that might be particularly vulnerable to misinformation campaigns. This includes communities with low levels of media literacy, limited awareness of the technological tools used to push propaganda, and a lack of access to verified sources of information. While we are still grappling with the challenges posed by misinformation generated predominantly by humans, advanced AI technologies have been developed that generate realistic images and videos using machine learning algorithms. When this technology is leveraged for political propaganda, there is an increased risk of misinformation campaigns altering reality for voter groups. In our future work, we will undertake an in-depth study of the impact of generative AI techniques on misinformation campaigns and of the effect this has on differentiated audience groups.
From our earlier studies, it is clear that misinformation is moving from public platforms like Twitter to newer platforms like Instagram and to encrypted platforms such as WhatsApp. We will therefore develop methods, integrating quantitative machine learning techniques with qualitative approaches, to analyse visual content that can be classified as political propaganda on both public social media platforms and encrypted platforms like WhatsApp, and to examine patterns of spread within these groups.
We will continue to investigate how governments and state institutions across the world adopt these techniques to push propaganda domestically and to interfere in the democratic processes of foreign nations, and will produce two further reports. We will also continue our study of the circulation of political propaganda, focusing on platforms like WhatsApp. By the end of the project we aim to have conducted one of the most ground-breaking research efforts on the study of computational propaganda and political misinformation across multiple digital platforms and in different country contexts, using rigorous quantitative methods and advanced qualitative techniques.