Citizens exposed to dissimilar views in the media: investigating backfire effects

Periodic Reporting for period 2 - EXPO (Citizens exposed to dissimilar views in the media: investigating backfire effects)

Reporting period: 2020-03-01 to 2021-08-31

Understanding and respect for those who hold different opinions are needed more than ever. In this context, it is hoped that exposure to dissimilar content fosters tolerance. Scholars are interested in media diversity, policymakers encourage citizens to see dissimilar views in the media, and social media platforms adapt their algorithms to get people out of “echo chambers”. However, dissimilar exposure can also increase political conflict (backfire effects). Despite these dangers, we do not know when and why exposure to dissimilar views amplifies or attenuates hostilities. Under what conditions and for whom does exposure to dissimilar views backfire? What can be done to minimize the harms and maximize the benefits of such exposure? We address these questions. We advance an evidence-based theoretical model that identifies the individual, social, and system factors that together drive dissimilar exposure and its effects on understanding and respect between citizens with different views. The model is tested in four projects, relying on the latest techniques in computational social science, panel surveys, and survey experiments in three countries. The project's findings are crucial for scholars across disciplines, policymakers, and (social) media. Only if we know when, how, and why citizens are affected by dissimilar media will we be able to enhance respect and understanding in diverse societies.
The PI completed prior projects, which inform the ERC project and resulted in 10 peer-reviewed publications in multidisciplinary social science journals (see dissemination and outputs). The first six months of the project were devoted to meeting all the objectives of SP1: we (a) developed survey questionnaires for the panel surveys (adjusting questions from existing surveys and developing new ones), (b) translated the questionnaires, and (c) identified and negotiated with polling companies. In the following months, we fielded the first wave of the surveys in the three countries and prepared the subsequent waves: we (a) developed the questionnaires for waves 2 and 3, (b) translated them, and (c) oversaw the programming, sample recruitment, and data collection.

Concurrently, we prepared the analyses of the online data: we (a) developed codebooks, (b) trained about 40 student coders in the three countries, and (c) managed the coding process in three languages. Based on these labeled data, we developed state-of-the-art computational methods to automatically classify, at scale, theoretically important and practically consequential features of online content. We also established a secure infrastructure for data storage and processing, which allowed us to begin analysing the data from the panel surveys as well as the trace data.

This intensive phase has been immensely successful, as evidenced by our outputs: (a) team presentations at international conferences and invited talks, (b) two manuscripts already published, (c) five manuscripts forthcoming or under peer review, (d) completed open-code classifiers that can be used by the academic community (and are, in fact, already being used by scholars and social media platforms), and (e) the popularization, adaptation, and translation of an open-source tool, the only one – to our knowledge – that allows for transparent data sharing and involves participants in reviewing their own data. Furthermore, we have contributed to open and replicable social science by pre-registering our studies (8 pre-registration plans submitted to OSF), sharing code and software, and giving a workshop on open science practices (led by the PI). Moreover, we organized two successful international workshops on online trace data, which brought together roughly 50 scholars from different countries, and co-organized a successful post-conference at the International Communication Association’s Annual Meeting in 2019 (26 scholars attended). Lastly, we published two popular media pieces: one in The Conversation (republished in Salon) and one on the London School of Economics blog.
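To illustrate the classification workflow described above, the following is a minimal, hypothetical sketch of how coder-labeled titles could feed an automated classifier. The library (scikit-learn), the toy data, and the variable names are illustrative assumptions, not the project's actual pipeline:

    # Hypothetical sketch: train a baseline classifier on coder-labeled titles.
    # scikit-learn is an illustrative choice, not necessarily the project's stack.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for the coder-labeled data (thousands of titles in practice).
    titles = [
        "Parliament votes on new budget",
        "Coalition talks stall over climate policy",
        "Local team wins weekend derby",
        "New recipe ideas for the summer",
    ]
    labels = [1, 1, 0, 0]  # 1 = political, 0 = not political, per the codebook

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(titles, labels)

    # Once trained, the model can label unseen content at scale.
    print(model.predict(["Senate debates election reform"]))

In practice, such models are validated against held-out human-coded data before being applied at scale.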
The progress beyond the state of the art concerns the computational methods:
- To determine whether a visited website domain was news, we developed a systematic, comparative, and versatile method to classify domains as news and compiled extensive lists of news domains per country. The lists rely on three sources of information per country (as detailed in the accomplishments) and were labeled by human coders as news or not. Further, we developed a computationally feasible method to estimate the media ideology of all these sources in the three countries on a continuous scale from -1 (left) to 1 (right). Together, these approaches allow for meaningful analysis of the online data (a minimal sketch of the resulting domain lookup appears after this list).
- We developed several advanced classifiers based on thousands of articles and news titles labeled in the three countries by student coders (e.g. whether articles were about politics, mentioned polarization, etc.). Currently, we are finalizing classifiers to predict the topics discussed in articles, based on extensively labeled data. This important undertaking has taken time and effort, requiring computationally innovative ways to "transfer" the models from English to Dutch and Polish, e.g. mapping embeddings across languages, or re-training models versus translating text (an embedding-based sketch appears after this list).
- We have popularized, adapted, and translated the first and, to our knowledge, only open-source software – Web Historian – that allows for transparent data sharing. Web Historian is an open-source tool that accesses people’s browser history stored on their computers (up to 90 days of web browsing history) and displays it to them using visualizations (e.g. a network graph of websites visited, a word cloud of search terms used, and a searchable table of the browsing history). After reviewing their data, participants can remove the domains and search terms they prefer not to share and then submit the data to the study. Web Historian has advantages over other solutions. In contrast to “black box” tools from proprietary companies (e.g. Wakoopa, Netquest), it facilitates scientific replication and validation. In addition, most existing tools use a data-creation approach: people first install the software, and their data are collected going forward in time. Web Historian instead uses a found-data approach: it relies on the browsing history already stored in the web browser, thereby bypassing the problems of participants dropping out during data collection or changing their behavior because they are being observed (the data were generated before participants entered the study; a sketch of this idea appears below).
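A minimal sketch of the domain lookup described in the first bullet, assuming a hypothetical coder-labeled CSV with columns domain, is_news, and ideology (the file name and layout are assumptions for illustration):

    # Hypothetical sketch: flag visited domains as news and attach ideology scores.
    import pandas as pd
    from urllib.parse import urlparse

    # Assumed layout: one row per domain, coder-labeled, ideology in [-1, 1].
    news_list = pd.read_csv("news_domains_nl.csv")
    lookup = news_list.set_index("domain")[["is_news", "ideology"]].to_dict("index")

    def classify_visit(url):
        domain = urlparse(url).netloc.removeprefix("www.")  # Python 3.9+
        entry = lookup.get(domain)
        if entry is None:
            return {"is_news": False, "ideology": None}  # not on the labeled list
        return entry

    print(classify_visit("https://www.example-news.nl/politics/article-123"))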
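The embedding-based "transfer" mentioned in the second bullet can be illustrated by encoding text with a multilingual sentence encoder, so that a classifier trained on English labels can score Dutch or Polish text directly. The model name, toy data, and labels below are illustrative assumptions, not the project's actual setup:

    # Hypothetical sketch: cross-lingual transfer via a shared multilingual
    # embedding space (train on English, predict on Dutch/Polish).
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # illustrative model

    en_titles = ["Parliament votes on budget", "Team wins championship final"]
    en_labels = [1, 0]  # 1 = political, 0 = not political

    clf = LogisticRegression()
    clf.fit(encoder.encode(en_titles), en_labels)

    # The encoder maps all languages into one space, so the English-trained
    # classifier can be applied to Dutch (or Polish) text without translation.
    nl_titles = ["Kabinet stemt over de begroting"]  # Dutch: "Cabinet votes on the budget"
    print(clf.predict(encoder.encode(nl_titles)))

As the bullet notes, re-training models per language and translating the text are alternative routes, each with its own cost and accuracy trade-offs.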
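To make the found-data approach in the third bullet concrete: browsers such as Chrome already store visit history in a local SQLite database that can be read retrospectively. The sketch below is a generic illustration of that idea, not Web Historian's actual code (Web Historian is a browser extension); the path is an assumption and varies by operating system and profile:

    # Hypothetical sketch of the found-data idea: read history the browser has
    # already stored locally instead of logging browsing forward in time.
    # Note: the database is locked while the browser runs (copy the file first).
    import sqlite3
    from datetime import datetime, timedelta
    from pathlib import Path

    history_db = Path.home() / ".config/google-chrome/Default/History"  # Linux example

    def chrome_time(micros):
        # Chrome stores timestamps as microseconds since 1601-01-01 (Windows epoch).
        return datetime(1601, 1, 1) + timedelta(microseconds=micros)

    with sqlite3.connect(str(history_db)) as db:
        rows = db.execute(
            "SELECT url, title, last_visit_time FROM urls "
            "ORDER BY last_visit_time DESC LIMIT 10"
        )
        for url, title, ts in rows:
            print(chrome_time(ts).date(), url)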