
The Implications of Selective Information Sampling for Individual and Collective Judgments

Periodic Reporting for period 4 - InfoSampCollectJgmt (The Implications of Selective Information Sampling for Individual and Collective Judgments)

Reporting period: 2022-11-01 to 2024-04-30

With my ERC project, I set out to understand the mechanisms leading to the polarization of attitudes across social groups. This is important because excessive polarization of attitudes prevents people from understanding each other, creating challenges for democratic debate and imposing significant costs on our societies.

A number of commentators had proposed that 'filter bubbles' are the key explanation for the polarization of political attitudes. People live in information silos, being mostly exposed to ideas consistent with their opinions and lacking exposure to contrarian ideas. The 'filter bubble' conjecture is a sampling explanation: it focuses on how past experiences and the social environment affect the information people sample. Like other sampling explanations, including those I have developed in my past research, it does not specify the cognitive mechanisms that translate information into attitudes.

My ERC project incorporated insights from cognitive and social psychology into this sampling explanation. The idea was to study how mechanisms such as confirmation biases or motivated cognition affect the predictions of the filter bubble hypothesis and other sampling-based mechanisms. What is groundbreaking about the project is that it brings together two classes of mechanisms (sampling-based and information processing-based) that have been treated in isolation. The project analyzes how the interaction between these mechanisms affects the dynamics of individual judgments and attitudes, collective judgments and attitudes, and finally, the distribution of attitudes over social networks.
# Work Package 1 (Models of belief and attitude formation at the individual level) and Work Package 2 (tests of the models from WP1)

I developed
- models that combine sampling and information-processing components, tested in multiple experiments, to explain judgments about the variability of categories (social groups in particular) and evaluative judgments toward categories,
- models of how mental categories affect inferences and evaluations, tested with Twitter data,
- a learning model explaining how the structure of categories affects exploration and performance, and when people learn from sampling, tested in several experiments,
- a learning model explaining the preference for skewness in decisions from experience, tested in several experiments,
- a model explaining how mental categories affect learning from experience and lead to systematic evaluative biases, tested in experiments,
- a model that explains how experienced rating distributions affect perceptions of quality differences between rated products or services,
- a model that explains how sampling of experiences affects self-perceptions of personality,
- a model of how politicians adjust their attention to policy issues based on feedback they get on social media.

I analyzed the consequences of adaptive sampling for the chooser’s wellbeing.

I also ran experiments that examine how the sampling mode (active versus passive) affects information processing.

I developed novel methodologies
- to measure the typicality of text documents in concepts:
  - an approach that trains text classifiers on discretely labeled data to construct a relevant semantic space in which typicality is measured (see the sketch below),
  - an approach that uses GPT-4,
- to position text documents in policy and ideological spaces using large language models.
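
As a rough illustration of the classifier-based approach, the sketch below is a minimal, hypothetical pipeline built with scikit-learn, not the project's actual implementation (which builds on BERT and GPT-4): it trains a text classifier on discretely labeled documents and reuses the fitted model's class probability as a graded typicality score for new documents. The corpus and labels are invented.

```python
# Minimal, hypothetical pipeline: TF-IDF features + logistic regression define a
# simple semantic space; the classifier's class probability serves as a graded
# typicality score even though the training labels are discrete.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labeled corpus: 1 = document belongs to the concept, 0 = it does not.
docs = [
    "the senate passed the budget bill after a long debate",
    "parliament voted on the new climate regulation",
    "the striker scored twice in the cup final",
    "the band released a new album this spring",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

new_docs = [
    "lawmakers debated an amendment to the tax code",
    "the goalkeeper saved a penalty in extra time",
]
typicality = model.predict_proba(new_docs)[:, 1]  # P(concept membership) as typicality
for doc, score in zip(new_docs, typicality):
    print(f"{score:.2f}  {doc}")
```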


# Work Package 3 (Models of collective belief formation and tests)

I have developed or analyzed
- a model for the emergence of consensus,
- a model that explains how review websites and recommendation systems can lead to systematic biases in collective evaluations (the 'Collective Hot Stove Effect'; illustrated in the sketch after this list),
- a model that explores the implications of ranking algorithms for the popularity of news sources, with model predictions tested in online experiments,
- a model that clarifies the conditions under which majority-based social influence can lead to lock-in on inferior options, with predictions tested by reanalyzing data from previously published experiments.
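
To make the 'Collective Hot Stove Effect' mechanism concrete, here is a toy simulation of my own (invented parameters, not the published model): consumers consult the current average rating before buying, low ratings deter further purchases and therefore go uncorrected, and the resulting average rating underestimates true quality, increasingly so for products with more variable experiences.

```python
# Toy simulation with invented parameters: low ratings deter purchases, so
# negative errors go uncorrected and average ratings are biased downward,
# more strongly for products whose experienced quality is more variable.
import random

def simulate_average_rating(true_quality, noise_sd, n_consumers=200, seed=None):
    rng = random.Random(seed)
    ratings = []
    for _ in range(n_consumers):
        # Buy if there are no ratings yet or the current average rating looks acceptable.
        if not ratings or sum(ratings) / len(ratings) >= 0:
            ratings.append(rng.gauss(true_quality, noise_sd))  # experience posted as a rating
        # Otherwise the consumer skips the product and no corrective rating is added.
    return sum(ratings) / len(ratings)

runs = 2000
low_var = [simulate_average_rating(0.0, 0.5, seed=i) for i in range(runs)]
high_var = [simulate_average_rating(0.0, 2.0, seed=i + runs) for i in range(runs)]

print("true quality of both products: 0.0")
print(f"mean final rating, low-variance product:  {sum(low_var) / runs:+.2f}")
print(f"mean final rating, high-variance product: {sum(high_var) / runs:+.2f}")
```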


# Work Package 4 (Models of belief formation in networks, and tests)

I have developed or analyzed
- a model of categorization-based social influence,
- models to explain attitude polarization based on social media feedback and tested model assumptions in experiments and with Twitter data,
- a series of models that explain attitude polarization and opinion homogenization based on decisions from experience,
- a model that explains how social media feedback can lead to divergence in issue attention between female and male politicians and tested its assumptions using Twitter data.

I have run
- a large-scale experiment that tests the predictions of the model, and designed a follow-up (not yet run),
- multiple social influence experiments to understand whether people tend to follow the common behavior or the behavior of the majority when these differ.

I had identified three key limitations in research prior to my project:
1. The unclear interaction between information sampling biases and information processing biases in producing judgment biases.
2. Uncertain ecological validity of sampling-based theories’ premises.
3. Unclear implications of sampling-based theories for collective judgments and for the distribution of beliefs over social networks.

My project addresses these limitations as follows:

1. Information Processing of Sampled Data:
• Mental categories lead people to generalize from sampled choice alternatives, amplifying judgment errors.
• When people evaluate categories or social groups, they use a simple-averaging heuristic.
• People misperceive the behavior of the majority when observing others’ choices repeatedly and follow this mistaken perception.
• Actively sampled information about uncertain options is processed differently from passively obtained information.
• People use a frequent-winning heuristic for skewed payoff distributions rather than integrating information (see the sketch after this list).
• Everyday sampling experiences bias self-reports of personality traits.
• Larger information samples about the in-group are sufficient to explain why in-groups are perceived as more diverse than out-groups.
2. Ecological Validity:
• Twitter data from US and Spanish politicians shows:
• Politicians’ topic choices and feedback reactions align with the assumptions of sampling models.
• Emergent behavior patterns match the predictions of sampling models.
3. Implications for Collective Judgments:
• Recommendation systems influencing sampling behavior can spread incorrect beliefs and fake news.
• These systems create biases in average ratings on review websites and contribute to lock-in.
• Social media interactions lead to role specialization and behavioral divergence among politicians.
• Easier positive feedback on social media leads to more extreme opinions.
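
The frequent-winning result can be illustrated with a small simulation (an invented example, not the project's experimental paradigm): when a skewed option yields small wins most of the time but occasional large losses, a rule that tracks how often each option won prefers it, while a rule that integrates (averages) the sampled payoffs does not.

```python
# Invented example: a skewed option wins small amounts on most draws but
# occasionally loses big, so its sampled mean falls below the safe option's,
# yet it wins the within-round comparison most of the time.
import random

random.seed(0)

def skewed_option():  # pays 1 with probability .95, otherwise -20 (mean = -0.05)
    return 1 if random.random() < 0.95 else -20

def safe_option():    # always pays 0.5
    return 0.5

N = 1000
samples = [(skewed_option(), safe_option()) for _ in range(N)]

# Integration (averaging) rule: choose the option with the higher sampled mean.
mean_skewed = sum(a for a, _ in samples) / N
mean_safe = sum(b for _, b in samples) / N

# Frequent-winning rule: choose the option that yielded the higher payoff more often.
wins_skewed = sum(a > b for a, b in samples)

print(f"sampled means: skewed = {mean_skewed:.2f}, safe = {mean_safe:.2f}")
print(f"rounds won by the skewed option: {wins_skewed}/{N}")
print("averaging rule chooses:        ", "skewed" if mean_skewed > mean_safe else "safe")
print("frequent-winning rule chooses: ", "skewed" if wins_skewed > N / 2 else "safe")
```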

New Directions:
Developed methods for representing text documents in continuous semantic spaces to improve the assessment of sampling models in naturally occurring environments: an article building on BERT improved the correspondence of text-based typicality measures with human ratings; another article further enhances typicality measurement using large language models; a third article positions political texts in ideological and policy spaces, matching expert coders' positioning better than the state of the art.
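
Complementing the classifier-based sketch above, a continuous-semantic-space version of the typicality measure might look like the following (an illustrative sketch assuming the sentence-transformers package as a stand-in BERT-based encoder; the documents are invented and the articles' exact procedure may differ): documents are embedded with a pretrained encoder, and typicality is scored as cosine similarity to the centroid of documents known to belong to the concept.

```python
# Illustrative sketch: embed documents with a pretrained (BERT-based) sentence
# encoder and score typicality as cosine similarity to the concept centroid.
# Assumes the sentence-transformers package; documents are invented.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

concept_docs = [
    "parliament voted on the new climate regulation",
    "the senate passed the budget bill after a long debate",
]
candidates = [
    "lawmakers debated an amendment to the tax code",
    "the goalkeeper saved a penalty in extra time",
]

concept_emb = model.encode(concept_docs, normalize_embeddings=True)
centroid = concept_emb.mean(axis=0)
centroid /= np.linalg.norm(centroid)

cand_emb = model.encode(candidates, normalize_embeddings=True)
typicality = cand_emb @ centroid  # cosine similarity of unit vectors
for doc, score in zip(candidates, typicality):
    print(f"{score:.2f}  {doc}")
```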