CORDIS - EU research results

Improving collective decisions by eliminating overconfidence: mental, neural and social processes

Periodic Reporting for period 2 - rid-O (Improving collective decisions by eliminating overconfidence: mental, neural and social processes)

Reporting period: 2021-03-01 to 2022-08-31

Human decision making is marred by numerous biases. Among them, overconfidence is perhaps the most pernicious. For example, we have a very good intuitive estimate of the chance that a colleague (of our age and in our state of health) may die of cancer, but we grossly underestimate the same probability when thinking about ourselves. Previous failed attempts at reducing overconfidence targeted individual decisions made privately in the psychology lab. Rid-O proposes to reduce this bias at the SOCIAL level of decision making by determining the underlying mental, neural and social processes involved in overconfidence and by testing these models with causal interventions in group situations.
Objectives:
To develop a brain-computer interface (BCI) that helps people communicate their uncertainty to each other without being impaired by overconfidence. This BCI will measure the brain signals of two people who disagree with each other and feed back to them an estimate of their uncertainty derived from those signals. Our first objective is to see whether resolving the disagreement with this BCI reduces the interference of overconfidence in joint decisions. Our second objective is to work with game theory experts to develop a comprehensive theoretical and empirical understanding of overconfidence under conditions of conflict of interest; we want to understand the mental and neural processes underlying the strategic manipulation of others. Our third objective is to develop a systematic method for removing overconfidence from group processes by elucidating the roles of seeking consensus within groups and of aggregating opinions between groups.
Work on the project started in September 2019. Within less than six months, the Covid-19 pandemic had taken hold and much active, laboratory-based research had come to a halt. In response to this unprecedented situation, we took a number of measures and made some changes to our previous plans, bringing forward the parts of the project that were compatible with the pandemic restrictions.
As part of work package 1, we looked at the theoretical and empirical determinants of social influence in the absence of conflict of interest. Two important projects were completed in this work package, and a third is nearly finished and will be submitted for peer review in the coming weeks.

In the first project, we looked at the neurobiological basis of changes of mind in response to social disagreement. We measured participants’ brain activity with fMRI while they performed many trials of private and social decision making. Our paper showed that a specific brain area, the dorsal anterior cingulate cortex (dACC), is differentially activated in these two forms of conformity. The paper was published in PLoS Biology in early 2022 and can be found here: https://t.co/yWDKKaIqSA.

In the second project, we looked at the role of social influence in learning. In collaboration with our partner lab at the École Normale Supérieure in Paris, we devised a reinforcement learning paradigm for observational learning, in which participants learn by watching another individual’s choices and outcomes. We showed that social learning can be greatly beneficial because it helps simplify our mental workload. The paper was published in PLoS Biology in 2020 and can be found here: https://bit.ly/3JccQoy.

In the third part of work package 1, we used computational modelling, behavioural experiments, eye tracking and electroencephalography (EEG) to examine the work package’s key research question. We examined perceptual decisions under uncertainty in a social context similar to that of basketball referees deliberating over an incident. We introduced a biophysically plausible neural ‘attractor’ population model of joint perceptual decision making whose predictions were supported by our behavioural and neurobiological data. The paper describing these findings is currently under review.
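The observational-learning paradigm mentioned above rests on standard reinforcement-learning machinery: a learner can update action values not only from its own choices and rewards, but vicariously from a demonstrator's observed choices and rewards. The following Python sketch illustrates the general idea only; the learning rate, softmax temperature, and bandit payoffs are illustrative assumptions, not the published model's parameters.

```python
import math
import random

def softmax_choice(q, beta=3.0):
    """Pick an arm with probability proportional to exp(beta * Q)."""
    weights = [math.exp(beta * v) for v in q]
    r = random.random() * sum(weights)
    for arm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return arm
    return len(q) - 1

def q_update(q, arm, reward, alpha=0.2):
    """Rescorla-Wagner update: move Q toward the observed reward."""
    q[arm] += alpha * (reward - q[arm])

# Two-armed bandit: arm 1 pays off more often than arm 0.
p_reward = [0.3, 0.7]
random.seed(0)

q_self = [0.0, 0.0]            # learns from own outcomes only
q_social = [0.0, 0.0]          # additionally learns by observation
demonstrator_q = [0.0, 0.0]

for trial in range(500):
    # Demonstrator acts; the social learner observes choice AND outcome.
    d_arm = softmax_choice(demonstrator_q)
    d_reward = 1.0 if random.random() < p_reward[d_arm] else 0.0
    q_update(demonstrator_q, d_arm, d_reward)
    q_update(q_social, d_arm, d_reward)    # vicarious update, no own cost

    # The purely individual learner must sample the bandit itself.
    s_arm = softmax_choice(q_self)
    s_reward = 1.0 if random.random() < p_reward[s_arm] else 0.0
    q_update(q_self, s_arm, s_reward)

print(q_self, q_social)
```

The point of the sketch is that the observer receives informative samples "for free", which is one concrete sense in which social learning can lighten the mental workload of individual trial-and-error.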
As promised in work package 2, we applied game theory to understand the role of strategic overconfidence in social interactions that involve conflict of interest. This work was a multi-centre collaboration with Ralf Kurvers (Max Planck Center for Adaptive Rationality, Berlin), Uri Hertz (Haifa University, Israel) and Ken Binmore (UCL, London). Having established the game-theoretic basis of overconfidence in social interaction, we proceeded to examine the predictions of this theory in more than 10 experiments and replications. Some of the experiments were conducted online and others in our labs, and more than 800 participants were recruited. The result was a paper published in iScience in 2021; a short summary of the paper can be found here: https://bit.ly/370KOiy. In this period we also delivered another key objective of work package 2 by publishing a paper in which we examined the manifestation of strategic overconfidence in mental health and mental illness.
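One intuition for why conflict of interest breeds overconfidence can be captured in a toy arbitration game; this is a deliberately simplified illustration of the incentive structure, not the game analysed in the iScience paper. Two equally accurate advisors report confidence in rival answers, and the more confident report is the one adopted. A calibrated (honest) reporter is then systematically outcompeted by one who inflates:

```python
import random

def play_round(conf_a, conf_b, correct_a, correct_b):
    """The agent reporting higher confidence has its answer adopted.
    The winner earns 1 point if its answer is correct; the loser earns 0."""
    if conf_a >= conf_b:
        return (1, 0) if correct_a else (0, 0)
    return (0, 1) if correct_b else (0, 0)

random.seed(1)
accuracy = 0.6                  # both agents are right 60% of the time
honest_score = inflated_score = 0

for _ in range(10_000):
    report_honest = accuracy                      # calibrated report
    report_inflated = min(1.0, accuracy * 1.5)    # strategic inflation
    correct_h = random.random() < accuracy
    correct_i = random.random() < accuracy
    h, i = play_round(report_honest, report_inflated, correct_h, correct_i)
    honest_score += h
    inflated_score += i

# The inflator wins every arbitration despite being no more accurate.
print(honest_score, inflated_score)
```

In this stripped-down setting, inflation carries no penalty and therefore dominates honesty; richer game-theoretic treatments add costs for being confidently wrong, which is what makes the equilibrium level of overconfidence an empirical question.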
In work package 3, we looked at the role of face-to-face discussion in groups and at how aggregating opinions in a hierarchical structure could enhance the wisdom of crowds and avoid overconfidence. We asked whether discussion could help predict the future in an efficient, cheap, and inclusive way. We showed that small groups of lay individuals (no more than 4-5 people), when organised, come up with better predictions than those the same individuals provide alone. This work has now been published in the Journal of Experimental Psychology: Applied.
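The hierarchical aggregation idea — partition a crowd into small groups, let each group settle on a consensus, then combine the group consensuses — can be sketched as follows. The group size, the use of the median as a stand-in for within-group consensus, and the simulated estimates are all illustrative assumptions, not the published procedure or data.

```python
import random
import statistics

random.seed(42)
truth = 100.0

# Simulated individual estimates: roughly unbiased noise plus a couple
# of gross overestimates, as is typical of lay forecasts.
individuals = [random.gauss(truth, 20) for _ in range(36)]
individuals += [truth * 3, truth * 4]

def hierarchical_estimate(estimates, group_size=4):
    """Partition into small groups, take each group's consensus
    (modelled here as the group median), then average the consensuses."""
    random.shuffle(estimates)
    groups = [estimates[i:i + group_size]
              for i in range(0, len(estimates), group_size)]
    consensuses = [statistics.median(g) for g in groups]
    return statistics.mean(consensuses)

flat = statistics.mean(individuals)            # plain crowd average
hier = hierarchical_estimate(list(individuals))

print(round(flat, 1), round(hier, 1))
```

Because a small group's consensus tends to discount its own outliers, the aggregate of group consensuses is more robust to extreme individual estimates than the flat crowd mean, which is one mechanism by which this structure can dampen overconfident inputs.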
Rid-O was conceived to examine the role of overconfidence in social interaction. In the course of the research we have conducted so far, an important new question has arisen, not just for us but for the scientific community and society at large: human-AI interaction.

Social interactions shape the human brain - but how will this same brain adjust to interactions with a new type of intelligence? Humans and autonomous artificial agents have started to interact with each other as equals with diverging interests and preferences. The introduction of virtual bots, mechanical robots, and algorithmic medical or legal advisors into human society creates hybrid interactions in which humans have to adjust and invent new norms of social interaction. Many such hybrid interactions have no evolutionary precedent. Take, for example, a self-driving Peugeot equipped with the capacity to coordinate with, and access the distributed collective memory of, all other Peugeots on the street. Mistreatment from a human would be noted not just by that one car but by all Peugeots on the road. What sort of reputation management or cooperation will emerge in humans in response to sharing social spaces with such agents? Building on the research in Rid-O and my established expertise in social neuroscience and human-human interaction, the core research question that we have taken up as essential for the future is: how can research in social cognitive neuroscience inform and help us anticipate the social implications of these future interactions? Rid-O research so far has shown that humans do not adhere to social norms such as reciprocity (Mahmoodi et al., PLoS Biology 2022) and fairness (Karpus et al., iScience 2021) when interacting with AI as they do when interacting with other humans.
Reciprocity in interactive decision making
Morality in Human-AI interactions