CORDIS - EU research results

Searching for the Approximation Method used to Perform rationaL inference by INdividuals and Groups

Periodic Reporting for period 4 - SAMPLING (Searching for the Approximation Method used to Perform rationaL inference by INdividuals and Groups)

Reporting period: 2023-10-01 to 2024-09-30

Over the past two decades, Bayesian models have been used to explain behaviour in domains from intuitive physics and causal learning, to perception, motor control and language. Yet people produce clearly incorrect answers in response to even the simplest questions about probabilities. How can a supposedly Bayesian brain paradoxically reason so poorly with probabilities? Perhaps brains do not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, they could be approximating Bayesian inference through sampling: drawing samples from a distribution of likely hypotheses over time.
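To make the sampling idea concrete, the minimal Python sketch below approximates a Bayesian probability judgment by drawing hypotheses over time with a random-walk Metropolis sampler and reporting the fraction of sampled hypotheses that satisfy the queried event. The target distribution, step size, and all other values are illustrative assumptions, not part of the project's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: an unnormalised posterior over a one-dimensional
# hypothesis space (a two-component mixture, chosen only for illustration).
def unnormalised_posterior(h):
    return 0.7 * np.exp(-0.5 * (h - 1.0) ** 2) + 0.3 * np.exp(-0.5 * ((h + 2.0) / 0.5) ** 2)

def metropolis_chain(n_samples, step_sd=1.0, start=0.0):
    """Draw autocorrelated hypothesis samples with random-walk Metropolis."""
    samples = np.empty(n_samples)
    current = start
    p_current = unnormalised_posterior(current)
    for i in range(n_samples):
        proposal = current + rng.normal(0.0, step_sd)
        p_proposal = unnormalised_posterior(proposal)
        if rng.random() < p_proposal / p_current:   # Metropolis acceptance rule
            current, p_current = proposal, p_proposal
        samples[i] = current
    return samples

# A probability judgment is then just the fraction of sampled hypotheses
# satisfying the queried event, e.g. P(h > 0).
samples = metropolis_chain(5_000)
print("P(h > 0) ≈", np.mean(samples > 0))
```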

This promising approach has been used in existing work to explain biases in judgment. However, different algorithms have been used to explain different biases, and the existing data do not distinguish between sampling algorithms. The first aim of this action was to identify which sampling algorithm is used by the brain by collecting behavioural data on the sample generation process and comparing it to a variety of sampling algorithms from computer science and statistics. The second aim was to show how the identified sampling algorithm can systematically generate classic probabilistic reasoning errors in individuals, with the goal of upending the longstanding consensus on these effects. Finally, the third aim was to investigate how the identified sampling algorithm provides a new perspective on biases in group decision making and errors in financial decision making, and to harness the algorithm to produce novel and effective ways for human and artificial experts to collaborate.
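One way such a comparison can in principle be made is through the statistical signature a sampling algorithm leaves in a sequence of responses, such as its autocorrelation. The sketch below is a hypothetical illustration rather than the action's actual analysis: it contrasts independent sampling with a random-walk Metropolis chain on the same standard normal target, using an assumed step size and sequence length.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorrelation(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Independent sampling: successive draws are uncorrelated.
independent = rng.normal(size=5_000)

# Random-walk Metropolis on the same standard normal target:
# successive draws are strongly autocorrelated.
chain = np.empty(5_000)
current = 0.0
for i in range(chain.size):
    proposal = current + rng.normal(0.0, 0.5)
    if rng.random() < np.exp(0.5 * (current**2 - proposal**2)):  # acceptance ratio
        current = proposal
    chain[i] = current

print("independent lag-1 autocorrelation:", lag1_autocorrelation(independent))
print("Metropolis  lag-1 autocorrelation:", lag1_autocorrelation(chain))
```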

The conclusions of the action were that a self-consistent sampling approach can explain a wide range of behavioural data, including psychological time series, probabilistic reasoning errors, and financial forecasting. New ways for human and artificial agents to collaborate were identified.
The action was very successful. For the first aim, our work to identify the sampling algorithm indicates that the algorithm used by the brain likely involves multiple chains and momentum. For the second aim, we have developed the theoretical framework underlying the project and shown how sampling can explain individual probabilistic reasoning errors. We have developed a model, the Bayesian Sampler, of how people might make estimates from samples, trading off the coherence of probabilistic judgments for improved accuracy, which provides a single framework for explaining phenomena associated with diverse biases such as conservatism and the conjunction fallacy. Other successes include showing how a particular form of sampling can explain the dilution effect, in which non-informative information biases judgments, and how sampling and representation interact when people categorize objects. For the third aim, we have shown that price-prediction time series produced by both people and a sampling algorithm match the dynamics of actual market prices, and have used the approach to identify new ways to utilise AI to probe human cognitive representations.
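For intuition, a Bayesian-Sampler-style judgment reads a probability out of a small number of mental samples after adjusting the raw sample proportion towards 0.5, as if applying a symmetric prior. The sketch below uses one published form of that read-out, (k + β)/(N + 2β), with illustrative sample sizes and parameter values, and shows how it produces conservatism for extreme probabilities; treat the specific numbers as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def bayesian_sampler_estimate(true_p, n_samples, beta=1.0):
    """Sketch of a Bayesian-Sampler-style judgment: draw a few samples of the
    event, then report the posterior-mean probability under a symmetric
    Beta(beta, beta) prior rather than the raw sample proportion."""
    k = rng.binomial(n_samples, true_p)          # successes among mental samples
    return (k + beta) / (n_samples + 2 * beta)   # regresses towards 0.5

# With few samples the adjustment pulls extreme probabilities inwards
# (conservatism); averaged over many simulated judgments:
for p in (0.05, 0.5, 0.95):
    est = np.mean([bayesian_sampler_estimate(p, n_samples=5) for _ in range(10_000)])
    print(f"true p = {p:.2f}  mean judged p ≈ {est:.2f}")
```

Because each queried event is estimated from its own adjusted sample count, estimates for a conjunction can end up above those for one of its constituents, which is how this family of models connects to the conjunction fallacy.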

The action’s detailed results were published in top-tier journals in the field, including four full-length publications in Psychological Review, two in Cognition, two in PLoS Computational Biology, and one in Psychological Science, with other articles published and in preparation. Overview articles on the approach have appeared in Current Directions in Psychological Science and Perspectives on Psychological Science, and in several book chapters. In addition to publications, an up-to-date tagged list of the relevant literature has been maintained on the action’s website (https://www.sampling.warwick.ac.uk/) while data and computer code implementing the models that were developed were disseminated in two R packages (samplr and samplrData).
The action’s conclusions were summarised in a published capstone model, the Autocorrelated Bayesian Sampler, which provides a rational reinterpretation of “noise” within a model of judgment and decision making that goes beyond the state of the art. It inherits the Bayesian Sampler’s explanation of a range of probabilistic fallacies as the result of sacrificing probabilistic coherence for increased accuracy. The Autocorrelated Bayesian Sampler then extends that account to a much wider range of response modes, including decisions, response times, and estimates, as well as the metacognitive judgments of confidence and confidence intervals. This model is driven by the sampling algorithm identified in the action’s first aim, and uses simple mapping rules to explain the interplay of these different response modes. It both fits known empirical effects and predicts new ones. It represents significant progress toward a standard model of decision making, and we have outlined how it can be integrated with existing AI approaches to predict human behaviour in complex environments.
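As a rough illustration of how a single autocorrelated sample stream can drive several response modes at once, the sketch below maps one simulated trial onto a choice, a response-time proxy, and a confidence judgment. The sticky resampling scheme, the stopping rule, and all parameter values are simplified assumptions made for this illustration, not the published model's equations.

```python
import numpy as np

rng = np.random.default_rng(3)

def autocorrelated_trial(p_a=0.7, stickiness=0.6, threshold=4, beta=1.0):
    """Illustrative single trial: draw autocorrelated binary samples
    ('does hypothesis A hold?'), stop when one option leads by `threshold`,
    and map the same sample sequence onto a choice, a response-time proxy,
    and a confidence judgment."""
    samples = []
    current = rng.random() < p_a
    while True:
        # Autocorrelated sampling: with probability `stickiness` repeat the
        # previous sample, otherwise draw afresh from the target probability.
        if rng.random() >= stickiness:
            current = rng.random() < p_a
        samples.append(current)
        lead = 2 * sum(samples) - len(samples)               # (#A) - (#not-A)
        if abs(lead) >= threshold:
            break
    choice = "A" if lead > 0 else "not-A"
    rt_proxy = len(samples)                                   # response time ~ samples drawn
    k = sum(s == (lead > 0) for s in samples)                 # samples favouring the choice
    confidence = (k + beta) / (len(samples) + 2 * beta)       # Bayesian-Sampler-style read-out
    return choice, rt_proxy, confidence

print([autocorrelated_trial() for _ in range(3)])
```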
[Figure: Sampling from a complex S-shaped distribution]