Project description
Is the human sense of probability Bayesian?
Bayesian inference optimally estimates probabilities from limited and noisy data, taking levels of uncertainty into account. The EU-funded NEURAL-PROB project proposes that the human sense of probability is Bayesian, based on the notion that human probability estimates are accompanied by rational confidence levels that define their precision. This Bayesian nature constrains the estimation, neural representation and use of probabilities. The project researchers will build their theory by combining psychology, computational modelling and neuroimaging. Characterizing the sense of probability will improve our understanding of how the human brain represents the world with probabilistic internal models, and of how it learns and makes decisions.
Objective
Bayesian inference optimally estimates probabilities from limited and noisy data by taking into account levels of uncertainty. I noticed that human probability estimates are accompanied by rational confidence levels denoting their precision; I thus propose here that the human sense of probability is Bayesian. This Bayesian nature constrains the estimation, neural representation and use of probabilities, which I aim to characterize by combining psychology, computational models and neuro-imaging.
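As a minimal illustration of this idea (not the project's own model), a Beta-Bernoulli sketch shows how a Bayesian estimate of a probability comes paired with a confidence level: the posterior mean is the estimate, and the posterior standard deviation quantifies its (im)precision, shrinking as data accumulate. The function name and prior parameters below are illustrative assumptions.

```python
import math

def bayesian_estimate(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior over a Bernoulli probability under a Beta(a, b) prior.

    Returns the posterior mean (the probability estimate) and the
    posterior standard deviation (smaller sd = higher confidence).
    """
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Few noisy observations vs. many: similar estimates, very different confidence.
est_small, sd_small = bayesian_estimate(successes=3, failures=1)
est_large, sd_large = bayesian_estimate(successes=30, failures=10)
```

With the same 3:1 ratio of outcomes, both posteriors put the estimate near 0.7, but the larger sample yields a much smaller posterior standard deviation, i.e. a rationally higher confidence.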
I will characterize the Bayesian sense of probability computationally and psychologically. Human confidence as Bayesian precision will be my starting point; I will then test other formalizations and look for the human algorithms that approximate Bayesian inference. I will test whether confidence depends on explicit reasoning (using implicit electrophysiological measures), develop ways of measuring its accuracy in a learning context, and test whether it is trainable and domain-general.
I will then look for the neural codes of Bayesian probabilities, leveraging encoding models for functional magnetic resonance imaging (fMRI) and goal-driven artificial neural networks to propose new codes. I will ask whether the confidence information is embedded in the neural representation of the probability estimate itself, or separable from it.
Last, I will investigate a key function of confidence: the regulation of learning. I will test the involvement of neuromodulators such as noradrenaline in this process, using within- and between-subject variability in the activity of key neuromodulatory nuclei (with advanced fMRI) and the cortical release of noradrenaline during learning together with its receptor density (with positron-emission tomography), and I will test for causality with pharmacological intervention.
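The idea that confidence regulates learning can be sketched with a standard precision-weighted (Kalman-style) update, offered here only as a generic illustration of the principle, not as the project's model: when the learner's belief is imprecise (low confidence), new observations get a high learning rate; as confidence grows, the learning rate shrinks.

```python
def kalman_step(mean, var, obs, obs_noise=1.0):
    """One precision-weighted update of a Gaussian belief.

    The learning rate (Kalman gain) is high when confidence is low
    (large var) and shrinks as the belief becomes more precise.
    """
    gain = var / (var + obs_noise)      # confidence-dependent learning rate
    mean = mean + gain * (obs - mean)   # move toward the observation
    var = (1.0 - gain) * var            # belief becomes more precise
    return mean, var

# Repeated observations of the same value: the learning rate decreases
# as confidence accumulates, and the estimate converges.
mean, var = 0.0, 10.0
gains = []
for obs in [1.0, 1.0, 1.0]:
    gains.append(var / (var + 1.0))
    mean, var = kalman_step(mean, var, obs)
```

A sudden drop in confidence (e.g. after a detected change in the environment) would correspond to inflating `var`, which transiently raises the learning rate again; the project's hypothesis is that neuromodulators such as noradrenaline implement this kind of regulation.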
Characterizing the sense of probability has broad implications: it should improve our understanding of the way we represent our world with probabilistic internal models, the way we learn and make decisions.
Scientific field
- natural sciences > mathematics > applied mathematics > statistics and probability > Bayesian statistics
- social sciences > psychology
- engineering and technology > medical engineering > diagnostic imaging > magnetic resonance imaging
- natural sciences > computer and information sciences > artificial intelligence > computational intelligence
Funding scheme
ERC-STG - Starting Grant
Host institution
75015 PARIS 15
France