Developmental trajectories for model-free and model-based reinforcement learning: computational and neural bases

Objective

Adolescence is defined as the period of life that begins with the biological changes of puberty and ends when the individual attains a stable, independent role in society. During this period, decision-making tends to be impulsive and high-risk, which can have serious consequences, for example accidents caused by dangerous driving and experimentation with alcohol and drugs. It is thus important to understand the neurocognitive processes that underlie decision-making in adolescence.
Decision-making depends on the interaction of several component processes, including the representation of value, response selection and learning. Reinforcement learning integrates all of these processes: it concerns the ability to learn from experience so as to improve future choices, maximizing the occurrence of pleasant events (rewards) and minimizing the occurrence of unpleasant events (punishments).
Theoretical models of reinforcement learning postulate a dual controller for action selection: a 'model-free' controller, which is computationally simple but reflexive and inflexible, and a 'model-based' controller, which is reflective and flexible but computationally complex.
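The contrast between the two controllers can be made concrete with a toy simulation. The sketch below is purely illustrative and uses an invented two-state task (the states, actions and rewards are assumptions, not taken from the project): the model-free controller caches action values incrementally from sampled outcomes, while the model-based controller plans by looking up a learned model of transitions and rewards.

```python
import random

random.seed(0)  # for a reproducible illustration

# Toy deterministic task (invented for this sketch): from start state "s0",
# action "a0" leads to a rewarded state and "a1" to an unrewarded one.
TRANSITIONS = {("s0", "a0"): "s1", ("s0", "a1"): "s2"}
REWARDS = {"s1": 1.0, "s2": 0.0}

def model_free(episodes=100, alpha=0.1):
    """Model-free controller: learns cached action values by trial and error."""
    q = {("s0", "a0"): 0.0, ("s0", "a1"): 0.0}
    for _ in range(episodes):
        action = random.choice(["a0", "a1"])            # explore uniformly
        reward = REWARDS[TRANSITIONS[("s0", action)]]
        # Incremental (delta-rule) update of the cached value toward the outcome
        q[("s0", action)] += alpha * (reward - q[("s0", action)])
    return max(q, key=q.get)[1]                          # greedy action

def model_based():
    """Model-based controller: plans by simulating outcomes with the model."""
    values = {a: REWARDS[TRANSITIONS[("s0", a)]] for a in ("a0", "a1")}
    return max(values, key=values.get)

print(model_free())   # converges to "a0" after enough trial-and-error episodes
print(model_based())  # "a0" immediately, via one sweep of planning
```

The sketch also shows why the two systems dissociate behaviourally: if a reward contingency were changed (e.g. swapping the values in `REWARDS`), the model-based controller would adapt on the very next choice, whereas the model-free controller would need many further episodes to relearn its cached values.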
The development of reinforcement learning in human adolescence has been studied only recently, and studies to date have not attempted to disentangle model-free and model-based behaviour. The aims of the current project are: i. to map the development of reinforcement learning systems via behavioural/computational analysis, and ii. to relate the maturation of model-based reinforcement learning to the functional maturation of specific neural circuits via a functional magnetic resonance imaging experiment.
This study will provide further insight into adolescent reward-based decision-making and its neural bases, within the theoretical framework of reinforcement learning.

Field of science

  • /natural sciences/computer and information sciences/artificial intelligence/machine learning/reinforcement learning

Call for proposal

FP7-PEOPLE-2012-IEF

Funding Scheme

MC-IEF - Intra-European Fellowships (IEF)

Coordinator

University College London
Address
Gower Street
WC1E 6BT London
United Kingdom
Activity type
Higher or Secondary Education Establishments
EU contribution
€ 231 283,20
Administrative Contact
Giles Machell (Mr.)