A Unified Framework for the Assessment and Application of Cognitive Models

Periodic Reporting for period 2 - UNIFY (A Unified Framework for the Assessment and Application of Cognitive Models)

Reporting period: 2019-07-01 to 2020-12-31

Cognitive models formalize substantive theory about how people reason, learn, decide, and act.
Cognitive models also serve as measurement tools that explain observed behavior in terms of constituent
psychological processes. Because of their unique ability to estimate latent processes, cognitive models are
increasingly applied throughout cognitive neuroscience and clinical psychology. Despite their theoretical
appeal and growing popularity, however, the field of cognitive modeling presents an often bewildering
proliferation of ideas and techniques. Current applications appear idiosyncratic, and the state of the art
remains unclear. This lack of systematicity makes it difficult for researchers and practitioners to develop,
understand, and apply important cognitive models.

The main goal of the ERC Advanced Grant project “UNIFY” is to provide a unified, systematic treatment of cognitive
models. By adhering to the basic principles of Bayesian inference, we develop new methods and
propose new procedures to address core modeling questions. The innovation takes place both on an
abstract level (through the activities of a Quantitative Development Team) and on a concrete, model-specific
level (through the activities of a Core Applications Team). The model-specific applications (drift-diffusion
models, stop-signal race models, and reinforcement learning models) were chosen because of their enduring
theoretical impact and their practical relevance for fields such as neuroscience and clinical science.

By setting new standards for cognitive modeling, we aim to advance a more systematic treatment of
uncertainty and push cognitive model evaluation and application to the next level. A secondary goal is to
increase the availability and boost the impact of the project by making the new procedures available in free
software, such as R packages and the JASP program.
The primary achievement so far concerns the development and application of two underused but highly promising statistical techniques: bridge sampling and model averaging. With bridge sampling, researchers can compute a model’s predictive performance in an efficient and reliable manner. With model averaging, researchers can base their overall conclusion on many models simultaneously: each model’s contribution is combined with that of the others, with its influence weighted by its past predictive performance.

In addition, considerable progress has been made toward making JASP suitable as a general-purpose software program for cognitive models. Specifically, much work has been done on making it easy to add modules, on exposing the underlying R code, and on developing a module that allows probabilistic programming through a graphical user interface.

Behind the scenes, considerable effort has been expended to (1) write course books on Bayesian inference and cognitive modeling, and (2) simplify the drift-diffusion model and develop a state-of-the-art routine for its estimation.
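The model-averaging idea described above can be sketched in a few lines. In this minimal illustration, the log marginal likelihoods (which in practice a bridge-sampling routine would supply) and the per-model predictions are made-up numbers, assumed purely for demonstration:

```python
import numpy as np

# Hypothetical log marginal likelihoods for three candidate models,
# e.g. as a bridge-sampling routine might return (values are made up).
log_ml = np.array([-1050.2, -1048.7, -1053.9])

# Posterior model probabilities under equal prior model odds;
# subtract the maximum before exponentiating for numerical stability.
w = np.exp(log_ml - log_ml.max())
w /= w.sum()

# Hypothetical per-model predictions for some quantity of interest.
pred = np.array([0.31, 0.42, 0.28])

# Model-averaged prediction: each model contributes in proportion
# to its predictive performance, so no single model is trusted alone.
averaged = float(w @ pred)
```

Note that the averaged prediction always lies between the most extreme per-model predictions, with the best-predicting model dominating only to the extent that the evidence favors it.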
As the Core Applications Team begins its activities, the plan is to create dedicated JASP modules that make it easy to fit the specific models of interest to data and to draw conclusions. This also requires the development of a generic method for placing a statistical structure on the model parameters.
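One common way to place such a statistical structure on model parameters is a hierarchical setup, in which participant-level parameters are drawn from a shared group-level distribution so that participants inform one another. The sketch below is only an illustration of that general idea, not the project's actual method; all names and values are assumptions. Noisy per-participant drift-rate estimates are shrunk toward the group mean by simple partial pooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Group-level distribution over a per-participant parameter
# (here labeled a "drift rate"; hyperparameter values are illustrative).
group_mean, group_sd = 1.0, 0.3
true_drifts = rng.normal(group_mean, group_sd, size=15)

# Noisy per-participant estimates, as a fitting routine might return.
obs_sd = 0.5
estimates = true_drifts + rng.normal(0.0, obs_sd, size=15)

# Partial pooling shrinks each noisy estimate toward the group mean,
# weighting by the relative precision of the group-level distribution.
shrinkage = group_sd**2 / (group_sd**2 + obs_sd**2)
pooled = group_mean + shrinkage * (estimates - group_mean)
```

Because the shrinkage factor lies between 0 and 1, every pooled estimate sits between the raw estimate and the group mean, which is the stabilizing effect a hierarchical structure is meant to provide.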