Innovative Methods for Psychology: Reproducible, Open, Valid, and Efficient

Periodic Reporting for period 4 - IMPROVE (Innovative Methods for Psychology: Reproducible, Open, Valid, and Efficient)

Reporting period: 2021-12-01 to 2022-11-30

With numerous failures to replicate, common misreporting of results, widespread failure to publish non-significant results or to share data, and considerable potential for bias due to the flexibility of data analysis and researchers' tendency to exploit that flexibility, psychological science is said to experience a crisis of confidence. These issues lead to the dissemination of false positive results and inflate effect size estimates in meta-analyses. The result is poor theory building, an inefficient scientific system, a waste of resources, lower trust in psychological science, and outcomes that are less useful for society.

The goal of this ERC project is to improve psychological science by offering novel solutions to five vexing challenges: (1) to counter misreporting of results by using our new tool statcheck in several studies on reviewers' tendency to demand perfection and by applying it to actual peer review; (2) to counter the biasing effects of common explorations of data (p-hacking) by professing and studying preregistration and by developing promising new statistical approaches that allow for exploration and confirmation with the same data; (3) to counter the common problem of selective outcome reporting in psychological experiments by developing powerful latent variable methods that render it fruitless not to report all outcome variables in a study; (4) to counter the problem of publication bias by studying and correcting misinterpretations of non-significance; and (5) to develop and refine meta-analytic methods that allow for the correction of biases that currently inflate estimates of effects and obscure moderation. The innovative tools developed in this project have the potential to improve the way psychologists (and other scientists) analyse data, disseminate findings, and draw inferences.
The IMPROVE project reanalyzed open data from over 300 published psychological studies to study potential biases caused by flexibility in how data can be analyzed. By systematically varying analytic decisions, the IMPROVE team found large variation in effect sizes across alternative ways of analysing the data, which highlights that most unregistered studies are vulnerable to bias if researchers selectively report results based on their desirability or significance. Diving deeper into the statistical characteristics, we uncovered the functional form of this variation, which allows for further development of statistical tools to appropriately analyze data and to correct for biases both in primary studies and in meta-analyses that seek to summarise results in a line of research.
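The following Python sketch illustrates the general idea of such a multiverse-style reanalysis under assumed, hypothetical analytic decisions (an exclusion rule and an outcome transformation); it is not the project's actual reanalysis code, and all variable names are illustrative.

```python
# Minimal multiverse-analysis sketch (illustrative only): run the "same"
# comparison under every combination of a few analytic decisions and collect
# the resulting effect sizes, so their spread across specifications is visible.
from itertools import product

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical dataset: an outcome, a two-level condition, and a covariate.
n = 200
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),
    "age": rng.normal(40, 12, n),
    "outcome": rng.normal(0, 1, n),
})
df.loc[df["condition"] == 1, "outcome"] += 0.2  # small true effect

# Analytic decisions a researcher could plausibly vary.
exclusion_rules = {
    "none": lambda d: d,
    "trim_outliers": lambda d: d[np.abs(stats.zscore(d["outcome"])) < 2.5],
}
transformations = {
    "raw": lambda y: y,
    "rank": lambda y: stats.rankdata(y),
}

results = []
for (ex_name, exclude), (tr_name, transform) in product(exclusion_rules.items(),
                                                        transformations.items()):
    d = exclude(df).copy()
    y = transform(d["outcome"].to_numpy())
    g0, g1 = y[d["condition"] == 0], y[d["condition"] == 1]
    # Cohen's d as the effect size for this specification.
    pooled_sd = np.sqrt((g0.var(ddof=1) + g1.var(ddof=1)) / 2)
    results.append({"exclusion": ex_name, "transform": tr_name,
                    "cohens_d": (g1.mean() - g0.mean()) / pooled_sd})

# The spread across rows is the "multiverse" variation in the effect size.
print(pd.DataFrame(results))
```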
The IMPROVE project extensively investigated the use and usefulness of preregistration as a promising methodological tool to avoid biases in the analysis and reporting of results. Team members considered over 550 preregistrations to assess their quality and effectiveness in countering biases emerging from researcher degrees of freedom in the analysis of studies. For over 450 of these preregistrations, we also considered the extent to which later publications of the studies conform to the preregistered plans. The results of both studies showed that preregistrations do promote the use of power analyses, but are often insufficiently detailed to fully counter biases in the analysis of data. Furthermore, many articles were found to deviate from the registered plans by adding, omitting, or changing hypotheses. This work contributes to improving scientific practice by showing how preregistration templates and practices can be improved.
In surveys covering over 1300 psychological researchers, the IMPROVE project found that poor statistical intuitions lead many researchers to underestimate the likelihood of studies yielding non-significant results, which could play a key role in the common failure to publish negative results. To handle publication bias, the IMPROVE project developed, extended, and refined several meta-analytic tools that allow for corrections of publication bias and selective reporting of (correlated) outcomes, and developed a graphical tool that highlights potential publication bias in meta-analytic results. We also contributed to a major multidisciplinary survey of nearly 7000 academics on their use of questionable research practices and responsible research practices, and the factors that play a role in such suboptimal or preferred practices. This survey pointed to the important role of publication pressure, scientific norms, and mentoring in both types of behavior, which highlights that improvements in mentoring, promotion of the norms of good science, and tackling publication pressure could help promote responsible research practices. The IMPROVE project built a flexible, efficient, and extensible simulator called the Science Abstract Model (SAM), which can be used to study the effects of p-hacking and publication bias, and the usefulness of novel methodological and statistical tools to counter and correct for biases in the current scientific system. We used simulations to better understand the severity of these biases and the effectiveness of the corrective tools developed in the context of meta-analyses.
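As a rough illustration of the kind of simulation such a tool performs (this is a minimal sketch, not the SAM simulator itself), the following Python example simulates many two-group studies of a small true effect, lets only significant results enter a hypothetical "published" literature, and compares the naive average effect size with and without publication bias.

```python
# Minimal simulation of publication bias (illustrative; not the SAM simulator).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n_per_group, n_studies = 0.2, 30, 5000

observed_d, published = [], []
for _ in range(n_studies):
    g0 = rng.normal(0.0, 1.0, n_per_group)
    g1 = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(g1, g0)
    pooled_sd = np.sqrt((g0.var(ddof=1) + g1.var(ddof=1)) / 2)
    observed_d.append((g1.mean() - g0.mean()) / pooled_sd)
    published.append(p < 0.05)  # only "significant" studies enter the literature

observed_d, published = np.array(observed_d), np.array(published)
print(f"true effect:            {true_d:.2f}")
print(f"mean d, all studies:    {observed_d.mean():.2f}")
print(f"mean d, published only: {observed_d[published].mean():.2f}")  # inflated
```

Averaging only the "published" studies overestimates the true effect, which is exactly the inflation that bias-correcting meta-analytic methods try to undo.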
The IMPROVE project found that the calculation of effect sizes in meta-analyses is commonly irreproducible and prone to error. In line with the high prevalence of reporting errors in the literature, we found that such errors are easily overlooked by reviewers. The openly available tool statcheck retrieves statistical results from articles and manuscripts and checks them for internal consistency, allowing researchers to detect and correct errors in the reporting of statistical results. In a large study using the updated version of statcheck, we found that the implementation of statcheck during peer review at major psychology journals is associated with a steep decline in reporting errors in published articles, highlighting the usefulness of the tool in countering reporting errors.
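statcheck itself is an R package; the Python sketch below only illustrates the core consistency check such a tool performs: recomputing a p-value from a reported test statistic and its degrees of freedom and comparing it with the reported p-value. The function name, tolerance, and example values are illustrative assumptions, not statcheck's actual implementation.

```python
# Sketch of a statcheck-style internal consistency check for a t-test result.
from scipy import stats

def check_t_result(t: float, df: int, reported_p: float,
                   two_tailed: bool = True, tol: float = 0.005) -> dict:
    """Recompute the p-value of a t-test and compare it to the reported one.

    `tol` approximates rounding of the reported p to two decimals; the real
    statcheck handles rounding of both the test statistic and the p-value.
    """
    p = stats.t.sf(abs(t), df)
    if two_tailed:
        p *= 2
    return {
        "recomputed_p": round(p, 4),
        "reported_p": reported_p,
        # A mismatch beyond rounding tolerance is a reporting error; one that
        # also flips significance at alpha = .05 is a "gross" inconsistency.
        "inconsistent": abs(p - reported_p) > tol,
        "gross_inconsistency": (p < 0.05) != (reported_p < 0.05),
    }

# Example: "t(28) = 2.20, p = .04" is consistent; "p = .01" is not.
print(check_t_result(t=2.20, df=28, reported_p=0.04))
print(check_t_result(t=2.20, df=28, reported_p=0.01))
```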
The IMPROVE project studied the performance of state-of-the-art psychometric tools that allow users to study the latent psychological variables that underlie psychological scales and tests. We applied these new tools to find that measurement characteristics often differ between experimental conditions and groups, to study interactions between latent variables in predicting important medical outcomes, and to gain a better understanding of how time limits on cognitive tests differentially affect the performance of gender groups.
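As a very rough sketch of the idea that measurement characteristics can differ between groups, the Python example below fits a one-factor model separately in two simulated groups and compares the loadings. This is only an assumed, simplified proxy: formal measurement-invariance testing uses multi-group confirmatory factor analysis with equality constraints (e.g., in lavaan for R), not separate exploratory fits, and the data here are entirely hypothetical.

```python
# Illustrative check of whether a one-factor measurement model looks similar
# across two groups (hypothetical data; not the project's psychometric models).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def simulate_group(n, loadings):
    """Simulate item scores from a one-factor model with the given loadings."""
    factor = rng.normal(size=(n, 1))
    noise = rng.normal(size=(n, len(loadings)))
    return factor @ np.atleast_2d(loadings) + noise

# Group B has a weaker loading on the third item (non-invariance).
items_a = simulate_group(500, [0.8, 0.7, 0.6, 0.7])
items_b = simulate_group(500, [0.8, 0.7, 0.2, 0.7])

for name, items in [("group A", items_a), ("group B", items_b)]:
    fa = FactorAnalysis(n_components=1, random_state=0).fit(items)
    print(name, "loadings:", np.round(fa.components_.ravel(), 2))
```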
The results of the IMPROVE project were disseminated via a meta-research conference, presentations at conferences, online presentations, workshops on preregistration, peer-reviewed articles, preprints, a website, 15 open data sets, openly available computer code, and a registration template.
The IMPROVE project yielded new insights into the nature of p-hacking and developed solutions to counter these biases in the form of advanced analytic techniques based on multiverse analyses and latent variable models. We further gained extensive insights into the structural factors involved in questionable and responsible research practices, into biases caused by poor statistical intuitions, and into suboptimal practices in preregistration and selective outcome reporting. The project developed new methodological tools that can be used in primary studies and meta-analyses to avoid and correct for common biases in psychological science and many other fields.
Figure: Sixteen reanalysed datasets showing wide variation in outcomes across alternative ways of analysing the data.