CORDIS - EU research results

Information Economics for Science

Periodic Reporting for period 1 - InfoEcoScience (Information Economics for Science)

Reporting period: 2023-01-01 to 2025-06-30

What is the most effective way of funding researchers? What is the optimal mix of incentives for financing knowledge creation and aligning researchers’ incentives with social objectives? How should resources be allocated across different fields?
This project develops tools for the design of policies for supporting and funding science. The starting point is a theoretical and empirical investigation of the grant mechanisms currently used by research funding organizations. We then analyze how changes to these practices would perform.
The first completed paper, published in 2024 in the Quarterly Journal of Economics (co-authored with Jérôme Adda), develops a foundational model of grantmaking. The baseline model with a single field considers researchers who differ in intrinsic and privately known merit. Researchers benefit from receiving funding but face application costs. On the supply side, the grantmaker can only fund a fraction of applicants and aims to allocate grants to the most meritorious. The funding decision is made on the basis of a noisy signal of each applicant’s merit that the grantmaker obtains through the peer review process. A higher signal suggests higher merit, so grants are awarded to those exceeding an acceptance threshold, which is set to exhaust the available budget of grants. Anticipating this selection process, researchers apply only if they are sufficiently optimistic about their chances of generating a signal above the acceptance threshold. Consequently, there exists an application threshold, distinct from the acceptance threshold, above which researchers decide to apply.
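The interplay between the two thresholds can be illustrated with a small numerical sketch. All functional forms and parameter values below (normally distributed merit and review noise, a logistic approximation to the normal CDF, the specific costs and budget) are illustrative assumptions, not taken from the paper: the grantmaker's acceptance threshold adjusts to exhaust the budget, while researchers apply only if the expected value of applying covers the application cost.

```python
import numpy as np

def Phi(x):
    # logistic approximation to the standard normal CDF (adequate for this sketch)
    return 1.0 / (1.0 + np.exp(-1.702 * x))

rng = np.random.default_rng(0)

# Illustrative parameters -- assumptions for this sketch, not values from the paper
n = 200_000
benefit, cost = 1.0, 0.15   # value of a grant vs. cost of applying
budget = 0.10               # grants available, as a fraction of the population
sigma = 0.5                 # std. dev. of the evaluation (peer review) noise
merit = rng.normal(size=n)  # intrinsic, privately known merit

def thresholds(sigma, iters=400):
    """Solve jointly for the acceptance threshold t (set to exhaust the budget)
    and the application threshold a (researchers apply iff merit >= a)."""
    t = 0.0
    for _ in range(iters):
        p_win = Phi((merit - t) / sigma)        # chance of clearing t, given merit
        apply = p_win >= cost / benefit         # apply iff expected value covers cost
        t += 0.5 * (np.mean(apply * p_win) - budget)  # damped budget-balancing step
    a = merit[apply].min()                      # empirical application threshold
    return t, a

t, a = thresholds(sigma)
print(f"acceptance threshold: {t:.2f}, application threshold: {a:.2f}")
```

Because applicants below the acceptance threshold still have some chance of generating a favorable signal, the application threshold settles strictly below the acceptance threshold, matching the distinction drawn in the model.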
A key insight from the baseline model with fixed budget for a single field concerns the effect of evaluation noise on application incentives. Noise in evaluation—whether due to variability in how carefully proposals are assessed or the use of a funding lottery—affects self-selection. We show that increased noise raises incentives to apply, reducing self-selection. Intuitively, as evaluation becomes noisier, the probability of securing funding becomes less tied to merit, encouraging more low-merit researchers to apply. This finding highlights how allocation mechanisms influence the applicant pool.
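The comparative static can be checked in the same stylized setting: with a fixed budget, noisier evaluation enlarges the equilibrium applicant pool. The distributional and parameter choices below are again illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def Phi(x):
    # logistic approximation to the standard normal CDF (adequate for this sketch)
    return 1.0 / (1.0 + np.exp(-1.702 * x))

rng = np.random.default_rng(0)
benefit, cost, budget = 1.0, 0.15, 0.10   # illustrative values
merit = rng.normal(size=200_000)          # privately known merit

def applicant_share(sigma, iters=400):
    """Equilibrium fraction of researchers who apply, given evaluation noise sigma."""
    t = 0.0
    for _ in range(iters):
        p_win = Phi((merit - t) / sigma)
        apply = p_win >= cost / benefit
        t += 0.5 * (np.mean(apply * p_win) - budget)  # t adjusts to exhaust the budget
    return np.mean(apply)

careful, noisy = applicant_share(0.2), applicant_share(1.0)
print(f"applicant share: {careful:.3f} (low noise) vs {noisy:.3f} (high noise)")
```

As evaluation noise rises, funding chances become less tied to merit, so lower-merit researchers find it worthwhile to apply and self-selection weakens.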
An extended version of the model examines the proportional budget allocation system used by major research funding organizations, including the US National Institutes of Health and the European Research Council (ERC). Under this system, we show that reducing evaluation noise in one field decreases applications in that field while increasing applications in other fields. We empirically validate this prediction using a 2014 ERC reform, which linked the budget of each individual panel (such as “LS1: Molecules of Life: Biological Mechanisms, Structures and Functions”) not only to applications in its own domain (e.g. Life Sciences, LS) but also to applications in other domains (e.g. Social Sciences and Humanities, SH, and Physical Sciences and Engineering, PE).
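A two-field version of the same stylized sketch illustrates the spillover: when the budget is split in proportion to application counts, making evaluation more precise in one field shrinks its applicant pool and hence its budget share, which raises funding chances, and thus applications, in the other field. All functional forms and numbers here are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def Phi(x):
    # logistic approximation to the standard normal CDF (adequate for this sketch)
    return 1.0 / (1.0 + np.exp(-1.702 * x))

rng = np.random.default_rng(0)
benefit, cost = 1.0, 0.15
G = 0.2  # total grants, in units of one field's researcher population
merits = [rng.normal(size=100_000), rng.normal(size=100_000)]  # two fields

def equilibrium(sigmas, iters=800):
    """Two-field equilibrium when the budget is split in proportion
    to the number of applications in each field."""
    ts = [0.0, 0.0]        # acceptance thresholds
    shares = [0.5, 0.5]    # proportional budget shares
    apps = [0.0, 0.0]      # applicant pool sizes
    for _ in range(iters):
        for f in (0, 1):
            p_win = Phi((merits[f] - ts[f]) / sigmas[f])
            apply = p_win >= cost / benefit
            apps[f] = np.mean(apply)
            # move t_f toward the level that exhausts field f's budget
            ts[f] += 0.5 * (np.mean(apply * p_win) - shares[f] * G)
        total = apps[0] + apps[1]
        # budget shares track application shares (damped update)
        shares = [0.8 * s + 0.2 * (a / total) for s, a in zip(shares, apps)]
    return apps

base = equilibrium([0.5, 0.5])      # symmetric evaluation noise
precise = equilibrium([0.2, 0.5])   # less noise in field 0 only
print(f"field 0: {base[0]:.3f} -> {precise[0]:.3f}, field 1: {base[1]:.3f} -> {precise[1]:.3f}")
```

Relative to the symmetric benchmark, the field whose evaluation became more precise attracts fewer applications while the other field attracts more, the cross-field pattern the model predicts under proportional budget allocation.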
Ongoing work investigates the design of efficient funding rules. This includes understanding whether reducing the ordeal of applying helps allocate grants more efficiently when the evaluation of applicants is noisy. A different project investigates how the budget should optimally be linked to the number of applicants to counteract the adverse incentives that stem from evaluation noise.
Figure: Budget changes and inter-rater agreement across fields