Designing Institutions to Evaluate Ideas

Final Report Summary - EVALIDEA (Designing Institutions to Evaluate Ideas)

The project investigated a number of central questions in the design of mechanisms for evaluating new ideas and ventures. Following the three headings of the Description of Work, we summarize the main activities and results under each heading.
First, we based our general framework on an equilibrium model of persuasion by the sponsor of an idea or product, built on a signal-jamming model of information acquisition and selective disclosure. We applied the framework to privacy regulation in the context of marketing and targeted campaigning. In a capstone paper on strategic sample selection, we compared the information value of random data to that of selected data. We characterized situations in which selective assignment based on untreated outcomes (typically viewed exclusively as a threat to internal validity) actually benefits an evaluator who observes the reported data. In the process, we also developed a new methodology for comparing the value of information structures on the basis of local dispersion.
Second, turning to the positive analysis of evaluation institutions and their workings, we analyzed the incentives for truthful reporting of information by forecasters, as well as the incentives of evaluators to assess the quality of forecasters, from theoretical, experimental, and empirical viewpoints. We systematized the literature on the topic in a high-profile overview published as a handbook chapter. In this context, we also developed a methodology for structural estimation and applied it empirically to economic forecast data. In a subproject, we developed an experimental methodology to dissect the different elements explaining behavior in a reporter-evaluator model with reputational concerns.
The highlight of this second part of the project is a theoretical analysis of the performance of markets as evaluation tools. We characterized how market prices react to information when market participants have heterogeneous prior beliefs and are subject to wealth effects. When favorable information about one state is publicly revealed, the price of the asset that pays off in that state must increase, so traders whose prior beliefs incline them to purchase this asset reduce their demand, either because they have a limited budget or because they suffer a negative wealth effect. The market can therefore equilibrate only if the price places more weight on the beliefs of traders who lean against the new information. Through this mechanism, prices underreact to information and dynamically exhibit momentum and long-run overreaction, explaining a number of empirical puzzles noted in the asset-pricing and prediction-market literatures.
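The underreaction logic can be illustrated numerically. The sketch below is not the project's model; it assumes, for simplicity, two Arrow securities on states A and B and log-utility traders, under which the equilibrium price of the A-security equals the wealth-weighted average of traders' beliefs. Because Bayesian updating is concave in the prior when the signal favors A, the average of the posteriors falls short of the posterior of the average, so the price moves less than a representative Bayesian benchmark would. All numbers are hypothetical.

```python
# Illustrative sketch (not the paper's model): two Arrow securities on states
# {A, B}, log-utility traders. Under these assumptions the equilibrium price
# of the A-security equals the wealth-weighted average of traders' beliefs.

def bayes(p, lr):
    """Posterior on state A after a public signal with likelihood ratio lr."""
    return p * lr / (p * lr + (1 - p))

def price(beliefs, wealth):
    """Wealth-weighted average belief = state price under log utility."""
    total = sum(wealth)
    return sum(p * w for p, w in zip(beliefs, wealth)) / total

priors = [0.2, 0.8]   # heterogeneous prior beliefs on state A (hypothetical)
wealth = [1.0, 1.0]
lr = 3.0              # public signal favoring state A

p0 = price(priors, wealth)
posteriors = [bayes(p, lr) for p in priors]
p1 = price(posteriors, wealth)   # actual post-signal price
p1_rep = bayes(p0, lr)           # price a single representative Bayesian would set

print(f"pre-signal price:               {p0:.3f}")
print(f"post-signal price:              {p1:.3f}")
print(f"representative-agent benchmark: {p1_rep:.3f}")
# bayes(., lr) is concave in the prior for lr > 1, so by Jensen's inequality
# the averaged posterior lies below the updated average: the price underreacts.
```

With these numbers the post-signal price (about 0.676) stays below the representative-agent benchmark (0.75), the static analogue of the underreaction described above.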
Third, we worked on the design of evaluation institutions. To this end, we formulated a tractable strategic version of Wald's classic model of sequential information acquisition and linked the framework to the canonical model of optimal persuasion. The model is geared toward analyzing the incentives to carry out research (clinical trials) in order to convince a decision maker (a regulator) to approve a new idea (a drug). Within this framework, we compared the performance of different institutions and approval protocols and explained their historical evolution.
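For readers unfamiliar with Wald's setup, the following is a bare-bones sequential sampling sketch, not the project's strategic model: a sponsor draws noisy evidence about drug quality until a cumulative log-likelihood ratio crosses an approval or rejection boundary set by the regulator. All parameters (signal means, boundary values) are illustrative assumptions.

```python
# Wald-style sequential probability ratio test, illustrative only.
import random

random.seed(1)

def sprt(good_drug, mu=0.5, sigma=1.0, upper=2.0, lower=-2.0, max_n=1000):
    """Return ('approve' | 'reject', n_samples).

    Each trial result is drawn from N(+mu, sigma) if the drug is good and
    N(-mu, sigma) if it is bad; llr accumulates the Gaussian log-likelihood
    ratio of 'good' vs. 'bad' until it crosses a boundary.
    """
    mean = mu if good_drug else -mu
    llr = 0.0
    for n in range(1, max_n + 1):
        x = random.gauss(mean, sigma)
        llr += 2 * mu * x / sigma**2   # log[f_good(x) / f_bad(x)]
        if llr >= upper:
            return "approve", n
        if llr <= lower:
            return "reject", n
    return "reject", max_n             # inconclusive evidence: no approval

outcomes = [sprt(good_drug=True)[0] for _ in range(500)]
rate = outcomes.count("approve") / 500
print("approval rate for good drugs:", rate)
```

Moving the boundaries changes the trade-off the regulator faces: a higher approval boundary lowers the chance of approving a bad drug but forces longer, costlier trials, which is the kind of institutional comparison the framework is built for.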
The highlight of this third part of the project is our empirical analysis of the results of clinical trials across research stages. Given that human lives are at stake, the planning and execution of clinical research on volunteers, as well as the publication of results, should conform to the highest ethical standards. However, the evidence produced is susceptible to many kinds of bias, because investigators may suffer from conflicts of interest given the high economic stakes at play. Examining the statistical results reported to the online registry, we documented a suspicious excess mass of statistically significant results in industry-sponsored phase-three trials relative to phase-two trials, but no bunching of results just above the significance threshold. By matching trials across phases, we disentangled the different channels that may increase the presence of significant results. We attributed more than half of the excess mass of significant results to selection that economizes on research costs. For trials by large pharmaceutical companies, the residual is explained by selective reporting; for smaller investigators, who presumably have less reputation at stake, a quarter of the excess mass of significant results remains unexplained.
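The two diagnostics described above, excess mass of significant results across phases and bunching just above the threshold, can be sketched as follows. The z-statistics here are simulated, not the registry data; the phase-3 draws are given a larger mean effect, as if sponsors selected promising drugs into phase 3, and the bin width for the bunching check is an arbitrary choice.

```python
# Hypothetical sketch of the tests described above, on simulated z-statistics.
import random

random.seed(0)
THRESH = 1.96  # two-sided 5% significance threshold for a z-statistic

# Simulated z-statistics (illustrative): phase 3 drawn with a larger mean
# effect, mimicking selection of promising drugs into later phases.
phase2 = [random.gauss(0.5, 1.0) for _ in range(2000)]
phase3 = [random.gauss(1.2, 1.0) for _ in range(2000)]

def share_significant(zs):
    """Fraction of results above the significance threshold."""
    return sum(z > THRESH for z in zs) / len(zs)

def bunching(zs, width=0.20):
    """Mass just above vs. just below the threshold; a ratio far above 1
    would suggest results being nudged across the significance line."""
    above = sum(THRESH < z <= THRESH + width for z in zs)
    below = sum(THRESH - width < z <= THRESH for z in zs)
    return above / max(below, 1)

excess = share_significant(phase3) - share_significant(phase2)
print(f"excess mass of significant phase-3 results: {excess:.3f}")
print(f"bunching ratio at the threshold (phase 3):  {bunching(phase3):.2f}")
```

In this simulation the excess mass is positive by construction (selection), while the bunching ratio stays near 1, the same pattern the project documents: excess significance across phases without bunching at the threshold itself.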