CORDIS - EU research results

The Consequences of Mismeasuring Economic Activity

Periodic Reporting for period 2 - MEImpact (The Consequences of Mismeasuring Economic Activity)

Reporting period: 2021-04-01 to 2022-08-31

Measuring economic activity is a fundamental challenge for empirical work in economics. Most empirical projects raise concerns about whether the data do in fact measure what they purport to measure. Mismeasurement may lead to severe model misspecification, biased estimates, and misleading conclusions and policy decisions. Unfortunately, formally accounting for the possibility of mismeasurement in the econometric model is complicated and possible only under strong assumptions that limit the credibility of the resulting conclusions. Therefore, the most common approaches to measurement issues are to ignore them, to informally argue why they may not be of first-order importance, to abandon the project, or to search for better data.
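As a simple, standard illustration of how mismeasurement biases estimates (a textbook result, not one of the project's contributions): suppose the outcome depends linearly on a variable x* that is observed only with additive, independent noise. Regressing the outcome on the noisy measurement x attenuates the slope toward zero:

\[
  y_i = \beta x_i^* + u_i, \qquad x_i = x_i^* + e_i, \qquad e_i \perp (x_i^*, u_i),
\]
\[
  \hat{\beta}_{\mathrm{OLS}} \;\xrightarrow{\;p\;}\; \beta \,\frac{\sigma^2_{x^*}}{\sigma^2_{x^*} + \sigma^2_{e}} .
\]

The larger the noise variance relative to the variance of the true regressor, the closer the probability limit is to zero. The project is concerned with much more general forms of ME, for which the bias typically cannot be characterised in such a simple closed form and therefore has to be assessed by formal procedures such as those described below.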

The objective of the research carried out in this project is to develop new methodologies for formally assessing the potential impact of measurement error (ME) on all aspects of an empirical project: on model-building, on estimation and inference, and on decision-making. For instance, the new inference procedures allow the researcher to test whether ME is a statistically significant feature that should be modelled, whether ME distorts objects of interest (e.g. a production or utility function), whether ME distorts conclusions from hypothesis tests, and whether ME affects subsequent decision-making.

The project consists of three stages, each with several sub-projects:

Stage I: model-building
(a) Importance of Frictions
(b) ME Specification Tests
(c) Post-Model Selection Inference
(d) Improving Power Against Targeted ME Models

Stage II: Estimation and Inference
(a) ME Impact on Functionals
(b) ME-Robust Hypothesis Testing

Stage III: Decision-Making
(a) The Rank Confidence Set
(b) ME Impact on the Rank Confidence Set and on Decision Theory

As of May 2022, work on this project has been focused on Stages I (especially (b) and (c)) and III (especially (a)), which has led to three accepted papers and one working paper:

- "Inference for Ranks with Applications to Mobility across Neighborhoods and Academic Achievements across Countries" (with Mogstad, Romano, and Shaikh), forthcoming at the Review of Economic Studies
- "Comment on 'Invidious Comparisons: Ranking and Selection as Compound Decisions' " (with Mogstad, Romano, and Shaikh), forthcoming at Econometrica
- "Statistical Uncertainty in the Ranking of Journals and Universities" (with Mogstad, Romano, and Shaikh), forthcoming at AEA Papers and Proceedings
- "Finite- and Large-Sample Inference for Ranks using Multinomial Data with an Application to Ranking Political Parties" (with Bazylik, Mogstad, Romano, and Shaikh), submitted

A new open source software package has been developed that implements all the new statistical procedures proposed in the above papers.

Several theoretical results have been derived for Stages I(a)-(c), which will lead to further working papers in the near future.

In addition, the research resulting from this project has been presented at several conferences and at invited seminars.

As of May 2022, all three stages of the project are progressing well and as expected.

In particular, the work undertaken for Stage III, namely the development of new statistical techniques for assessing the uncertainty of rankings, has led to significant advances beyond the state of the art. Rankings are ubiquitous and important for decision-making throughout society. Since the quality (or “performance”) of the objects to be ranked can rarely be observed directly, it is usually estimated by a performance indicator. Such performance indicators might be university, school, or teacher value-added, neighborhood exposure effects, average hospital waiting times, average student-to-teacher ratios in schools, or customer satisfaction measures for local government services. Rankings based on these performance indicators typically ignore the fact that the indicators are estimated and potentially poor proxies for the true performance of the objects to be ranked. In this sense, the ranking problem suffers from the presence of ME and may mislead subsequent decisions.

The new statistical techniques developed in Stage III address this problem. The proposed confidence intervals for ranks measure how informative a ranking is and are shown to be valid under weak assumptions. We have applied the new techniques to several important ranking problems, such as ranking neighborhoods in the U.S. according to measures of intergenerational mobility, countries by student achievement ("PISA scores"), academic departments and researchers by impact factors, and political parties by their vote shares. In these applications, we have shown that some rankings are very informative, in the sense of short confidence intervals for the ranks, while others are not. These measures of uncertainty can be important inputs for subsequent decisions that depend on the underlying ranking.
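To convey the kind of object Stage III delivers, the following is a deliberately simplified sketch of a confidence interval for the rank of an object whose performance indicator is estimated with noise. It uses pairwise comparisons with a Bonferroni correction and assumes independent, approximately normal estimates; it is not the (sharper) procedure developed in the papers listed above, and all function names and numbers in it are hypothetical.

# Illustrative sketch only: marginal confidence intervals for ranks via
# pairwise comparisons with a Bonferroni correction. This is NOT the
# procedure developed in the project's papers; it merely shows that the
# rank of an object with an estimated performance indicator is uncertain.
import numpy as np
from scipy.stats import norm

def rank_confidence_intervals(estimates, std_errors, alpha=0.05):
    """For each object j, return bounds that cover the true rank of j
    (rank 1 = highest performance) with probability at least 1 - alpha."""
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    p = len(estimates)
    # Bonferroni critical value for the p - 1 two-sided pairwise comparisons
    # involving object j (assumes independent, approximately normal estimates).
    crit = norm.ppf(1 - alpha / (2 * (p - 1)))
    lower = np.empty(p, dtype=int)
    upper = np.empty(p, dtype=int)
    for j in range(p):
        diff = estimates - estimates[j]
        se = np.sqrt(std_errors**2 + std_errors[j] ** 2)
        better = np.sum(diff > crit * se)    # objects significantly above j
        worse = np.sum(-diff > crit * se)    # objects significantly below j
        lower[j] = 1 + better                # j cannot rank better than this
        upper[j] = p - worse                 # j cannot rank worse than this
    return lower, upper

# Hypothetical performance indicators (e.g., estimated mobility measures)
# for five objects, together with their standard errors.
est = [0.42, 0.40, 0.35, 0.20, 0.19]
se = [0.03, 0.03, 0.04, 0.02, 0.05]
lo, hi = rank_confidence_intervals(est, se)
for j, (l, h) in enumerate(zip(lo, hi)):
    print(f"object {j}: estimated rank {1 + sum(e > est[j] for e in est)}, "
          f"95% CI for the rank: [{l}, {h}]")

In the applications listed above, the estimates for different objects are typically dependent and one often needs intervals that are valid jointly across all objects; the procedures developed in the papers, and implemented in the open-source package mentioned earlier, address such complications, whereas the sketch above only conveys the basic idea.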