Forecasting is a fundamental tool in economics, statistics, business, and other sciences. Judging whether forecasts are good and robust is of great importance, since forecasts are used every day to guide policymakers' and practitioners' decisions. This proposal addresses four important issues that researchers encounter in practice.
A first issue is how to assess whether forecasts are optimal in the presence of instabilities. Optimality is an important property of a model's forecasts: if forecasts are not optimal, the model can be improved. Existing methods for assessing forecast optimality are not robust to instabilities, which are widespread in the data. Developing such robust methods, and learning what they tell us about the forecasts of widely used economic models, is the first task of this project.
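As a point of reference (not the project's own method), a standard check of forecast optimality under squared-error loss is a Mincer-Zarnowitz regression of realizations on forecasts, where optimality implies a zero intercept and unit slope; robust versions re-run such tests over rolling subsamples. The sketch below uses simulated data and illustrative names throughout:

```python
# Illustrative sketch of a Mincer-Zarnowitz optimality regression.
# All data are simulated; function and variable names are hypothetical.
import numpy as np

def mincer_zarnowitz(y, yhat):
    """Regress realizations on forecasts: y_t = a + b * yhat_t + e_t.
    Under optimality (squared-error loss), a = 0 and b = 1."""
    X = np.column_stack([np.ones_like(yhat), yhat])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (intercept a, slope b)

rng = np.random.default_rng(0)
yhat = rng.normal(size=500)                     # forecasts
y = yhat + rng.normal(scale=0.5, size=500)      # realizations: forecast + noise
a, b = mincer_zarnowitz(y, yhat)                # a near 0, b near 1 here
```

Instability-robust procedures would compute statistics like these over rolling windows rather than the full sample, so that local deviations from (0, 1) are not averaged away.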
A second problem forecasters face in practice is how to evaluate density forecasts. Density forecasts are important tools for policymakers because they quantify the uncertainty around point forecasts. However, existing methodologies focus on a null hypothesis that is not necessarily the one of interest to the forecaster. The second task is to develop tests for density forecast evaluation that address forecasters' needs.
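For context (again, not the tests this project proposes), a common starting point for density forecast evaluation is the probability integral transform (PIT): if the forecast density is correct, the PITs of the realizations are i.i.d. Uniform(0,1), which can be checked with a uniformity test. A minimal simulated sketch, with a correctly specified standard-normal forecast density by construction:

```python
# Illustrative sketch: density forecast evaluation via PITs.
# Data are simulated; the forecast density is correct by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=1.0, size=1000)   # realized outcomes

# PIT: evaluate the forecast CDF (here standard normal) at each realization
pits = stats.norm.cdf(y, loc=0.0, scale=1.0)

# If the density forecast is correct, the PITs are i.i.d. Uniform(0,1);
# a Kolmogorov-Smirnov test against the uniform is one simple check.
ks_stat, p_value = stats.kstest(pits, "uniform")
```

The proposal's point is that a uniformity null like this one is not always the hypothesis of practical interest, motivating tests tailored to the forecaster's actual question.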
A third important question is: why do we use forecast tests to evaluate models' performance? The third task of this project is to understand the relationship between traditional in-sample tests and forecast evaluation tests, and to develop a framework that clarifies under which circumstances forecast tests are more useful than typical in-sample tests.
A final question is how researchers can improve models that forecast poorly. Model misspecification is widespread, yet economists are often left wondering exactly which parts of their models are misspecified. The fourth task is to propose an empirical framework for addressing this issue: by estimating time-varying wedges, we can locate where misspecification arises and gauge how important it is.