
New Methods and Applications for Forecast Evaluation

Final Report Summary - FORECASTING (New Methods and Applications for Forecast Evaluation)

Forecasting is a fundamental tool in economics, statistics, business, and other sciences. Judging whether forecasts are good and robust is of great importance, since forecasts are used every day to guide policymakers' and practitioners' decisions. This project has developed forecast methodologies and empirical analyses that address several important issues that researchers encounter in practice.

First, the PI developed forecast optimality tests that evaluate whether forecasts are optimal in the presence of instabilities. Optimality is an important property of a model's forecasts: if forecasts are not optimal, then the model can be improved. Previously available methods for assessing forecast optimality were not robust to instabilities, which are widespread in the data. The PI also developed forecast evaluation tests designed to tackle an important source of time variation, namely state-dependence.
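To illustrate the kind of optimality check involved, here is a minimal sketch of a rolling Mincer–Zarnowitz-style regression: realizations are regressed on forecasts, and the joint hypothesis of a zero intercept and unit slope is tested window by window, so that local departures from optimality are not averaged away. This is only an illustrative sketch (the synthetic data, window length, and plain Wald statistic are assumptions); the project's actual tests use fluctuation-type statistics with appropriate critical values, which are not reproduced here.

```python
import numpy as np

def mincer_zarnowitz_wald(y, f):
    """Wald statistic for H0: alpha = 0, beta = 1 in y = alpha + beta*f + e."""
    n = len(y)
    X = np.column_stack([np.ones(n), f])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (n - 2)          # homoskedastic error variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # OLS coefficient covariance
    d = coef - np.array([0.0, 1.0])           # deviation from optimality
    return float(d @ np.linalg.inv(cov) @ d)

def rolling_mz(y, f, window):
    """MZ statistic on each rolling window; a large maximum flags
    episodes where forecasts were locally non-optimal."""
    return np.array([mincer_zarnowitz_wald(y[i:i + window], f[i:i + window])
                     for i in range(len(y) - window + 1)])

# Synthetic example: forecasts are optimal throughout the sample.
rng = np.random.default_rng(0)
f = rng.normal(size=200)
y = f + 0.3 * rng.normal(size=200)
stats = rolling_mz(y, f, window=60)
print(stats.max())
```

Running the full-sample regression once would miss instabilities that wash out on average; the rolling version is the simplest way to make the check robust to them.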

Second, the PI developed tests to evaluate the correct specification of forecast densities. Density forecasts are important tools for policymakers since they quantify the uncertainty around point forecasts. However, existing methodologies focus on a null hypothesis that is not necessarily the one of interest to the forecaster. The PI's novel tests instead focus on evaluating forecasting ability. In addition, the PI developed a novel uncertainty index based on predictive densities.
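A standard building block for density forecast evaluation is the probability integral transform (PIT): if the forecast density is correctly specified, the PITs of the realizations are i.i.d. uniform on [0, 1]. The sketch below illustrates that idea with a plain Kolmogorov–Smirnov check against uniformity; the N(0,1) forecast density and the KS test are illustrative assumptions, not the project's tests, which target forecasting ability under a different null.

```python
import numpy as np
from scipy import stats

# Synthetic realizations drawn from the same density the forecaster reports,
# so the forecast density is correctly specified by construction.
rng = np.random.default_rng(1)
y = rng.normal(size=500)

# PIT: u_t = F_t(y_t). Under correct specification, u_t ~ i.i.d. U(0, 1).
u = stats.norm.cdf(y)

# Simple uniformity check on the PITs (illustrative, not the PI's test).
ks_stat, pval = stats.kstest(u, "uniform")
print(ks_stat, pval)
```

A misspecified forecast density (say, one that understates variance) would push the PITs toward a hump- or U-shaped distribution, and the uniformity check would reject.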

Third, the PI developed methodologies to improve models that do not forecast well. In particular, the PI proposed using time-varying margins to assess where the model misspecification is located, and how important it is, in order to help researchers identify exactly which parts of their models can be improved. In addition, the PI investigated several big-data dimension-reduction techniques that are expected to improve in-sample fit and out-of-sample forecasting performance in the presence of instabilities; the theoretical framework allows for a large number of predictors, which may or may not have a factor structure.
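The dimension-reduction idea can be sketched with the most common such technique, principal-component factors: a large panel of predictors is compressed into a few estimated factors, which then enter a small forecasting regression. Everything below (the data-generating process, the number of factors, the least-squares forecast equation) is an assumed toy setup for illustration, not the project's specific estimator.

```python
import numpy as np

def pc_factors(X, k):
    """Extract k principal-component factor estimates from a T x N panel."""
    Xc = X - X.mean(axis=0)                    # demean each predictor
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * S[:k]                    # T x k factor estimates

# Toy panel: N = 50 predictors driven by k = 2 latent factors plus noise.
rng = np.random.default_rng(2)
T, N, k = 200, 50, 2
F = rng.normal(size=(T, k))                    # latent factors
X = F @ rng.normal(size=(k, N)) + 0.5 * rng.normal(size=(T, N))
y = F[:, 0] + 0.3 * rng.normal(size=T)         # target loads on factor 1

# Forecast regression on the estimated factors instead of all 50 predictors.
Fhat = pc_factors(X, k)
Z = np.column_stack([np.ones(T), Fhat])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 2))
```

Regressing on two estimated factors rather than fifty noisy predictors keeps the forecast equation parsimonious, which is precisely why such techniques can help out-of-sample, especially when parameters are unstable.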

Finally, the PI studied a framework that clarifies the relationship between traditional in-sample tests and forecast evaluation tests, identifying the circumstances under which forecast tests are more informative than typical in-sample tests.