New Methods For Forecast Evaluation

Final Report Summary - NEW FORECAST METHODS (New Methods For Forecast Evaluation)

Forecasting is a fundamental tool in Economics, Statistics, Business and other sciences. Judging whether forecasts are good and robust is of great importance since forecasts are used every day to guide policymakers' and practitioners' decisions. The Marie Curie grant has helped the researcher finalize several projects that develop tools for improving the way forecasts are evaluated.

A first project, titled “Out-of-Sample Forecast Tests Robust to the Choice of Window Size” (with A. Inoue, Journal of Business and Economic Statistics 30(3), 2012, 432-453), develops new methodologies for evaluating the forecasting performance of economic models. The novelty of the methodologies is that they are robust to the choice of the estimation window size. This choice has long been a concern for practitioners, since different window sizes may lead to different empirical results. An ad-hoc window size may fail to detect significant predictive ability even when such ability exists for other window sizes; conversely, there is the concern that satisfactory results might have been obtained simply by chance, after trying many window sizes. The project proposes new testing procedures that evaluate a model’s forecasting performance across a range of estimation window sizes and base inference on summary statistics of the resulting sequence of test statistics. The project derives the asymptotic properties of the tests and their critical values, and demonstrates their usefulness in forecasting exchange rates.
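The mechanics of evaluating forecasts over many window sizes and then summarizing the sequence can be sketched as follows. This is a minimal illustration with synthetic data: the function name, the constant-mean benchmark, and the use of the maximum and average as summary statistics are illustrative assumptions, not the paper’s actual statistics, whose robust critical values are derived in the article.

```python
import numpy as np

def mspe_difference(y, x, window):
    """Rolling out-of-sample MSPE of a constant-mean benchmark minus the
    MSPE of a one-predictor OLS model, for one estimation window size.
    x is assumed to be a predictor already lagged appropriately."""
    err_bench, err_model = [], []
    for t in range(window, len(y)):
        y_win, x_win = y[t - window:t], x[t - window:t]
        f_bench = y_win.mean()                        # benchmark forecast
        X = np.column_stack([np.ones(window), x_win])
        beta = np.linalg.lstsq(X, y_win, rcond=None)[0]
        f_model = beta[0] + beta[1] * x[t]            # model forecast
        err_bench.append((y[t] - f_bench) ** 2)
        err_model.append((y[t] - f_model) ** 2)
    return np.mean(err_bench) - np.mean(err_model)    # > 0 favours the model

# Synthetic data in which the predictor x genuinely helps forecast y.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = 0.5 * x + rng.standard_normal(300)

window_sizes = range(40, 121, 20)
stats = [mspe_difference(y, x, w) for w in window_sizes]
sup_stat, ave_stat = max(stats), np.mean(stats)       # summary statistics
print(f"sup = {sup_stat:.3f}, ave = {ave_stat:.3f}")
```

Instead of reporting the result for one hand-picked window, inference is based on summaries such as `sup_stat` or `ave_stat`, compared against critical values that account for the search over window sizes rather than standard ones.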

A second project, titled “Conditional Predictive Density Evaluation in the Presence of Instabilities” (with T. Sekhposyan, Journal of Econometrics 177(2), 2013, 199-212), develops new tests to evaluate the correct specification of predictive densities in the presence of instabilities. Predictive densities provide a measure of uncertainty around mean forecasts, thus enabling researchers to quantify the risk in forecast-based decisions. The project develops new tests for the correct specification of density forecasts at each point in time; a special case of the test is a test for the constancy of predictive densities over time. The project investigates the small-sample properties of the proposed tests in Monte Carlo simulation exercises and shows that the proposed tests have good power to detect mis-specification in the predictive distribution even when the mis-specification affects only a sub-sample, a situation in which existing tests may fail.
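A standard building block of such density evaluation is the probability integral transform (PIT): if the predictive density is correctly specified, the PITs of the realizations are i.i.d. uniform on [0, 1]. The sketch below illustrates the idea with a textbook Kolmogorov–Smirnov-type distance, not the paper’s actual statistic, and uses synthetic data in which the density is misspecified only in a sub-period; computing the distance over rolling subsamples (as opposed to only once over the full sample) is what keeps such localized misspecification from being averaged away.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ks_uniform(pits):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    PITs and the CDF of the uniform distribution on [0, 1]."""
    u = np.sort(pits)
    n = len(u)
    ecdf = np.arange(1, n + 1) / n
    return np.max(np.abs(ecdf - u))

# Synthetic example: the forecaster issues N(0, 1) densities throughout,
# but in the second half of the sample the data actually come from
# N(1, 1), so the density is misspecified only in that sub-period.
rng = np.random.default_rng(1)
y = np.concatenate([rng.standard_normal(200),
                    1.0 + rng.standard_normal(200)])
pits = np.array([norm_cdf(v) for v in y])   # PITs under the N(0,1) forecast

full_sample = ks_uniform(pits)
rolling = [ks_uniform(pits[t:t + 100]) for t in range(0, 300, 50)]
print(full_sample, max(rolling))
```

The rolling statistics computed on windows inside the misspecified sub-period are much larger than elsewhere, whereas a single full-sample statistic dilutes the break with the correctly specified half of the sample.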

A third project, titled “Evaluating Predictive Densities of US Output Growth and Inflation in a Large Macroeconomic Data Set” (with T. Sekhposyan, International Journal of Forecasting 30(3), July-September 2014, 662–682), evaluates conditional predictive densities for US output growth and inflation using a number of commonly used forecasting models that rely on large numbers of macroeconomic predictors. The project finds that normality is rejected for most models. Interestingly, however, combinations of predictive densities appear to be correctly approximated by a normal density.

A fourth project, “Forecast Rationality Tests in the Presence of Instabilities, With Applications to Federal Reserve and Survey Forecasts” (with T. Sekhposyan, Journal of Applied Econometrics 31(3), April-May 2016, 457-610), develops tests for forecast optimality that can be used in unstable environments. The framework applies to regression-based tests of forecast optimality in general. The proposed tests are used to evaluate the optimality of the Federal Reserve Greenbook forecasts, as well as a variety of survey-based forecasts. The tests find that the forecasts are neither rational nor optimal, calling into question previous results in the literature.
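The classic regression-based check of this kind is the Mincer–Zarnowitz regression: regress the realizations on a constant and the forecasts, and test whether the intercept is 0 and the slope is 1. The sketch below shows the basic mechanics on synthetic data with a textbook Wald statistic under homoskedastic errors; it is not the instability-robust version developed in the paper, which instead examines such statistics over subsamples.

```python
import numpy as np

def mincer_zarnowitz_wald(realized, forecast):
    """OLS of realized values on [1, forecast]; Wald statistic for the
    joint rationality hypothesis (intercept, slope) = (0, 1)."""
    n = len(realized)
    X = np.column_stack([np.ones(n), forecast])
    beta = np.linalg.lstsq(X, realized, rcond=None)[0]
    resid = realized - X @ beta
    sigma2 = resid @ resid / (n - 2)          # homoskedastic error variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # OLS covariance matrix
    r = beta - np.array([0.0, 1.0])           # deviation from H0: (0, 1)
    wald = r @ np.linalg.solve(cov, r)        # ~ chi2(2) under H0
    return beta, wald

rng = np.random.default_rng(2)
f = rng.standard_normal(200)                      # forecasts
y_good = f + 0.5 * rng.standard_normal(200)       # rational forecasts
y_bias = f + 0.8 + 0.5 * rng.standard_normal(200) # systematically biased

beta_g, wald_g = mincer_zarnowitz_wald(y_good, f)
beta_b, wald_b = mincer_zarnowitz_wald(y_bias, f)
print(wald_g, wald_b)
```

Under rationality the statistic is approximately chi-squared with 2 degrees of freedom, so the biased forecasts produce a far larger value; a full-sample version, however, can miss rationality failures confined to part of the sample, which motivates the subsample-based tests of the paper.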

A fifth project, “Alternative Tests for Correct Specification of Conditional Forecast Densities” (with T. Sekhposyan; Barcelona Graduate School of Economics Working Paper 758, revised January 2016), proposes new tests for evaluating the correct specification of density forecasts. The proposed methods are developed in an environment where the estimation error of the parameters used to construct the predictive densities is preserved asymptotically under the null hypothesis. The tests offer a simple way to evaluate the correct specification of predictive densities. Monte Carlo simulation results indicate that the tests are well-sized and have good power in detecting mis-specification. An empirical application to the Survey of Professional Forecasters and a baseline macroeconomic model shows the usefulness of the methodology.

Some of the tests developed by the researcher are currently being considered by the European Central Bank for inclusion in its forecast evaluation procedures.

In addition, the investigator has successfully launched a new workshop series, the “Barcelona GSE Summer Forum in Time Series”, which promotes and develops a network of time series econometricians and forecasters in Europe. This network is essential for the investigator’s permanent integration at the host institution and the successful continuation of her research career in Europe.

Papers and replication codes are available at: