
The Science of Forecasting: Probabilistic Foundations, Statistical Methodology and Applications

Final Report Summary - SCIENCEFORE (The Science of Forecasting: Probabilistic Foundations, Statistical Methodology and Applications)

The future being uncertain, forecasts ought to be probabilistic in nature, that is, they ought to take the form of probability distributions over future quantities and events. In the ScienceFore project, we advanced the mathematical foundations of the science of forecasting, developed statistical methodology for making and evaluating probabilistic forecasts, and applied our findings in meteorological and economic case studies.

Comparing the accuracy of two or more forecasting methods is a matter of ubiquitous interest: Which weather channel provides the best temperature forecasts? How do economic experts compare to government agencies or ordinary consumers when it comes to predicting the coming year’s inflation rate? When conducting such forecast comparisons, one needs to pick appropriate measures of predictive accuracy. Technically, a performance measure is said to be proper or consistent for a given task if a forecaster's best strategy is to provide the most careful and honest forecasts she can generate. However, the choice of a particular proper or consistent measure is often hard to make and justify.

In our work, we tackled this problem from a novel perspective. Specifically, we showed that for a wide range of common prediction tasks, every proper or consistent measure of predictive accuracy admits a representation as a weighted average of very simple, easily interpretable functions, which we call elementary scoring functions. This result is not only of mathematical appeal but also of practical use: It suggests a simple graphical tool, which we call a Murphy diagram, that monitors forecasting performance in terms of the elementary scoring functions in a comprehensive and economically interpretable way.
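To make the representation concrete, here is a minimal Python sketch (assuming NumPy; the simulated forecasters and the scaling of the scores are illustrative) of the elementary scoring functions for median forecasts, and of the curves that a Murphy diagram plots:

```python
import numpy as np

def elementary_score(theta, x, y, alpha=0.5):
    """Elementary scoring function for the alpha-quantile at threshold theta:
    up to scaling, the score is (1 - alpha) if y <= theta < x, alpha if
    x <= theta < y, and 0 otherwise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.where((y <= theta) & (theta < x), 1.0 - alpha,
                    np.where((x <= theta) & (theta < y), alpha, 0.0))

def murphy_curve(x, y, thetas, alpha=0.5):
    """Mean elementary score at each threshold -- the curve that a Murphy
    diagram plots, one curve per forecasting method."""
    return np.array([elementary_score(t, x, y, alpha).mean() for t in thetas])

rng = np.random.default_rng(1)
y = rng.normal(size=5000)                     # observations
x_informed = y + rng.normal(0.0, 0.3, 5000)   # skilful median forecast
x_climatic = np.zeros(5000)                   # climatological forecast: always 0

thetas = np.linspace(-4.0, 4.0, 401)
curve_informed = murphy_curve(x_informed, y, thetas)
curve_climatic = murphy_curve(x_climatic, y, thetas)

# Integrating the elementary scores over theta recovers the familiar pinball
# (quantile) loss; at alpha = 0.5 that is half the mean absolute error.
dtheta = thetas[1] - thetas[0]
print(curve_informed.sum() * dtheta, 0.5 * np.abs(x_informed - y).mean())
```

A forecaster whose curve lies below a competitor's at every threshold is preferred under every consistent scoring function simultaneously, which is what makes the diagram fully comprehensive.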

We also developed original statistical methodology for probabilistic forecasting and applied it in economic and meteorological case studies. In particular, we studied methods of forecast aggregation and showed, perhaps surprisingly, that calibrated forecasts (i.e. forecasts yielding event probabilities in accordance with observations) become uncalibrated under linear aggregation. As a remedy, we developed nonlinear aggregation methods, such as the beta-transformed linear pool. We applied these findings in meteorological and economic settings and devised statistical postprocessing techniques for temperature forecasts based on ensembles of numerical weather prediction (NWP) models. Our investigations of survey and panel forecasts suggest that ensemble methods can be successfully applied in economics as well.
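The aggregation result can be illustrated in a small simulation. The sketch below (assuming NumPy and SciPy; the scenario and the moment-matched choice of the beta parameter are illustrative, not the project's actual fitting procedure) pools two individually calibrated predictive distributions: the linear pool's probability integral transform (PIT) has variance well below the value 1/12 of a uniform PIT, indicating overdispersion, while a beta transform moves it back toward calibration:

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta = Beta(a, b) CDF
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 20000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + x2                         # outcome; forecaster i observes only x_i

# Each forecaster's predictive N(x_i, 1) is the true conditional law of y
# given x_i, hence calibrated: its PIT F_i(y) is uniform on [0, 1].
pit1 = norm.cdf(y - x1)
pit2 = norm.cdf(y - x2)

# Linear pool: the equally weighted mixture CDF, evaluated at the outcome.
# Its PIT is hump-shaped (overdispersed pool), so its variance falls
# below the uniform benchmark of 1/12.
pit_pool = 0.5 * (pit1 + pit2)
print(pit1.var(), pit2.var(), pit_pool.var())

# Beta-transformed linear pool: compose the pool with a Beta(a, a) CDF.
# Moment-matching a symmetric Beta to the pooled PIT (illustrative choice):
# Var[Beta(a, a)] = 1 / (4 * (2a + 1)).
a = (1.0 / pit_pool.var() - 4.0) / 8.0
pit_beta = betainc(a, a, pit_pool)  # PIT of the beta-transformed pool
print(a, pit_beta.var())            # variance moves back toward 1/12
```

In this toy setting the pooled PIT is exactly the average of two independent uniforms, so linear aggregation of two calibrated forecasts is visibly uncalibrated, and the nonlinear beta transform largely repairs it.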