Project III


Forecast Interpretation and Evaluation

Hailiang Du

Description

Simulation models are widely employed to make forecasts of future conditions for complex physical systems such as meteorological, cosmological and energy systems. Although most simulation models are deterministic, i.e. the system dynamics and initial condition define the future state unambiguously, to make reliable forecasts one needs to account for multiple sources of uncertainty: unknown initial conditions, unknown parameter values and model discrepancy. In practice, ensemble forecasting (a form of Monte Carlo analysis) is often adopted to account for these uncertainties: instead of making a single forecast, an ensemble of forecasts is produced.
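As an illustrative sketch (not part of the project brief), the ensemble idea can be demonstrated on a simple dynamical system. Here the chaotic logistic map stands in for the deterministic simulation model; the initial-condition uncertainty level (`obs_noise`), ensemble size and number of steps are all assumed for the example.

```python
import numpy as np

def logistic_map(x, r=4.0, steps=10):
    """Iterate the deterministic logistic map x -> r*x*(1-x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

rng = np.random.default_rng(0)
x0_true = 0.3          # "true" initial condition (unknown in practice)
obs_noise = 1e-3       # assumed initial-condition uncertainty

# Ensemble forecast: perturb the estimated initial condition and
# run each member forward under the same deterministic dynamics.
ensemble_ic = x0_true + obs_noise * rng.standard_normal(64)
ensemble_fc = logistic_map(ensemble_ic, steps=10)

truth = logistic_map(x0_true, steps=10)
print(f"truth = {truth:.4f}")
print(f"ensemble spread (std) = {ensemble_fc.std():.4f}")
```

Because the map is chaotic, tiny initial perturbations are amplified, and the spread of the ensemble indicates the forecast uncertainty at that lead time.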

In both scientific studies and practical applications, probabilistic forecasts convey information about forecast uncertainty that a set of point values cannot. A probability forecast describes our expectation of how likely an event is on a particular occasion. The question then arises of how to transform an ensemble into a probability distribution function, a task often referred to as (ensemble) forecast interpretation. Given a probability forecast, one may wish to ask whether it is right or wrong. Unlike point forecasts, however, individual probability forecasts have no such clear sense of “right” and “wrong”; one can only measure how good probabilistic forecasts are by examining a large set of them. The performance of probabilistic forecasts can be evaluated using scoring rules.
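A minimal sketch of both steps, under assumed synthetic data: Gaussian kernel dressing (one of several interpretation methods, cf. Bröcker and Smith) turns an ensemble into a predictive density, and the ignorance (logarithmic) score, a proper scoring rule, evaluates a collection of such forecasts against the outcomes. The kernel width `sigma`, ensemble size and data-generating process are all assumptions of the example.

```python
import numpy as np

def kernel_dress_density(ensemble, sigma, x):
    """Gaussian kernel dressing: centre a kernel of width sigma on each
    ensemble member and average, giving a predictive density at points x."""
    z = (x - ensemble[:, None]) / sigma
    return np.mean(np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi)), axis=0)

def ignorance_score(ensembles, outcomes, sigma):
    """Mean ignorance score -log2 p(outcome) over many forecast-outcome
    pairs; lower is better."""
    p = np.array([kernel_dress_density(e, sigma, np.array([y]))[0]
                  for e, y in zip(ensembles, outcomes)])
    return np.mean(-np.log2(p))

rng = np.random.default_rng(1)
# Synthetic example: 200 occasions, each with a 32-member ensemble
# scattered around the outcome.
outcomes = rng.standard_normal(200)
ensembles = [y + 0.5 * rng.standard_normal(32) for y in outcomes]

for sigma in (0.1, 0.3, 1.0):
    print(f"sigma={sigma}: IGN = {ignorance_score(ensembles, outcomes, sigma):.3f}")
```

Comparing the score across kernel widths illustrates how the interpretation step (here, the choice of `sigma`) is itself judged by the evaluation step.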

The goals of the project are: i) to conduct ensemble forecast interpretation to produce probabilistic forecasts; ii) to evaluate the performance of those probabilistic forecasts. You may investigate ensemble forecasts from real data or generate them yourself using, for example, a simple dynamical system.

Prerequisites

Calculus and Probability I, Statistical Concepts II

References

  • B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman & Hall (1998).
  • D. S. Wilks, Comparison of ensemble–MOS methods in the Lorenz ’96 setting. Meteorological Applications, 13 (2006).
  • T. Gneiting and A. E. Raftery, Strictly proper scoring rules, prediction and estimation. Journal of the American Statistical Association, 102(477):359-378 (2007).
  • J. Bröcker and L. A. Smith, From ensemble forecasts to predictive distribution functions. Tellus A, 60:663-678 (2008).

email: Hailiang Du
