Impact of a quasi-stochastic cellular automaton backscatter scheme on the systematic error and seasonal prediction skill of a global climate model.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366:1875 (2008) 2561-2579
Abstract:
The impact of a nonlinear dynamic cellular automaton (CA) model, as a representation of the partially stochastic aspects of unresolved scales in global climate models, is studied in the European Centre for Medium-Range Weather Forecasts coupled ocean-atmosphere model. Two separate aspects are discussed: impact on the systematic error of the model, and impact on the skill of seasonal forecasts. Significant reductions of systematic error are found both in the tropics and in the extratropics. Such reductions can be understood in terms of the inherently nonlinear nature of climate, in particular how energy injected by the CA at the near-grid scale can backscatter nonlinearly to larger scales. In addition, significant improvements in the probabilistic skill of seasonal forecasts are found in terms of a number of different variables such as temperature, precipitation and sea-level pressure. Such increases in skill can be understood both in terms of the reduction of systematic error as mentioned above, and in terms of the impact on ensemble spread of the CA's representation of inherent model uncertainty.

Toward seamless prediction: Calibration of climate change projections using seasonal forecasts
Bulletin of the American Meteorological Society 89:4 (2008) 459-470
Abstract:
Trustworthy probabilistic projections of regional climate are essential for society to plan for future climate change, and yet, by the nonlinear nature of climate, finite computational models of climate are inherently deficient in their ability to simulate regional climatic variability with complete accuracy. How can we determine whether specific regional climate projections may be untrustworthy in the light of such generic deficiencies? A calibration method is proposed whose basis lies in the emerging notion of seamless prediction. Specifically, calibrations of ensemble-based climate change probabilities are derived from analyses of the statistical reliability of ensemble-based forecast probabilities on seasonal time scales. The method is demonstrated by calibrating probabilistic projections from the multimodel ensembles used in the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), based on reliability analyses from the seasonal forecast Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) dataset. The focus in this paper is on climate change projections of regional precipitation, though the method is more general. © 2008 American Meteorological Society.

Dynamically-based seasonal forecasts of Atlantic tropical storm activity issued in June by EUROSIP
Geophysical Research Letters 34:16 (2007)
Abstract:
Most seasonal forecasts of Atlantic tropical storm numbers are produced using statistical-empirical models. However, forecasts can also be made using numerical models which encode the laws of physics, here referred to as "dynamical models". Based on 12 years of re-forecasts and 2 years of real-time forecasts, we show that the so-called EUROSIP (EUROpean Seasonal to Inter-annual Prediction) multi-model ensemble of coupled ocean-atmosphere models has substantial skill in probabilistic prediction of the number of Atlantic tropical storms. The EUROSIP real-time forecasts correctly distinguished between the exceptional year of 2005 and the average hurricane year of 2006. These results have implications for the reliability of climate change predictions of tropical cyclone activity using similar dynamically-based coupled ocean-atmosphere models.

How good is an ensemble at capturing truth? Using bounding boxes for forecast evaluation
Quarterly Journal of the Royal Meteorological Society 133:626 A (2007) 1309-1325
Abstract:
Ensemble prediction systems aim to account for uncertainties of initial conditions and model error. Ensemble forecasting is sometimes viewed as a method of obtaining (objective) probabilistic forecasts. How is one to judge the quality of an ensemble at forecasting a system? The probability that the bounding box of an ensemble captures some target (such as 'truth' in a perfect model scenario) provides new statistics for quantifying the quality of an ensemble prediction system: information that can provide insight all the way from ensemble system design to user decision support. These simple measures clarify basic questions, such as the minimum size of an ensemble. To illustrate their utility, bounding boxes are used in the imperfect model context to quantify the differences between ensemble forecasting with a stochastic model ensemble prediction system and a deterministic model prediction system. Examining forecasts via their bounding box statistics provides an illustration of how adding stochastic terms to an imperfect model may improve forecasts even when the underlying system is deterministic. Copyright © 2007 Royal Meteorological Society.

Historical Overview of Climate Change Science
Chapter in Intergovernmental Panel on Climate Change (IPCC), Fourth Assessment Report, Working Group I: The Physical Science Basis (2007) 1