Predicting multiyear North Atlantic Ocean variability

Journal of Geophysical Research: Oceans 118:3 (2013) 1087-1098

Authors:

W Hazeleger, B Wouters, GJ van Oldenborgh, S Corti, T Palmer, D Smith, N Dunstone, J Kröger, H Pohlmann, JS von Storch

Abstract:

We assess the skill of retrospective multiyear forecasts of North Atlantic ocean characteristics obtained with ocean-atmosphere-sea ice models initialized with estimates of the observed ocean state. We show that these multimodel forecasts can skilfully predict surface and subsurface ocean variability at lead times of 2 to 9 years. We focus on forecasts of major, well-observed oceanic phenomena that are thought to be related to the Atlantic meridional overturning circulation (AMOC). Variability in the North Atlantic subpolar gyre, in particular that associated with the Atlantic Multidecadal Oscillation, is skilfully predicted 2-9 years ahead. The freshwater content and heat content in major convection areas such as the Labrador Sea are predictable as well, although individual events are not captured. The skill of these predictions is higher than that of uninitialized coupled model simulations and of damped persistence. However, except for heat content in the subpolar gyre, the differences between damped persistence and the initialized predictions are not significant. Since atmospheric variability is not predictable on multiyear time scales, the skill likely derives from initialization of the ocean and from oceanic processes. An assessment of the relationships between patterns of variability and ocean heat and freshwater content reveals differences among the models, indicating that model improvement can further improve the predictions. The results imply that there is scope for skilful predictions of the AMOC. © 2013. American Geophysical Union. All Rights Reserved.
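The damped-persistence baseline mentioned in the abstract is a standard reference forecast: the observed anomaly at forecast start is damped toward climatology by the lagged autocorrelation of the series. A minimal sketch, assuming a single observed anomaly time series (the function name and the AR(1) toy series are illustrative, not from the paper):

```python
import numpy as np

def damped_persistence_forecast(anomalies, lead):
    """Damped-persistence baseline: the anomaly at forecast start is
    damped by the lag-`lead` autocorrelation of the observed record."""
    x = np.asarray(anomalies, dtype=float)
    x = x - x.mean()
    # lag-`lead` autocorrelation estimated from the full record
    r = np.corrcoef(x[:-lead], x[lead:])[0, 1]
    # forecast for time t+lead is r * anomaly(t)
    return r * x[:-lead], x[lead:]  # (forecasts, verifying observations)

# toy example: a red-noise AR(1) series standing in for SST anomalies
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.9 * x[t - 1] + rng.normal()

fcst, obs = damped_persistence_forecast(x, lead=5)
skill = np.corrcoef(fcst, obs)[0, 1]  # correlation skill of the baseline
```

An initialized prediction system only demonstrates added value where its skill significantly exceeds such a baseline, which is the comparison the abstract reports.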

Impact of snow initialization on sub-seasonal forecasts

Climate Dynamics (2013) 1-14

Authors:

YJ Orsolini, R Senan, G Balsamo, F Vitart, A Weisheimer, FJ Doblas-Reyes, A Carrasco, RE Benestad

Abstract:

The influence of the snowpack on wintertime atmospheric teleconnections has received renewed attention in recent years, partly for its potential impact on seasonal predictability. Many observational and model studies have indicated that autumn Eurasian snow cover in particular influences circulation patterns over the North Pacific and North Atlantic. We have performed a suite of coupled atmosphere-ocean simulations with the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast system to investigate the impact of accurate snow initialization. Pairs of 2-month ensemble forecasts were started every 15 days from 15 October through 1 December in the years 2004-2009, either with realistic initialization of snow variables based on re-analyses or with "scrambled" snow initial conditions taken from an alternate autumn date and year. Initially, in the first 15 days, the presence of a thicker snowpack cools the surface temperature over the continental land masses of Eurasia and North America. At a longer lead time of 30 days, it causes a warming over the Arctic and the high latitudes of Eurasia, due to an intensification and westward expansion of the Siberian High. It also causes a cooling over the mid-latitudes of Eurasia and lowers sea level pressures over the Arctic. This "warm Arctic-cold continent" difference means that the forecasts of near-surface temperature with the more realistic snow initialization are in closer agreement with re-analyses, reducing a cold model bias over the Arctic and a warm model bias over the mid-latitudes. The impact of realistic snow initialization upon the forecast skill in snow depth and near-surface temperature is estimated for various lead times.
Following a modest skill improvement in the first 15 days over snow-covered land, we also find a forecast skill improvement up to the 30-day lead time over parts of the Arctic and the North Pacific, which can be attributed to the realistic snow initialization over the land masses. © 2013 Springer-Verlag Berlin Heidelberg.
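The skill comparison between the realistically initialized and scrambled-initialization forecasts can be quantified with an anomaly correlation of each ensemble mean against the verifying re-analysis. A hypothetical sketch of that metric (the synthetic series below are illustrative stand-ins, not the study's verification data):

```python
import numpy as np

def anomaly_correlation(forecast, verification):
    """Anomaly correlation coefficient between an ensemble-mean
    forecast and the verifying (re)analysis, both as anomaly arrays."""
    f = forecast - forecast.mean()
    v = verification - verification.mean()
    return float(np.sum(f * v) / np.sqrt(np.sum(f**2) * np.sum(v**2)))

# toy example: a forecast carrying more of the verifying signal
# (standing in for realistic snow initialization) scores a higher ACC
# than one carrying less of it (standing in for scrambled conditions)
rng = np.random.default_rng(1)
truth = rng.normal(size=200)
realistic = truth + 0.5 * rng.normal(size=200)
scrambled = 0.3 * truth + rng.normal(size=200)

acc_real = anomaly_correlation(realistic, truth)
acc_scr = anomaly_correlation(scrambled, truth)
```

The difference in such a score between the paired experiments, mapped by region and lead time, is the kind of skill-improvement estimate the abstract describes.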

Revolutionizing Climate Modeling with Project Athena: A Multi-Institutional, International Collaboration

Bulletin of the American Meteorological Society 94:2 (2013) 231-245

Authors:

JL Kinter III, B Cash, D Achuthavarier, J Adams, E Altshuler, P Dirmeyer, B Doty, B Huang, EK Jin, L Marx, J Manganello, C Stan, T Wakefield, T Palmer, M Hamrud, T Jung, M Miller, P Towers, N Wedi, M Satoh, H Tomita, C Kodama, T Nasuno, K Oouchi, Y Yamada, H Taniguchi, P Andrews, T Baer, M Ezell, C Halloy, D John, B Loftis, R Mohr, K Wong

Singular vectors, predictability and ensemble forecasting for weather and climate

Journal of Physics A: Mathematical and Theoretical 46:25 (2013) 254018

Authors:

TN Palmer, L Zanna

The use of imprecise processing to improve accuracy in weather & climate prediction

Journal of Computational Physics (2013)

Authors:

PD Düben, TN Palmer, H McNamara

Abstract:

The use of stochastic processing hardware and low-precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit reproducibility and precision in exchange for improvements in performance and, potentially, in forecast accuracy, since the reduction in power consumption could allow higher resolution. A similar trade-off is achieved using low-precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. The Lorenz '96 model has a natural scale separation, and the spectral discretisation used in the dynamical core likewise allows large- and small-scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales, and hence close to the necessarily inexact parametrised representations of unresolved processes, while the larger scales are calculated using high-precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large-scale behaviour, provided the inexactness is restricted to the smaller scales. By contrast, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor than when those small scales are parametrised.
This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost. © 2013 The Authors.
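The experimental design described in the abstract — exact arithmetic for the large scales, emulated bit-flip faults restricted to the small-scale tendencies — can be sketched with the two-scale Lorenz '96 model. The parameter values, fault rate, mantissa-only flips, and Euler time stepping below are illustrative choices, not the paper's exact configuration:

```python
import struct
import numpy as np

# Two-scale Lorenz '96 parameters (illustrative values)
K, J = 8, 8                   # counts of large-scale X_k and small-scale Y_j
F, h, c, b = 10.0, 1.0, 10.0, 10.0

def flip_random_bit(value, rng):
    """Emulate a stochastic-processor fault: flip one random mantissa bit
    of the IEEE-754 double `value` (sign/exponent left intact here so a
    single flip cannot blow the toy model up)."""
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    bits ^= 1 << int(rng.integers(0, 48))    # bits 0-47 lie in the mantissa
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def tendencies(X, Y):
    """Tendencies of the two-scale Lorenz '96 model (Lorenz, 1996)."""
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def step(X, Y, dt, rng=None, fault_rate=0.0):
    """One Euler step; faults are injected only into the small-scale (Y)
    tendencies, mirroring the scale-separation argument, while the
    large-scale X update stays exact and deterministic."""
    dX, dY = tendencies(X, Y)
    if rng is not None and fault_rate > 0.0:
        for j in np.flatnonzero(rng.random(dY.size) < fault_rate):
            dY[j] = flip_random_bit(dY[j], rng)
    return X + dt * dX, Y + dt * dY

# Integrate the same initial state with and without emulated faults
rng = np.random.default_rng(42)
X0 = F + 0.1 * rng.normal(size=K)
Y0 = 0.1 * rng.normal(size=K * J)
Xe, Ye = X0.copy(), Y0.copy()    # exact run
Xf, Yf = X0.copy(), Y0.copy()    # run with faulty small-scale arithmetic
for _ in range(2000):
    Xe, Ye = step(Xe, Ye, 1e-3)
    Xf, Yf = step(Xf, Yf, 1e-3, rng=rng, fault_rate=0.01)
```

In this kind of sketch the faulty trajectory diverges from the exact one, as any perturbation must in a chaotic system, but the large-scale variables remain bounded and statistically well behaved, which is the qualitative behaviour the abstract reports.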