Assessing the robustness of multidecadal variability in Northern Hemisphere wintertime seasonal forecast skill

Quarterly Journal of the Royal Meteorological Society Wiley 146:733 (2020) qj.3890

Authors:

Christopher H O'Reilly, Antje Weisheimer, David MacLeod, Daniel J Befort, Tim Palmer

Abstract:

Recent studies have found evidence of multidecadal variability in Northern Hemisphere wintertime seasonal forecast skill. Here we assess the robustness of this finding by extending the analysis to a diverse set of ensemble atmospheric model simulations. These simulations differ in either numerical model or type of initialisation, and include atmospheric model experiments initialised with reanalysis data and free-running atmospheric model ensembles. All ensembles are forced with observed SST and sea-ice boundary conditions. Analysis of large-scale Northern Hemisphere circulation indices (namely the North Atlantic Oscillation, Pacific North American pattern and the Arctic Oscillation) reveals that all ensembles exhibit higher correlation skill in the late-century period than in the mid-century period. Similar multidecadal variability in skill is found in a measure of total skill integrated over the whole extratropics. Most of the difference in large-scale circulation skill between the skillful late period (as well as the early period) and the less skillful mid-century period seems to be due to a reduction in skill over the North Pacific and a disappearance of skill over North America and the North Atlantic. The results are robust across different models and different types of initialisation, indicating that the multidecadal variability in Northern Hemisphere winter skill is a robust feature of 20th-century climate variability. Multidecadal variability in skill therefore arises from the evolution of the observed SSTs, likely related to a weakened influence of ENSO on the predictable extratropical circulation signal during the middle of the 20th century, and is evident in the signal-to-noise ratio of the different ensembles, particularly the larger ensembles.
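As a minimal illustration of the diagnostics described in this abstract, the sketch below computes sliding-window correlation skill between an observed circulation index and the ensemble mean, plus a simple signal-to-noise ratio. The array names (obs_nao, ens_nao), window length and data shapes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def windowed_correlation_skill(obs, ens, window=30):
    """Correlation between the ensemble-mean index and observations in
    sliding windows, one way to expose multidecadal changes in skill.
    obs: (n_years,); ens: (n_members, n_years)."""
    ens_mean = ens.mean(axis=0)
    n = len(obs)
    skill = np.full(n - window + 1, np.nan)
    for i in range(n - window + 1):
        skill[i] = np.corrcoef(obs[i:i + window], ens_mean[i:i + window])[0, 1]
    return skill

def signal_to_noise(ens):
    """Ratio of the ensemble-mean (signal) standard deviation to the
    spread of members about the mean (noise)."""
    signal = ens.mean(axis=0).std()
    noise = (ens - ens.mean(axis=0, keepdims=True)).std()
    return signal / noise

# Example with hypothetical data: 51 members x 110 winters.
rng = np.random.default_rng(1)
ens_nao = rng.standard_normal((51, 110))
obs_nao = ens_nao.mean(axis=0) + rng.standard_normal(110)
print(windowed_correlation_skill(obs_nao, ens_nao, window=30).round(2))
print(signal_to_noise(ens_nao))
```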

A Vision for Numerical Weather Prediction in 2030

arXiv 2007.0483 (2020)

Short-term tests validate long-term estimates of climate change

Nature Springer Nature 582:7811 (2020) 185-186

Revisiting the identification of wintertime atmospheric circulation regimes in the Euro‐Atlantic sector

Quarterly Journal of the Royal Meteorological Society Wiley (2020) https://doi.org/10.1002/qj.3818

Authors:

S Falkena, J de Wiljes, A Weisheimer, TG Shepherd

Abstract:

Atmospheric circulation is often clustered in so-called circulation regimes, which are persistent and recurrent patterns. For the Euro-Atlantic sector in winter, most studies identify four regimes: the Atlantic Ridge, Scandinavian Blocking and the two phases of the North Atlantic Oscillation. These results are obtained by applying k-means clustering to the first several empirical orthogonal functions (EOFs) of geopotential height data. For the observed circulation in reanalysis data, it is found that when the full-field data are used for the k-means cluster analysis instead of the EOFs, the optimal number of clusters is no longer four but six. The two extra regimes are the opposites of the Atlantic Ridge and Scandinavian Blocking, meaning they have a low-pressure area roughly where the original regimes have a high-pressure area. This introduces an appealing symmetry in the clustering result. Incorporating a weak persistence constraint in the clustering procedure is found to lead to a longer duration of regimes, extending beyond the synoptic time-scale, without changing their occurrence rates. This is in contrast to the commonly used application of a time-filter to the data before the clustering is executed, which, while increasing the persistence, changes the occurrence rates of the regimes. We conclude that applying a persistence constraint within the clustering procedure is a better way of stabilizing the clustering results than low-pass filtering the data.
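The persistence-constrained clustering described in this abstract can be sketched as a modified k-means in which the assignment step pays an extra cost for switching regimes between consecutive time steps. The formulation below (penalty scale lam, Lloyd-style iteration, penalty measured against the previous sweep's labels) is a simplified assumption for illustration, not the authors' exact algorithm.

```python
import numpy as np

def persistent_kmeans(X, k=6, lam=0.1, n_iter=50, seed=0):
    """X: (n_time, n_grid) daily circulation fields.
    Returns regime labels per time step and the k centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Squared distance of every sample to every centroid: (n_time, k).
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        # Weak persistence penalty: assigning a time step to a different
        # cluster than its predecessor costs an extra lam * mean distance.
        cost = d2.copy()
        switches = np.arange(k)[None, :] != labels[:-1, None]
        cost[1:] += lam * d2.mean() * switches
        new_labels = cost.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Standard centroid update over each cluster's members.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Setting lam = 0 recovers plain k-means; increasing lam lengthens regime duration without directly altering the fields being clustered, which is the contrast with low-pass filtering drawn in the abstract.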

Rethinking superdeterminism

Frontiers in Physics Frontiers 8 (2020) 139

Authors:

Sabine Hossenfelder, Tim Palmer

Abstract:

Quantum mechanics has irked physicists ever since its conception more than 100 years ago. While some of the misgivings, such as it being unintuitive, are merely aesthetic, quantum mechanics has one serious shortcoming: it lacks a physical description of the measurement process. This “measurement problem” indicates that quantum mechanics is at least an incomplete theory—good as far as it goes, but missing a piece—or, more radically, is in need of complete overhaul. Here we describe an approach which may provide this sought-for completion or replacement: Superdeterminism. A superdeterministic theory is one which violates the assumption of Statistical Independence (that distributions of hidden variables are independent of measurement settings). Intuition suggests that Statistical Independence is an essential ingredient of any theory of science (never mind physics), and for this reason Superdeterminism is typically discarded swiftly in any discussion of quantum foundations. The purpose of this paper is to explain why the existing objections to Superdeterminism are based on experience with classical physics and linear systems, but that this experience misleads us. Superdeterminism is a promising approach not only to solve the measurement problem, but also to understand the apparent non-locality of quantum physics. Most importantly, we will discuss how it may be possible to test this hypothesis in an (almost) model independent way.
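For concreteness, the Statistical Independence assumption discussed in this abstract is standardly written as a condition on the hidden-variable distribution. The notation below (hidden variables lambda, detector settings a and b) follows the usual Bell-theorem convention rather than any specific formulation in the paper.

```latex
% Statistical Independence: the distribution of hidden variables
% \lambda does not depend on the measurement settings a and b.
\rho(\lambda \mid a, b) = \rho(\lambda)
% A superdeterministic theory drops this assumption, permitting
% correlations between \lambda and the settings:
\rho(\lambda \mid a, b) \neq \rho(\lambda)
```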