Robustness of the stochastic parameterization of sub-grid scale wind variability in sea-surface fluxes

Monthly Weather Review American Meteorological Society (2023)

Authors:

Kota Endo, Adam H Monahan, Julie Bessac, Hannah Christensen, Nils Weitzel

Abstract:

High-resolution numerical models have been used to develop statistical models of the enhancement of sea surface fluxes resulting from spatial variability of sea-surface wind. In particular, studies have shown that the flux enhancement is not a deterministic function of the resolved state. Previous studies focused on single geographical areas or used a single high-resolution numerical model. This study extends the development of such statistical models by considering six different high-resolution models, four different geographical regions, and three different ten-day periods, allowing for a systematic investigation of the robustness of both the deterministic and stochastic parts of the data-driven parameterization. Results indicate that the deterministic part, based on regressing the unresolved normalized flux onto resolved scale normalized flux and precipitation, is broadly robust across different models, regions, and time periods. The statistical features of the stochastic part of the model (spatial and temporal autocorrelation and parameters of a Gaussian process fit to the regression residual) are also found to be robust and not strongly sensitive to the underlying model, modelled geographical region, or time period studied. Best-fit Gaussian process parameters display robust spatial heterogeneity across models, indicating potential for improvements to the statistical model. These results illustrate the potential for the development of a generic, explicitly stochastic parameterization of sea-surface flux enhancements dependent on wind variability.

Environmental Precursors to Mesoscale Convective Systems

(2023)

Authors:

Mark Muetzelfeldt, Robert Plant, Hannah Christensen

Abstract:

Mesoscale convective systems (MCSs) are important components of the Earth's weather and climate systems. They produce a large fraction of tropical rainfall, and their top-heavy heating profiles can feed back onto atmospheric dynamics. The large-scale environmental precursor conditions that cause their formation are normally investigated through case studies or on a regional basis. Here, we take a global view on this problem, linking tracked MCSs to the environmental conditions that lead to their growth and maintenance. We consider common variables associated with deep convection, such as CAPE, total column water vapour and moisture convergence. We take care to distinguish between conditions associated with deep convection in general and conditions associated with MCSs specifically. Furthermore, we pose the question in a way that is useful for the development of an MCS parametrization scheme, by asking what environmental conditions lead to MCS occurrence, instead of locating an MCS and then finding the associated conditions.

Using reliability diagrams to interpret the ‘signal-to-noise paradox’ in seasonal forecasts of the winter North Atlantic Oscillation

(2023)

Authors:

Kristian Strommen, Molly MacRae, Hannah Christensen

Using probabilistic machine learning to better model temporal patterns in parameterizations: a case study with the Lorenz 96 model

GEOSCIENTIFIC MODEL DEVELOPMENT 16:15 (2023) 4501-4519

Authors:

Raghul Parthipan, Hannah M Christensen, J Scott Hosking, Damon J Wischik

Insights into the quantification and reporting of model-related uncertainty across different disciplines

iScience Cell Press 25:12 (2022) 105512

Authors:

Emily G Simmonds, Kwaku Peprah Adjei, Christoffer Wold Andersen, Hannah Christensen

Abstract:

Quantifying uncertainty associated with our models is the only way we can express how much we know about any phenomenon. Incomplete consideration of model-based uncertainties can lead to overstated conclusions with real-world impacts in diverse spheres, including conservation, epidemiology, climate science, and policy. Despite these potentially damaging consequences, we still know little about how different fields quantify and report uncertainty. We introduce the “sources of uncertainty” framework, using it to conduct a systematic audit of model-related uncertainty quantification from seven scientific fields, spanning the biological, physical, and political sciences. Our interdisciplinary audit shows no field fully considers all possible sources of uncertainty, but each has its own best practices alongside shared outstanding challenges. We make ten easy-to-implement recommendations to improve the consistency, completeness, and clarity of reporting on model-related uncertainty. These recommendations serve as a guide to best practices across scientific fields and expand our toolbox for high-quality research.