Tim Palmer

Emeritus

Sub department

  • Atmospheric, Oceanic and Planetary Physics

Research groups

  • Predictability of weather and climate

Tim.Palmer@physics.ox.ac.uk
Telephone: 01865 (2)72897
Robert Hooke Building, room S43

Publications

Single-precision in the tangent-linear and adjoint models of incremental 4D-VAr

Monthly Weather Review American Meteorological Society 148:4 (2020) 1541-1552

Authors:

S Hatfield, A McRae, T Palmer, P Düben

Abstract:

The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time over double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme, with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object-Oriented Prediction System.
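The conjugate gradient aspect of this result can be illustrated outside any data assimilation framework. The sketch below is a toy NumPy example, not the OOPS or quasigeostrophic code used in the paper; the matrix, its conditioning, and the iteration count are illustrative assumptions. It runs plain conjugate gradients on an ill-conditioned symmetric positive-definite system in double and single precision, and reports how far the stored residuals drift from mutual orthogonality and how the final residual behaves, with and without reorthogonalization of each new residual against the earlier ones.

```python
import numpy as np

def cg(A, b, dtype, n_iter=60, reorth=False):
    """Plain conjugate gradients in the given precision, optionally
    reorthogonalizing each new residual against all stored residuals."""
    A, b = A.astype(dtype), b.astype(dtype)
    x = np.zeros_like(b)
    r = b.copy()                                   # residual of the zero initial guess
    p = r.copy()
    basis = [r / np.linalg.norm(r)]                # normalized residuals
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if reorth:                                 # Gram-Schmidt against earlier residuals
            for q in basis:
                r_new = r_new - (r_new @ q) * q
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        basis.append(r / np.linalg.norm(r))
    Q = np.array(basis)
    ortho_loss = np.max(np.abs(Q @ Q.T - np.eye(len(basis), dtype=dtype)))
    rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    return rel_res, ortho_loss

rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, 5, n)) @ U.T        # SPD matrix, condition number ~1e5
b = rng.standard_normal(n)

for dtype in (np.float64, np.float32):
    for reorth in (False, True):
        rel_res, loss = cg(A, b, dtype, reorth=reorth)
        print(f"{np.dtype(dtype).name:8s} reorth={str(reorth):5s}  "
              f"relative residual {rel_res:.1e}  orthogonality loss {loss:.1e}")
```

This only mimics the minimization step; the paper's additional concern, asymmetry between reduced-precision tangent-linear and adjoint models, is not represented here.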

Seasonal forecasts of the 20th century

Bulletin of the American Meteorological Society American Meteorological Society 101:8 (2020) E1413-E1426

Authors:

Antje Weisheimer, Daniel Befort, David Macleod, Timothy Palmer, Chris O’Reilly, Kristian Strømmen

Abstract:

New seasonal retrospective forecasts for 1901-2010 show that skill for predicting ENSO, NAO and PNA is reduced during mid-century periods compared to earlier and more recent high-skill decades.

Forecasts of seasonal climate anomalies using physically-based global circulation models are routinely made at operational meteorological centers around the world. A crucial component of any seasonal forecast system is the set of retrospective forecasts, or hindcasts, from past years which are used to estimate skill and to calibrate the forecasts. Hindcasts are usually produced over a period of around 20-30 years. However, recent studies have demonstrated that seasonal forecast skill can undergo pronounced multi-decadal variations. These results imply that relatively short hindcasts are not adequate for reliably testing seasonal forecasts and that small hindcast sample sizes can potentially lead to skill estimates that are not robust. Here we present new and unprecedented 110-year-long coupled hindcasts of the next season over the period 1901 to 2010. Their performance for the recent period is in good agreement with that of operational forecast models. While skill for ENSO is very high during recent decades, it is markedly reduced during the 1930s to 1950s. Skill at the beginning of the 20th Century is, however, as high as for recent high-skill periods. Consistent with findings in atmosphere-only hindcasts, a mid-century drop in forecast skill is found for a range of atmospheric fields including large-scale indices such as the NAO and the PNA patterns. As with ENSO, skill scores for these indices recover in the early 20th Century, suggesting that the mid-century drop in skill is not due to lack of good observational data.

A public dissemination platform for our hindcast data is available and we invite the scientific community to explore them.
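As a toy illustration of the sampling issue described above, the sketch below uses entirely synthetic data (not the hindcasts themselves): a stand-in "observed" index and a "forecast" whose noise is larger mid-century, an assumption made purely for the example. Comparing the correlation skill over the full 1901-2010 period with 30-year sliding windows shows how a skill estimate from a single short hindcast window can differ substantially from the long-term picture.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1901, 2011)
signal = rng.standard_normal(years.size)                  # stand-in for an observed index
# Noise amplitude peaks mid-century, mimicking a mid-century drop in predictability.
noise_amp = 0.5 + 1.5 * np.exp(-((years - 1945) / 20.0) ** 2)
forecast = signal + noise_amp * rng.standard_normal(years.size)

def corr_skill(obs, fc):
    """Correlation between forecast and observations as a simple skill score."""
    return np.corrcoef(obs, fc)[0, 1]

print(f"full-period (1901-2010) skill: {corr_skill(signal, forecast):.2f}")
for start in range(1901, 2011 - 30 + 1, 10):               # 30-year windows, every 10 years
    sel = (years >= start) & (years < start + 30)
    print(f"{start}-{start + 29}: r = {corr_skill(signal[sel], forecast[sel]):.2f}")
```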


Human creativity and consciousness: unintended consequences of the brain's extraordinary energy efficiency?

Entropy MDPI 22:3 (2020) 281

Abstract:

It is proposed that both human creativity and human consciousness are (unintended) consequences of the human brain's extraordinary energy efficiency. The topics of creativity and consciousness are treated separately, though they have a common sub-structure. It is argued that creativity arises from a synergy between two cognitive modes of the human brain (which broadly coincide with Kahneman's Systems 1 and 2). In the first, available energy is spread across a relatively large network of neurons, many of which are small enough to be susceptible to thermal (ultimately quantum decoherent) noise. In the second, available energy is focussed on a smaller subset of larger neurons whose action is deterministic. Possible implications for creative computing in silicon are discussed. Starting with a discussion of the concept of free will, the notion of consciousness is defined in terms of an awareness of what are perceived to be nearby counterfactual worlds in state space. It is argued that such awareness arises from an interplay between memories on the one hand, and quantum physical mechanisms (where, unlike in classical physics, nearby counterfactual worlds play an indispensable dynamical role) in the ion channels of neural networks, on the other. As with the brain's susceptibility to noise, it is argued that in situations where quantum physics plays a role in the brain, it does so for reasons of energy efficiency. As an illustration of this definition of consciousness, a novel proposal is outlined as to why quantum entanglement appears to be so counter-intuitive.

Reduced-precision parametrization: lessons from an intermediate-complexity atmospheric model

Quarterly Journal of the Royal Meteorological Society Wiley 146:729 (2020) 1590-1607

Authors:

Leo Saffin, Sam Hatfield, Peter Düben, Tim Palmer

Abstract:

Reducing numerical precision can save computational costs, which can then be reinvested for more useful purposes. This study considers the effects of reducing precision in the parametrizations of an intermediate-complexity atmospheric model (SPEEDY). We find that the difference between double-precision and reduced-precision parametrization tendencies is proportional to the expected machine rounding error if individual timesteps are considered. However, if reduced precision is used in simulations that are compared to double-precision simulations, a range of precision is found where differences are approximately the same for all simulations. Here, rounding errors are small enough to not directly perturb the model dynamics, but can perturb conditional statements in the parametrizations (such as convection active/inactive) leading to a similar error growth for all runs. For lower precision, simulations are perturbed significantly. Precision cannot be constrained without some quantification of the uncertainty. The inherent uncertainty in numerical weather and climate models is often explicitly considered in simulations by stochastic schemes that will randomly perturb the parametrizations. A commonly used scheme is stochastic perturbation of parametrization tendencies (SPPT). A strong test of whether a precision is acceptable is whether a low-precision ensemble produces the same probability distribution as a double-precision ensemble where the only difference between ensemble members is the model uncertainty (i.e., the random seed in SPPT). Tests with SPEEDY suggest a precision as low as 3.5 decimal places (equivalent to half precision) could be acceptable, which is surprisingly close to the lowest precision that produces similar error growth in the experiments without SPPT mentioned above. Minor changes to model code to express variables as anomalies rather than absolute values reduce rounding errors and low-precision biases, allowing even lower precision to be used. These results provide a pathway for implementing reduced-precision parametrizations in more complex weather and climate models.
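The final point, that expressing variables as anomalies rather than absolute values reduces rounding error at low precision, can be seen in a few lines of NumPy. This is a standalone illustration, not SPEEDY code; the reference value and field statistics (temperatures around 288 K with a 5 K spread) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = 288.0                                          # assumed mean temperature in kelvin
truth = reference + rng.normal(0.0, 5.0, size=100_000)     # double-precision "truth" field

# Round the absolute values, or only the anomalies from the reference, to half precision.
absolute_fp16 = truth.astype(np.float16).astype(np.float64)
anomaly_fp16 = reference + (truth - reference).astype(np.float16).astype(np.float64)

print("RMS rounding error, absolute values in float16:",
      np.sqrt(np.mean((absolute_fp16 - truth) ** 2)))
print("RMS rounding error, anomalies in float16:      ",
      np.sqrt(np.mean((anomaly_fp16 - truth) ** 2)))
```

Because half-precision spacing is relative to magnitude, rounding a ~5 K anomaly is far cheaper than rounding a ~288 K absolute value, which is the motivation for the anomaly rewrite mentioned in the abstract.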

The physics of numerical analysis: a climate modelling case study

Philosophical Transactions A: Mathematical, Physical and Engineering Sciences Royal Society 378:2166 (2020) 20190058

Abstract:

The case is made for a much closer synergy between climate science, numerical analysis and computer science. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
