Tim Reichelt

Encode AI Fellow

Research theme

  • Climate physics

Sub department

  • Atmospheric, Oceanic and Planetary Physics

Research groups

  • Climate processes
tim.reichelt@physics.ox.ac.uk
Atmospheric Physics, Clarendon Laboratory, room 104
Personal Website
GitHub

Calibration of climate model parameterizations using Bayesian experimental design

Machine Learning: Earth, IOP Publishing, 2:1 (2026) 015003

Authors:

Tim Reichelt, Tom Rainforth, Duncan Watson-Parris

Sensitivity analysis for climate science with generative flow models

NeurIPS (2025)

Authors:

Alex Dobra, Jakiw Pidstrigach, Tim Reichelt, Paolo Fraccaro, Johannes Jakubik, Anne Jones, Chris Schroeder de Witt, Philip Torr, Philip Stier

Abstract:

Sensitivity analysis is a cornerstone of climate science, essential for understanding phenomena ranging from storm intensity to long-term climate feedbacks. However, computing these sensitivities using traditional physical models is often prohibitively expensive in terms of both computation and development time. While modern AI-based generative models are orders of magnitude faster to evaluate, computing sensitivities with them remains a significant bottleneck. This work addresses this challenge by applying the adjoint state method for calculating gradients in generative flow models. We apply this method to the cBottle generative model, trained on ERA5 and ICON data, to perform sensitivity analysis of any atmospheric variable with respect to sea surface temperatures. We quantitatively validate the computed sensitivities against the model’s own outputs. Our results provide initial evidence that this approach can produce reliable gradients, reducing the computational cost of sensitivity analysis from weeks on a supercomputer with a physical model to hours on a GPU, thereby simplifying a critical workflow in climate science. The code can be found at https://github.com/Kwartzl8/cbottle_adjoint_sensitivity.
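The adjoint state method mentioned in the abstract can be illustrated on a toy flow: a linear velocity field integrated with forward Euler, with a backward adjoint pass accumulating the gradient of a scalar output with respect to the conditioning input (a stand-in for sea surface temperatures). This is a minimal sketch of the general technique under simplifying assumptions, not the paper's cBottle implementation; the velocity field and all names are illustrative.

```python
import numpy as np

def velocity(x, c, A):
    # toy linear velocity field: relaxation toward a forcing set by conditioning c
    return -x + A @ c

def integrate(x0, c, A, h=0.1, steps=50):
    # forward Euler through the flow, storing the trajectory for the adjoint pass
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + h * velocity(xs[-1], c, A))
    return xs

def adjoint_sensitivity(xs, c, A, h=0.1):
    # adjoint state method for L = sum(x_final):
    # propagate lam = dL/dx backward, accumulating dL/dc along the way
    lam = np.ones_like(xs[-1])
    grad_c = np.zeros_like(c)
    for _ in xs[:-1]:
        # d x_{k+1} / d c = h * A for this linear field
        grad_c += h * A.T @ lam
        # d x_{k+1} / d x_k = (1 - h) * I
        lam = (1.0 - h) * lam
    return grad_c
```

A single backward pass yields the full gradient with respect to every component of `c`, which is the reason the adjoint method scales so much better than perturbing one input at a time.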

Lossy neural compression for geospatial analytics: a review

IEEE Geoscience and Remote Sensing Magazine, IEEE, 13:3 (2025) 97-135

Authors:

Carlos Gomes, Isabelle Wittmann, Damien Robert, Johannes Jakubik, Tim Reichelt, Stefano Maurogiovanni, Rikard Vinge, Jonas Hurst, Erik Scheurer, Rocco Sedona, Thomas Brunschwiler, Stefan Kesselheim, Matej Batic, Philip Stier, Jan Dirk Wegner, Gabriele Cavallaro, Edzer Pebesma, Michael Marszalek, Miguel A Belenguer-Plomer, Kennedy Adriko, Paolo Fraccaro, Romeo Kienzler, Rania Briq, Sabrina Benassou, Michele Lazzarini, Conrad M Albrecht

Abstract:

Over the past decades, there has been an explosion in the amount of available Earth observation (EO) data. The unprecedented coverage of Earth’s surface and atmosphere by satellite imagery has resulted in large volumes of data that must be transmitted to ground stations, stored in data centers, and distributed to end users. Modern Earth system models (ESMs) face similar challenges, operating at high spatial and temporal resolutions, producing petabytes of data per simulated day. Data compression has therefore gained relevance over the past decade, and neural compression (NC), which emerged from deep learning and information theory, is particularly well suited to EO data and ESM outputs because of their abundance of unlabeled data.

In this review, we outline recent developments in NC applied to geospatial data. We introduce the fundamental concepts of NC, including seminal works in its traditional applications to image and video compression domains with a focus on lossy compression. We discuss the unique characteristics of EO and ESM data, contrasting them with “natural images,” and we explain the additional challenges and opportunities they present. Additionally, we review current applications of NC across various EO modalities and explore the limited efforts in ESM compression to date. The advent of self-supervised learning (SSL) and foundation models (FMs) has advanced methods to efficiently distill representations from vast amounts of unlabeled data. We connect these developments to NC for EO, highlighting the similarities between the two fields and elaborate on the potential of transferring compressed feature representations for machine-to-machine communication. Based on insights drawn from this review, we devise future directions relevant to applications in EO and ESMs.
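As a concrete anchor for the lossy-compression concepts the review surveys, here is the simplest possible lossy codec, uniform scalar quantisation; it exhibits the same rate-distortion trade-off (more bits, lower reconstruction error) that neural methods optimise with learned transforms. This is a toy baseline for orientation only, not a method from the review.

```python
import numpy as np

def uniform_quantize(x, bits):
    # scalar uniform quantisation: map x onto 2**bits evenly spaced levels
    lo, hi = x.min(), x.max()
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    q = np.round((x - lo) / step)   # integer code, the part one would entropy-code
    return q * step + lo            # lossy reconstruction
```

Halving the bit budget coarsens the grid and raises the error, which is the rate-distortion trade-off in its most elementary form; NC methods improve on this by learning the transform and the entropy model jointly.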


Beyond Bayesian model averaging over paths in probabilistic programs with stochastic support

Proceedings of the 27th International Conference on Artificial Intelligence and Statistics, Journal of Machine Learning Research (2024) 829-837

Authors:

Tim Reichelt, Luke Ong, Thomas Rainforth

Abstract:

The posterior in probabilistic programs with stochastic support decomposes as a weighted sum of the local posterior distributions associated with each possible program path. We show that making predictions with this full posterior implicitly performs a Bayesian model averaging (BMA) over paths. This is potentially problematic, as BMA weights can be unstable due to model misspecification or inference approximations, leading in turn to sub-optimal predictions. To remedy this issue, we propose alternative mechanisms for path weighting: one based on stacking and one based on ideas from PAC-Bayes. We show how both can be implemented as a cheap post-processing step on top of existing inference engines. In our experiments, we find them to be more robust and to lead to better predictions than the default BMA weights.
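The stacking alternative to BMA weights can be sketched for the two-path case: given held-out pointwise log predictive densities from each path, choose the mixture weight that maximises the held-out log score. This is a minimal illustrative sketch under our own naming, solved here by grid search; the paper's actual post-processing handles an arbitrary number of paths and also offers a PAC-Bayes variant.

```python
import numpy as np

def stacking_weights_2path(logp1, logp2, grid=501):
    # logp1, logp2: held-out pointwise log predictive densities of two program paths
    # pick w maximising the log score of the mixture w*p1 + (1-w)*p2
    ws = np.linspace(0.0, 1.0, grid)
    p1, p2 = np.exp(logp1), np.exp(logp2)
    scores = [np.sum(np.log(w * p1 + (1 - w) * p2 + 1e-300)) for w in ws]
    return ws[int(np.argmax(scores))]
```

Unlike BMA weights, which track marginal likelihoods and can concentrate on a misspecified path, the stacking weight is chosen directly for predictive performance on held-out data.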

Expectation Programming: Adapting Probabilistic Programming Systems to Estimate Expectations Efficiently

Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022 (2022) 1676-1685

Authors:

T Reichelt, A Goliński, L Ong, T Rainforth

Abstract:

We show that the standard computational pipeline of probabilistic programming systems (PPSs) can be inefficient for estimating expectations and introduce the concept of expectation programming to address this. In expectation programming, the aim of the backend inference engine is to directly estimate expected return values of programs, as opposed to approximating their conditional distributions. This distinction, while subtle, allows us to achieve substantial performance improvements over the standard PPS computational pipeline by tailoring computation to the expectation we care about. We realize a particular instance of our expectation programming concept, Expectation Programming in Turing (EPT), by extending the PPS Turing to allow so-called target-aware inference to be run automatically. We then verify the statistical soundness of EPT theoretically, and show that it provides substantial empirical gains in practice.
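The idea of tailoring computation to the expectation of interest can be sketched with importance sampling: to estimate a rare-event expectation under a standard normal "posterior", a proposal shifted toward the region where the integrand is non-zero is far more efficient than plain Monte Carlo averaging of posterior samples. This is an illustrative sketch of target-aware estimation under assumed toy distributions, not EPT's actual Turing-based machinery; the shift `mu = 3.5` is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # integrand concentrated in the tail of the posterior: an indicator of x > 3
    return (x > 3.0).astype(float)

def plain_mc(n):
    # standard pipeline: sample the N(0, 1) posterior, then average f
    return f(rng.normal(size=n)).mean()

def target_aware(n):
    # proposal shifted toward the region where f is non-zero
    mu = 3.5
    x = rng.normal(loc=mu, size=n)
    # importance weights p(x)/q(x) for N(0,1) over N(mu,1); constants cancel
    logw = -0.5 * x ** 2 + 0.5 * (x - mu) ** 2
    return np.mean(f(x) * np.exp(logw))
```

For the same sample budget, the target-aware estimator places nearly all of its samples where `f` actually contributes, while plain Monte Carlo sees a non-zero value of `f` only about once in 750 draws.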

