
Tim Reichelt

Postdoctoral Research Assistant

Research theme

  • Climate physics

Sub department

  • Atmospheric, Oceanic and Planetary Physics

Research groups

  • Climate processes
tim.reichelt@physics.ox.ac.uk
Atmospheric Physics, Clarendon Laboratory, room 104
GitHub
Personal Website
Publications

Beyond Bayesian model averaging over paths in probabilistic programs with stochastic support

Proceedings of the 27th International Conference on Artificial Intelligence and Statistics, Journal of Machine Learning Research (2024) 829-837

Authors:

Tim Reichelt, Luke Ong, Thomas Rainforth

Abstract:

The posterior in probabilistic programs with stochastic support decomposes as a weighted sum of the local posterior distributions associated with each possible program path. We show that making predictions with this full posterior implicitly performs a Bayesian model averaging (BMA) over paths. This is potentially problematic, as BMA weights can be unstable due to model misspecification or inference approximations, in turn leading to sub-optimal predictions. To remedy this issue, we propose alternative mechanisms for path weighting: one based on stacking and one based on ideas from PAC-Bayes. We show how both can be implemented as a cheap post-processing step on top of existing inference engines. In our experiments, we find them to be more robust and to lead to better predictions compared to the default BMA weights.
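As a rough illustration of the stacking idea described above, the sketch below fits simplex weights over per-path predictive densities by maximising the average held-out log predictive density of the weighted mixture. The `log_pred` matrix, the two-path toy model, and the softmax parameterisation are all assumptions made for this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(log_pred):
    """Fit simplex weights over paths by maximising the average held-out
    log predictive density of the weighted mixture (stacking).

    log_pred: (n_points, n_paths) array of log p_k(y_i) under each path k.
    """
    n, K = log_pred.shape

    def neg_score(z):
        w = np.exp(z - z.max())
        w /= w.sum()                      # softmax keeps w on the simplex
        # log density of the weighted mixture at each held-out point
        mix = np.logaddexp.reduce(log_pred + np.log(w), axis=1)
        return -mix.mean()

    res = minimize(neg_score, np.zeros(K), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

# Toy data: a 50/50 mixture of N(-2, 1) and N(2, 1).  Each "path" captures
# only one component, so the stacked mixture beats either path alone.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
log_pred = np.stack([
    -0.5 * (y + 2) ** 2 - 0.5 * np.log(2 * np.pi),   # path 0: N(-2, 1)
    -0.5 * (y - 2) ** 2 - 0.5 * np.log(2 * np.pi),   # path 1: N(+2, 1)
], axis=1)
w = stacking_weights(log_pred)
```

Because the weights are fitted to a predictive score rather than to marginal likelihoods, neither path is forced to absorb all the mass, which is what makes the scheme robust when every path is misspecified.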

Expectation Programming: Adapting Probabilistic Programming Systems to Estimate Expectations Efficiently

Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022 (2022) 1676-1685

Authors:

T Reichelt, A Goliński, L Ong, T Rainforth

Abstract:

We show that the standard computational pipeline of probabilistic programming systems (PPSs) can be inefficient for estimating expectations and introduce the concept of expectation programming to address this. In expectation programming, the aim of the backend inference engine is to directly estimate expected return values of programs, as opposed to approximating their conditional distributions. This distinction, while subtle, allows us to achieve substantial performance improvements over the standard PPS computational pipeline by tailoring computation to the expectation we care about. We realize a particular instance of our expectation programming concept, Expectation Programming in Turing (EPT), by extending the PPS Turing to allow so-called target-aware inference to be run automatically. We then verify the statistical soundness of EPT theoretically, and show that it provides substantial empirical gains in practice.
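The gain from tailoring computation to a specific expectation can be seen in a minimal importance-sampling sketch. EPT's actual machinery (built on Turing) is different; here the target density, the indicator expectand, and the shifted proposal are all invented for illustration of why a target-aware estimator can vastly outperform the standard sample-then-average pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Goal: estimate E_p[f(x)] with p = N(0, 1) and f(x) = 1{x > 4},
# a tail probability that ordinary samples from p almost never hit.
def f(x):
    return (x > 4.0).astype(float)

def log_p(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

# Standard pipeline: sample p, average f.  Nearly every sample
# contributes zero, so the estimate is extremely high-variance.
x = rng.normal(size=100_000)
naive = f(x).mean()

# Target-aware: place the proposal where f * p has its mass
# (here N(4.5, 1)) and correct with importance weights.
q_mu = 4.5
xq = rng.normal(q_mu, 1.0, size=100_000)
log_q = -0.5 * (xq - q_mu) ** 2 - 0.5 * np.log(2 * np.pi)
target_aware = np.mean(f(xq) * np.exp(log_p(xq) - log_q))
```

The true value is about 3.2e-5; the tailored estimator recovers it to within a percent or so from the same sample budget, while the naive average is dominated by the handful of chance exceedances.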

Rethinking Variational Inference for Probabilistic Programs with Stochastic Support

Advances in Neural Information Processing Systems 35 (2022)

Authors:

T Reichelt, L Ong, T Rainforth

Abstract:

We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support. Existing approaches to this problem rely on designing a single global variational guide on a variable-by-variable basis, while maintaining the stochastic control flow of the original program. SDVI instead breaks the program down into sub-programs with static support, before automatically building separate sub-guides for each. This decomposition significantly aids in the construction of suitable variational families, enabling, in turn, substantial improvements in inference performance.
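A toy, closed-form sketch of the decomposition SDVI performs: a single stochastic branch yields two straight-line paths with static support, each path gets its own sub-guide, and the paths are recombined as a mixture weighted by their local evidences. All model details here are invented for illustration; SDVI fits each sub-guide by variational optimisation rather than in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=50)
n = len(data)

def log_norm(x, mu, var):
    return -0.5 * (x - mu) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

# Two straight-line paths from one stochastic branch (toy model):
#   path 0: mu ~ N(0, 1), y_i ~ N(mu, 1)   (one latent variable)
#   path 1: no latent,    y_i ~ N(0, 1)    (zero latent variables)

# Sub-guide for path 0: its exact conditional posterior, which for this
# conjugate pair is N(post_mu, post_var).
post_var = 1.0 / (1.0 + n)
post_mu = post_var * data.sum()

# Local evidence of each path (the ideal value of each local ELBO):
# quadrature over mu for path 0, closed form for path 1.
mus = np.linspace(-5.0, 5.0, 4001)
dx = mus[1] - mus[0]
log_joint = log_norm(mus, 0.0, 1.0) + np.sum(
    log_norm(data[None, :], mus[:, None], 1.0), axis=1)
m = log_joint.max()
log_Z0 = m + np.log(np.exp(log_joint - m).sum() * dx)
log_Z1 = np.sum(log_norm(data, 0.0, 1.0))

# Mixture weights over paths: normalised local evidences.
shift = max(log_Z0, log_Z1)
w = np.exp(np.array([log_Z0, log_Z1]) - shift)
w = w / w.sum()
```

Because each sub-guide only ever sees a fixed set of latent variables, standard static-support variational families (and their optimisers) apply unchanged within each path, which is the point of the decomposition.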

Automating Bayesian computation for stochastic simulators with probabilistic programming

Abstract:

Probabilistic programming systems (PPSs) automate the process of running Bayesian inference in stochastic simulator models. These stochastic simulators are ubiquitous in science and engineering: climate researchers build earth system models to predict future climate change; particle physicists build simulators to understand the experimental outcomes of particle colliders; and epidemiologists build models to predict how diseases spread. PPSs give us a principled way to incorporate these simulators into our decision-making process by enabling us to calibrate them to observed data using the tools of Bayesian inference. However, to do so, PPS inference algorithms need to deal with all the complexities of modern programming languages. Importantly for this thesis, modern PPSs often permit the use of stochastic control flow, leading to so-called programs with stochastic support: programs in which the number and type of latent variables are no longer fixed.
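A minimal Python sketch (not tied to any particular PPS) of what "stochastic support" means in practice: a discrete draw steers control flow, so different executions of the same program instantiate different numbers of latent variables.

```python
import random

def mixture_program():
    """Toy program with stochastic support: the latent K decides how
    many further latent variables ("loc_0", "loc_1", ...) exist."""
    trace = {}
    # Stochastic control flow: keep adding components while a coin is heads.
    k = 1
    while random.random() < 0.5:
        k += 1
    trace["K"] = k
    # The number (and names) of these latents depends on K, so the
    # program's support is a union over the paths indexed by K.
    for j in range(k):
        trace[f"loc_{j}"] = random.gauss(0.0, 1.0)
    return trace

random.seed(0)
traces = [mixture_program() for _ in range(5)]
```

Each trace corresponds to one program path; conditioning such a program on data means doing inference over this union of differently shaped spaces, which is exactly what makes the setting hard for standard algorithms.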

We will make the argument for treating these programs as mixtures over program paths. Using this breakdown, we derive a new variational inference algorithm that we term Support Decomposition Variational Inference (SDVI). In contrast to prior work, which constructs the variational family on a variable-by-variable basis, SDVI constructs the guide as a mixture over program paths, building a separate variational distribution for each path independently. This allows us to bring advances in variational inference from the static-support setting to the stochastic-support setting.

The breakdown of the program into a mixture over paths does more than help us derive new inference algorithms. We will also use it to investigate the properties of the posterior distribution more generally. Specifically, we show that the weights assigned to individual program paths can often be unstable, a problem that can arise either from model misspecification or from inference approximations. These instabilities make it harder to replicate results and can potentially give the user misleading confidence in their model's inferences. To alleviate these issues, we will propose alternative mechanisms for weighting the program paths that instead optimize the path weights against predictive objectives.
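The instability of marginal-likelihood path weights can be seen in a small numerical demonstration. The two "paths" below (fixed-mean Gaussians at 0 and 1) are invented for this sketch; both are equally misspecified for data with true mean 0.5, yet the evidence-based weight is almost always near-degenerate, and which path "wins" flips from dataset to dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

def bma_weight(data):
    """Marginal-likelihood (BMA-style) weight of path 0.

    Path 0 models the data as N(0, 1), path 1 as N(1, 1); the weight is
    the softmax of the two log evidences (equal prior over paths).
    """
    log_Z = np.array([
        np.sum(-0.5 * (data - 0.0) ** 2),
        np.sum(-0.5 * (data - 1.0) ** 2),
    ])
    log_Z -= log_Z.max()
    w = np.exp(log_Z)
    return (w / w.sum())[0]

# The log-evidence gap scales with the dataset size, so for data with
# true mean 0.5 a tiny sampling fluctuation in the empirical mean gets
# amplified into an essentially 0-or-1 weight.
weights = [bma_weight(rng.normal(0.5, 1.0, size=500)) for _ in range(20)]
```

Weights fitted to a predictive objective, as proposed in the thesis, do not exhibit this all-or-nothing behaviour, because both paths predict the held-out data about equally well.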

Many PPSs focus on the goal of automating inference, however, it is important to also consider how the outcomes of inference are used in practice. Many workflows use the outputs of inference engines to estimate downstream expectations. To facilitate this use case, we will introduce the concept of expectation programming which allows users to directly define and estimate expectations in a target-aware manner; meaning the backend computation engine specifically tailors the estimation algorithm towards a user-specified expectation.

