The Velocity Field Olympics: Assessing velocity field reconstructions with direct distance tracers
Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP) (2025) staf1960
Abstract:
The peculiar velocity field of the local Universe provides direct insight into its matter distribution and the underlying theory of gravity, and is essential in cosmological analyses for modelling deviations from the Hubble flow. Numerous methods have been developed to reconstruct the density and velocity fields at z ≲ 0.05, typically constrained by redshift-space galaxy positions or by direct distance tracers such as the Tully–Fisher relation, the fundamental plane, or Type Ia supernovae. We introduce a validation framework to evaluate the accuracy of these reconstructions against catalogues of direct distance tracers. Our framework assesses the goodness-of-fit of each reconstruction using Bayesian evidence, residual redshift discrepancies, velocity scaling, and the need for external bulk flows. Applying this framework to a suite of reconstructions – including those derived from the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm and from linear theory – we find that the non-linear BORG reconstruction consistently outperforms the others. We highlight the utility of such a comparative approach for supernova or gravitational-wave cosmological studies, where selecting an optimal peculiar velocity model is essential. Additionally, we present calibrated bulk flow curves predicted by the reconstructions and perform a density–velocity cross-correlation using a linear theory reconstruction to constrain the growth factor, yielding S8 = 0.793 ± 0.035. This result is in good agreement with both weak lensing and Planck, but in strong disagreement with some peculiar velocity studies.
A 1-per cent-accurate method to include baryonic effects in galaxy–galaxy lensing models
Monthly Notices of the Royal Astronomical Society Oxford University Press 544:4 (2025) 3512-3532
Abstract:
The clustering of galaxies and galaxy–galaxy lensing are two of the main observational probes in Stage-IV large-scale structure surveys such as Euclid and LSST. Unfortunately, the complicated relationship between galaxies and matter greatly limits the exploitation of these data. Sophisticated theoretical galaxy bias models – such as the hybrid Lagrangian bias expansion – allow galaxy clustering to be described down to mildly non-linear scales. However, the galaxy–matter cross-power spectra are already affected by baryons on these scales, directly impacting the modelling of galaxy–galaxy lensing. In this work, we propose a way to extend state-of-the-art models of the galaxy–matter cross-power spectrum (currently accounting only for dark matter) by including a baryonic correction term – a suppression factor inferred from the matter component. We use the FLAMINGO hydrodynamical simulations to measure the effect of baryons on the galaxy–matter cross-power spectrum and to assess the performance of our model. Specifically, we perform a Bayesian analysis of synthetic data, implementing a model based on BACCO’s hybrid Lagrangian bias expansion (for the non-linear galaxy bias) and Baryon Correction Model (for the baryon suppression of the matter power spectrum). Ignoring the effect of baryons on the galaxy–matter cross-power spectrum leads to a biased inference of the galaxy bias parameters, while ignoring baryons in both the galaxy–matter and matter–matter power spectra leads to a biased inference of both the galaxy bias and cosmological parameters. In contrast, our method is 1 per cent accurate for all physics variations in FLAMINGO and on all scales described by hybrid perturbative models. Moreover, our model leads to inferred bias and cosmological parameters compatible within 1σ with their reference values. We anticipate that our method will be a promising candidate for analysing forthcoming Stage-IV survey data.
Creating halos with autoregressive multistage networks
Physical Review D American Physical Society 112:10 (2025) 103503
Abstract:
To maximize the amount of information extracted from cosmological datasets, simulations that accurately represent these observations are necessary. However, traditional simulations that evolve particles under gravity by estimating particle-particle interactions (N-body simulations) are computationally expensive and prohibitive to scale to the large volumes and resolutions necessary for the upcoming datasets. Moreover, modeling the distribution of galaxies typically involves identifying virialized dark matter halos, which is also a time- and memory-consuming process for large N-body simulations, further exacerbating the computational cost. In this study, we introduce CHARM, a novel method for creating mock halo catalogs by matching the spatial, mass, and velocity statistics of halos directly from the large-scale distribution of the dark matter density field. We develop multistage neural spline flow-based networks to learn this mapping at redshift z = 0.5 directly with computationally cheaper low-resolution particle mesh simulations instead of relying on high-resolution N-body simulations. We show that the mock halo catalogs and painted galaxy catalogs have the same statistical properties as those obtained from N-body simulations in both real space and redshift space. Finally, we use these mock catalogs for cosmological inference from the redshift-space galaxy power spectrum, bispectrum, and wavelet-based statistics within a simulation-based inference framework, performing the first inference with accelerated forward-model simulations and finding unbiased cosmological constraints with well-calibrated posteriors.
A Short Introduction to Cosmology and its Current Status
(2025)
Euclid: Early Release Observations – Interplay between dwarf galaxies and their globular clusters in the Perseus galaxy cluster
Astronomy and Astrophysics 703 (2025)
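The S8 value quoted in the first abstract above follows the standard convention S8 = σ8 (Ωm/0.3)^(1/2). A minimal sketch of this definition (the Planck-2018-like input values below are illustrative assumptions, not taken from these papers):

```python
import math

def s8(sigma8: float, omega_m: float) -> float:
    # Standard convention: S8 = sigma8 * sqrt(Omega_m / 0.3)
    return sigma8 * math.sqrt(omega_m / 0.3)

# Illustrative Planck-2018-like parameters (assumed values):
print(round(s8(0.811, 0.315), 3))  # -> 0.831
```

At Ωm = 0.3 the scaling factor is unity, so S8 reduces to σ8; the quoted S8 = 0.793 ± 0.035 can therefore be read as an effective σ8 at that fiducial matter density.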