EMUFLOW: normalizing flows for joint cosmological analysis
Monthly Notices of the Royal Astronomical Society Oxford University Press 536:1 (2024) 190-202
Abstract:
Given the growth in the variety and precision of astronomical data sets of interest for cosmology, the best cosmological constraints are invariably obtained by combining data from different experiments. At the likelihood level, one complication in doing so is the need to marginalize over high-dimensional parameter models describing the data of each experiment. These include both the relatively small number of cosmological parameters of interest and a large number of ‘nuisance’ parameters. Sampling over the joint parameter space for multiple experiments can thus become a very computationally expensive operation. This can be significantly simplified if one can sample directly from the marginal cosmological posterior distribution of preceding experiments, which depends only on the common set of cosmological parameters. We show that this can be achieved by emulating marginal posterior distributions via normalizing flows. The resulting trained normalizing flow models can be used to efficiently combine cosmological constraints from independent data sets without increasing the dimensionality of the parameter space under study. The method is able to accurately describe the posterior distribution of real cosmological data sets, as well as the joint distribution of different data sets, even when significant tension exists between experiments. The resulting joint constraints can be obtained in a fraction of the time it would take to combine the same data sets at the level of their likelihoods. We construct normalizing flow models for a set of public cosmological data sets of general interest and make them available, together with the software used to train them and to exploit them in cosmological parameter inference.
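The key property exploited here is that, for independent experiments sharing a flat prior on the common parameters, the joint posterior is proportional to the product of the individual marginal posteriors. The toy sketch below illustrates this (it is not the paper's emuflow code): each "emulated" marginal posterior is a 1-D Gaussian standing in for a trained normalizing flow that returns log-density values, and the product is sampled with a simple Metropolis step. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Stand-ins for two trained flow emulators, each returning the (unnormalized)
# log of a marginal posterior over a shared cosmological parameter.
def log_post_a(x, mu=0.30, sigma=0.02):
    return -0.5 * ((x - mu) / sigma) ** 2

def log_post_b(x, mu=0.32, sigma=0.015):
    return -0.5 * ((x - mu) / sigma) ** 2

def joint_log_post(x):
    # For independent data sets, the joint posterior over shared parameters
    # is (up to the prior) the product of marginals: sum of log-densities.
    return log_post_a(x) + log_post_b(x)

def metropolis(logp, x0, n_steps=20000, step=0.01, seed=0):
    """Minimal random-walk Metropolis sampler over the combined density."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(joint_log_post, x0=0.31)

# Analytic cross-check: the product of two Gaussians is a Gaussian with
# precision-weighted mean.
w_a, w_b = 1 / 0.02**2, 1 / 0.015**2
expected_mean = (0.30 * w_a + 0.32 * w_b) / (w_a + w_b)
print(chain[5000:].mean(), expected_mean)
```

Because the combined target depends only on the shared parameters, the nuisance parameters of each experiment never re-enter the joint sampling problem, which is the source of the speed-up claimed in the abstract.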
Assessment of gradient-based samplers in standard cosmological likelihoods
Monthly Notices of the Royal Astronomical Society Oxford University Press 534:3 (2024) stae2138
Abstract:
We assess the usefulness of gradient-based samplers, such as the No-U-Turn Sampler (NUTS), by comparison with traditional Metropolis–Hastings (MH) algorithms, in tomographic 3 × 2 point analyses. Specifically, we use the Dark Energy Survey (DES) Year 1 data and a simulated data set for the Large Synoptic Survey Telescope (LSST) as representative examples of these studies, containing a significant number of nuisance parameters (20 and 32, respectively) that affect the performance of rejection-based samplers. To do so, we implement a differentiable forward model using JAX-COSMO, and we use it to derive parameter constraints from both data sets using the NUTS algorithm implemented in NUMPYRO, and the Metropolis–Hastings algorithm as implemented in COBAYA. When quantified in terms of the effective number of samples per likelihood evaluation, we find a relative efficiency gain of O(10) in favour of NUTS. However, this efficiency is reduced to a factor ∼ 2 when quantified in terms of computational time, since we find the cost of the gradient computation (needed by NUTS) relative to the likelihood to be ∼ 4.5 times larger for both experiments. We validate these results using analytical multivariate distributions (a multivariate Gaussian and a Rosenbrock distribution) of increasing dimensionality. Based on these results, we conclude that gradient-based samplers such as NUTS can be leveraged to sample high-dimensional parameter spaces in cosmology, although the efficiency improvement is relatively mild for moderate dimensionalities (O(50)), typical of tomographic large-scale structure analyses.
$\mathtt{emuflow}$: Normalising Flows for Joint Cosmological Analysis
(2024)
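The trade-off quantified in the gradient-based samplers abstract above, effective samples per likelihood evaluation versus the extra cost of gradients, can be illustrated with a minimal sketch. The following uses hand-coded Hamiltonian Monte Carlo (a simplified stand-in for NUTS, which adds adaptive trajectory lengths) and random-walk MH on a 2-D Gaussian target, counting gradient and likelihood calls. Everything here is an illustrative assumption, not the paper's JAX-COSMO/NumPyro/Cobaya setup.

```python
import numpy as np

def logp(x):
    return -0.5 * np.dot(x, x)   # standard 2-D Gaussian target

def grad_logp(x):
    return -x                    # its analytic gradient

def hmc(n=2000, eps=0.3, n_leapfrog=10, seed=1):
    """Hamiltonian Monte Carlo: each proposal costs n_leapfrog + 1 gradients."""
    rng = np.random.default_rng(seed)
    x, chain, n_grad = np.zeros(2), [], 0
    for _ in range(n):
        p = rng.standard_normal(2)
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of Hamiltonian dynamics.
        p_new += 0.5 * eps * grad_logp(x_new); n_grad += 1
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_logp(x_new); n_grad += 1
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new); n_grad += 1
        # Accept with probability min(1, exp(H_old - H_new)).
        dH = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < dH:
            x = x_new
        chain.append(x.copy())
    return np.array(chain), n_grad

def mh(n=2000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: one likelihood call per proposal."""
    rng = np.random.default_rng(seed)
    x, lp, chain, n_like = np.zeros(2), 0.0, [], 0
    for _ in range(n):
        prop = x + step * rng.standard_normal(2)
        lp_prop = logp(prop); n_like += 1
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain), n_like

samples_hmc, grad_calls = hmc()
samples_mh, like_calls = mh()
print("gradient calls (HMC):", grad_calls, "likelihood calls (MH):", like_calls)
```

HMC's distant, nearly uncorrelated proposals buy more effective samples per iteration, but each iteration consumes many gradient evaluations; the abstract's point is that when a gradient costs ∼ 4.5 likelihood calls, the net wall-clock gain shrinks to a factor ∼ 2.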
X-Ray-Cosmic-Shear Cross-Correlations: First Detection and Constraints on Baryonic Effects
Physical Review Letters American Physical Society (APS) 133:5 (2024) 051001