Periodic Reporting for period 1 - Deledda (Deep Learning the Dark Universe with Gravitational Waves)
Reporting period: 2023-09-01 to 2025-08-31
The Deledda project—Deep Learning the Dark Universe with Gravitational Waves—addresses the computational challenge of gravitational-wave (GW) data analysis by developing advanced machine learning tools to accelerate and improve it. The project builds on the rapid progress of deep learning and simulation-based inference to make parameter estimation faster, more reliable, and more interpretable. In current pipelines, obtaining the physical parameters of a GW source can take from days to weeks of computation on large clusters, limiting the number of events that can be fully characterized and delaying possible multimessenger follow-ups. Deledda aims to replace these costly processes with neural methods that learn from simulated data and can perform inference in seconds, thus unlocking the full scientific potential of current and future GW detectors.
Within this framework, the project pursues three complementary objectives. The first is to integrate physical symmetries and domain knowledge into neural architectures for compact-binary mergers, leading to the development of a simulation-based inference model (Labrador) that achieves high accuracy and interpretability while requiring only about one day of training on modern GPUs. The second is to explore alternative inference strategies for pulsar timing array (PTA) datasets, introducing a fast variational inference approach that can analyze the 15-year NANOGrav dataset in minutes instead of days, enabling new studies of the low-frequency gravitational-wave background. The third is to improve the estimation of the Bayesian evidence—a key quantity for model selection—through a novel normalizing-flow method (floZ), which is robust and scalable to high-dimensional problems.
By combining expertise in gravitational-wave physics and modern machine learning, Deledda contributes to a new generation of analysis methods that can keep pace with the rapidly expanding GW Universe. The project’s outcomes are expected to enhance the scientific return of large international efforts such as the LIGO-Virgo-KAGRA (LVK) and PTA collaborations, reduce computational costs, and promote the broader integration of AI techniques in fundamental physics research.
The first achievement is the development of Labrador, a simulation-based inference framework that combines neural posterior estimation with domain-specific physical insights. The method compresses detector data through heterodyning against an optimal reference waveform, reparametrizes source parameters to remove degeneracies, and folds the parameter space to eliminate known multimodalities. These design choices make the network approximately equivariant to changes in source parameters, improving both efficiency and interpretability. Labrador achieves state-of-the-art performance with a full end-to-end training time of about one day on a single A100 GPU, representing a major step toward real-time parameter estimation for gravitational-wave events.
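To give a flavour of the compression step (a minimal sketch under simplifying assumptions; the function name, binning choice, and inputs are illustrative, not the Labrador implementation): multiplying the frequency-domain data by the complex conjugate of a reference waveform cancels the rapid phase evolution the two share, and the slowly varying product can then be summed in a few hundred coarse frequency bins.

```python
import numpy as np

def heterodyne_and_bin(freqs, data_fd, href_fd, n_bins=256):
    """Toy heterodyned compression of frequency-domain strain.

    freqs   : frequency grid [Hz]
    data_fd : complex frequency-domain data d(f)
    href_fd : complex reference waveform h_ref(f), e.g. a best-fit template
    """
    # Multiplying by the conjugate reference cancels the rapid common phase,
    # leaving a product whose phase varies slowly across frequency.
    het = data_fd * np.conj(href_fd)

    # Summing the slowly varying product in coarse bins reduces ~10^4
    # frequency samples to a few hundred numbers, with little information
    # loss for signals close to the reference.
    edges = np.linspace(freqs[0], freqs[-1], n_bins + 1)
    bin_index = np.digitize(freqs, edges[1:-1])
    return np.array([het[bin_index == b].sum() for b in range(n_bins)])
```

The point of such a compression is to shrink the input the neural posterior estimator has to process while preserving the information about sources close to the reference.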
The second line of work introduced variational Bayesian inference as a new approach for analyzing pulsar-timing-array datasets. Unlike traditional Markov Chain Monte Carlo techniques, this method optimizes a neural approximation to the posterior distribution using stochastic gradient descent, allowing it to fully exploit the parallelism of modern GPUs. When applied to the NANOGrav 15-year dataset, the approach reduced the analysis time from days to minutes while maintaining statistical accuracy. This breakthrough opens the door to systematic studies of model uncertainties and alternative astrophysical or cosmological scenarios using PTA data.
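As a toy illustration of the strategy (a minimal PyTorch sketch assuming a mean-field Gaussian variational family and a placeholder two-parameter target; the real PTA posterior and the variational family used in the project are not shown here): variational inference turns posterior estimation into an optimization problem, maximizing the evidence lower bound (ELBO) with stochastic gradients, which is exactly the kind of batched workload GPUs parallelize well.

```python
import math
import torch

def log_unnorm_posterior(theta):
    """Placeholder target: an unnormalized log-posterior over two parameters.
    In the real application this would be the PTA likelihood times the prior."""
    return -0.5 * ((theta - 1.0) ** 2).sum(dim=-1)

# Mean-field Gaussian variational family q(theta) = N(mu, diag(exp(log_std))^2).
mu = torch.zeros(2, requires_grad=True)
log_std = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.Adam([mu, log_std], lr=1e-2)

for step in range(2000):
    optimizer.zero_grad()
    # Reparameterization trick: theta = mu + sigma * eps with eps ~ N(0, I)
    # keeps the Monte Carlo ELBO estimate differentiable in (mu, log_std).
    eps = torch.randn(64, 2)
    theta = mu + log_std.exp() * eps
    # log q(theta) for the diagonal Gaussian, evaluated at the same draws.
    log_q = (-0.5 * eps.pow(2) - log_std - 0.5 * math.log(2 * math.pi)).sum(dim=-1)
    # ELBO = E_q[log p(theta, d) - log q(theta)]; maximizing it is the same
    # as minimizing its negative by stochastic gradient descent.
    elbo = (log_unnorm_posterior(theta) - log_q).mean()
    (-elbo).backward()
    optimizer.step()
```

Because every ELBO evaluation is a batched tensor operation, many posterior draws can be processed per gradient step, which is where the days-to-minutes speed-up on GPUs comes from.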
Finally, the project developed floZ, a general-purpose algorithm to estimate Bayesian evidence directly from posterior samples. Based on normalizing flows, floZ is accurate, robust to sharp posterior features, and scalable to high-dimensional spaces. It provides an efficient alternative to nested sampling and other evidence estimators, and can be integrated with variational or simulation-based inference pipelines.
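The principle can be stated in one line: for any parameter value, the evidence equals the likelihood times the prior divided by the normalized posterior density at that point, so a normalized density fitted to posterior samples turns every sample into an evidence estimate. Below is a toy sketch of that idea, with a Gaussian kernel density estimate standing in for the normalizing flow and a deliberately simple target whose evidence is known analytically; none of this is the floZ code.

```python
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

# Toy problem: standard-normal likelihood in 2D, uniform prior on [-10, 10]^2,
# so the true evidence is 1 / 20**2 (the likelihood integrates to one).
dim, prior_volume = 2, 20.0 ** 2
likelihood = multivariate_normal(mean=np.zeros(dim))

# Pretend these are posterior samples produced by some existing sampler.
samples = likelihood.rvs(size=5000, random_state=1)

# Fit a normalized density q(theta) to the samples.  floZ fits a normalizing
# flow; a Gaussian KDE plays that role in this toy sketch.
q = gaussian_kde(samples.T)

# For each sample: evidence ~ likelihood * prior / q(theta).  The median over
# samples gives a simple, robust point estimate in log space.
log_like_times_prior = likelihood.logpdf(samples) - np.log(prior_volume)
log_evidence = np.median(log_like_times_prior - q.logpdf(samples.T))
print(f"estimated log-evidence: {log_evidence:.3f}  (truth: {-np.log(prior_volume):.3f})")
```

In floZ the density model is a normalizing flow rather than a KDE, which is what provides the robustness to sharp posterior features and the scalability to high dimensions described above.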
Together, these results demonstrate the potential of deep learning to transform the analysis of gravitational-wave data, making it faster, more scalable, and physically grounded.
From a methodological perspective, Deledda introduced innovative strategies that integrate physical symmetries and prior knowledge directly into the architecture of neural models. This goes beyond the “black-box” paradigm typical of many machine learning applications and establishes a new framework for interpretable and trustworthy inference. The project also demonstrated that Bayesian evidence—traditionally one of the most expensive quantities to compute—can be efficiently estimated using neural density models such as normalizing flows, extending the applicability of these techniques to high-dimensional astrophysical problems.
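A deliberately simple example of building such knowledge into a model (an illustrative sketch with assumed names, not the scheme used in the project): if a known degeneracy makes the posterior exactly periodic in some angle, the angle can be folded onto one fundamental domain before training, so the network only has to represent a unimodal distribution, and full-range samples are recovered in post-processing.

```python
import numpy as np

def fold_angle(phi, period=np.pi):
    """Fold an angle with a known exact period onto [0, period), removing
    the corresponding multimodality from what the network must learn."""
    return np.mod(phi, period)

def unfold_angle(phi_folded, full_range=2 * np.pi, period=np.pi, rng=None):
    """Restore samples over the full range in post-processing by drawing the
    discrete symmetry copy uniformly (valid only when the degeneracy is exact)."""
    rng = rng or np.random.default_rng()
    n_copies = int(round(full_range / period))
    shift = rng.integers(0, n_copies, size=np.shape(phi_folded)) * period
    return phi_folded + shift
```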
Beyond gravitational-wave astronomy, the approaches developed in Deledda are of broad relevance to other fields where large, complex datasets must be interpreted through computationally demanding physical models. They illustrate how deep learning, when combined with rigorous Bayesian methodology and physical insights, can enhance scientific discovery in fundamental physics, cosmology, and beyond.