
Deep Learning the Dark Universe with Gravitational Waves

Periodic Reporting for period 1 - Deledda (Deep Learning the Dark Universe with Gravitational Waves)

Reporting period: 2023-09-01 to 2025-08-31

Over the past decade, gravitational-wave (GW) astronomy has opened an entirely new window on the Universe. By observing the tiny ripples in spacetime produced by the mergers of compact objects such as black holes and neutron stars, scientists can probe gravity in its most extreme regimes and explore the population and evolution of massive objects across cosmic time. In parallel, a global effort has emerged to detect nanohertz gravitational waves through pulsar timing arrays (PTAs), which are sensitive to signals from supermassive black hole binaries in the centers of galaxies. In 2023, the four major PTA collaborations jointly announced compelling evidence for a stochastic gravitational-wave background. Two years later, the LIGO–Virgo–KAGRA (LVK) collaboration released the first part of its fourth observing run catalogue (O4a), containing more than a hundred new compact binary coalescences. These results mark a transformative moment for the field, but they also highlight a major challenge: the exponential growth of data is making traditional analysis methods computationally unsustainable.
The Deledda project—Deep Learning the Dark Universe with Gravitational Waves—addresses this challenge by developing advanced machine learning tools to accelerate and improve the analysis of gravitational-wave data. The project builds on the rapid progress of deep learning and simulation-based inference to make parameter estimation faster, more reliable, and more interpretable. In current pipelines, obtaining the physical parameters of a GW source can take from days to weeks of computation on large clusters, limiting the number of events that can be fully characterized and delaying possible multimessenger follow-ups. Deledda aims to replace these costly processes with neural methods that learn from simulated data and can perform inference in seconds, thus unlocking the full scientific potential of current and future GW detectors.
Within this framework, the project pursues three complementary objectives. The first is to integrate physical symmetries and domain knowledge into neural architectures for compact-binary mergers, leading to the development of a simulation-based inference model (Labrador) that achieves high accuracy and interpretability while training in only about one day on modern GPUs. The second is to explore alternative inference strategies for PTA datasets, introducing a fast variational inference approach that can analyze the 15-year NANOGrav dataset in minutes instead of days, enabling new studies of the low-frequency gravitational-wave background. The third is to improve the estimation of Bayesian evidence—a key quantity for model selection—through a novel normalizing-flow method (floZ), which is robust and scalable to high-dimensional problems.
By combining expertise in gravitational-wave physics and modern machine learning, Deledda contributes to a new generation of analysis methods that can keep pace with the rapidly expanding GW Universe. The project’s outcomes are expected to enhance the scientific return of large international observatories such as LVK and PTA collaborations, reduce computational costs, and promote the broader integration of AI techniques in fundamental physics research.
The Deledda project developed, implemented, and validated new machine learning methods for fast and reliable inference in gravitational-wave astronomy. The work addressed three main fronts corresponding to different sources and analysis challenges: compact binary coalescences observed by ground-based detectors, the nanohertz gravitational-wave background probed by pulsar timing arrays, and the general problem of computing Bayesian evidence for model comparison.
The first achievement is the development of Labrador, a simulation-based inference framework that combines neural posterior estimation with domain-specific physical insights. The method compresses detector data through heterodyning against an optimal reference waveform, reparametrizes source parameters to remove degeneracies, and folds the parameter space to eliminate known multimodalities. These design choices make the network approximately equivariant to changes in source parameters, improving both efficiency and interpretability. Labrador achieves state-of-the-art performance with a full end-to-end training time of about one day on a single A100 GPU, representing a major step toward real-time parameter estimation for gravitational-wave events.
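To illustrate the two ingredients just described, the Python sketch below (using PyTorch) shows how frequency-domain data can be heterodyned against a reference waveform and coarse-binned into a compact summary, and how a conditional density estimator can then be trained on simulated (data, parameter) pairs. All array sizes, the toy signal model, and the diagonal-Gaussian posterior head are illustrative assumptions for this sketch; the actual Labrador framework uses realistic waveform simulations, the reparametrizations and foldings described above, and a more expressive conditional density estimator.

    import math
    import torch
    import torch.nn as nn

    N_FREQ, N_BINS, N_PARAMS = 4096, 64, 2  # hypothetical sizes for this sketch

    def heterodyne_compress(d_f, h_ref_f):
        # Divide out the reference waveform phase and average into coarse bins;
        # the residual varies slowly in frequency, so little information is lost.
        ratio = d_f * torch.conj(h_ref_f) / (torch.abs(h_ref_f) ** 2 + 1e-12)
        binned = ratio.reshape(N_BINS, -1).mean(dim=1)
        return torch.cat([binned.real, binned.imag])  # real feature vector, length 2*N_BINS

    class GaussianPosteriorNet(nn.Module):
        # Toy neural posterior estimator: a diagonal Gaussian over source
        # parameters, conditioned on the compressed data.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * N_BINS, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 2 * N_PARAMS),  # predicted means and log-stds
            )

        def log_prob(self, theta, x):
            mu, log_std = self.net(x).chunk(2, dim=-1)
            return torch.distributions.Normal(mu, log_std.exp()).log_prob(theta).sum(-1)

    def simulate_batch(batch_size):
        # Hypothetical simulator: draw parameters, synthesize toy frequency-domain
        # signals plus noise, and compress them by heterodyning.
        theta = torch.rand(batch_size, N_PARAMS)
        freqs = torch.linspace(20.0, 512.0, N_FREQ)
        h_ref = torch.exp(1j * 2 * math.pi * (0.5 * freqs + 5.0))  # fixed reference waveform
        x = []
        for t in theta:
            phase = 2 * math.pi * (t[0] * freqs + 10.0 * t[1])
            noise = 0.05 * (torch.randn(N_FREQ) + 1j * torch.randn(N_FREQ))
            x.append(heterodyne_compress(torch.exp(1j * phase) + noise, h_ref))
        return theta, torch.stack(x)

    model = GaussianPosteriorNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(100):  # illustrative training loop
        theta, x = simulate_batch(128)
        loss = -model.log_prob(theta, x).mean()  # maximize posterior log-probability
        opt.zero_grad(); loss.backward(); opt.step()

Because the network only ever sees the slowly varying heterodyned residual rather than the rapidly oscillating raw strain, the learning problem becomes much easier, which is part of what makes the short end-to-end training time possible.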
The second line of work introduced variational Bayesian inference as a new approach for analyzing pulsar-timing-array datasets. Unlike traditional Markov Chain Monte Carlo techniques, this method optimizes a neural approximation to the posterior distribution using stochastic gradient descent, allowing it to fully exploit the parallelism of modern GPUs. When applied to the NANOGrav 15-year dataset, the approach reduced the analysis time from days to minutes while maintaining statistical accuracy. This breakthrough opens the door to systematic studies of model uncertainties and alternative astrophysical or cosmological scenarios using PTA data.
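The following sketch illustrates the generic stochastic variational inference recipe referred to above, with a toy log-posterior standing in for the actual PTA likelihood (per-pulsar noise terms, common red-noise process, and so on); the diagonal-Gaussian variational family and all numerical settings are illustrative assumptions, not the project's implementation.

    import torch

    N_DIM = 20  # hypothetical number of noise and common-signal parameters

    def log_posterior(theta):
        # Stand-in for the (unnormalized) log-likelihood + log-prior of a PTA model.
        return -0.5 * (((theta - 1.0) / 0.3) ** 2).sum(dim=-1)

    # Variational parameters of a diagonal Gaussian q(theta) = N(mu, diag(sigma^2)).
    mu = torch.zeros(N_DIM, requires_grad=True)
    log_sigma = torch.zeros(N_DIM, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

    for step in range(2000):
        # Reparametrization trick: theta = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through the Monte Carlo estimate of the ELBO.
        eps = torch.randn(512, N_DIM)
        theta = mu + log_sigma.exp() * eps
        entropy = log_sigma.sum()  # Gaussian entropy, up to an additive constant
        elbo = log_posterior(theta).mean() + entropy
        loss = -elbo  # maximizing the ELBO tightens the bound on the evidence
        opt.zero_grad(); loss.backward(); opt.step()

    # mu and exp(log_sigma) now summarize the approximate posterior.
    print(mu.detach(), log_sigma.exp().detach())

Each optimization step only requires batched evaluations of the log-posterior and its gradients, so the whole loop maps naturally onto GPU hardware; richer variational families such as normalizing flows follow the same pattern.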
Finally, the project developed floZ, a general-purpose algorithm to estimate Bayesian evidence directly from posterior samples. Based on normalizing flows, floZ is accurate, robust to sharp posterior features, and scalable to high-dimensional spaces. It provides an efficient alternative to nested sampling and other evidence estimators, and can be integrated with variational or simulation-based inference pipelines.
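The core identity behind this kind of evidence estimation can be shown in a short sketch: by Bayes' theorem, Z = L(θ)π(θ)/p(θ|d) at any point θ, so a density estimator trained on posterior samples yields per-sample estimates of Z. In the toy Python example below, a multivariate Gaussian fitted to the samples stands in for the normalizing flow that floZ uses as its density estimator; the prior, likelihood, and sample size are illustrative assumptions.

    import torch
    from torch.distributions import MultivariateNormal

    torch.manual_seed(0)
    D = 2  # toy dimensionality

    # Toy problem: uniform prior on the box [-5, 5]^D, narrow Gaussian likelihood.
    log_prior = D * torch.log(torch.tensor(0.1))  # log of (1/10)^D
    likelihood = MultivariateNormal(torch.zeros(D), 0.5 * torch.eye(D))

    # "Posterior samples": with a flat prior the posterior is proportional to the
    # likelihood inside the box, so Gaussian draws are an adequate proxy here.
    samples = likelihood.sample((20000,))

    # Density estimator fitted to the samples; floZ trains a normalizing flow at
    # this step, which can also capture sharp or multimodal posterior features.
    density_estimate = MultivariateNormal(samples.mean(dim=0), torch.cov(samples.T))

    # Per-sample estimates log Z_i = log L + log prior - log posterior,
    # combined with a robust statistic.
    log_z = likelihood.log_prob(samples) + log_prior - density_estimate.log_prob(samples)
    print("estimated log Z:", log_z.median().item())
    # Analytic reference: the Gaussian likelihood integrates to ~1 inside the box,
    # so Z is close to (1/10)^D, i.e. log Z is about -4.61 for D = 2.

A flow-based density estimator follows the same logic while remaining accurate for the sharp, high-dimensional posteriors typical of gravitational-wave analyses.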
Together, these results demonstrate the potential of deep learning to transform the analysis of gravitational-wave data, making it faster, more scalable, and physically grounded.
The Deledda project has delivered results that go significantly beyond the current state of the art in gravitational-wave data analysis and in the application of modern machine learning to physical sciences. Traditional parameter estimation techniques in gravitational-wave astronomy rely on sampling-based algorithms such as Markov Chain Monte Carlo or nested sampling, which are accurate but computationally intensive. The new methods developed within Deledda demonstrate that neural networks and variational inference can reproduce or surpass the accuracy of these techniques while reducing computational times by several orders of magnitude. This achievement marks a decisive step toward real-time gravitational-wave astronomy, where source properties can be inferred within seconds of detection to enable electromagnetic follow-up and population studies of compact binaries.
From a methodological perspective, Deledda introduced innovative strategies that integrate physical symmetries and prior knowledge directly into the architecture of neural models. This goes beyond the “black-box” paradigm typical of many machine learning applications and establishes a new framework for interpretable and trustworthy inference. The project also demonstrated that Bayesian evidence—traditionally one of the most expensive quantities to compute—can be efficiently estimated using neural density models such as normalizing flows, extending the applicability of these techniques to high-dimensional astrophysical problems.
Beyond gravitational-wave astronomy, the approaches developed in Deledda are of broad relevance to other fields where large, complex datasets must be interpreted through computationally demanding physical models. They illustrate how deep learning, when combined with rigorous Bayesian methodology and physical insights, can enhance scientific discovery in fundamental physics, cosmology, and beyond.