Periodic Reporting for period 4 - BePreSysE (Beyond Precision Cosmology: dealing with Systematic Errors)
Reporting period: 2021-12-01 to 2023-05-31
The parameters of the standard cosmological model are now measured with ~1% precision. The next challenge cosmology has to meet is to enter the era of accuracy. The
precision of a measurement, indicated by the number of significant figures, accounts for statistical errors.
Accuracy quantifies the closeness of the measurement of a quantity to that quantity's true value, which is
the realm of systematic errors. Accuracy is the new precision. Forthcoming observational efforts promise major
advances in answering a number of big questions with deep links to fundamental physics: unveiling the nature
of dark energy, shedding light on dark matter, measuring the neutrino mass, etc. The path towards answering these questions is a challenging one. The analysis
and interpretation of the data need to be revisited in light of the
unprecedented precision enabled by the new data. Precision without accuracy is dangerous, as highly significant (but wrong) results may be inferred. How can
we make sure that the systematic error budget is known and safely below the statistical error?
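The danger of precision without accuracy can be illustrated with a toy numerical example (hypothetical numbers, not project data): a measurement with small statistical scatter but an unmodelled systematic offset reports a highly significant, yet wrong, value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_H0 = 70.0  # hypothetical "true" value, arbitrary units

# Survey A: high precision (small scatter) but a hidden systematic offset of +2.
survey_a = rng.normal(loc=true_H0 + 2.0, scale=0.3, size=1000)
# Survey B: lower precision, but accurate (no systematic bias).
survey_b = rng.normal(loc=true_H0, scale=2.0, size=1000)

for name, s in [("A (precise, biased)", survey_a), ("B (noisy, accurate)", survey_b)]:
    mean = s.mean()
    err = s.std(ddof=1) / np.sqrt(s.size)  # statistical error on the mean
    print(f"{name}: {mean:.2f} +/- {err:.2f}, offset from truth = {mean - true_H0:+.2f}")
```

Survey A quotes a tiny statistical error bar, so its systematic offset shows up as a many-sigma "detection" of the wrong value; only an explicit systematic error budget reveals the problem.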
Without an effective treatment of systematic errors (from mitigation to full accounting) the power of the next generation cosmological surveys
will be seriously compromised. The project faced this challenge by developing a comprehensive
treatment of systematic errors with the goal of minimizing, uncovering, quantifying and accounting for
otherwise unknown differences between the interpretation of a measurement and reality.
The team has become a world-leading expert group in ensuring the accuracy and robustness of large-scale structure (and cosmology)
results, with impact on several fronts, from primordial black holes as dark matter to neutrino properties from cosmology. We
have also played a leading role in the global discourse on the “H0 tension”, contributing to the understanding that this tension is a make-or-break test for the standard cosmological model.
We educated the community on the importance, role, advantages and limitations of blind analyses.
Finally, we introduced model-independent approaches to the analysis and interpretation of large-scale structure data. This approach is attracting interest from other research fields beyond cosmology and astrophysics.
As the basic cosmological parameters of the standard cosmological model are being determined with unprecedented precision, it is not guaranteed that the same model will fit increasingly precise observations from widely different cosmic epochs. Discrepancies developing between observations at early and late cosmological times may require an extension of the standard model, and may lead to the discovery of new physics. There is increasing evidence for discrepancies between determinations of the current expansion rate of the Universe, the Hubble constant. This is a wonderful real-world case in which to apply developments along the lines of the proposed work. We framed the discussion in terms of early- vs late-time solutions, which proved very fruitful, and we showed that late-time solutions are disfavored: the data do not leave enough wiggle room to fix the tension. Early-time solutions are favored by the data. We produced guard-rails that, if the H0 tension is a symptom of new physics, can guide the field toward such a discovery.
With my group I am developing a blinding strategy for galaxy surveys that can be applied at the catalog level. This protects against experimenter bias. The methodology works very well and has been tested on mock surveys. This reflects one of the goals of the proposal.
Then we proposed, developed, tested and transformed into an operational pipeline the first catalog-based blind analysis method for galaxy redshift surveys, one that blinds the Universe’s expansion history and growth of structure. We demonstrated that it satisfies all the requirements of a good blinding procedure. This is now part of the official DESI pipeline and will protect the experiment and its findings from confirmation bias.
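The essence of catalog-level blinding can be conveyed with a minimal toy sketch (this is an illustration of the idea only, not the actual DESI procedure or its parameters): a smooth, secret shift is applied to the observed redshifts, mimicking a slightly different expansion history, and the shift parameters remain hidden from the analysts until unblinding.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy catalog: observed galaxy redshifts (hypothetical mock data).
z_obs = rng.uniform(0.1, 1.5, size=5)

def blind_redshifts(z, dz0=0.002, secret_seed=1):
    """Apply a smooth, secret shift z -> z + amp * (1 + z).

    `amp` is drawn from a hidden seed and mimics a slightly different
    expansion history; it stays unknown to the analysts until unblinding.
    Both the functional form and `dz0` are illustrative choices here.
    """
    hidden = np.random.default_rng(secret_seed)
    amp = hidden.uniform(-dz0, dz0)  # secret amplitude, bounded by dz0
    return z + amp * (1.0 + z)       # smooth shift, growing with redshift

z_blind = blind_redshifts(z_obs)
print("observed:", np.round(z_obs, 4))
print("blinded: ", np.round(z_blind, 4))
```

Because the shift is smooth and small, the blinded catalog remains realistic enough to run the full analysis pipeline on, while the inferred cosmology is displaced by an amount nobody on the team knows in advance.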
The group and I have been exploring the impact and importance of effects at the largest scales: if ignored, these effects might bias the cosmological interpretation of our observations, and we might miss a key window into new physics.
We have also provided a procedure to correctly interpret the measurement of neutrino masses from forthcoming surveys, eliminating systematic effects that would mask the true value of the neutrino mass.
The two points above are well in line with the goals of the proposal.
We have shown that besides standard rulers and standard candles, the Universe also offers standard clocks, which can (and should) weigh in on the unsolved issues raised by the H0 tension. This has opened a new research direction: the Universe's expansion history can now be tracked not only by measuring distances but also by measuring how time passes as a function of the Universe's scale factor. This emerging probe is gaining increasing support and now features prominently in white papers, reviews, etc.
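The relation underlying such clocks (often called cosmic chronometers) follows directly from the definition of the Hubble rate; writing the scale factor as $a = 1/(1+z)$,

```latex
H(z) \;=\; \frac{\dot a}{a} \;=\; -\,\frac{1}{1+z}\,\frac{dz}{dt},
```

so measuring the differential age $dt$ of a suitable population of objects across a small redshift interval $dz$ yields $H(z)$ directly, without assuming a specific expansion history.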
We have paved the way to jointly analyse the lowest-order statistic (the power spectrum) with higher-order ones (e.g. the bispectrum) in forthcoming large-scale structure surveys. This combination is key to a) reducing parameter degeneracies and b) exploiting their redundancy in signal to ensure the robustness of any result.
We also introduced a model-independent analysis of the power spectrum of galaxy redshift surveys. Model-independent analyses are key to moving the field beyond parameter fitting, which is one of the major sources of systematic error in the interpretation of the data. We demonstrated that this approach is essentially lossless (whereas standard model-independent analyses usually are not). We have tested it on a suite of mock surveys and on (blind) mock challenges. Finally, we have applied it to state-of-the-art data. This is also now part of the pipeline of the forthcoming survey DESI.
As part of this work we have shown that to go from model-dependent to model-independent analyses, one must focus on extracting and interpreting specific physical signatures rather than performing a “global fit” of a specific model to the data. These signatures are the fingerprints of physical laws or physical processes, which in many cases can be made insensitive to a specific model's assumptions.
The exploitation of the results can be summarised as follows: some of the main deliverables are now an integral part of the official pipelines of the biggest (to date) galaxy redshift survey. The H0 global discourse would likely have been less constructive had this work not existed or had we not taken part.