Periodic Reporting for period 1 - ClOThIlde (The Cluster Observations and Theory Intersection: Providing selection functions and scaling relations to set constraints on the physics of the accelerating universe.)
Reporting period: 2016-01-01 to 2017-12-31
In order to measure and understand Dark Energy, we need to statistically sample large populations of astronomical objects across cosmic time and use them to put constraints on a cosmological model of the Universe. In the coming years, three surveys are planned to start mapping a large portion of the sky, collecting data in the optical (J-PAS and LSST) and the infrared (Euclid). The main objective of our project, ClOThIlde, is to obtain constraints on the composition of the Universe using the galaxy clusters detected in these upcoming surveys. This goal requires very high-precision statistical measurements, so a reliable analysis pipeline needs to be designed. Hence, we aim to provide the necessary input for each step of the pipeline, ensuring robust and optimal final constraints on the Dark Energy cosmological model.
(1) We have first designed realistic simulations that mimic with high precision the properties of galaxies in the optical and infrared. To do this, we have used PhotReal, a technique developed to improve the resemblance between simulations and real data, and constructed three mock catalogues, one per survey of interest, each representing that survey's properties very accurately.
(2) We have then run the Bayesian Cluster Finder, an algorithm that detects galaxy clusters with high completeness and purity rates, on these simulations. As a result, we have obtained a well-selected sample of galaxy clusters statistically indistinguishable from those we will observe once the real data arrive.
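As a minimal illustration of the two metrics named above (not the Bayesian Cluster Finder code itself), completeness and purity can be computed from the number of detections matched to true clusters in the mock; the counts below are invented for the example:

```python
# Illustrative sketch, assuming hypothetical counts from a mock run:
n_true = 1000      # true clusters present in the mock catalogue
n_detected = 900   # clusters reported by the finder
n_matched = 850    # detections matched to a true cluster

completeness = n_matched / n_true      # fraction of true clusters recovered
purity = n_matched / n_detected        # fraction of detections that are real
print(f"completeness = {completeness:.2f}, purity = {purity:.2f}")
```

A finder is tuned so that both numbers stay high: lowering the detection threshold raises completeness but lowers purity, and vice versa.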
(3) After that, we have used these catalogues of clusters detected on the simulations to compute the selection function and the mass-observable relation for each survey. The selection function has been obtained by mapping the region of redshift and halo-mass space where the detections are reliable, yielding a description of the clusters each survey will detect and their expected counts. As for the mass-observable relation, we have modelled it as a power law with a redshift dependence and obtained the best-fit parameters for the simulated detections using regression models.
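A redshift-dependent power law of this kind becomes linear in log space, so its parameters can be recovered by ordinary least squares. The sketch below is illustrative only: the functional form ln(lambda) = ln(A) + B ln(M/M_pivot) + gamma ln(1+z), the pivot mass, and the synthetic data are all assumptions, not the project's actual relation or code:

```python
import numpy as np

rng = np.random.default_rng(42)
M_PIVOT = 3e14  # pivot halo mass [M_sun]; assumed value for illustration

# Synthetic "detections": halo masses, redshifts, and a noisy log-observable
true_A, true_B, true_gamma = 30.0, 1.1, 0.5
mass = 10 ** rng.uniform(13.8, 15.2, size=500)   # halo masses
z = rng.uniform(0.1, 1.0, size=500)              # redshifts
ln_obs = (np.log(true_A) + true_B * np.log(mass / M_PIVOT)
          + true_gamma * np.log(1 + z)
          + rng.normal(0.0, 0.2, size=500))      # intrinsic scatter

# Design matrix for the linear model in log space, then least squares
X = np.column_stack([np.ones_like(z),
                     np.log(mass / M_PIVOT),
                     np.log(1 + z)])
coef, *_ = np.linalg.lstsq(X, ln_obs, rcond=None)
lnA_fit, B_fit, gamma_fit = coef
print(f"A = {np.exp(lnA_fit):.2f}, B = {B_fit:.2f}, gamma = {gamma_fit:.2f}")
```

Fitting in log space keeps the regression linear; the intrinsic scatter of the relation then appears as the residual dispersion around the fit.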
(4) Finally, we have obtained cosmological constraints by combining the previous elements for each survey. To do this, we have produced theoretical cluster counts for two different models of the Universe (Dark Energy and Modified Gravity), introducing the selection function and the mass-observable relation and accounting for both the Poisson noise of the counts and the cosmic variance. We have then fit this model using both a Fisher Matrix analysis and an MCMC approach, and finally obtained the cosmological forecast.
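The core of such an MCMC step can be sketched with a toy model: binned cluster counts compared to a prediction through a Poisson log-likelihood, sampled with a Metropolis walker. Everything below is an illustrative assumption (a one-parameter amplitude model, invented bins, no cosmic-variance term), not the project's pipeline, which varies cosmological parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

z_edges = np.linspace(0.1, 1.0, 10)        # redshift bin edges (assumed)
model_shape = np.exp(-z_edges[:-1])        # toy shape of counts per bin

def expected_counts(amplitude):
    """Predicted counts per redshift bin for a given amplitude."""
    return amplitude * 100.0 * model_shape

# Synthetic "observed" counts drawn from the model at amplitude = 1.2
observed = rng.poisson(expected_counts(1.2))

def log_like(amplitude):
    """Poisson log-likelihood of the counts, up to a data-only constant."""
    if amplitude <= 0:
        return -np.inf
    mu = expected_counts(amplitude)
    return np.sum(observed * np.log(mu) - mu)

# Metropolis sampler over the single amplitude parameter
chain, current = [], 1.0
logp = log_like(current)
for _ in range(5000):
    proposal = current + rng.normal(0.0, 0.05)
    logp_new = log_like(proposal)
    if np.log(rng.uniform()) < logp_new - logp:   # accept/reject step
        current, logp = proposal, logp_new
    chain.append(current)

posterior = np.array(chain[1000:])                # discard burn-in
print(f"amplitude = {posterior.mean():.2f} +/- {posterior.std():.2f}")
```

A Fisher Matrix forecast approximates the same posterior with a Gaussian around the fiducial model, which is why running both, as done in the project, is a useful cross-check.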
Throughout the project, we have used all available channels to bring its progress to the scientific community and the general public. Regarding dissemination to the scientific community, we have published our results in several peer-reviewed journals, visited different institutions and presented our work at a variety of scientific conferences. As for dissemination to the general public, we have participated in different outreach activities, including writing articles, giving presentations and introductory courses, and holding discussions with students and the wider public. Moreover, a webpage has been created to show the progress of the project to everyone interested.
In general, the project has exceeded our expectations in many ways. First, we have included the concurrent optical survey (LSST) in the comparison, which was not part of the original plan. Moreover, we have started collaborations with the X-ray and weak-lensing communities to complement and improve the obtained constraints on the Universe and to extend this analysis to other wavelengths. In addition, we have applied a Markov Chain Monte Carlo method to obtain constraints from the cluster counts, on top of the already planned Fisher Matrix analysis, greatly enriching the quality of our results. Finally, the pipeline we have built can be readily transferred to other academic and industrial sectors (e.g. Artificial Intelligence, Machine Learning), where we expect its application to have a large impact.