
Unraveling effects of anisotropy from low collisionality in the intracluster medium

Periodic Reporting for period 1 - LowCollICM (Unraveling effects of anisotropy from low collisionality in the intracluster medium)

Reporting period: 2021-10-01 to 2023-09-30

Despite advances in both instrumental and computational capabilities, there remains a mismatch between observations of galaxy clusters, the most massive gravitationally bound objects in the Universe, and the theory describing them. Galaxy clusters are used as probes for cosmological models and are thus important to our fundamental understanding of the Universe. Some differences clearly originate from an incorrect treatment of microphysical processes in large-scale cosmological simulations. This applies, in particular, to the intracluster medium (ICM), which is typically treated as a fully collisional fluid with an isotropic pressure. The overall physics-related objective of the action was to determine key observational effects that stem from low collisionality and the resulting anisotropic pressure in the ICM.
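To make the notion of anisotropic pressure concrete, a schematic form from standard plasma theory (general background, not a result of the action) is the gyrotropic pressure tensor of a weakly collisional, magnetized plasma:

    \mathbf{P} = p_\perp \mathbf{I} + \left(p_\parallel - p_\perp\right) \hat{\mathbf{b}}\hat{\mathbf{b}} ,

where p_\parallel and p_\perp are the pressures parallel and perpendicular to the local magnetic field direction \hat{\mathbf{b}} = \mathbf{B}/|\mathbf{B}|. In a fully collisional fluid, p_\parallel = p_\perp and the familiar scalar pressure is recovered.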

In addition to the astrophysical implications, the changing high performance computing landscape also requires a fundamental shift in the way scientific computing applications are designed, developed, and used. While standard supercomputers in past decades used a common architecture (x86 CPUs) and were built from an ever-increasing number of homogeneous nodes, the next generation of exascale supercomputers, capable of performing 10^{18} floating point operations per second, uses heterogeneous nodes with architectures that vary between machines. The traditional scale-up and/or scale-out approach therefore does not translate to exascale supercomputers, as hardware-specific programming models differ between them. The overall technical objective was to determine how to efficiently leverage next-generation, exascale supercomputers for astrophysical simulations.
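As an illustrative sketch of what such a hardware-agnostic programming model looks like in practice, the following hypothetical kernel uses Kokkos, the C++ performance portability layer on which Parthenon (see WP2) is built. The same source compiles to multithreaded CPU code or to NVIDIA/AMD GPU kernels depending on the backend selected at build time; the kernel itself is an illustrative example, not code taken from AthenaPK.

    #include <Kokkos_Core.hpp>

    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int n = 1 << 20;
        // Views allocate in the memory space of the active backend:
        // host memory for OpenMP builds, device memory for CUDA/HIP builds.
        Kokkos::View<double*> u("u", n);
        Kokkos::View<double*> dudt("dudt", n);

        // One hardware-agnostic kernel (a hypothetical explicit update step):
        // the same line runs as OpenMP threads on CPUs or as a GPU kernel,
        // with no architecture-specific code in the application.
        Kokkos::parallel_for("update", n, KOKKOS_LAMBDA(const int i) {
          u(i) += 0.5 * dudt(i);
        });
        Kokkos::fence();
      }
      Kokkos::finalize();
      return 0;
    }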

Finally, the overall training objective was for the researcher to become more independent and prepare for the duties of a research group leader.
The work was split into five work packages: (WP1) Management, (WP2) Code development, (WP3) Idealized turbulence, (WP4) Isolated galaxy clusters, and (WP5) Dissemination & communication.

WP1 covered the broader management of the action, and the researcher was involved in all aspects of the project, including its finances. In line with acquiring management skills and working towards the objective of becoming a future research group leader, the researcher also mentored students on thesis projects, took part in several multi-day workshops on “Navigating Successfully in Academia”, and served as a reviewer for several journals and computing grants. Finally, the researcher successfully obtained multiple highly competitive computing grants as (co-)PI, for example through the DOE INCITE program and the EuroHPC Extreme Scale Access program.

In WP2 the researcher developed the open source AthenaPK and Parthenon community codes within a newly established international collaboration. Parthenon is a performance-portable mesh refinement framework that serves as the basis for multiple application codes, including AthenaPK, the magnetohydrodynamics (MHD) code newly developed as part of the action. A key aspect of the codes is their performance portability, i.e. they can run on any architecture, including GPU-based exascale supercomputers. This code development resulted in three technical publications (the main one being the Parthenon code paper, which was led by the researcher).

In WP3 the researcher conducted a suite of driven, magnetized turbulence simulations focusing on the impact of dynamical range on scale-dependent energy transfers. The key result, namely that energy transfers are constant neither with respect to scale nor with respect to the mediator (e.g. the turbulent cascade or magnetic tension), is published in the Astrophysical Journal Letters. Moreover, the researcher conducted cloud-in-wind simulations that included anisotropic thermal conduction. These simulations are representative of, for example, galaxies falling through the hot intracluster medium. The key result, namely that anisotropic thermal conduction shields the galaxy by preferentially channeling the hot wind around it, is published in the Astrophysical Journal.
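For context, scale-dependent energy transfers of this kind are typically quantified with shell-to-shell transfer functions. Written schematically (the exact definitions and normalizations in the published analysis may differ), the kinetic energy transferred from a shell Q to a shell K by the turbulent cascade and by magnetic tension reads

    \mathcal{T}_{uu}(Q,K) = -\int \mathbf{u}_K \cdot \left(\mathbf{u} \cdot \nabla\right) \mathbf{u}_Q \, \mathrm{d}V ,
    \mathcal{T}_{bu}(Q,K) = \int \mathbf{u}_K \cdot \left(\mathbf{b} \cdot \nabla\right) \mathbf{b}_Q \, \mathrm{d}V ,

where \mathbf{u}_K and \mathbf{b}_Q denote the velocity and magnetic field (in velocity units) filtered to shells K and Q in Fourier space. Likewise, anisotropic thermal conduction transports heat only along magnetic field lines, with flux \mathbf{q} = -\kappa_\parallel \hat{\mathbf{b}} \left(\hat{\mathbf{b}} \cdot \nabla T\right), which underlies the shielding effect described above.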

In WP4 the researcher developed a new setup to study isolated galaxy clusters in an international collaboration. This setup includes significantly more physics, such as a static gravitational potential, cooling, and feedback from an active galactic nucleus (AGN) jet. The first simulations have been concluded as part of an INCITE computing grant and are currently being analyzed. Multiple publications are in preparation and expected to be submitted in 2024. The attached image shows a three-dimensional rendering of the center of a galaxy cluster and the associated mesh structure in the simulation.

For dissemination of the results under WP5, the researcher presented the results 14 times at conferences and workshops, co-organized one workshop, (co-)authored 7 publications to date, and wrote a research blog. Moreover, the researcher built an interactive supercomputer model for outreach and training activities, which was used on multiple occasions, including the Observatory’s Open Day and the annual Girls’ Day. The researcher also supported outreach activities more generally, such as “Ask an astronomer” at the “Sternstunden Festival”, and guided public tours at the Hamburg Observatory on multiple occasions.
The action resulted in progress beyond the state of the art in multiple directions and fields.

First, the results on the impact of dynamical range in magnetized turbulence demonstrated the equivalence of direct numerical simulations (often used in engineering) and implicit large eddy simulations (often used in astrophysics). In addition, and more importantly, despite the comparatively large dynamical range (here, also high numerical resolution) of the simulations presented in the paper, no indication of convergence was observed even at the highest resolution. This calls into question whether an asymptotic regime in MHD turbulence exists, and, if so, what it looks like.

Second, the isolated galaxy cluster simulations focused on the detailed turbulence dynamics and energetics. This required exascale capabilities, and the resulting simulations on Frontier resolve more than an order of magnitude more of the simulation volume at high spatial resolution than current state-of-the-art simulations. Differences from current state-of-the-art simulations are immediately visible on small scales even in a high-level analysis, and upcoming publications of the new simulations are expected to provide unprecedented detail on the energy injection and redistribution in galaxy clusters.

Finally, the performance and performance-portability aspects of AthenaPK and Parthenon go beyond the state of the art, as illustrated by achieving more than 92% parallel efficiency on 73,728 GPUs in parallel on Frontier, the first TOP500 exascale supercomputer. This is also supported by two papers: one received the best paper runner-up award at CUG23 and the other a best paper award nomination at SC23. Moreover, the code was used in the acceptance testing of Frontier.
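For reference, parallel efficiency at scale is commonly defined, here in its weak-scaling form (whether the published figure uses exactly this normalization is an assumption), as

    E(N) = \frac{P(N)}{N \, P(1)} ,

where P(N) is the aggregate throughput (e.g. cell updates per second) achieved on N GPUs and P(1) is the throughput of the single-GPU baseline. E(N) = 1 corresponds to ideal scaling, so E > 0.92 on 73,728 GPUs means each GPU retains more than 92% of its baseline throughput.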
AGN jet (yellow) and cold gas (blue) in a galaxy cluster simulation and associated mesh structure.