
Energy Oriented Center of Excellence: toward exascale for energy

Periodic Reporting for period 1 - EoCoE-II (Energy Oriented Center of Excellence: toward exascale for energy)

Reporting period: 2019-01-01 to 2020-06-30

Europe is undergoing a transition in energy generation and supply infrastructure. Rapid adoption of solar and wind power generation by EU countries has demonstrated that renewable energy can supply significant fractions of energy needs.
It is becoming apparent that the future energy ecosystem will rely on digitization to drive innovations in production and storage technologies, mitigate power source variability and manage distribution. This sparked the idea behind the EoCoE consortium: world-leading research teams from the low-carbon energy domains of meteorology, materials, wind, hydrology and fusion were linked through a multi-disciplinary platform of high-performance computing (HPC) and numerical mathematics to create a network of experts in computational energy science.
EoCoE applies computational methods to accelerate the transition to the production, storage and management of clean, decarbonised energy. It is anchored in the HPC community and targets research institutes, key commercial players and SMEs that develop energy-relevant numerical models and enable them to run on exascale supercomputers. This multidisciplinary effort harnesses innovations in computer science and mathematical algorithms within a co-design approach to overcome performance bottlenecks and anticipate future HPC hardware developments.
The consortium is a network of expertise in energy science, scientific computing and HPC. New modelling capabilities in selected energy sectors are created at unprecedented scale, demonstrating the potential benefits to the energy industry, such as accelerated design of storage devices, high-resolution wind and solar forecasting for the power grid and quantitative understanding of plasma core-edge interactions in ITER-scale tokamaks.
Main outcomes of EoCoE-II:
• Empower the scientific challenges with efficient applications targeting future exascale architectures;
• Design, develop and test new algorithms, numerical methods and software tools on next-generation architectures;
• Provide realistic, large-scale test beds to promote the use of HPC/simulation in the energy field;
• Build a sustainable structure to carry the long-term vision of EoCoE.
WP1 – Scientific challenges

Wind:
- Implementation of wall-modelled Large Eddy Simulation in Alya
- Development of an implicit sliding-mesh coupling for large-scale industrial applications
- Inclusion of thermal coupling, Coriolis forces, canopy effects and the actuator disc model in the Alya code
- Alya benchmark

Meteorology:
- Construction of probability distribution functions from large ensembles, improving the predictability of cloud and wind
- Development of statistical non-parametric calibration: asynchronous I/O and compressibility; integration of Melissa-DA
- Final calibration for ESIAS-met
- Two-step optimization of the solar prediction system to calibrate the cloud optical thickness (COT) input
- Ensemble scoring via cloud-motion and flow structure identification

Materials:
- Interfacing between Wannier90 and the libNEGF code
- Improvement of the scalability of libNEGF towards exascale computations
- Benchmarking with quantum Monte Carlo (QMC) reference calculations
- Development of the O(N) kinetic Monte Carlo code KMC-FMM to model hopping-type electron conduction in α-NPD, simulating systems of over 500k molecules (see the sketch after this list)
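
The following is a minimal, hypothetical sketch of the rejection-free kinetic Monte Carlo step that underlies such hopping-transport simulations; it is not the KMC-FMM code, and the 1D chain, disorder parameters and Miller-Abrahams rate form are illustrative assumptions only.

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick an event with probability
    proportional to its rate and draw the waiting time from an
    exponential distribution governed by the total rate."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return event, dt

# Hypothetical toy model: one charge hopping along a 1D chain of molecular
# sites with Gaussian energetic disorder (not the actual alpha-NPD setup).
rng = np.random.default_rng(1)
n_sites = 100
energies = rng.normal(0.0, 0.1, n_sites)  # site energies in eV (assumed disorder)
kT = 0.025                                # thermal energy in eV (~room temperature)
nu0 = 1.0e12                              # attempt frequency in 1/s (assumed)

site, t = n_sites // 2, 0.0
for _ in range(10_000):
    neighbours = np.array([(site - 1) % n_sites, (site + 1) % n_sites])
    dE = energies[neighbours] - energies[site]
    rates = nu0 * np.exp(-np.maximum(dE, 0.0) / kT)  # Miller-Abrahams-type rates
    hop, dt = kmc_step(rates, rng)
    site, t = int(neighbours[hop]), t + dt

print(f"charge ended at site {site} after t = {t:.3e} s")
```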

Water:
- Comparison of the total water storage anomaly from the ParFlow and CLM models with the GRACE satellite dataset over the PRUDENCE regions
- Application of HYPERstreamHS, a hydrological model refactored with dual-layer parallelization
- Use of polynomial chaos (OpenTURNS) and Gaussian process regression to reduce the cost of 2D hydraulic model simulations (see the surrogate sketch after this list)
- Improved geothermal modelling
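
As an illustration of the surrogate-modelling approach mentioned above, the sketch below fits both a polynomial chaos expansion and a Gaussian process (Kriging) metamodel with OpenTURNS; the analytic two-input function and its parameter ranges are placeholders standing in for the expensive 2D hydraulic model, not the project's actual setup.

```python
import openturns as ot

# Placeholder for the expensive 2D hydraulic model: a cheap analytic function
# of two uncertain inputs (purely illustrative, not the project's model).
model = ot.SymbolicFunction(["q", "ks"], ["(q / (ks + 1))^0.6"])

# Assumed uncertain inputs: discharge q and a friction coefficient ks.
inputs = ot.ComposedDistribution([ot.Uniform(500.0, 3000.0),
                                  ot.Uniform(15.0, 60.0)])

# Small design of experiments: run the "expensive" model a limited number of times.
X = inputs.getSample(80)
Y = model(X)

# Polynomial chaos surrogate.
chaos = ot.FunctionalChaosAlgorithm(X, Y, inputs)
chaos.run()
chaos_metamodel = chaos.getResult().getMetaModel()

# Gaussian process (Kriging) surrogate.
covariance = ot.SquaredExponential([1.0] * 2)
basis = ot.ConstantBasisFactory(2).build()
kriging = ot.KrigingAlgorithm(X, Y, covariance, basis)
kriging.run()
kriging_metamodel = kriging.getResult().getMetaModel()

# Either surrogate can now replace the hydraulic solver in large Monte Carlo studies.
test = inputs.getSample(5)
print(chaos_metamodel(test))
print(kriging_metamodel(test))
```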

Fusion:
- Extension from electrostatic to electromagnetic turbulence, inclusion of plasma-wall interactions, and treatment of ITER-like non-circular geometry
- Study of the physical mechanisms of plasma-wall interactions
- Comparative study of the efficiency of numerical strategies for adaptive mesh refinement, choice of an AMR library (AMReX), and development of a polar multigrid solver
- Refinement of the GyselaX code and execution of very large production simulations

WP2 – Programming models:
- Creation of a network of experts including EoCoE members, POP CoE advisors, tool providers and PRACE trainers
- Collaboration with the Alya team on code optimization and geometric mesh partitioning
- Code performance analysis of the EURAD-IM code
- Performance analysis and bottleneck identification for the libNEGF code through the JUBE automatic workflow
- Collaboration with the Hydrology team on the ParFlow code; GPU implementation based on CUDA; domain decomposition with P4est and MPI communications on AMR
- PDI integration in SHEMAT-Suite
- Collaboration with the Fusion team on the GyselaX prototype
- Collaboration with the supercomputing facility CINES in France: tests and development on ARM prototypes
- Collaboration with RIKEN R-CCS in Japan: tests and optimization on Fujitsu clusters and the Fugaku pre-exascale supercomputer (A64FX ARM)

WP3 – Scalable solvers:
- Integration of PSBLAS/AMG4PSBLAS Krylov solvers and preconditioners to improve ParFlow's solver capability on hybrid (MPI-CUDA) programming models
- Extensions of PSBLAS and AMG4PSBLAS to run on hybrid architectures
- Interfacing AGMG with PETSc to improve the SHEMAT-Suite linear solver capability (see the solver sketch after this list)
- Collaboration with the Fusion team to develop multigrid solvers for the gyrokinetic Poisson equation in GyselaX
- Collaboration with the Wind team to integrate different sparse linear solvers and improve Alya's solver capability on hybrid (MPI-CUDA) programming models.
Solvers integrated so far: MUMPS, PaStiX, MaPhyS, AGMG and PSBLAS/AMG4PSBLAS
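
For illustration, the sketch below shows the general pattern of solving a sparse linear system with a Krylov method and an algebraic multigrid preconditioner through petsc4py; it uses PETSc's built-in GAMG preconditioner and a toy 1D Poisson matrix as stand-ins, not the AGMG interface or the actual application matrices.

```python
from petsc4py import PETSc

# Assemble a small 1D Poisson matrix as a stand-in for an application system.
n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecRight()
x = A.createVecRight()
b.set(1.0)

# Conjugate-gradient Krylov solver with an algebraic multigrid preconditioner
# (PETSc's built-in GAMG here; the project interfaces external packages such as AGMG).
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("cg")
ksp.getPC().setType("gamg")
ksp.setTolerances(rtol=1e-8)
ksp.solve(b, x)

print("iterations:", ksp.getIterationNumber(), "residual norm:", ksp.getResidualNorm())
```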
 
WP4 – IO & Data Flow:
- PDI improvements: API standardization and unification, adaptation of the HDF5 support to the new API routines and addition of more HDF5 features, for Gysela and SHEMAT-Suite
- Rework of FTI plugin
- New PDI plugin to wrap the NetCDF4 library
- New “User-code” and “pycall” plugins to support individual function calls and in-situ approaches (FlowVR)
- Melissa PDI plugin development

WP5 – Ensemble runs:
- Melissa-DA architecture specification
- Integration of the PDAF parallel data assimilation engine into Melissa
- Toy use case (Lorenz equations) and an advanced one (ParFlow); see the data assimilation sketch after this list
- Experiments on supercomputers with ParFlow simulations
- Identification of use cases with code and datasets for the Weather and Hydrology applications
- ESIAS/EURAD-IM: WRF integration into Melissa-DA
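
The sketch below illustrates the kind of ensemble data assimilation exercised by the Lorenz toy use case, using a stochastic ensemble Kalman filter on the Lorenz-63 equations in plain NumPy; it is not the Melissa-DA/PDAF implementation, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (the toy 'simulation')."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

rng = np.random.default_rng(0)
n_members, n_steps, obs_err = 20, 500, 1.0

truth = np.array([1.0, 1.0, 1.0])
ensemble = truth + rng.normal(0.0, 2.0, size=(n_members, 3))  # perturbed initial members

for step in range(n_steps):
    truth = lorenz63_step(truth)
    ensemble = np.array([lorenz63_step(m) for m in ensemble])
    if step % 25 == 0:  # assimilate a noisy observation of the full state
        obs = truth + rng.normal(0.0, obs_err, size=3)
        mean = ensemble.mean(axis=0)
        anomalies = ensemble - mean
        P = anomalies.T @ anomalies / (n_members - 1)  # forecast error covariance
        R = obs_err ** 2 * np.eye(3)                   # observation error covariance
        K = P @ np.linalg.inv(P + R)                   # Kalman gain (H = identity)
        perturbed_obs = obs + rng.normal(0.0, obs_err, size=(n_members, 3))
        ensemble = ensemble + (perturbed_obs - ensemble) @ K.T

print("final ensemble-mean error:", np.linalg.norm(ensemble.mean(axis=0) - truth))
```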
 
WP6 – Dissemination & Networking:
- Core dissemination tools
- Exploitation strategy
- Collaboration with EERA
- Training & capacity building
- EoCoE website
The nature of the project means all EoCoE scientific work goes beyond the state of the art.
The main breakthrough the project is on track to achieve is getting its flagship codes to the exascale. Through the Exascale Co-Design Group, we initiated cooperation between WPs whenever mission-critical application design decisions are needed, to foster the co-design process between the Scientific Challenges and the technical WPs and to monitor the European Processor Initiative hardware roadmap and the EuroHPC prototypes.
This strategy has paid dividends, and the results are extremely positive. We can highlight these examples:
- Alya scales up to 100 000 cores
- ESIAS/EURAD-IM scales up to 262 144 cores
- GyselaX scales up to 98 304 cores
- Gysela is being ported to IRENE-AMD and Fugaku-ARM
- ParFlow runs on the JUWELS AMD nodes with great results and is part of the GPU Booster programme