Periodic Reporting for period 2 - greenFLASH (Green Flash, energy efficient high performance computing for real-time science)
Reporting period: 2017-04-01 to 2018-12-31
The project has several objectives in three main areas:
* Real-time HPC using accelerators and smart interconnects:
- prototype a cluster able to reach a sustained performance of 1.5 TMAC/s of computing power while processing 150 Gb/s of streaming data with a maximum jitter of 100 µs over 1 s of operation
- provide a smart interconnect solution, based on FPGA technology, compatible with existing high-performance switch solutions, relying on standard serial protocols (TCP/UDP over 10G Ethernet), and supported in mainstream, non-proprietary middleware
- complement the ecosystem of an existing integrated FPGA development environment (QuickPlay from PLDA) by providing data-handling and computational blocks, and support for several FPGA options and board designs
- assess the performance of Cholesky factorization on the prototype cluster, as well as the overall control matrix computation algorithm and the upload of the control matrix to the controller, with minimal latency and jitter introduced into the control process
* Energy-efficient platform based on FPGAs for HPC:
- prototype a main board hosting a high-density FPGA with an integrated ARM-based HPS, including a PCIe Gen3 root port as an internal interface, and using 10G Ethernet and 40G InfiniBand for the network interface and backplane
- provide support for this prototype microserver in an integrated FPGA development environment (QuickPlay), deployable in the FPGA BSP and allowing users to build a custom design on the FPGA that handles complex data flows through the internal interface (PCIe), the network interfaces and the computing blocks, together with the driver and end-user API needed to use these features
- build a small-scale cluster by interconnecting several prototype microservers through a standard network protocol (10G Ethernet)
* AO RTC prototyping and performance assessment:
- assemble a full-functionality prototype for an AO RTC, scalable to the dimensioning of the E-ELT first-light instrumentation, including a real-time core and a supervisor module
- implement a real-time simulator, designed to emulate the AO system I/O with various levels of accuracy, using existing AO sub-systems models
- fully characterize the AO RTC prototype performance in several configurations (single-conjugate and multi-conjugate AO) and propose a strategy for its on-sky integration
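The control matrix computation assessed in the first objective area can be illustrated with a least-squares wavefront reconstructor obtained via Cholesky factorization of the regularized normal equations. The sketch below is a hedged, small-scale illustration only: the dimensions, the random interaction matrix and the Tikhonov regularization parameter are placeholders, not Green Flash code, and a production RTC would use dedicated triangular solvers (e.g. GPU BLAS) rather than a general solve.

```python
import numpy as np

# Illustrative dimensions only; an E-ELT-scale AO system is far larger.
# n_slopes wavefront-sensor measurements, n_actuators deformable-mirror actuators.
n_slopes, n_actuators = 1000, 400

rng = np.random.default_rng(0)
# Stand-in for the measured interaction matrix D (slopes = D @ actuator_commands).
D = rng.standard_normal((n_slopes, n_actuators))

# Regularized normal equations: (D^T D + alpha I) R = D^T, solved via Cholesky.
alpha = 1e-3  # Tikhonov term keeps the factorization well conditioned
A = D.T @ D + alpha * np.eye(n_actuators)
L = np.linalg.cholesky(A)          # A = L @ L.T, L lower triangular

# Two triangular solves yield the control (reconstructor) matrix R: commands = R @ slopes.
# np.linalg.solve is used here for brevity; it does not exploit triangularity.
Y = np.linalg.solve(L, D.T)
R = np.linalg.solve(L.T, Y)

# Sanity check: R @ D should be close to the identity on actuator space.
residual = np.abs(R @ D - np.eye(n_actuators)).max()
print(f"max |R D - I| = {residual:.2e}")
```

The residual is nonzero only because of the regularization term; setting alpha to zero recovers the exact pseudo-inverse when D has full column rank.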
The main results can be outlined as follows:
- prototyping of several multi-accelerator solutions for an AO RTC at the E-ELT scale, based on the latest generation of GPUs and Intel many-core processors. Performance was assessed both on the real-time data pipeline, with predictable performance at the level of tens of µs, and on the supervisor module, with a throughput above 30 TFLOP/s
- development of a smart interconnect concept using QuickPlay, supporting 10G TCP/UDP and PCIe Gen3, and hosting various features including protocol handling, data processing and peer-to-peer access on the PCIe bus. About ten different FPGA boards are supported, including µXComp
- design study of two FPGA boards, µXComp and µXLink, based on the Arria 10 technology and demonstrating two main features of newer generations of FPGA technology: HMC high-bandwidth memory and the embedded ARM HPS on the SoC version of the chip, respectively. A first µXComp board was delivered and full hardware testing was conducted
- implementation of a real-time simulator solution including two operational modes, from the hard real-time rate to a decimated, high-precision rate. Dedicated interfaces have been designed and the full software stack is operational
- assessment of several solutions in the ecosystem: mathematical libraries (cuBLAS, MAGMA) and programming frameworks (a custom deterministic framework, Chameleon) for the data pipeline and the supervisor, middleware solutions (DDS, ZeroMQ, MPI) between the various sub-systems, and an integrated FPGA development environment (QuickPlay)
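The hard real-time figures quoted above (predictable pipeline performance at tens of µs, and the 100 µs jitter budget) are typically verified by timestamping each loop iteration and examining the distribution of inter-frame intervals. The following is a minimal, hedged sketch of that measurement idea only: the frame rate, iteration count and workload are placeholders, and a real RTC would additionally pin cores, use real-time scheduling and hardware timestamping.

```python
import time
import statistics

FRAME_PERIOD_S = 0.002   # placeholder 500 Hz loop rate, not the project's actual rate
N_FRAMES = 200

def dummy_pipeline_step():
    # Stand-in for the real-time data pipeline work done each frame.
    x = 0
    for i in range(1000):
        x += i * i
    return x

timestamps = []
next_deadline = time.perf_counter()
for _ in range(N_FRAMES):
    next_deadline += FRAME_PERIOD_S
    dummy_pipeline_step()
    # Busy-wait until the absolute frame deadline so overruns do not accumulate.
    while time.perf_counter() < next_deadline:
        pass
    timestamps.append(time.perf_counter())

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
jitter = max(intervals) - min(intervals)
print(f"mean period: {statistics.mean(intervals)*1e6:.1f} us, "
      f"peak-to-peak jitter: {jitter*1e6:.1f} us")
```

Using an absolute deadline (rather than sleeping a fixed duration after each step) is the standard way to keep long-run drift out of the jitter statistic: a late frame shortens the wait for the next one instead of shifting all subsequent frames.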
In terms of publications, the project has already produced three proceedings papers at the SPIE conference on astronomical telescopes in June 2016:
- Gratadour et al., for an overview of the project
- Perret et al., for the interconnect prototyping
- Ferreira et al., for the simulations and error budget
Additionally, a number of contributions are expected at the AO4ELT5 conference in June 2017 (Gratadour et al., Perret et al., Bernard et al., Ferreira et al., Doucet et al., Reeves et al., Vidal et al.).
Finally, two workshops on real-time control for AO (RTC4AO3 and RTC4AO4) were organized in Paris during the first reporting period. The partners contributed significantly to these workshops, with 4 presentations at RTC4AO3 and 9 presentations at RTC4AO4.
Beyond hardware developments, a key aspect of our research program is to promote and complement the comprehensive QuickPlay FPGA development environment, which permits the design of optimized data-flow engines, including communication and computing blocks, that can be integrated into the SoC environment. QuickPlay was designed to drastically reduce time-to-application and the cost of the development cycle, and could facilitate the penetration of FPGA technology into a domain (HPC) where its intrinsic benefits have not yet been fully exploited. Through Green Flash we are assessing this solution on a demanding application as a case study, and improving the value of the product by adding new features.
Green Flash, through the development of new solutions to drive AO systems, which sit at the core of telescope operations and are scalable to the E-ELT dimensioning, is thus expected to have a critical impact on the preliminary design studies of the E-ELT instruments. Thanks to the strong involvement of two of the partners in instrument consortia, we expect the output of Green Flash to feed the design of these instruments with critical technology down-selection and performance assessment. Moreover, this demonstration that new HPC technologies can address the needs of an extremely large scientific facility is a proof of concept that could encourage other scientific communities worldwide to use HPC as a way of enabling science.