The High Performance Computing (HPC) market is growing steadily. But this new era of computing is not just about hardware: for HPC to live up to industry’s expectations, its potential for parallel computing will have to be realised in a new generation of parallel applications. These will have to balance increased performance with reduced energy consumption – no easy task, especially considering observations that the last 5–10 % of performance improvement sought by programmers for their applications can increase energy consumption by 30 % or more. In HPC, optimisation essentially revolves around trade-offs between these two objectives.

The AllScale project provides the missing link. Its tool chain is expected to boost development productivity, portability and runtime efficiency – all this while improving the resource efficiency of small- to extreme-scale parallel systems.

“Our programming environment is based on a widely used standard (C++), which enables the expression of parallelism at a high level of abstraction. Instead of coupling different programming paradigms – which is prone to losing information and thus reduces performance – we exploit recursive parallelism to reduce global synchronisation. This enables an extremely large number of parallel tasks for applications that need to keep extreme-scale parallel architectures busy across all processors,” explains Prof Thomas Fahringer, coordinator of the project on behalf of the University of Innsbruck.

The AllScale environment can deal with hardware faults, and its underlying runtime system can be optimised for multiple objectives, including runtime, resource efficiency and energy consumption. Unlike conventional parallel programming approaches, which couple existing programming paradigms such as MPI and OpenMP, AllScale emphasises a single-source-to-anyscale development environment. It provides programmers with a unified API to express parallelism at a high level of abstraction using standard C++ templates.
Another major improvement is the use of nested recursive parallelism to reduce global communication and synchronisation, as the latter usually cause severe scalability issues on large-scale HPC systems. Finally, whereas application optimisation and self-tuning of parallel programs used to be almost exclusively based on minimising execution times, AllScale focuses on modern architectures’ need for intelligent management of energy budgets and power envelopes. “AllScale provides a multi-objective scheduling and optimisation component capable of steering execution towards satisfying dynamic, user-controlled optimisation trade-offs among runtime, energy consumption and resource efficiency,” Prof Fahringer explains.

AllScale has been tested extensively and applied to pilots in the fields of space weather, environmental hazards (such as the Deepwater Horizon oil spill) and fluid dynamics. These three applications are known for challenging HPC systems, in terms of both computational complexity and productivity. “In space weather simulation, AllScale obtained superior results for the Earth's magnetic dipole formation on shared- and distributed-memory HPC systems at large core counts,” says Prof Fahringer. “For the fluid dynamics code, we could substantially reduce the serial execution time, with shared-memory scalability similar to that of the MPI implementation. Finally, for the environmental hazard application, we demonstrated AllScale’s capability to scale to peta-scale systems. In all three applications we could provide much greater efficiency in terms of time-to-solution, thus enhancing productivity.”

Whilst the project is now completed, all pilot applications will continue to use AllScale technology to further improve performance. The AllScale toolchain has been released under an open-source licence, and several groups keep extending and further improving the software as part of new projects and collaborations across Europe.
AllScale, HPC, high performance computing, software, applications, efficiency, parallelism, parallel programming, runtime, space weather, deepwater, fluid dynamics