Community Research and Development Information Service - CORDIS


DEEP-ER Result In Brief

Project ID: 610476
Funded under: FP7-ICT
Country: Germany

A supercomputer for all seasons

A new turbocharged supercomputer marks a breakthrough in computation capability for a wide range of research applications, from climate change to exploration of the human brain.
Exascale computing, in which a quintillion (10¹⁸) calculations can be performed every second, is expected to become the standard for supercomputers over the next few years. The impressive success of applying in silico (performed on computer) simulation techniques to highly complex scientific and commercial problems has increased the demand for large, fast and powerful systems that can handle the intensive workload involved.

The DEEP (Dynamical Exascale Entry Platform) project, involving 16 partners in eight European countries, has built such a high-performance computer, now operating at the Jülich Supercomputing Centre in Germany. The prototype uses the new Cluster-Booster concept, in which complex parts of a program with limited parallelism execute on the Cluster, while the Booster runs the highly parallelizable parts with high energy efficiency.
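The division of labour described above can be sketched in a few lines of code. This is a purely illustrative model, not the DEEP programming interface: the "Booster" is stood in for by a thread pool, and the function names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def setup_on_cluster(n):
    """Complex code with limited parallelism: stays on the Cluster.
    Here it just prepares the input data set."""
    return list(range(n))

def kernel(x):
    """Highly parallelisable work item: one per Booster core.
    Here it is a trivial square operation."""
    return x * x

def run(n, booster_cores=4):
    data = setup_on_cluster(n)                      # Cluster phase
    # "Offload" the parallel kernels to the Booster, modelled here
    # by a thread pool of booster_cores workers.
    with ThreadPoolExecutor(booster_cores) as booster:
        results = list(booster.map(kernel, data))   # Booster phase
    return sum(results)                             # reduce on the Cluster
```

In the real system the two phases run on physically distinct hardware (general-purpose Cluster nodes and many-core Booster nodes) and communicate over a high-speed fabric; the sketch only captures the split between low-parallelism and high-parallelism code.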

‘This prototype is a very flexible system that has a lot in common with a turbocharged engine. It will operate for several years to come and will be available to external users later in 2016,’ explains project manager Estela Suárez. ‘It achieves very high density and energy efficiency, and operates with a full system software stack and a standards-compliant programming environment engineered for performance and ease of use.’

Eleven scientific and engineering applications, representative of future Exascale computing requirements, have been carefully selected to drive hardware/software co-design and validate the Cluster-Booster concept. These involve brain simulation, climatology, radio astronomy, seismic imaging for the oil and gas industry, human exposure to electromagnetic fields, earthquake source dynamics, space weather, high-temperature superconductivity, and physics applications including lattice quantum chromodynamics (simulating the strong interaction between quarks and gluons) and computational fluid dynamics (e.g. combustion research for the transport and aerospace industry). Typical future users might thus include neuroscientists, astronomers, meteorologists, seismologists, physicists, aeroplane designers and automotive engineers.

The beauty of the DEEP prototype, a second generation of which is being developed in the sibling project DEEP-ER (DEEP-Extended Reach), is that it is physically not as big as one might imagine. The whole system is packed into less than two racks and is highly energy efficient, thanks to chiller-free direct liquid cooling: water is piped into the rack and circulates through finely engineered plates attached to the compute nodes. The cooling water then exits to a heat exchanger, allowing the extracted thermal energy to be reused, for example for heating or air conditioning elsewhere in the building. Compared with conventional air-cooled systems, the prototype delivers twice the performance in the same space. In terms of energy use, it achieves 3.5 billion floating-point operations per second (flops) per watt of power input, making it the most efficient Intel Xeon Phi based supercomputer in the world.
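A quick back-of-envelope calculation shows why this efficiency figure matters on the road to Exascale. Using only the numbers quoted in the article:

```python
# Power needed for one exaflop/s at the prototype's efficiency.
flops_per_watt = 3.5e9   # 3.5 billion flops per watt (DEEP prototype)
exaflop = 1e18           # one quintillion operations per second

power_megawatts = exaflop / flops_per_watt / 1e6
# Roughly 286 MW: far too much for a real machine, which is why
# energy efficiency, not raw speed, is the central Exascale challenge.
```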

The goal of DEEP-ER, which will run until the end of March 2017, is to update the Cluster-Booster architecture developed by DEEP and extend it with additional parallel Input/Output (I/O) capability for higher throughput. It also seeks to make the supercomputer more resilient through a multi-level checkpoint and restart mechanism that guards against data loss should any hardware fail.

The EU has contributed EUR 8.3 million to the DEEP project, which ran from 2011 to 2015, and a further EUR 6.4 million to DEEP-ER.

Related information


Supercomputer, HPC, high performance computing, DEEP, DEEP-ER, Exascale, Cluster-Booster, energy efficiency, prototype, direct liquid cooling
Record Number: 182015 / Last updated on: 2016-05-05
Domain: IT, Telecommunications