Project description
Insuring supercomputers against faults
Scientific, engineering and industrial communities rely heavily on supercomputers and their ability to perform efficiently. With their increased processing power and memory, next-generation (exascale) supercomputers are predicted to encounter at least two faults per minute, so it is imperative to find simple and effective solutions to enhance fault tolerance that won’t require high levels of expertise. The EU-funded FTHPC project aims to resolve the issue of fault tolerance by using recent advances in error correcting codes and short probabilistically checkable proofs. The success of this endeavour will eliminate the need for fault-tolerance expertise and make exascale computing accessible to all algorithm designers and programmers.
Objective
Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ensuring fault tolerance in high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort but severely degrading performance (e.g. checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general-purpose fault resilience.
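To make the checkpoint-restart trade-off concrete, here is a minimal toy sketch (not the project's code; the loop, file name, and checkpoint interval are all illustrative assumptions): the program periodically persists its state to stable storage, and after a crash it resumes from the last checkpoint rather than from the beginning.

```python
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint file

def run(total_steps=10):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            step, acc = pickle.load(f)
    else:
        step, acc = 0, 0
    while step < total_steps:
        acc += step          # one unit of (stand-in) computation
        step += 1
        # Periodically persist state so a crash loses at most one interval.
        if step % 3 == 0:
            with open(CKPT + ".tmp", "wb") as f:
                pickle.dump((step, acc), f)
            os.replace(CKPT + ".tmp", CKPT)  # atomic swap of checkpoint file
    return acc

print(run())  # sum of 0..9 = 45
```

The cost that makes this approach "general purpose but slow" is visible in the sketch: every checkpoint serializes and writes the full program state, regardless of what the computation actually does.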
Efficient general purpose solutions (e.g. via error correcting codes) revolutionized memory and communication devices more than two decades ago, enabling programmers to effectively disregard the highly likely memory and communication errors. The time has come for a similar paradigm shift in the computing regime. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
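As a reminder of how error correcting codes let hardware mask bit flips transparently, here is a classic Hamming(7,4) sketch (a textbook example, not one of the codes proposed in the project): four data bits are protected by three parity bits, and any single flipped bit can be located and corrected.

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits so that any
# single bit flip in the 7-bit codeword can be located and corrected.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit codeword (with at most one flipped bit) -> corrected 4 data bits."""
    c = list(c)
    # Each parity check covers the 1-based positions whose index has the
    # corresponding bit set; together the checks spell out the error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = no error, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                  # simulate a single bit flip in memory
assert decode(word) == data   # the flip is located and corrected
```

The point of the analogy: just as this correction happens below the programmer's level of concern in memory and communication hardware, the project's goal is fault tolerance for computation itself that requires no per-application expertise.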
Fields of science
- medical and health sciences > clinical medicine > oncology
- engineering and technology > electrical engineering, electronic engineering, information engineering > electronic engineering > computer hardware > supercomputers
- natural sciences > earth and related environmental sciences > atmospheric sciences > climatology > climatic changes
- natural sciences > biological sciences > genetics > genomes
Funding Scheme
ERC-COG - Consolidator Grant
Host institution
91904 Jerusalem
Israel