Project description
Fault-tolerant supercomputers
Science, engineering, and industry depend heavily on supercomputers and on their ability to operate efficiently. Next-generation (exascale) supercomputers, with their increased processing power and memory, are expected to suffer at least two faults per minute, so it is essential to find simple and effective solutions for improving fault tolerance that do not require a high level of specialisation. The EU-funded FTHPC project aims to solve the fault-tolerance problem by exploiting recent advances in error-correcting codes and in short probabilistically checkable proofs. Success in this work will eliminate the need for fault-tolerance expertise and make exascale computing accessible to all algorithm designers and programmers.
Objective
Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures) resulting from steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ensuring fault tolerance for high-performance computing is not new, and it has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort but severely degrading performance (e.g. checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general-purpose fault resilience.
Efficient general-purpose solutions (e.g. via error-correcting codes) revolutionized memory and communication devices over two decades ago, enabling programmers to effectively disregard the very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regime. I argue that exciting recent advances in error-correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
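To illustrate the kind of error-correcting code the paragraph above alludes to (not the project's own construction, which targets computation rather than storage), here is a minimal sketch of the classic Hamming(7,4) code: four data bits are encoded into seven bits, and any single bit flip, of the sort a fault-prone memory might introduce, can be located and corrected from the parity syndrome.

```python
def encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

codeword = encode([1, 0, 1, 1])
codeword[4] ^= 1                    # simulate a single bit flip
print(decode(codeword))             # the original data is recovered
```

The programmer using such a memory never sees the flip; the project's thesis is that computation itself can one day enjoy the same transparency.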
Scientific field
- medical and health sciences > clinical medicine > oncology
- engineering and technology > electrical engineering, electronic engineering, information engineering > electronic engineering > computer hardware > supercomputers
- natural sciences > earth and related environmental sciences > atmospheric sciences > climatology > climatic changes
- natural sciences > biological sciences > genetics > genomes
Funding scheme
ERC-COG - Consolidator Grant
Host institution
91904 Jerusalem
Israel