CORDIS - EU research results

Exascale Programming Models for Heterogeneous Systems

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications of FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Report on Current Landscape for Novel Network Hardware and Programming Models

This report surveys available network hardware and software, focusing on a gap analysis between hardware capabilities and the implementations of programming models and communication libraries.

Report on dissemination, training, exploitation and standardization activities.

This deliverable presents the overall EPiGRAM-HS dissemination, training and exploitation activities, the interaction with standardization bodies, and the collaboration with other European exascale projects and initiatives over the three years of the project.

Final design specification and prototype implementation report of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the design of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.

Best practices document for porting applications to large-scale heterogeneous systems.

This report collects experiences from porting applications to large-scale heterogeneous systems, describing the difficulties encountered and how they were addressed, and provides best practices.

Report on final implementation of APIs and runtime system for data placement, migration and access on diverse memories

This deliverable describes the implementation of the APIs and runtime system for data movement on diverse memories and reports performance results.

Report on dissemination, training, exploitation and standardization activities, and updated plan.

This deliverable presents the EPiGRAM-HS dissemination, training and exploitation activities, the interaction with standardization bodies, and the collaboration with other European exascale projects and initiatives in the first year of the project. It also includes an update of the dissemination and exploitation plan.

Design document of GPI-Space extension for distributed FPGAs and DSL for deep-learning applications

This deliverable presents the initial design of the GPI-Space extension for distributed FPGAs and of the DSL for deep-learning applications.

Experiences and best practices on programming emerging transport technologies for data movement.

This deliverable collects the experience with new and emerging transport technologies for data movement, presenting potential challenges in using them and describing how to address them.

Report on current and emerging transport technologies for data movement.

This deliverable provides an overview of the current and emerging transport technologies that are relevant to the EPiGRAM-HS software components.

Report on experiences in using MPI and GASPI on systems with low-power microprocessors.

This report describes the experience gained using MPI and GASPI on heterogeneous systems with low-power microprocessors, identifying difficulties and proposing possible extensions to MPI and GASPI.

Report on EPiGRAM-HS website including vision, plan for updates and website monitoring strategies.

The project website will be established to present the work of the project. It focuses on conveying a clear message about the project's achievements and acts as a repository of information on EPiGRAM-HS.

Report on state of the art of novel compute elements and gap analysis in MPI and GASPI.

This report surveys new and emerging low-power microprocessors, focusing on a gap analysis between hardware capabilities and the MPI and GASPI programming systems.

Update on Current Landscape for Novel Network Hardware and Programming Models

This report covers additional hardware and software technologies that have become available during the lifetime of the project, including a summary of the impact of the outputs generated by this work package.

Final design specification and prototype implementation report of APIs and runtime system for data placement, migration and access on diverse memories.

This deliverable reports on the first implementation of the APIs and runtime system, and on any changes to the design documents made to overcome implementation issues.

Report on dissemination, training, exploitation and standardization activities, and updated plan 2.

This deliverable presents the EPiGRAM-HS dissemination, training and exploitation activities, the interaction with standardization bodies, and the collaboration with other European exascale projects and initiatives. It also includes an update of the dissemination and exploitation plan.

Final design specification and prototype implementation report of GPI-Space extension for distributed FPGAs and DSL for deep-learning applications

This deliverable presents the initial design of the GPI-Space extension for distributed FPGAs and of the DSL for deep-learning applications after initial feedback from the applications. In addition, it describes the prototype implementation of the GPI-Space extension and of the DSL.

Integration of EPiGRAM-HS programming environment in applications

This report presents the parallel performance of all the EPiGRAM-HS software components in applications, analyzing scalability and the improvements achieved with respect to the implementations at the beginning of the project.

Report on initial porting of applications to large-scale heterogeneous systems

This deliverable reports on the initial effort of porting applications to distributed systems with GPUs and FPGAs. It describes the porting strategy and a testing plan.

Plan for dissemination, training, exploitation and standardization.

This deliverable presents the initial dissemination plan, including target audiences and activities, together with the training and exploitation plans. In addition, it presents the EPiGRAM-HS interaction with the MPI Forum and the GASPI Forum.

Report on application requirements and roadmap.

This deliverable describes the application requirements for the development of the EPiGRAM-HS programming environment and identifies the steps for porting the applications to large-scale heterogeneous systems. In addition, it selects the applications used to validate each component of the EPiGRAM-HS programming environment.

Initial design document of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the design of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.

Report on final implementation of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the final implementation of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.

Initial design of memory abstraction device for diverse memories.

This deliverable describes the initial design of EPiGRAM-HS memory abstraction device, comprising APIs for simplified and optimized data movement and a runtime system for automatic data placement on diverse memories.
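The data-placement idea described above can be illustrated with a small sketch. This is a hypothetical example, not the EPiGRAM-HS API: the names `MemorySpace` and `Placer` are invented here, and the policy (allocate on the fastest memory with free capacity, falling back in order) is just one plausible placement strategy for diverse memories such as HBM, DDR and NVRAM.

```python
# Hypothetical sketch of a data-placement runtime for diverse memories.
# MemorySpace and Placer are illustrative names, not the project's API.
from dataclasses import dataclass


@dataclass
class MemorySpace:
    name: str       # e.g. "hbm", "ddr", "nvram"
    capacity: int   # bytes available in this memory kind
    used: int = 0   # bytes already placed


class Placer:
    """Places buffers on the fastest memory with room, falling back in order."""

    def __init__(self, spaces):
        # spaces is ordered fastest-first; the policy tries them in turn
        self.spaces = spaces

    def place(self, nbytes):
        for s in self.spaces:
            if s.capacity - s.used >= nbytes:
                s.used += nbytes
                return s.name
        raise MemoryError("no memory space can hold the buffer")


spaces = [MemorySpace("hbm", 16), MemorySpace("ddr", 64)]
p = Placer(spaces)
print(p.place(8))   # fits in HBM
print(p.place(12))  # HBM has only 8 bytes left, so this falls back to DDR
```

A real runtime of this kind would additionally handle migration between memories and expose access hints, which the deliverable's APIs are described as covering.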

Report on final implementation of GPI-Space extension for distributed FPGAs and DSL for deep-learning applications

This report describes the final implementation of the GPI-Space extension for FPGAs and of a DSL for deep-learning applications.

Publications

The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers?

Authors: Stefano Markidis
Published in: Frontiers in Big Data, Vol. 4, Issue 5, 2021, ISSN 2624-909X
Publisher: Frontiers in Big Data
DOI: 10.3389/fdata.2021.669097

MPI collective communication through a single set of interfaces: A case for orthogonality

Authors: Jesper Larsson Träff; Sascha Hunold; Guillaume Mercier; Daniel J. Holmes
Published in: Parallel Computing, Issue 2, 2021, ISSN 0167-8191
Publisher: Elsevier BV
DOI: 10.1016/j.parco.2021.102826

RFaaS: RDMA-Enabled FaaS Platform for Serverless High-Performance Computing

Authors: Copik, Marcin; Taranov, Konstantin; Calotoiu, Alexandru; Hoefler, Torsten
Published in: Issue 10, 2022
Publisher: USENIX Annual Technical Conference

Data Movement Is All You Need: A Case Study on Optimizing Transformers

Authors: Ivanov, Andrei; Dryden, Nikoli; Ben-Nun, Tal; Li, Shigang; Hoefler, Torsten
Published in: Proceedings of Machine Learning and Systems, Issue 11, 2021
Publisher: SPCL

Communication and Timing Issues with MPI Virtualization

Authors: Alexandr Nigay; Lukas Mosimann; Timo Schneider; Torsten Hoefler
Published in: EuroMPI, Issue 10, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416317

Flare: flexible in-network allreduce

Authors: Daniele De Sensi, Salvatore Di Girolamo, Saleh Ashkboos, Shigang Li, Torsten Hoefler
Published in: 2021
Publisher: ACM
DOI: 10.5281/zenodo.4836022

A Deep Learning-Based Particle-in-Cell Method for Plasma Simulations

Authors: Xavier Aguilar; Stefano Markidis
Published in: 2021 IEEE International Conference on Cluster Computing (CLUSTER), 2021
Publisher: IEEE
DOI: 10.1109/cluster48925.2021.00103

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines

Authors: Shigang Li, Torsten Hoefler
Published in: SC '21: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2021
Publisher: ACM
DOI: 10.1145/3458817.3476145

A RISC-V in-network accelerator for flexible high-performance low-power packet processing

Authors: Salvatore Di Girolamo; Andreas Kurth; Alexandru Calotoiu; Thomas Benz; Timo Schneider; Jakub Beranek; Luca Benini; Torsten Hoefler
Published in: 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), Issue 9, 2021
Publisher: IEEE
DOI: 10.1109/isca52012.2021.00079

On the parallel I/O optimality of linear algebra kernels: near-optimal matrix factorizations

Authors: G. Kwasniewski, M. Kabić, T. Ben-Nun, A. Nikolaos Ziogas, J. Eirik Saethre, A. Gaillard, T. Schneider, M. Besta, A. Kozhevnikov, J. VandeVondele, T. Hoefler
Published in: 2021
Publisher: ACM
DOI: 10.1145/3458817.3476167

FBLAS: Streaming Linear Algebra on FPGA

Authors: De Matteis, Tiziano; Licht, Johannes de Fine; Hoefler, Torsten
Published in: SC20: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Issue 9, 2020
Publisher: IEEE
DOI: 10.1109/sc41405.2020.00063

Learning representations in Bayesian Confidence Propagation neural networks

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: 2020
Publisher: IEEE
DOI: 10.1109/ijcnn48605.2020.9207061

Why is MPI (perceived to be) so complex?: Part 1—Does strong

Authors: Daniel J. Holmes, Anthony Skjellum, Derek Schafer
Published in: EuroMPI, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416318

StreamBrain: An HPC Framework for Brain-like Neural Networks on CPUs, GPUs and FPGAs

Authors: Artur Podobas, Martin Svedin, Steven W. D. Chien, Ivy B. Peng, Naresh Balaji Ravichandran, Pawel Herman, Anders Lansner, Stefano Markidis
Published in: 2021
Publisher: ACM
DOI: 10.1145/3468044.3468052

Benchmarking the Nvidia GPU Lineage: From Early K80 to Modern A100 with Asynchronous Memory Transfers

Authors: Martin Svedin, Steven W. D. Chien, Gibson Chikafa, Niclas Jansson, Artur Podobas
Published in: 2021
Publisher: ACM
DOI: 10.1145/3468044.3468053

Mamba: Portable Array-based Abstractions for Heterogeneous High-Performance Systems

Authors: Dykes, T., Foyer, C., Richardson, H., Svedin, M., Podobas, A., Jansson, N., Markidis, S., Tate, A., McIntosh-Smith, S.
Published in: 2021
Publisher: IEEE
DOI: 10.1109/p3hpc54578.2021.00005

Spectral Element Simulations on the NEC SX-Aurora TSUBASA

Authors: Niclas Jansson
Published in: 2021
Publisher: ACM
DOI: 10.1145/3432261.3432265

Characterizing Deep-Learning I/O Workloads in TensorFlow

Authors: Steven W. D. Chien, Stefano Markidis, Chaitanya Prasad Sishtla, Luis Santos, Pawel Herman, Sai Narasimhamurthy, Erwin Laure
Published in: 2018 IEEE/ACM 3rd International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS), 2018, Pages 54-63, ISBN 978-1-7281-0192-7
Publisher: IEEE
DOI: 10.1109/PDSW-DISCS.2018.00011

TensorFlow Doing HPC

Authors: Steven W. D. Chien, Stefano Markidis, Vyacheslav Olshevsky, Yaroslav Bulatov, Erwin Laure, Jeffrey Vetter
Published in: 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2019, Pages 509-518, ISBN 978-1-7281-3510-6
Publisher: IEEE
DOI: 10.1109/IPDPSW.2019.00092

Streaming message interface - high-performance distributed memory programming on reconfigurable hardware

Authors: Tiziano De Matteis, Johannes de Fine Licht, Jakub Beránek, Torsten Hoefler
Published in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2019, Pages 1-33, ISBN 978-1-4503-6229-0
Publisher: ACM
DOI: 10.1145/3295500.3356201

Exposition, clarification, and expansion of MPI semantic terms and conventions - is a nonblocking MPI function permitted to block?

Authors: Purushotham V. Bangalore, Rolf Rabenseifner, Daniel J. Holmes, Julien Jaeger, Guillaume Mercier, Claudia Blaas-Schenner, Anthony Skjellum
Published in: Proceedings of the 26th European MPI Users' Group Meeting - EuroMPI '19, 2019, Pages 1-10, ISBN 978-1-4503-7175-9
Publisher: ACM Press
DOI: 10.1145/3343211.3343213

Performance Evaluation of Advanced Features in CUDA Unified Memory

Authors: Steven Chien, Ivy Peng, Stefano Markidis
Published in: 2019 IEEE/ACM Workshop on Memory Centric High Performance Computing (MCHPC), 2019, Pages 50-57, ISBN 978-1-7281-6007-8
Publisher: IEEE
DOI: 10.1109/mchpc49590.2019.00014

MPI Sessions: Evaluation of an Implementation in Open MPI

Authors: Nathan Hjelm, Howard Pritchard, Samuel K. Gutierrez, Daniel J. Holmes, Ralph Castain, Anthony Skjellum
Published in: 2019 IEEE International Conference on Cluster Computing (CLUSTER), 2019, Pages 1-11, ISBN 978-1-7281-4734-5
Publisher: IEEE
DOI: 10.1109/cluster.2019.8891002

sputniPIC: An Implicit Particle-in-Cell Code for Multi-GPU Systems

Authors: Steven W. D. Chien, Jonas Nylund, Gabriel Bengtsson, Ivy B. Peng, Artur Podobas, Stefano Markidis
Published in: 2020
Publisher: IEEE
DOI: 10.1109/sbac-pad49847.2020.00030

Higgs Boson Classification: Brain-inspired BCPNN Learning with StreamBrain

Authors: Svedin, Martin; Podobas, Artur; Chien, Steven W. D.; Markidis, Stefano
Published in: 2021 IEEE International Conference on Cluster Computing (CLUSTER), Issue 6, 2021
Publisher: IEEE
DOI: 10.1109/cluster48925.2021.00105

Automatic Particle Trajectory Classification in Plasma Simulations

Authors: Markidis, Stefano; Peng, Ivy; Podobas, Artur; Jongsuebchoke, Itthinat; Bengtsson, Gabriel; Herman, Pawel
Published in: Crossref, Issue 14, 2020
Publisher: IEEE
DOI: 10.1109/mlhpcai4s51975.2020.00014

Collectives and Communicators: A Case for Orthogonality (Or: How to get rid of MPI neighbor and enhance Cartesian collectives)

Authors: Jesper Larsson Träff, Sascha Hunold, Guillaume Mercier, Daniel J. Holmes
Published in: EuroMPI/USA '20: 27th European MPI Users' Group Meeting, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416319

Semi-supervised learning with Bayesian Confidence Propagation Neural Network

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: ESANN 2021 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2021
Publisher: European Symposium on Artificial Neural Networks
DOI: 10.14428/esann/2021.es2021-156

A Data-Centric Optimization Framework for Machine Learning

Authors: Oliver Rausch, Tal Ben-Nun, Nikoli Dryden, Andrei Ivanov, Shigang Li, Torsten Hoefler
Published in: 2021
Publisher: arXiv

Brain-Like Approaches to Unsupervised Learning of Hidden Representations - A Comparative Study

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: Artificial Neural Networks and Machine Learning – ICANN 2021, 2021
Publisher: Springer
DOI: 10.1007/978-3-030-86383-8_13

Posit NPB: Assessing the Precision Improvement in HPC Scientific Applications

Authors: Steven W. D. Chien, Ivy B. Peng, Stefano Markidis
Published in: Parallel Processing and Applied Mathematics - 13th International Conference, PPAM 2019, Bialystok, Poland, September 8–11, 2019, Revised Selected Papers, Part I, Issue 12043, 2020, Pages 301-310, ISBN 978-3-030-43228-7
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-43229-4_26

Multi-GPU Acceleration of the iPIC3D Implicit Particle-in-Cell Code

Authors: Chaitanya Prasad Sishtla, Steven W. D. Chien, Vyacheslav Olshevsky, Erwin Laure, Stefano Markidis
Published in: Computational Science – ICCS 2019 - 19th International Conference, Faro, Portugal, June 12–14, 2019, Proceedings, Part V, Issue 11540, 2019, Pages 612-618, ISBN 978-3-030-22749-4
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-22750-0_58
