
Exascale Programming Models for Heterogeneous Systems

CORDIS provides links to public results and publications of HORIZON projects.

Links to results and publications of FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Report on Current Landscape for Novel Network Hardware and Programming Models

This report surveys currently available network hardware and software, focusing on a gap analysis between hardware capabilities on the one hand and programming models and communication library implementations on the other.

Report on dissemination, training, exploitation and standardization activities.

This deliverable presents the overall EPiGRAM-HS dissemination, training and exploitation activities, the interaction with standardization bodies, and the collaboration with other European exascale projects and initiatives over the three years of the project.

Final design specification and prototype implementation report of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the design of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.
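
For orientation, the sketch below shows the kind of one-sided communication on GPU memory that such extensions target: a device buffer exposed through an MPI RMA window and written remotely with MPI_Put. It uses only standard MPI-3 RMA and CUDA runtime calls and assumes a CUDA-aware MPI; it is not the EPiGRAM-HS extension API itself.

```c
/* Minimal sketch (not EPiGRAM-HS code): one-sided MPI_Put between GPU
 * buffers, assuming a CUDA-aware MPI that accepts device pointers in
 * RMA windows. Error checking omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;
    double *d_buf = NULL;
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    /* Expose the device buffer through an RMA window. */
    MPI_Win win;
    MPI_Win_create(d_buf, (MPI_Aint)(n * sizeof(double)), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0 && size > 1) {
        /* Write rank 0's device data directly into rank 1's GPU window. */
        MPI_Put(d_buf, n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```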

Best practices document for porting applications to large-scale heterogeneous systems.

This report collects experiences from porting applications to large-scale heterogeneous systems, describes the difficulties encountered and how they were addressed, and provides best practices.

Report on final implementation of APIs and runtime system for data placement, migration and access on diverse memories

This deliverable describes the implementation of the APIs and runtime system for data movement on complex memories and reports performance results.

Report on dissemination, training, exploitation and standardization activities, and updated plan.

This deliverable presents EPiGRAM-HS dissemination, training, exploitation activities, and interaction with standardization bodies and collaboration with other European exascale projects and initiatives in the first year of the project. This deliverable will include an update of the dissemination and exploitation plan.

Design document of GPI-space extension for distributed FPGAs and DSL for deep-learning applications

This deliverable presents the initial design of the GPI-Space extension for distributed FPGAs and of the DSL for deep-learning applications.

Experiences and best practices on programming emerging transport technologies for data movement.

This deliverable collects the experience with new and emerging transport technologies for data movement, presenting potential challenges in using them and describing how to address them.

Report on current and emerging transport technologies for data movement.

This deliverable provides an overview of the current and emerging transport technologies that are relevant to the EPiGRAM-HS software components.

Report on experiences in using MPI and GASPI on systems with low-power microprocessors.

This report documents the experience gained using MPI and GASPI on heterogeneous systems with low-power microprocessors, identifying difficulties and proposing possible extensions to MPI and GASPI.
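
As a point of reference, the sketch below shows the basic GASPI one-sided pattern that such experiments typically exercise: a segment created on every rank and written remotely with gaspi_write. It is a generic illustration of the standard GASPI API, not code taken from the report.

```c
/* Minimal sketch (not code from the report): the basic GASPI one-sided
 * pattern - create a segment on every rank and write it into a
 * neighbour's memory. Error checking omitted. */
#include <GASPI.h>

int main(int argc, char *argv[])
{
    gaspi_proc_init(GASPI_BLOCK);

    gaspi_rank_t rank, nprocs;
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nprocs);

    const gaspi_size_t seg_size = 1024 * sizeof(double);
    gaspi_segment_create(0, seg_size, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    if (nprocs > 1 && rank == 0) {
        /* One-sided write of local segment 0 into rank 1's segment 0. */
        gaspi_write(0, 0, 1, 0, 0, seg_size, 0, GASPI_BLOCK);
        gaspi_wait(0, GASPI_BLOCK);   /* flush queue 0 */
    }

    gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
    gaspi_proc_term(GASPI_BLOCK);
    return 0;
}
```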

Report on EPiGRAM-HS website including vision, plan for updates and website monitoring strategies.

The project website will be established to showcase the work of the project. It will focus on giving a clear message of the project's achievements and act as a repository of information on EPiGRAM-HS.

Report on state of the art of novel compute elements and gap analysis in MPI and GASPI.

This report surveys new and emerging low-power microprocessors, focusing on a gap analysis between hardware capabilities and the MPI and GASPI programming systems.

Update on Current Landscape for Novel Network Hardware and Programming Models

This report covers additional hardware and software technologies that have become available during the lifetime of the project, including a summary of the impact of the outputs generated by this work package.

Final design specification and prototype implementation report of APIs and runtime system for data placement, migration and access on diverse memories.

This deliverable reports on the first implementation of the APIs and runtime system, together with any changes to the design documents made to overcome implementation issues.

Report on dissemination, training, exploitation and standardization activities, and updated plan 2.

This deliverable presents the EPiGRAM-HS dissemination, training and exploitation activities, the interaction with standardization bodies, and the collaboration with other European exascale projects and initiatives. It includes an update of the dissemination and exploitation plan.

Final design specification and prototype implementation report of GPI-Space extension for distributed FPGAs and DSL for deep-learning applications

This deliverable presents the design of the GPI-Space extension for distributed FPGAs and of the DSL for deep-learning applications after initial feedback from applications. In addition, it describes the prototype implementation of the GPI-Space extension and of the DSL.

Integration of EPiGRAM-HS programming environment in applications

This report presents the parallel performance of all the EPiGRAM-HS software components in applications, analyzing scalability and the improvement achieved with respect to the implementations at the beginning of the project.

Report on initial porting of applications to large-scale heterogeneous systems

This deliverable reports on the initial effort of porting applications to distributed systems with GPUs and FPGAs. It describes the porting strategy and a testing plan.

Plan for dissemination, training, exploitation and standardization.

This deliverable presents the initial dissemination plan, including target audiences and activities, as well as the training and exploitation plan. In addition, it presents the EPiGRAM-HS interaction with the MPI Forum and the GASPI Forum.

Report on application requirements and roadmap.

This deliverable describes the application requirements for the development of the EPiGRAM-HS programming environment and identifies the development steps needed to port the applications to large-scale heterogeneous systems. In addition, it selects the applications used to validate each EPiGRAM-HS programming environment component.

Initial design document of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the design of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.

Report on final implementation of MPI and GPI extensions for heterogeneous systems with distributed GPUs and FPGAs

This report presents the design of MPI RMA operations for distributed GPUs, MPI planned collectives on heterogeneous systems, and GPI for distributed FPGAs.

Initial design of memory abstraction device for diverse memories.

This deliverable describes the initial design of EPiGRAM-HS memory abstraction device, comprising APIs for simplified and optimized data movement and a runtime system for automatic data placement on diverse memories.
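
Purely as an illustration of the problem space, the sketch below shows how data placement across device and host memories can be expressed today with standard CUDA Unified Memory hints; the EPiGRAM-HS memory abstraction APIs described in this deliverable are a separate design and are not reproduced here.

```c
/* Illustration only (not the EPiGRAM-HS API): expressing data placement
 * on diverse memories with plain CUDA Unified Memory hints. */
#include <cuda_runtime.h>

int main(void)
{
    const size_t n = 1 << 20;
    float *data = NULL;
    cudaMallocManaged((void **)&data, n * sizeof(float), cudaMemAttachGlobal);

    int device = 0;
    cudaGetDevice(&device);

    /* Hint that the data should preferably live in GPU memory ... */
    cudaMemAdvise(data, n * sizeof(float),
                  cudaMemAdviseSetPreferredLocation, device);
    /* ... and migrate it there ahead of use. */
    cudaMemPrefetchAsync(data, n * sizeof(float), device, 0);

    /* Later, pull the data back to host DRAM. */
    cudaMemPrefetchAsync(data, n * sizeof(float), cudaCpuDeviceId, 0);
    cudaDeviceSynchronize();

    cudaFree(data);
    return 0;
}
```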

Report on final implementation of GPI-Space extension for distributed FPGAs and DSL for deep-learning applications

This report describes the final implementation of the GPI-Space extension for FPGAs and of the DSL for deep-learning applications.

Publications

The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers?

Authors: Stefano Markidis
Published in: Frontiers in Big Data, Vol 4 (2021), Issue 5, 2021, ISSN 2624-909X
Publisher: Frontiers in Big Data
DOI: 10.3389/fdata.2021.669097

MPI collective communication through a single set of interfaces: A case for orthogonality

Authors: Jesper Larsson Träff; Sascha Hunold; Guillaume Mercier; Daniel J. Holmes
Published in: Parallel Computing, Issue 2, 2021, ISSN 0167-8191
Publisher: Elsevier BV
DOI: 10.1016/j.parco.2021.102826

RFaaS: RDMA-Enabled FaaS Platform for Serverless High-Performance Computing

Authors: Copik, Marcin; Taranov, Konstantin; Calotoiu, Alexandru; Hoefler, Torsten
Published in: Issue 10, 2022
Publisher: USENIX Annual Technical Conference

Data Movement Is All You Need: A Case Study on Optimizing Transformers

Authors: Ivanov, Andrei; Dryden, Nikoli; Ben-Nun, Tal; Li, Shigang; Hoefler, Torsten
Published in: Proceedings of Machine Learning and Systems, Issue 11, 2021
Publisher: SPCL

Communication and Timing Issues with MPI Virtualization.

Authors: Alexandr Nigay; Lukas Mosimann; Timo Schneider; Torsten Hoefler
Published in: EuroMPI, Issue 10, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416317

Flare: flexible in-network allreduce

Authors: Daniele De Sensi, Salvatore Di Girolamo, Saleh Ashkboos, Shigang Li, Torsten Hoefler
Published in: 2021
Publisher: ACM
DOI: 10.5281/zenodo.4836022

A Deep Learning-Based Particle-in-Cell Method for Plasma Simulations

Authors: Xavier Aguilar; Stefano Markidis
Published in: 2021 IEEE International Conference on Cluster Computing (CLUSTER), 2021
Publisher: IEEE
DOI: 10.1109/cluster48925.2021.00103

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines

Authors: Shigang Li, Torsten Hoefler
Published in: SC '21: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2021
Publisher: ACM
DOI: 10.1145/3458817.3476145

A RISC-V in-network accelerator for flexible high-performance low-power packet processing

Authors: Salvatore Di Girolamo; Andreas Kurth; Alexandru Calotoiu; Thomas Benz; Timo Schneider; Jakub Beranek; Luca Benini; Torsten Hoefler
Published in: 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), Issue 9, 2021
Publisher: IEEE
DOI: 10.1109/isca52012.2021.00079

On the parallel I/O optimality of linear algebra kernels: near-optimal matrix factorizations

Authors: G. Kwasniewski, M. Kabić, T. Ben-Nun, A. Nikolaos Ziogas, J. Eirik Saethre, A. Gaillard, T. Schneider, M. Besta, A. Kozhevnikov, J. VandeVondele, T. Hoefler
Published in: 2021
Publisher: ACM
DOI: 10.1145/3458817.3476167

FBLAS: Streaming Linear Algebra on FPGA

Authors: De Matteis, Tiziano; Licht, Johannes de Fine; Hoefler, Torsten
Published in: SC20: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Issue 9, 2020
Publisher: IEEE
DOI: 10.1109/sc41405.2020.00063

Learning representations in Bayesian Confidence Propagation neural networks

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: 2020
Publisher: IEEE
DOI: 10.1109/ijcnn48605.2020.9207061

Why is MPI (perceived to be) so complex?: Part 1—Does strong

Authors: Daniel J. Holmes, Anthony Skjellum, Derek Schafer
Published in: EuroMPI, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416318

StreamBrain: An HPC Framework for Brain-like Neural Networks on CPUs, GPUs and FPGAs

Authors: Artur Podobas, Martin Svedin, Steven W. D. Chien, Ivy B. Peng, Naresh Balaji Ravichandran, Pawel Herman, Anders Lansner, Stefano Markidis
Published in: 2021
Publisher: ACM
DOI: 10.1145/3468044.3468052

Benchmarking the Nvidia GPU Lineage: From Early K80 to Modern A100 with Asynchronous Memory Transfers

Authors: Martin Svedin, Steven W. D. Chien, Gibson Chikafa, Niclas Jansson, Artur Podobas
Published in: 2021
Publisher: ACM
DOI: 10.1145/3468044.3468053

Mamba: Portable Array-based Abstractions for Heterogeneous High-Performance Systems

Authors: Dykes, T., Foyer, C., Richardson, H., Svedin, M., Podobas, A., Jansson, N., Markidis, S., Tate, A., McIntosh-Smith, S.
Published in: 2021
Publisher: IEEE
DOI: 10.1109/p3hpc54578.2021.00005

Spectral Element Simulations on the NEC SX-Aurora TSUBASA

Authors: Niclas Jansson
Published in: 2021
Publisher: ACM
DOI: 10.1145/3432261.3432265

Characterizing Deep-Learning I/O Workloads in TensorFlow

Authors: Steven W. D. Chien, Stefano Markidis, Chaitanya Prasad Sishtla, Luis Santos, Pawel Herman, Sai Narasimhamurthy, Erwin Laure
Published in: 2018 IEEE/ACM 3rd International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS), 2018, Page(s) 54-63, ISBN 978-1-7281-0192-7
Publisher: IEEE
DOI: 10.1109/PDSW-DISCS.2018.00011

TensorFlow Doing HPC

Authors: Steven W. D. Chien, Stefano Markidis, Vyacheslav Olshevsky, Yaroslav Bulatov, Erwin Laure, Jeffrey Vetter
Published in: 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2019, Page(s) 509-518, ISBN 978-1-7281-3510-6
Publisher: IEEE
DOI: 10.1109/IPDPSW.2019.00092

Streaming message interface - high-performance distributed memory programming on reconfigurable hardware

Authors: Tiziano De Matteis, Johannes de Fine Licht, Jakub Beránek, Torsten Hoefler
Published in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2019, Page(s) 1-33, ISBN 978-1-4503-6229-0
Publisher: ACM
DOI: 10.1145/3295500.3356201

Exposition, clarification, and expansion of MPI semantic terms and conventions - is a nonblocking MPI function permitted to block?

Authors: Purushotham V. Bangalore, Rolf Rabenseifner, Daniel J. Holmes, Julien Jaeger, Guillaume Mercier, Claudia Blaas-Schenner, Anthony Skjellum
Published in: Proceedings of the 26th European MPI Users' Group Meeting - EuroMPI '19, 2019, Page(s) 1-10, ISBN 978-1-4503-7175-9
Publisher: ACM Press
DOI: 10.1145/3343211.3343213

Performance Evaluation of Advanced Features in CUDA Unified Memory

Authors: Steven Chien, Ivy Peng, Stefano Markidis
Published in: 2019 IEEE/ACM Workshop on Memory Centric High Performance Computing (MCHPC), 2019, Page(s) 50-57, ISBN 978-1-7281-6007-8
Publisher: IEEE
DOI: 10.1109/mchpc49590.2019.00014

MPI Sessions: Evaluation of an Implementation in Open MPI

Authors: Nathan Hjelm, Howard Pritchard, Samuel K. Gutierrez, Daniel J. Holmes, Ralph Castain, Anthony Skjellum
Published in: 2019 IEEE International Conference on Cluster Computing (CLUSTER), 2019, Page(s) 1-11, ISBN 978-1-7281-4734-5
Publisher: IEEE
DOI: 10.1109/cluster.2019.8891002

sputniPIC: An Implicit Particle-in-Cell Code for Multi-GPU Systems

Authors: Steven W. D. Chien, Jonas Nylund, Gabriel Bengtsson, Ivy B. Peng, Artur Podobas, Stefano Markidis
Published in: 2020
Publisher: IEEE
DOI: 10.1109/sbac-pad49847.2020.00030

Higgs Boson Classification: Brain-inspired BCPNN Learning with StreamBrain

Authors: Svedin, Martin; Podobas, Artur; Chien, Steven W. D.; Markidis, Stefano
Published in: 2021 IEEE International Conference on Cluster Computing (CLUSTER), Issue 6, 2021
Publisher: IEEE
DOI: 10.1109/cluster48925.2021.00105

Automatic Particle Trajectory Classification in Plasma Simulations

Authors: Markidis, Stefano; Peng, Ivy; Podobas, Artur; Jongsuebchoke, Itthinat; Bengtsson, Gabriel; Herman, Pawel
Published in: Crossref, Issue 14, 2020
Publisher: IEEE
DOI: 10.1109/mlhpcai4s51975.2020.00014

Collectives and Communicators: A Case for Orthogonality (Or: How to get rid of MPI neighbor and enhance Cartesian collectives)

Authors: Jesper Larsson Träff, Sascha Hunold, Guillaume Mercier, Daniel J. Holmes
Published in: EuroMPI/USA '20: 27th European MPI Users' Group Meeting, 2020
Publisher: ACM
DOI: 10.1145/3416315.3416319

Semi-supervised learning with Bayesian Confidence Propagation Neural Network

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: ESANN 2021 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2021
Publisher: European Symposium on Artificial Neural Networks
DOI: 10.14428/esann/2021.es2021-156

A Data-Centric Optimization Framework for Machine Learning

Authors: Oliver Rausch, Tal Ben-Nun, Nikoli Dryden, Andrei Ivanov, Shigang Li, Torsten Hoefler
Published in: 2021
Publisher: arXiv

Brain-Like Approaches to Unsupervised Learning of Hidden Representations - A Comparative Study

Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
Published in: Artificial Neural Networks and Machine Learning – ICANN 2021, 2021
Publisher: Springer
DOI: 10.1007/978-3-030-86383-8_13

Posit NPB: Assessing the Precision Improvement in HPC Scientific Applications

Authors: Steven W. D. Chien, Ivy B. Peng, Stefano Markidis
Published in: Parallel Processing and Applied Mathematics - 13th International Conference, PPAM 2019, Bialystok, Poland, September 8–11, 2019, Revised Selected Papers, Part I, Issue 12043, 2020, Page(s) 301-310, ISBN 978-3-030-43228-7
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-43229-4_26

Multi-GPU Acceleration of the iPIC3D Implicit Particle-in-Cell Code

Authors: Chaitanya Prasad Sishtla, Steven W. D. Chien, Vyacheslav Olshevsky, Erwin Laure, Stefano Markidis
Published in: Computational Science – ICCS 2019 - 19th International Conference, Faro, Portugal, June 12–14, 2019, Proceedings, Part V, Issue 11540, 2019, Page(s) 612-618, ISBN 978-3-030-22749-4
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-22750-0_58
