CORDIS - EU research results

Integrated Data Analysis Pipelines for Large-Scale Data Management, HPC, and Machine Learning

Project description

New systems for today's data-driven applications

Data management infrastructure is growing at a rapid pace. Modern data-driven applications leverage large, heterogeneous data collections to uncover interesting patterns and build robust machine learning models that deliver accurate predictions. As a result, new systems have been developed on top of traditional high-performance computing and commodity hardware cluster architectures. There is also a trend toward complex data analysis pipelines that combine several of these systems. The EU-funded DAPHNE project will define an open and extensible systems infrastructure for integrated data analysis pipelines, developing a reference implementation of language abstractions (APIs and a domain-specific language), an intermediate representation, and compilation and runtime techniques.

Objective

Modern data-driven applications leverage large, heterogeneous data collections to find interesting patterns and build robust machine learning (ML) models for accurate predictions. Large data sizes and advanced analytics spurred the development and adoption of data-parallel computation frameworks like Apache Spark or Flink as well as distributed ML systems like MLlib, TensorFlow, or PyTorch. A key observation is that these new systems share many techniques with traditional high-performance computing (HPC), and the architecture of the underlying HW clusters is converging. Yet, the programming paradigms, cluster resource management, and data formats and representations differ substantially across the data management, HPC, and ML software stacks. There is a trend, though, toward complex data analysis pipelines that combine these different systems. Examples are workflows of distributed data pre-processing, tuned HPC libraries, and dedicated ML systems, but also HPC applications that leverage ML models for more cost-effective simulation.

The major obstacles are (1) limited development productivity for integrated analysis pipelines due to different programming models and separated cluster environments, (2) unnecessary data movement overhead and underutilization due to separate, statically provisioned clusters, and (3) the lack of a common system infrastructure with good interoperability.

For these reasons, DAPHNE's overall objective is the definition of an open and extensible systems infrastructure for integrated data analysis pipelines. We aim to build a reference implementation of language abstractions (i.e. APIs and a domain-specific language), an intermediate representation, as well as compilation and runtime techniques with support for integrating and scheduling heterogeneous accelerator and storage devices. A variety of real-world, high-impact use cases, datasets, and a new benchmark will be used for qualitative and quantitative analysis compared to the state of the art.
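To make the notion of an integrated pipeline concrete, the following Python sketch is an illustration only, not DAPHNE's actual DSL or APIs; the stage boundaries, names, and data are assumptions chosen for this example. It expresses relational-style pre-processing, a dense linear-algebra kernel, and ML training in a single script, the kind of program that a common compiler and runtime could optimize end to end instead of shipping data between separate systems.

import numpy as np

rng = np.random.default_rng(0)

# Stage 1: data-management-style pre-processing -- a raw feature table,
# label derivation, and a simple filter predicate (illustrative data only).
records = rng.normal(size=(10_000, 5))
labels = (records[:, 0] + 0.5 * records[:, 1] > 0).astype(float)
mask = np.abs(records[:, 2]) < 2.0
X, y = records[mask], labels[mask]

# Stage 2: HPC-style dense linear algebra (normal equations), the part a
# tuned BLAS/HPC library would normally execute.
XtX = X.T @ X
Xty = X.T @ y

# Stage 3: ML training -- ridge regression via a closed-form solve.
lam = 1e-2
w = np.linalg.solve(XtX + lam * np.eye(X.shape[1]), Xty)

# Stage 4: scoring on the training data.
accuracy = np.mean(((X @ w) > 0.5) == y)
print(f"training accuracy: {accuracy:.3f}")

In the infrastructure the project describes, such a program would be written once against common language abstractions, and the compilation and runtime layer would handle placement and scheduling across heterogeneous accelerator and storage devices.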

Call for proposal

H2020-ICT-2018-20


Sub call

H2020-ICT-2020-1

Funding scheme

RIA - Research and Innovation action

Coordinator

KNOW-CENTER GMBH RESEARCH CENTER FOR DATA-DRIVEN BUSINESS & BIG DATA ANALYTICS
Net EU contribution
€ 737 732,50
Address
SANDGASSE 36/4
8010 Graz
Austria


Region
Südösterreich > Steiermark > Graz
Activity type
Research Organisations
Total cost
€ 737 732,50

Participants (13)