
EXtreme-scale Analytics via Multimodal Ontology Discovery & Enhancement

Project description

Application of extreme analytic methods in healthcare

Exascale volumes of diverse healthcare data stand out in size (production in 2020 exceeded 2 000 exabytes), heterogeneity (numerous media and acquisition methods), embedded knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents models from extracting knowledge and value. The aim of the EU-funded ExaMode project is to enable easy and fast, weakly supervised knowledge discovery from exascale heterogeneous data while limiting human interaction. The project objectives include the development and release of new methods and tools for extreme-scale analytics that deliver precise predictions and support decision-making by industry and hospitals. The multimodal semantic middleware will make the management and analysis of heterogeneous data easier and faster, improving architectures for complex distributed systems and increasing the speed of data throughput and access.

Objective

Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out in the size produced (production in 2020 >2 000 exabytes), heterogeneity (many media and acquisition methods), included knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents models from extracting knowledge and value. ExaMode solves this by allowing easy and fast, weakly supervised knowledge discovery from the exascale heterogeneous data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme analytic methods and tools that are adopted in decision-making by industry and hospitals. Deep learning naturally lends itself to building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via document-level semantic networks in text and the extraction of homogeneous features from heterogeneous images. The results are fused, aligned to medical ontologies, visualised and refined. The knowledge is then applied through a semantic middleware to compress, segment and classify images, and it is exploited in decision-support and semantic knowledge-management prototypes.

ExaMode is relevant to ICT12 in several respects:
1) Challenge: it extracts knowledge and value from heterogeneous, quickly increasing data volumes.
2) Scope: the consortium develops and releases new methods and concepts for extreme-scale analytics that accelerate deep analysis, also via data compression, to deliver precise predictions, support decision-making and visualise multimodal knowledge.
3) Impact: the multimodal/multimedia semantic middleware makes heterogeneous data management and analysis easier and faster, and it improves architectures for complex distributed systems with better tools that increase the speed of data throughput and access, as shown by tests in extreme analysis by industry and in hospitals.
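The combination of report-derived labels with image features described above can be illustrated with a minimal sketch (hypothetical code, not part of the ExaMode deliverables), assuming pre-computed image and report embeddings and weak class labels mined from the diagnostic reports, e.g. by matching ontology terms:

import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    # Fuses an image embedding with a report (text) embedding and classifies the pair.
    def __init__(self, img_dim=512, txt_dim=256, n_classes=4):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, 128)   # project image features
        self.txt_proj = nn.Linear(txt_dim, 128)   # project report features
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1)
        return self.classifier(torch.relu(fused))

# Hypothetical inputs: 8 image samples paired with their diagnostic reports.
model = MultimodalFusion()
img_feat = torch.randn(8, 512)            # e.g. CNN features of histology images
txt_feat = torch.randn(8, 256)            # e.g. encoded diagnostic reports
weak_labels = torch.randint(0, 4, (8,))   # classes mined from the report text
loss = nn.CrossEntropyLoss()(model(img_feat, txt_feat), weak_labels)
loss.backward()

In such a setup the only supervision is the report-level label, so no manual annotation of the images themselves is required, which is the essence of the weakly supervised approach outlined in the objective.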

Call for proposal

H2020-ICT-2018-20

See other projects for this call

Sub call

H2020-ICT-2018-2

Coordinator

HAUTE ECOLE SPECIALISEE DE SUISSE OCCIDENTALE
Net EU contribution
€ 886 875,00
Address
Route de Moutier 14
2800 Delemont
Switzerland

Region
Schweiz/Suisse/Svizzera > Espace Mittelland > Jura
Activity type
Higher or Secondary Education Establishments
Total cost
€ 886 875,00

Participants (9)