Project description
Application of extreme analytic methods in healthcare
Exascale volumes of diverse healthcare data stand out in size (2020 production exceeded 2,000 exabytes), heterogeneity (numerous media and acquisition methods), embedded knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large volumes of labelled, annotated data; where such annotations are missing, models cannot extract knowledge and value. The aim of the EU-funded EXAMODE project is to enable easy and fast, weakly supervised knowledge discovery from exascale heterogeneous data with limited human interaction. The project objectives include the development and release of new methods and tools for extreme-scale analytics that deliver precise predictions and support decision making by industry and hospitals. The multimodal semantic middleware will make the management and analysis of heterogeneous data easier and faster, improving architectures for complex distributed systems and increasing data throughput and access speed.
Objective
Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out in the size produced (2020 production > 2,000 exabytes), heterogeneity (many media and acquisition methods), included knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents models from extracting knowledge and value when such annotations are unavailable. EXAMODE solves this by allowing easy and fast, weakly supervised knowledge discovery from the exascale heterogeneous data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme analytic methods and tools that are adopted in decision making by industry and hospitals. Deep learning naturally lends itself to building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via document-level semantic networks in text and the extraction of homogeneous features from heterogeneous images. The results are fused, aligned to medical ontologies, visualized and refined. The knowledge is then applied through a semantic middleware to compress, segment and classify images, and is exploited in decision-support and semantic knowledge-management prototypes. EXAMODE is relevant to ICT12 in several respects:
1) Challenge: it extracts knowledge and value from heterogeneous, quickly increasing data volumes.
2) Scope: the consortium develops and releases new methods and concepts for extreme-scale analytics that accelerate deep analysis (also via data compression), deliver precise predictions, support decision making and visualize multimodal knowledge.
3) Impact: the multimodal/multimedia semantic middleware makes heterogeneous data management and analysis easier and faster; it improves architectures for complex distributed systems with better tools that increase data throughput and access speed, as shown by tests of extreme analysis in industry and hospitals.
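To illustrate what "weakly supervised" means in this setting, the following minimal Python sketch derives image-level labels automatically from free-text diagnostic reports and maps them to a tiny mock ontology, so that an image model could be trained without manual annotation. All report texts, concept identifiers and rules are invented for illustration; they are not ExaMode components or data.

# Illustrative sketch only: a toy rule-based weak labeller that turns free-text
# diagnostic reports into slide-level labels aligned to a (hypothetical) ontology.
import re

# Hypothetical mini-ontology: surface forms -> made-up concept identifiers.
ONTOLOGY = {
    "adenocarcinoma": "CONCEPT:ADENOCARCINOMA",
    "dysplasia": "CONCEPT:DYSPLASIA",
    "no evidence of malignancy": "CONCEPT:NEGATIVE",
}

def weak_label(report_text: str) -> set:
    """Return the set of ontology concepts mentioned in a diagnostic report."""
    text = report_text.lower()
    found = set()
    for surface_form, concept_id in ONTOLOGY.items():
        if re.search(r"\b" + re.escape(surface_form) + r"\b", text):
            found.add(concept_id)
    return found or {"UNLABELLED"}

# Toy reports standing in for the hospital partners' data.
reports = {
    "slide_001": "Colon biopsy showing moderately differentiated adenocarcinoma.",
    "slide_002": "No evidence of malignancy in the examined sections.",
}

# The resulting (slide id -> concepts) pairs would act as weak supervision
# for training an image classifier on the corresponding whole-slide images.
for slide_id, text in reports.items():
    print(slide_id, weak_label(text))

In the project's actual pipeline the labels would come from document-level semantic networks rather than keyword rules, but the principle is the same: annotations are extracted from existing reports instead of being produced by human annotators.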
Fields of science
Keywords
Programme(s)
Funding Scheme
RIA - Research and Innovation action
Coordinator
2800 Delemont
Switzerland