
Next Generation Machine Intelligence for Medical Image Representation and Analysis

Periodic Reporting for period 4 - MIRA (Next Generation Machine Intelligence for Medical Image Representation and Analysis)

Reporting period: 2022-08-01 to 2024-01-31

Medical imaging is a key technology in clinical routine for diagnosing disease, guiding interventions, and assessing the efficacy of treatment. Clinical experts need to undergo long training to be able to interpret medical scans, learning to spot subtle signs of disease. With the increasing complexity and volume of data, however, the interpretation of medical images and the extraction of clinically useful information push human abilities to the limit. Without support from intelligent computational tools, there is a high risk that critical patterns of disease go undetected, with possibly life-threatening consequences for patients and a huge burden on society.

The successful integration of AI for tasks such as image-based diagnostics has the potential to transform healthcare at many levels. AI could directly help to make better clinical care more widely accessible and reduce the socio-economic burden on overstretched healthcare systems. Artificial intelligence has already become a key pillar of a new global economy and a driver for creating thousands of jobs across Europe. Large European initiatives have been launched which promote the safe and ethical use of AI, and this project aimed to contribute to the success of this ambition.

This project was devoted to the development of new intelligent computational tools using the power of machine learning for automated image analysis. The objective was to build robust, reliable and trustworthy algorithms to optimally support experts when making clinical decisions by extracting accurate, quantitative measurements from medical imaging data. The ultimate goal was to deliver the next generation of machine intelligence to tackle major clinical challenges such as early detection of disease and gaining new insights into complex pathology.

The project has successfully delivered on its key objectives of developing new machine learning strategies for more robust and reliable image analysis. New disease detection models have been developed together with a novel causal analysis, enabling the detection of bias and failure cases, which is important for the safe and effective deployment of AI in clinical applications.
The research objectives of this project were formulated around four key challenges:
1) Development of intelligent algorithms that can learn from each other and exchange information in order to solve complex image analysis tasks;
2) Leveraging large-scale population data and linking images with non-imaging data such as demographics, lifestyle, genetics and disease to construct powerful statistical models;
3) Utilising these models to inform novel approaches for abnormality detection to find subtle signs of disease in medical scans;
4) Obtaining a better understanding of how black-box machine learning models arrive at decisions, in order to gain the trust of doctors, patients and policy makers for the use of AI in clinical applications.

The project team has made significant progress on all four challenges and published more than 50 scientific papers in high-impact journals, international conferences and workshops. We have developed and demonstrated several new approaches for multi-modal, multi-task learning, leveraging knowledge across different imaging domains and different clinical applications. We have shown that image segmentation can benefit from learning on multi-modal data. We have also demonstrated that solving multiple tasks jointly can improve the accuracy and efficiency of image segmentation models. The team has developed new methodology for domain generalisation, making machine learning models more robust to changes in the input data, which is important for clinical deployment.
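To illustrate the shared-representation idea behind multi-task learning, the sketch below shows a minimal PyTorch model with one shared convolutional encoder feeding two task-specific heads, one for segmentation and one for image-level classification, trained with a combined loss. This is an illustrative sketch only; the architecture, layer sizes and names are hypothetical and do not reproduce the project's actual models.

```python
# Minimal multi-task sketch (illustrative, hypothetical architecture):
# a shared convolutional encoder with a segmentation head and a
# classification head, trained jointly on both task losses.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultiTaskModel(nn.Module):
    def __init__(self, num_classes=2, num_seg_labels=2):
        super().__init__()
        self.encoder = SharedEncoder()
        # Segmentation head: per-pixel label map.
        self.seg_head = nn.Conv2d(32, num_seg_labels, 1)
        # Classification head: image-level label.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        h = self.encoder(x)
        return self.seg_head(h), self.cls_head(h)

# Joint training step on random stand-in data, just to show the interface.
model = MultiTaskModel()
x = torch.randn(4, 1, 64, 64)                  # batch of 2D image patches
seg_target = torch.randint(0, 2, (4, 64, 64))  # per-pixel labels
cls_target = torch.randint(0, 2, (4,))         # image-level labels
seg_logits, cls_logits = model(x)
loss = nn.CrossEntropyLoss()(seg_logits, seg_target) + \
       nn.CrossEntropyLoss()(cls_logits, cls_target)
loss.backward()
```

The key design choice is that both losses backpropagate through the same encoder, so features useful for one task can improve the other.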

A major achievement is the development of a novel approach to causal generative AI: deep structural causal models for high-fidelity image generation under causal assumptions. Deep structural causal models enable, for the first time, high-resolution and plausible counterfactual image generation, which has applications in robust machine learning, bias mitigation, and fairness. We have also developed new algorithms for fully automatic, highly accurate image segmentation based on deep learning. Specifically, we have demonstrated the clinical utility of our Brain Lesion Analysis and Segmentation Tool for Computed Tomography (BLAST-CT) for the quantitative assessment of traumatic brain injuries. BLAST-CT uses deep convolutional neural networks to accurately detect, identify and segment different types of bleeding in the brain, which provides important information for deciding on treatment strategies and patient management in emergency and intensive care settings. The results of our multicentre validation study provide strong evidence for the clinical value of automated image segmentation algorithms.
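The counterfactual "what-if" reasoning underlying structural causal models follows the classical abduction-action-prediction recipe. The toy sketch below illustrates that recipe on made-up scalar variables rather than images; the structural equations, values and variable names are purely hypothetical and are not taken from the project's models.

```python
# Toy abduction-action-prediction example for a structural causal model.
# Scalar stand-ins for image variables; all equations are invented.
import numpy as np

rng = np.random.default_rng(0)

# Assumed causal structure: age -> brain_volume -> lesion_size (toy model).
def f_volume(age, u_v):
    return 1500.0 - 5.0 * age + u_v

def f_lesion(volume, u_l):
    return 0.02 * (1600.0 - volume) + u_l

# 1. Observation: a single "patient".
age_obs = 70.0
u_v, u_l = rng.normal(0, 10), rng.normal(0, 0.5)
volume_obs = f_volume(age_obs, u_v)
lesion_obs = f_lesion(volume_obs, u_l)

# 2. Abduction: recover the exogenous noise consistent with the observation
#    (trivial here because each equation is invertible in its noise term).
u_v_hat = volume_obs - (1500.0 - 5.0 * age_obs)
u_l_hat = lesion_obs - 0.02 * (1600.0 - volume_obs)

# 3. Action: intervene on age (the "what-if": same patient, but aged 50).
age_cf = 50.0

# 4. Prediction: re-run the structural equations with the abducted noise.
volume_cf = f_volume(age_cf, u_v_hat)
lesion_cf = f_lesion(volume_cf, u_l_hat)

print(f"observed:       volume={volume_obs:.1f}, lesion={lesion_obs:.2f}")
print(f"counterfactual: volume={volume_cf:.1f}, lesion={lesion_cf:.2f}")
```

In the deep setting, the simple equations above are replaced by learned deep generative components operating on images, but the three-step counterfactual procedure is the same.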

In order to build trust in AI for clinical use, we need to better understand how complex machine learning models such as deep neural networks work under different settings and under which conditions they may fail. We have developed new methodology for automated quality control and prediction of failure cases that can be used during deployment. In particular, we have proposed new approaches for estimating the performance of AI models in the absence of ground truth. This is critical methodology for AI deployment, allowing us to predict when a model works and when it may fail. Another output of our research is a comprehensive stress testing approach to assess the robustness of image classification models. Stress testing can reveal weak points of a model, facilitating the improvement and auditing of AI models.
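As a rough illustration of the stress-testing idea, the sketch below perturbs a held-out set with increasing levels of Gaussian noise and a global intensity shift and records how a classifier's accuracy degrades. The model, data and perturbation choices are hypothetical placeholders, not the project's actual protocol.

```python
# Illustrative stress test: measure accuracy under controlled perturbations.
import torch

def accuracy(model, images, labels):
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

def stress_test(model, images, labels, levels=(0.0, 0.05, 0.1, 0.2)):
    """Report accuracy under increasing Gaussian noise and intensity shift."""
    report = {}
    for sigma in levels:
        noisy = images + sigma * torch.randn_like(images)
        shifted = torch.clamp(images + sigma, 0.0, 1.0)  # global brightness shift
        report[sigma] = {
            "gaussian_noise": accuracy(model, noisy, labels),
            "intensity_shift": accuracy(model, shifted, labels),
        }
    return report

# Stand-in model and random data, just to show the interface.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
images = torch.rand(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))
for sigma, scores in stress_test(model, images, labels).items():
    print(sigma, scores)
```

A sharp accuracy drop at small perturbation levels flags a weak point that warrants further investigation before clinical deployment.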
We have developed novel methodologies to address key challenges in medical imaging, including new learning strategies to leverage diverse, heterogeneous and multi-modal data, which have been shown to improve the state of the art in semantic segmentation and detection of pathology. We have demonstrated the benefit of multi-task learning and devised new methodology to exploit unlabelled data via semi- and self-supervised learning.

A significant contribution is the introduction of a causal perspective on key challenges in machine learning for medical imaging, such as the scarcity of high-quality annotated data and the mismatch between the development dataset and the target environment. We present theoretical arguments and strong evidence from real-world applications for the importance of taking the causal story behind the data into account when designing machine learning models, which can help to identify and avoid issues arising from dataset shift and sample selection bias. Our causal analysis has paved the way for a major breakthrough of causal generative AI in medical imaging. Our most recent work on deep structural causal models enables causal reasoning with high-dimensional, multi-modal data. For the first time, we have shown that it is possible to generate plausible, high-resolution counterfactual "what-if" images, with important applications in robustness, reliability, and fairness.
Figures: brain lesion detection and segmentation results; counterfactual image generation; causal perspective on predictive modelling in medical imaging.