Computational Biophotonics for Endoscopic Cancer Diagnosis and Therapy

Periodic Reporting for period 4 - COMBIOSCOPY (Computational Biophotonics for Endoscopic Cancer Diagnosis and Therapy)

Reporting period: 2020-01-01 to 2020-12-31

Replacing traditional open surgery with minimally-invasive interventions represents one of the most important challenges in modern healthcare. Minimally-invasive procedures provide numerous advantages over open surgery, including reduced surgical trauma, less need for pain medication, earlier convalescence, better cosmetic results, shorter hospital stays and lower costs. Furthermore, they are often the only promising treatment option for patients who are not eligible for surgery, for example due to advanced age or poor overall medical condition. However, conventional medical imaging equipment used in minimally-invasive procedures (e.g. endoscopes, laparoscopes) often offers poor tissue differentiation (e.g. healthy vs (pre)malignant or perfused vs non-perfused tissue), which results in inadequate treatment, long procedure times and high complication rates. Fusion of interventional imaging data with diagnostic data has shown promise in overcoming some of these issues but suffers from the fact that tissue dynamics (e.g. hemodynamic changes resulting in ischemia) cannot be taken into account. Given these challenges, the goal of the COMBIOSCOPY project was to develop new, safe and cost-effective concepts for interventional imaging that are particularly well suited to supporting endoscopic interventions.

Conclusion of the action
In the scope of the COMBIOSCOPY project, we developed new imaging concepts that (1) provide real-time discrimination of local tissue with a high contrast-to-noise ratio, (2) are radiation-free, so that neither patients nor staff are exposed to harmful ionizing radiation, and (3) feature a compact, low-cost design for a wide range of applications and broad acceptance. Our methodology leverages recent spectral imaging techniques, including multispectral optical and optoacoustic imaging, as well as modern machine learning techniques to enable augmented-reality visualization of a range of important morphological and functional parameters invisible to the naked eye. New methods for uncertainty analysis ensure high error awareness and robustness when the approach is applied in a clinical setting. According to a multistage validation process involving ongoing in-human studies, the methodology holds great potential for clinical translation.
The imaging concept we pursued within the COMBIOSCOPY project rests on four pillars: the development of (1) cutting-edge spectral imaging hardware, (2) innovative methods for live monitoring of oxygenation and hemodynamic changes, (3) novel methods for automatic tissue classification and, finally, (4) a framework for uncertainty handling. These four principal pillars were rounded off by an unscheduled fifth topic related to meta science and validation. Our contributions to all five topics are detailed in the following:

Spectral imaging hardware
When light enters biological tissue, it undergoes complex interactions, including reflection, absorption, and scattering. In this project we exploited the fact that different tissue components have unique optical scattering and absorption properties. Specifically, we built upon two spectral imaging techniques that make use of multiple bands across the electromagnetic spectrum:
Multispectral imaging (MSI) is a passive technique based on 2D reflection images that requires no contact with the object under investigation. It captures the reflectance spectrum of the tissue - i.e. the reflectance for a range of different wavelengths of light - over an entire surface, which encodes structural and functional surface and subsurface information otherwise invisible to the naked eye. In the scope of the COMBIOSCOPY project, we developed the first multispectral laparoscopic imaging setup, featuring a compact and lightweight laparoscope and the possibility to complement the conventional endoscopic view of the patient with relevant morphological and functional information in real time. We further explored approaches that are compatible with flexible medical devices, as required in colonoscopy procedures, for example. Finally, we proposed new methods for automatically selecting the most relevant wavelengths for a given application, enabling (near) real-time acquisition speed for MSI.
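As an illustration of the wavelength selection idea, the sketch below ranks the bands of two synthetic tissue classes by a simple Fisher-style separability score. The band count, noise levels and scoring criterion are assumptions made for this example only; the project's actual selection methods are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic reflectance spectra for two tissue types over 16 bands.
# Band 5 is made discriminative on purpose; all values are illustrative.
n_pixels, n_bands = 500, 16
tissue_a = rng.normal(0.5, 0.05, size=(n_pixels, n_bands))
tissue_b = rng.normal(0.5, 0.05, size=(n_pixels, n_bands))
tissue_b[:, 5] += 0.2                     # inject contrast at one band

def fisher_scores(x_a, x_b):
    """Rank bands by squared mean difference over pooled variance."""
    d = (x_a.mean(axis=0) - x_b.mean(axis=0)) ** 2
    s = x_a.var(axis=0) + x_b.var(axis=0)
    return d / s

scores = fisher_scores(tissue_a, tissue_b)
best_band = int(np.argmax(scores))        # the injected band ranks first
```

A real acquisition system would restrict itself to the few top-ranked bands, trading spectral completeness for frame rate.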
The depth sensitivity of MSI is at most several millimeters, so it can only reveal tissue characteristics on or close to the visible surface. Photoacoustic imaging (PAI) addresses this limited depth range by measuring optical properties via acoustic signals (a ‘light in – sound out’ approach). In this method, the probe requires contact with the tissue, which is illuminated with light pulses; the absorption of photons heats the tissue, and the resulting thermoelastic expansion generates pressure waves that can be detected by broadband ultrasonic transducers and converted into absorption images. In this manner, tomographic images of optical properties can be produced at high resolution and with a depth range of up to several centimeters. Leveraging this principle, we developed a hybrid imaging device that simultaneously reconstructs functional (PAI) and structural (ultrasound) information. To provide additional contextual information, we further developed a novel approach to compounding individual image slices into a full 3D image, which does not rely on external tracking devices and can thus be smoothly integrated into clinical workflows.
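The ‘light in – sound out’ principle can be sketched in one dimension: the initial photoacoustic pressure at a given depth is proportional to the Grüneisen parameter, the local absorption coefficient and the light fluence, which here is assumed to decay exponentially with depth (Beer-Lambert). All parameter values below are illustrative, not tissue-calibrated.

```python
import numpy as np

# Illustrative optical parameters for a 1D photoacoustic sketch.
gamma = 0.2                           # Grueneisen parameter (dimensionless)
mu_a = 0.5                            # absorption coefficient, 1/cm
mu_eff = 1.0                          # effective attenuation, 1/cm
phi0 = 1.0                            # surface fluence, J/cm^2

z = np.linspace(0.0, 3.0, 100)        # depth in cm
fluence = phi0 * np.exp(-mu_eff * z)  # exponential light attenuation
p0 = gamma * mu_a * fluence           # initial pressure distribution
```

The exponential fall-off of `p0` with depth is why fluence correction matters when converting photoacoustic signals back into quantitative absorption maps.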

Live oxygenation monitoring
The primary challenge addressed in this project was to convert spectral imaging data into clinically relevant information. In this context, we focused on recovering blood volume and tissue oxygenation from MSI and photoacoustic data in order to monitor hemodynamic changes in a spatially resolved manner. While previous methods for functional parameter estimation were based either on simple linear methods or on complex model-based approaches exclusively suited for offline processing, our novel approach combines the high accuracy of model-based approaches with the speed and robustness of modern machine learning methods. In this context, the COMBIOSCOPY project pioneered the idea of training machine learning algorithms on physics-based simulations (patents pending). While the simulations in the training data cover a broad range of possible patient and acquisition conditions, methods for automatic light source estimation and domain adaptation enable the method to be adapted to a specific clinical setting. According to our validation studies, the award-winning and patented concepts are well suited for monitoring hemodynamic changes in various tissues, including colon, kidney and brain. An ongoing patient study investigates automatic detection of ischemia in minimally invasive surgery.
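For context, the simple linear baseline the text contrasts with can be sketched as least-squares unmixing of an absorption spectrum into oxy- and deoxyhemoglobin contributions, from which oxygen saturation (sO2) follows. The extinction coefficients below are illustrative placeholders, not tabulated values.

```python
import numpy as np

# Illustrative (not tabulated) extinction coefficients for HbO2 and Hb
# at four wavelengths; a real application would use published spectra.
wavelengths = np.array([660, 760, 850, 900])   # nm
eps_hbo2 = np.array([0.32, 0.60, 1.06, 1.20])  # arbitrary units
eps_hb = np.array([3.20, 1.60, 0.76, 0.80])
E = np.stack([eps_hbo2, eps_hb], axis=1)       # (n_wavelengths, 2)

def unmix_so2(absorption):
    """Least-squares unmixing into HbO2/Hb concentrations; returns sO2."""
    c, *_ = np.linalg.lstsq(E, absorption, rcond=None)
    c = np.clip(c, 0.0, None)                  # concentrations are non-negative
    return c[0] / (c[0] + c[1])

# Synthetic measurement: 80% oxygenation, small additive noise.
true_so2 = 0.8
rng = np.random.default_rng(0)
spectrum = E @ np.array([true_so2, 1.0 - true_so2])
noisy = spectrum + rng.normal(0.0, 0.01, size=spectrum.shape)
est = unmix_so2(noisy)
```

Such linear unmixing is fast but ignores fluence and scattering effects, which is precisely the gap the simulation-trained learning approach described above is meant to close.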

Tissue Classification
Accurate and robust local tissue classification can be of great benefit in many clinical applications where exact delineation of organs, tumors and other anatomical structures is important for diagnostic or therapeutic purposes. While conventional intraoperative imaging modalities such as ultrasound and laparoscopic imaging are limited in their ability to differentiate these structures correctly, we developed novel machine learning-based tissue classification approaches specifically designed for multispectral and photoacoustic imaging data. As a foundation for this work, we assembled a large database of more than 1,000 spectral images featuring 20 different organs and various pathologies. Based on these data, we were the first to show that (1) the variability in spectral reflectance is primarily explained by the tissue type rather than by the individual from whom the measurements are taken or the specific acquisition conditions (paper about to be submitted) and that (2) tissue classification with MSI is substantially more robust than the conventional approach relying on standard RGB video images.

Uncertainty Handling
A key limiting factor for translating research results into clinical practice is often not the accuracy of a method but its robustness. We thus put a particular focus on the uncertainty awareness of our methodology. While much research has addressed uncertainty related to the potential intrinsic randomness of the data generation process (so-called aleatoric uncertainty) as well as to insufficient training data (so-called epistemic uncertainty), a type of uncertainty that has received very little attention in the literature is the potential inherent ambiguity of the problem. Converting the measured spectrum in a pixel to a single estimate of oxygenation (a so-called point estimate), for example, neglects the fact that multiple plausible solutions may exist. Consequently, such estimates cannot generally be trusted to be close to the ground truth. We addressed this problem by designing neural network architectures that are inherently invertible and can thus generate full probability distributions for the predicted values rather than mere point estimates. Leveraging this new architecture, we showed that the uncertainty in a measurement as well as the ambiguity of a problem depend crucially on the measurement device and pose and can thus be compensated for by adapting device design and measurement protocols. We further demonstrated that invertible network architectures can be leveraged to determine whether an algorithm is qualified to process a given new data set, based on the data it was trained on.
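The move from point estimates to full distributions can be illustrated without invertible networks: the sketch below evaluates a grid-based posterior over oxygenation given a noisy three-band spectrum, under an assumed Gaussian noise model and illustrative extinction coefficients.

```python
import numpy as np

# Illustrative extinction coefficients (arbitrary units) at three bands.
eps_hbo2 = np.array([0.32, 0.60, 1.06])
eps_hb = np.array([3.20, 1.60, 0.76])
sigma = 0.05                                   # assumed noise level

def spectrum(so2):
    """Forward model: absorption spectrum for oxygenation so2."""
    return so2 * eps_hbo2 + (1.0 - so2) * eps_hb

# Noisy synthetic measurement at 70% oxygenation.
rng = np.random.default_rng(1)
measured = spectrum(0.7) + rng.normal(0.0, sigma, size=3)

# Grid-based posterior: Gaussian likelihood over candidate sO2 values.
grid = np.linspace(0.0, 1.0, 201)
residuals = np.array([measured - spectrum(s) for s in grid])
log_lik = -0.5 * (residuals ** 2).sum(axis=1) / sigma ** 2
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()                   # normalized distribution

map_so2 = grid[posterior.argmax()]             # most plausible value
spread = np.sqrt((posterior * (grid - map_so2) ** 2).sum())  # uncertainty
```

Where the forward model is ambiguous, such a posterior becomes broad or multimodal, which is exactly the information a point estimate discards; invertible networks make this distribution available without an explicit grid search.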

Meta Science
An unexpected contribution of the project relates to the validation of biomedical image analysis algorithms. For comparing different methods, international competitions (‘challenges’) providing common benchmarking datasets have become increasingly common. In the scope of this project, we hypothesized a great discrepancy between the importance attributed to challenges and their actual quality. Although not envisioned in the original proposal, we found this hypothesis sufficiently alarming to act on it. In a multi-center study, we revealed major flaws in common practice, which has already led to changes in the way the most important biomedical image analysis society (MICCAI) conducts its challenges today. To further contribute to good scientific practice in the specific field of computational biophotonics, several members of the COMBIOSCOPY team are active in the International Photoacoustics Standardisation Consortium (IPASC), partially as work package leads.
Within the COMBIOSCOPY grant, my group developed a novel machine learning-based approach to converting high-dimensional spectral data into intuitive information that physicians can use for real-time clinical decision making. The main obstacles were the lack of analysis methods featuring both high accuracy and speed, and the absence of a quantitative reference for the measurements - traditionally a key requirement for applying machine learning techniques. To overcome this hurdle and enable functional spectral imaging for the first time in this field, we pursued the approach of training machine learning models on highly accurate simulations. With our award-winning and patent-pending concepts, we are now able to pioneer live perfusion monitoring and automatic tissue classification in various clinical settings.
Figure: Machine learning-based real-time quantification of tissue oxygenation in laparoscopic surgery.