Multi-functional Computational Microscopy for Quantitative Cell Tracking

Periodic Reporting for period 2 - MCMQCT (Multi-functional Computational Microscopy for Quantitative Cell Tracking)

Reporting period: 2017-08-01 to 2018-07-31

The invention of the optical microscope in the 16th century changed the human race’s perception of the world forever. The ability to observe the world at the micron (1/1000th of a millimeter) level opened a window onto fundamental phenomena in biology, physics, materials science and medicine, and the microscope became an indispensable tool with direct implications for many aspects of our lives, enabling new inventions and insights. For global health, microscopes are the standard go-to tool in clinics for the diagnosis of many types of conditions and diseases, as well as for fundamental research and scientific recreation. A few decades ago, the digital age reached microscopy with the advent of digital cameras and their connection to computers. This enabled a surge in the development of novel measurement techniques, creating a new renaissance for microscopy and enabling automation and real-time, longitudinal tracking of events as they unfold in the microscopic or even nanoscopic (millionth of a millimeter) world.
While these exciting technological advances have enabled progress in many fields, the demand for high-quality imaging data is ever increasing, influenced by our own perception of modern image quality. However, state-of-the-art optical microscopes are expensive and require skilled personnel to operate and maintain them. This can create bottlenecks in clinics and research centers, where these instruments are shared by numerous laboratories, and ultimately slows down diagnosis and discovery.
In this project, we are developing novel microscopes and computational tools that address the throughput, cost and labor requirements of biomedical imaging, building a new generation of optical microscopes with the ultimate goal of making biomedical imaging more accessible for clinics, research and recreation.
1) Reducing the number of measurements required for holographic on-chip microscopy: We have demonstrated a high-throughput (field of view >20 mm^2), compact, lens-free, holography-based computational imaging microscope that enables high-resolution imaging of clinically relevant, dense biological samples from fewer than 50% of the previously reported nominal number of measurements, using a novel reconstruction algorithm that applies a loose sparsity constraint during the iterative reconstruction.
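As a rough illustration of the approach in item 1, the following minimal Python/NumPy sketch shows a multi-height iterative reconstruction in which a loose sparsity constraint is applied to the scattered field at each iteration. The angular-spectrum propagator, illumination wavelength, pixel size and soft-threshold weight are illustrative stand-ins, not the project's actual parameters or code.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength=532e-9, dx=1.12e-6):
    """Propagate a complex field by a distance dz (scalar free-space model)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def reconstruct(holograms, z_list, n_iter=50, sparsity_weight=0.01):
    """Recover the complex object field from a reduced set of in-line holograms.

    holograms: measured intensity images, one per sample-to-sensor distance in z_list.
    The loose sparsity constraint (soft-thresholding the scattered part of the field)
    regularizes twin-image artifacts when only a few measurements are available.
    """
    obj = np.sqrt(holograms[0]).astype(np.complex128)   # amplitude guess at the sensor
    obj = angular_spectrum(obj, -z_list[0])              # back-propagate to the object plane
    for _ in range(n_iter):
        for meas, z in zip(holograms, z_list):
            sensor = angular_spectrum(obj, z)
            # enforce the measured amplitude, keep the current phase estimate
            sensor = np.sqrt(meas) * np.exp(1j * np.angle(sensor))
            obj = angular_spectrum(sensor, -z)
        # loose sparsity: softly shrink the scattered component around the unit background
        scattered = obj - 1.0
        shrunk = np.maximum(np.abs(scattered) - sparsity_weight, 0.0)
        obj = 1.0 + shrunk * np.exp(1j * np.angle(scattered))
    return obj
```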
2) Deep learning based holographic image reconstruction: Phase recovery is a classical inverse imaging problem with a wide range of applications in materials science, the life sciences and fundamental physics, using diverse radiation sources and detectors such as X-rays, electron beams and visible light. Numerical inverse methods applied to the phase retrieval problem have led to numerous scientific discoveries. We have demonstrated a deep convolutional neural network for holographic image recovery and phase retrieval using only a single hologram. We show that the results are comparable to those previously obtained with 2-4 measurements, with a reconstruction speed more than 4 times faster than previously reported. These results provide an exciting new approach to phase recovery and inverse imaging problems.
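A minimal PyTorch sketch of the kind of convolutional phase-recovery network described in item 2 is shown below. The architecture, layer sizes and tensor shapes are hypothetical stand-ins (the published network differs); the idea is only that a single back-propagated hologram, represented by its real and imaginary parts, is mapped to an artifact-free amplitude and phase.

```python
import torch
import torch.nn as nn

class HoloNet(nn.Module):
    """Toy CNN mapping a single back-propagated hologram to the recovered field."""
    def __init__(self, channels=32):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.body = nn.Sequential(
            block(2, channels),                     # input: real + imaginary parts
            block(channels, channels),
            block(channels, channels),
            nn.Conv2d(channels, 2, 3, padding=1),   # output: amplitude + phase
        )

    def forward(self, x):
        # residual connection: free-space back-propagation already gives a rough
        # estimate, so the network only learns the correction (twin-image removal)
        return x + self.body(x)

# inference on a stand-in back-propagated hologram (batch of 1, 2 channels);
# in training, targets would come from a multi-height (2-4 image) reconstruction
model = HoloNet()
recovered = model(torch.randn(1, 2, 128, 128))
```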
3) Data-driven microscope image enhancement: We have demonstrated a significant enhancement of the imaging throughput, depth-of-field and spatial resolution of an optical microscope using a deep convolutional neural network. The performance enhancement does not require any additional hardware or special microscope design. The results were demonstrated for clinically relevant tissue samples. The technique is widely applicable to other microscopy modalities and inverse imaging problems.
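The training procedure behind such data-driven enhancement can be summarized by the following minimal sketch, assuming pre-registered pairs of lower-quality input patches and higher-quality reference patches (for example, low-NA wide-field inputs paired with high-NA targets). The network, loss and hyper-parameters are illustrative only, not the project's published configuration.

```python
import torch
import torch.nn as nn

# any image-to-image CNN can stand in here; no microscope modification is needed
enhancer = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(low_quality, high_quality):
    """One supervised step: predict the enhanced image and compare to the reference."""
    optimizer.zero_grad()
    loss = loss_fn(enhancer(low_quality), high_quality)
    loss.backward()
    optimizer.step()
    return loss.item()

# stand-in batch of 64x64 grayscale patch pairs
train_step(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))
```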
4) High-fidelity image reconstruction from noisy and distorted images: Standard deconvolution microscopy techniques are only as good as the numerical approximation of the image formation model. However, spatial aberrations, spectral aberrations, noise and even sample preparation issues can differ significantly from image to image and affect the imaging process in a manner that makes numerically modelling the image formation intractable, even between different fields-of-view of the same sample taken with the same imaging system. We have demonstrated a deep network that learns the statistical transformation between mobile-phone microscope images and optimized benchtop microscope images, which represent opposite ends of the spectrum in terms of image quality. This result represents a new generation of inverse imaging techniques which, instead of trying to formulate an intractable forward model of image degradation, learn to predict the benchtop microscope image that is most statistically likely to correspond to the degraded input image.
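A key practical step in learning such a statistical transformation is building co-registered training pairs from the two instruments. The sketch below shows one hypothetical way to do this, using a simple FFT cross-correlation alignment followed by patch extraction; the registration pipeline actually used in the project may differ.

```python
import numpy as np

def register_shift(mobile, benchtop):
    """Estimate the integer (dy, dx) shift that aligns the mobile image to the benchtop one."""
    corr = np.fft.ifft2(np.fft.fft2(benchtop) * np.conj(np.fft.fft2(mobile)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # wrap shifts larger than half the image back to negative values
    dy = dy - benchtop.shape[0] if dy > benchtop.shape[0] // 2 else dy
    dx = dx - benchtop.shape[1] if dx > benchtop.shape[1] // 2 else dx
    return dy, dx

def extract_pairs(mobile, benchtop, patch=64, stride=64):
    """Yield co-registered (degraded, reference) patch pairs for supervised training."""
    dy, dx = register_shift(mobile, benchtop)
    aligned = np.roll(mobile, shift=(dy, dx), axis=(0, 1))
    for y in range(0, benchtop.shape[0] - patch + 1, stride):
        for x in range(0, benchtop.shape[1] - patch + 1, stride):
            yield aligned[y:y+patch, x:x+patch], benchtop[y:y+patch, x:x+patch]
```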
5) Enhancing microscope images using an algorithmic framework: Using deep learning tools, we have been able to decode meaningful images from novel optical instruments that substantially deviate from standard linear-shift-invariant optical components. For example, we have created a “shadow casting” structure that allows us to acquire 3D information in spectral regions where lenses and other optical components are challenging to fabricate, such as the X-ray regime. We have also demonstrated meaningful imaging through a multi-mode fiber under white-light illumination, which represents a leap over previously reported objects imaged through a multi-mode fiber in terms of their information capacity. This approach has the potential to be used in thin medical devices and for industrial inspection purposes.
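To make the multi-mode fiber result more concrete, the following minimal sketch shows the forward model that is commonly assumed for such learned decoding: the fiber is approximated by a fixed random complex transmission matrix and the camera records only the speckle intensity, so a network (like those sketched above) is trained on (speckle, object) pairs to invert the mapping. The matrix, the sizes and the monochromatic approximation are illustrative assumptions, not the project's white-light experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 32 * 32                        # object resolution (flattened)
n_speckle = 64 * 64                       # camera pixels behind the fiber

# fixed (but unknown to the reconstruction) complex transmission matrix of the fiber
T = (rng.standard_normal((n_speckle, n_pixels))
     + 1j * rng.standard_normal((n_speckle, n_pixels))) / np.sqrt(n_pixels)

def measure(obj_image):
    """Speckle intensity recorded at the distal end for one object frame."""
    field_in = obj_image.reshape(-1).astype(np.complex128)
    field_out = T @ field_in
    return (np.abs(field_out) ** 2).reshape(64, 64)   # intensity only; phase is lost

# training data for a learned decoder: pairs of (speckle, object),
# with no need to ever estimate T explicitly
obj = rng.random((32, 32))
speckle = measure(obj)
```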
The results reported above represent a paradigm shift from current state-of-the-art practices in computational microscopy. Traditionally, to increase metrics such as resolution, depth-of-field or field-of-view, one must assume that the object is static (or unchanging in some domain, under some probing) and use that assumption to encode information in the static domain in order to infer information in the desired domain. Using the tools that we have developed, we show that we can create a novel set of tools that learns the statistical relationship between the data and its enhanced version, without making assumptions about its dynamics or even about the imaging process. These results will enable a new generation of tools for imaging live samples. They will also place computational techniques at the forefront of imaging, seamlessly augmenting existing hardware in the near future and giving computation an importance equal to that of the optics.