Functional learning: From theory to application in bioimaging

Periodic Reporting for period 2 - FunLearn (Functional learning: From theory to application in bioimaging)

Reporting period: 2023-04-01 to 2024-09-30

The application of deep neural networks (DNNs) is revolutionizing numerous scientific and engineering fields, with a significant impact on imaging. Traditional image-reconstruction algorithms are now being outperformed by DNN-based techniques, both qualitatively and quantitatively. This shift opens new opportunities to extend the capabilities of existing imaging infrastructure. Specifically, DNNs can improve signal-to-noise ratios, enhance image resolution, and enable image reconstruction from fewer measurements (compressed sensing), which leads to faster imaging and reduced radiation doses for patients.
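As a point of reference, the classical sparsity-based approach that DNN-based techniques now outperform can be sketched in a few lines. The toy example below (plain NumPy, with made-up problem sizes and parameters; not any algorithm from the project) recovers a sparse signal from fewer measurements than unknowns using the iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: recover a k-sparse signal x_true in R^n
# from m < n linear measurements (compressed sensing).
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: gradient step on the data term, then soft thresholding (l1 prox).
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / (largest singular value)^2
x = np.zeros(n)
for _ in range(2000):
    x = x - step * (A.T @ (A @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse signal is recovered from m < n measurements
```

DNN-based reconstruction replaces this hand-crafted sparsity prior with a learned one, which is where the gains in speed and dose reduction come from.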

However, while these advancements are promising, caution is needed. The inner workings of DNNs remain poorly understood, and top-performing reconstruction algorithms often exhibit vulnerabilities such as reduced robustness and a propensity to generate so-called hallucinations. We attribute this behavior to their inherent instability, which can be quantified through the Lipschitz constant of the network: a measure of the degree to which small input perturbations can cause large output deviations.
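For intuition, a crude but valid upper bound on the Lipschitz constant of a ReLU network is the product of the spectral norms of its weight matrices, since ReLU is itself 1-Lipschitz. A toy illustration with random weights (not any network from the project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer ReLU network with random weights.
weights = [rng.standard_normal((20, 10)),
           rng.standard_normal((20, 20)),
           rng.standard_normal((1, 20))]

# Product of per-layer spectral norms: an upper bound on the Lipschitz constant.
lip_bound = np.prod([np.linalg.norm(W, 2) for W in weights])

def net(x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)  # ReLU layers are 1-Lipschitz
    return weights[-1] @ x

# Empirical check: the output deviation caused by a small input perturbation
# never exceeds lip_bound times the size of that perturbation.
x = rng.standard_normal(10)
dx = 1e-3 * rng.standard_normal(10)
ratio = np.linalg.norm(net(x + dx) - net(x)) / np.linalg.norm(dx)
print(ratio <= lip_bound)  # True
```

The bound is loose in general, which is precisely why controlling it without destroying performance is delicate.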

In the FunLearn project, we aim to leverage advanced machine learning to push the frontiers of bioimaging while ensuring the reliability and trustworthiness of our methods. To reach our goal, we first need to develop safer computational architectures. While the Lipschitz constant of a deep neural network can be controlled in a layer-wise fashion, such control negatively affects expressivity and, hence, performance. Since the effect worsens with depth, our solution is to rely on shallow neural architectures, which are easier to control. Moreover, we can enhance their expressivity by increasing the sophistication of the layers, for instance by including learnable activations as alternatives to the fixed units (ReLU) of conventional networks or by considering higher-dimensional trainable nonlinearities.
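Layer-wise control of the Lipschitz constant is typically achieved by constraining the spectral norm of each weight matrix. The sketch below (illustrative only, not the project's implementation) rescales each layer of a toy two-layer ReLU network so that the product of the per-layer norms certifies an end-to-end bound of 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def spectrally_normalize(W, target=1.0):
    """Rescale W so that its spectral norm (largest singular value) is <= target."""
    s = np.linalg.norm(W, 2)
    return W if s <= target else W * (target / s)

# Constrain each layer; since ReLU is 1-Lipschitz, the product of the
# per-layer spectral norms certifies the end-to-end Lipschitz bound.
weights = [rng.standard_normal((16, 8)), rng.standard_normal((1, 16))]
weights = [spectrally_normalize(W) for W in weights]

end_to_end_bound = np.prod([np.linalg.norm(W, 2) for W in weights])
print(end_to_end_bound)  # <= 1.0: the network is certifiably 1-Lipschitz
```

The expressivity cost mentioned above comes from exactly this rescaling: every layer is forced to be non-expansive, and the restriction compounds with depth, which motivates the shallow-but-richer architectures pursued in the project.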

Our main objectives are thus: (i) to develop novel, more robust learning architectures based on functional-optimization methods; and (ii) to apply these tools to address significant challenges in biomedical imaging.
I. Methodology

I.1 Functional Learning: We have investigated a variational formulation of robust learning that relies on the minimization of a novel roughness penalty (HTV, for Hessian total variation). It promotes solutions that are continuous and piecewise-linear (CPWL), like the ones produced by deep ReLU networks. The essential difference, however, is that we are seeking the most “regular” CPWL fit of the data: the one with the fewest linear pieces, in the spirit of Occam’s razor. We have developed numerical solvers to find these solutions efficiently in low dimensions, including variants for the precise control of their Lipschitz constant. The 1D version of the scheme is particularly effective and forms the core of our deep-spline framework, which enables the inclusion of trainable activations in neural architectures.
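A minimal 1D illustration of this principle (toy data and knot grid, not the project's solver): for a 1D CPWL function written as an affine term plus ReLU atoms, the second-order total variation — the sum of the absolute slope changes — equals the l1 norm of the ReLU weights, so an l1-penalized fit selects the CPWL solution with few active knots.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a CPWL function with a single true knot at t = 0.5.
t = np.linspace(0.0, 1.0, 50)
y = np.abs(t - 0.5) + 0.01 * rng.standard_normal(t.size)

# Dictionary: unpenalized affine part + ReLU atoms on a grid of candidate knots.
knots = np.linspace(0.05, 0.95, 19)
D = np.column_stack([np.ones_like(t), t] + [np.maximum(t - k, 0.0) for k in knots])

# ISTA for the l1-penalized fit; the l1 norm of the ReLU weights c[2:] equals
# the second-order total variation (sum of absolute slope changes) of the fit.
lam = 0.02
step = 1.0 / np.linalg.norm(D, 2) ** 2
c = np.zeros(D.shape[1])
for _ in range(20000):
    c = c - step * (D.T @ (D @ c - y))
    c[2:] = np.sign(c[2:]) * np.maximum(np.abs(c[2:]) - step * lam, 0.0)

active = np.count_nonzero(np.abs(c[2:]) > 1e-3)
print(active)  # only a few of the 19 candidate knots survive (Occam's razor)
```

Deep splines turn this idea around: each learnable activation in the network is itself such a sparse-knot linear spline, trained jointly with the weights.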

I.2 Bayesian Framework for Signal Reconstruction: The key here is to rely on neural networks to encode the prior distribution of the signal, either in the form of a Gibbs energy or as an explicit generative model. The Gibbs approach has enabled us to improve traditional energy-minimization methods for image reconstruction, while making sure that the outcome remains trustworthy. We have also developed a full Bayesian pipeline that relies on generative models and stochastic sampling to reconstruct images with the least statistical error.
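The Gibbs-energy route can be illustrated with a toy MAP problem in which the learned neural energy is replaced by a simple quadratic roughness prior (all sizes and parameters here are made up for illustration; the project uses trained networks in place of R):

```python
import numpy as np

rng = np.random.default_rng(0)

# MAP estimate: minimize ||H x - y||^2 + lam * R(x), with R a Gibbs-type energy.
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0          # piecewise-constant signal
H = rng.standard_normal((32, n)) / np.sqrt(32)      # underdetermined forward model
y = H @ x_true + 0.01 * rng.standard_normal(32)

def grad_R(x):
    """Gradient of R(x) = sum of squared finite differences (roughness energy)."""
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return 2.0 * g

# Plain gradient descent on the posterior energy.
lam, step = 0.1, 0.05
x = np.zeros(n)
for _ in range(20000):
    x = x - step * (2.0 * H.T @ (H @ x - y) + lam * grad_R(x))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # the prior resolves the 2x-undersampled problem reasonably well
```

Replacing `grad_R` with the gradient of a learned neural energy gives the trustworthy energy-minimization schemes described above, while the full Bayesian pipeline samples from the corresponding posterior instead of computing its mode.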

II. Applications to Biomedical Image Reconstruction

As for the practical facet of the research, we have applied our methods to a variety of imaging modalities. We have benchmarked our algorithms by comparing them with state-of-the-art reconstruction techniques for MRI and x-ray computed tomography. We have also developed specific algorithms for computational optics. These include new reconstruction methods for diffraction tomography, projection tomography, lensless imaging, and dynamic Fourier ptychography.

Our research has revealed new variational principles that support the use of specific neural architectures, in particular, adaptive simplicial splines and two-layer neural networks. In addition to improving our understanding, these principles suggest new directions of research in approximation theory. We like to draw a conceptual parallel between our functional formulation, which involves the construction of suitable Banach spaces, and the theory of reproducing-kernel Hilbert spaces, which supports the use of kernel methods in classical machine learning.

Another research highlight is our variational reconstruction scheme with learned priors (recurrent regularization networks), which is now state-of-the-art among the reconstruction methods that can be labeled as “trustworthy.” The technique is generic and applicable to a wide range of imaging problems.

Besides the refinement and extension of our machine-learning toolbox, the main thrust of our research for the remaining term will be to apply our framework to the resolution of outstanding problems in imaging. In particular, we aim to improve the capabilities of cryo-electron tomography (cryo-ET) to enable the structural dissection of cellular components in their native context. What makes cryo-ET particularly challenging is that the signal-to-noise ratio is kept deliberately low to minimize radiation damage, and that the resolution is anisotropic due to the missing wedge. To date, one must resort to sub-tomogram averaging to maximize the resolution of images, which requires the availability of multiple copies of the same structure in the same configuration, a limitation that we would prefer to mitigate.
Figure: Functional HTV-based learning versus ReLU neural nets