CORDIS - EU research results

Computational Light fields IMaging

Periodic Reporting for period 4 - CLIM (Computational Light fields IMaging)

Reporting period: 2021-03-01 to 2022-02-28

Light field cameras capture light rays as they interact with the scene. The flow of rays yields a rich description of the scene, enabling advanced image creation, 3D scene geometry estimation, and scene reconstruction. Applications include photography, augmented reality, autonomous vehicles, and surveillance, but also microscopy, medical imaging, and particle image velocimetry. However, the path to deployment of light field technology remains difficult. One barrier is the limitation of capturing devices in terms of spatial resolution, angular resolution, or noise. Another is the huge amount of high-dimensional data that light fields produce, with implications for storage and processing time. The development of efficient methods for scene analysis, depth estimation, scene flow estimation, and editing from light fields is a further challenge for technology adoption.

The objective was to address these barriers by leveraging advances in image processing, computer vision, and machine learning, and to lay algorithmic foundations for the light field processing chain. A first challenge was the design of camera models to capture light fields with good spatio-angular resolution; this involved algorithmic developments in a compressive sensing framework with deep learning reconstruction. A second challenge was the high data dimensionality: data processing becomes harder as dimensionality increases, hence the need for tools for dimensionality reduction or low-dimensional embedding. These models, together with scene analysis algorithms, have proven to be key components of light field compression architectures. A third challenge was the technological limitations of capturing devices, which affect the light field's spatio-angular resolution and noise level.

The project methodology has evolved from signal processing to machine learning, from hand-crafted to learned signal priors. The project has thus contributed to leveraging advances in deep learning across the light field processing chain, from compressive acquisition with novel camera designs and deep reconstruction, through low rank and neural radiance field representation and compression, to restoration, including computer vision problems such as view synthesis and scene flow estimation. The project has also contributed novel methodologies in the underlying fields of scene modeling and machine learning for inverse problems.
The project has addressed the light field processing chain, starting from the design of light field cameras with simple 2D sensors. We have proposed a multi-mask camera model, in a compressive sensing framework, that accounts for the presence of color filter arrays in image sensors. It has been extended to high dynamic range light field acquisition, combining multi-ISO photography with coded mask acquisition. We have developed an unrolled optimization method, combining iterative optimization with learned signal priors, to reconstruct light fields from compressed measurements. The quality of the reconstructed views makes the capture of multiple views on a 2D sensor practical.
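As a toy illustration of coded-mask compressive acquisition of the kind described above: several masked angular views sum onto a single 2D sensor, so each pixel carries one equation in several unknowns. All shapes, masks, and values below are hypothetical stand-ins, not the project's actual multi-mask model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field: K angular views of an H x W monochrome scene
# (illustrative dimensions, not the project's actual configuration).
K, H, W = 4, 8, 8
light_field = rng.random((K, H, W))

# One coded attenuation mask per view; in a coded-aperture camera these
# would be physical patterns, possibly combined with the color filter array.
masks = rng.random((K, H, W))

# Compressive measurement: all masked views are summed on one 2D sensor.
# Per pixel this is 1 equation in K unknowns, i.e. under-determined,
# which is why reconstruction relies on (learned) signal priors.
sensor = np.sum(masks * light_field, axis=0)
print(sensor.shape)  # (8, 8)
```

The under-determined nature of this forward model is what motivates the unrolled optimization with learned priors for reconstruction.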

Light field processing is challenging due to the high data dimensionality. The project has developed tools for light field dimensionality reduction, low rank approximation, and low-dimensional embedding. Low rank models and graph signal processing methods have been extended from 2D to 4D, with solutions for handling the complexity of these methods on high-dimensional data. The project has developed sampling methods for reducing the graph dimension and designed scene geometry-aware graph transforms for capturing data correlation. These tools have allowed us to obtain high compression performance and to solve inverse problems in light field imaging.
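The low rank idea behind these tools can be illustrated with a truncated SVD on a synthetic view matrix. The toy matrix below has exact low rank by construction, standing in for the angular redundancy of real light fields; it is a sketch, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a flattened light field: each row is a
# vectorized view. Angular redundancy is modeled by an exact rank-3 matrix.
n_views, n_pixels, true_rank = 9, 256, 3
M = rng.random((n_views, true_rank)) @ rng.random((true_rank, n_pixels))

# Truncated SVD keeps only the r dominant components (the low rank model).
r = 3
u, s, vt = np.linalg.svd(M, full_matrices=False)
M_r = (u[:, :r] * s[:r]) @ vt[:r, :]

# With r matching the true rank, the approximation is numerically exact.
err = np.linalg.norm(M - M_r) / np.linalg.norm(M)
print(err < 1e-10)  # True
```

With real light fields the rank is only approximately low, so the choice of `r` trades compression against approximation error.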

Data compression requires more than dimensionality reduction: correlation can also be removed via synthesis or prediction mechanisms. A critical component of view synthesis is depth estimation, which remains difficult for light fields. We have therefore developed deep learning methods yielding depth maps with high accuracy. A drawback of deep learning, however, is the huge number of network parameters, with implications for memory footprint. Our focus has been the design of lightweight neural networks for view synthesis from few input views. We have also addressed scene flow estimation, introducing a novel parametric model in the 4D ray space valid for both sparsely and densely sampled light fields.
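To make the role of depth in view synthesis concrete, here is a deliberately minimal warping sketch: for a fronto-parallel scene, an adjacent angular view is the reference view shifted by its disparity. The `warp_view` helper, the constant integer disparity, and the absence of occlusion handling are all simplifying assumptions; this is not the project's learned synthesis method.

```python
import numpy as np

def warp_view(view: np.ndarray, disparity: int) -> np.ndarray:
    """Synthesize the adjacent angular view by a horizontal pixel shift.

    A constant disparity corresponds to a single fronto-parallel plane;
    real scenes need a per-pixel disparity (depth) map.
    """
    warped = np.zeros_like(view)
    if disparity >= 0:
        warped[:, disparity:] = view[:, :view.shape[1] - disparity]
    else:
        warped[:, :disparity] = view[:, -disparity:]
    return warped

view = np.arange(20, dtype=float).reshape(4, 5)
print(warp_view(view, 1)[0])  # [0. 0. 1. 2. 3.]
```

The zeros at the border are disoccluded pixels with no source in the reference view, which is exactly where learned synthesis networks have to hallucinate plausible content.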

The project has proposed a shift of paradigm for light field compression by introducing compressed generative models for light fields. The problem becomes one of compressing neural networks. We have developed compressed neural radiance fields for scene representation based on novel concepts of low rank constrained distillation networks. Beyond compression, this approach allows us to synthesize any view of the light field. The method applies to more general neural network compression problems outside the light field context.
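The parameter saving behind low rank network compression can be sketched as follows. The matrix sizes, the rank `r`, and the post-hoc SVD factorization are illustrative assumptions; the project's method constrains the rank during distillation rather than factoring after training.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a trained dense layer: approximately low rank plus noise
# (trained weights are often compressible in this sense; a purely random
# matrix would not be).
m, n, r = 64, 64, 8
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) \
    + 0.01 * rng.standard_normal((m, n))

# Truncated SVD replaces W (m*n parameters) by two thin factors
# A (m x r) and B (r x n), i.e. r*(m+n) parameters.
u, s, vt = np.linalg.svd(W, full_matrices=False)
A = u[:, :r] * s[:r]
B = vt[:r, :]

err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(m * n, r * (m + n))  # 4096 1024
print(err)                 # small: only the noise part is lost
```

Here the layer is stored with a quarter of the parameters, at the cost of a small approximation error on the weights.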

Light field capture suffers from technological limitations that affect the noise level and the spatio-angular resolution of the data. The project has developed denoising and super-resolution methods, starting from an anisotropic regularization framework in the 4D ray space. Our methodology has then evolved towards deep learning coupled with low rank priors. A view extrapolation method has been proposed for enhancing the refocusing precision, i.e. the ability to distinguish features at different depths by refocusing, which can be important for light field microscopy.
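Refocusing, mentioned above, can be illustrated with the classical shift-and-sum principle: each view is shifted proportionally to its angular offset and the views are averaged, bringing one depth plane into focus. The `refocus` helper, the toy views, and the circular shifts are illustrative assumptions, not the project's extrapolation method.

```python
import numpy as np

def refocus(views: np.ndarray, offsets: list, alpha: int) -> np.ndarray:
    """Shift-and-sum refocusing.

    views: (K, H, W) stack of views; offsets: horizontal angular index of
    each view; alpha: refocusing slope selecting the in-focus depth plane.
    """
    acc = np.zeros_like(views[0])
    for v, o in zip(views, offsets):
        acc += np.roll(v, alpha * o, axis=1)
    return acc / len(views)

# Toy scene at disparity d = 1: each view is the base image shifted by
# its angular offset times d (circular shifts keep the example exact).
base = np.arange(25, dtype=float).reshape(5, 5)
offsets = [-1, 0, 1]
d = 1
views = np.stack([np.roll(base, o * d, axis=1) for o in offsets])

# Refocusing with alpha = -d undoes the shifts and recovers the plane.
refocused = refocus(views, offsets, alpha=-d)
print(np.allclose(refocused, base))  # True
```

Content at other depths would not align under this shift and would be averaged into blur, which is why the number of available views bounds the refocusing precision that view extrapolation aims to improve.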
The multi-mask model for compressive light field acquisition and the deep learning recovery method pave the way towards novel coded aperture light field camera designs. The project has also leveraged low rank models and graph signal processing methods in the domain of 4D light field processing. By proposing methods to overcome the complexity issue, the project has contributed to making these methods tractable for high-dimensional data. Although scene depth estimation, scene flow estimation, and view synthesis are not new problems, the proposed solutions yield more reliable estimates, or synthesized views of better quality, than existing solutions. They have proven effective for light field compression.

We have introduced a novel paradigm for light field compression based on neural network compression. The approach learns a neural scene representation, and it is this representation, rather than the data itself, that is compressed. We have introduced a low-rank constrained method for learning the neural representation, which is converted to a smaller model using concepts of network distillation. The approach can find applications outside the light field context, for neural network compression in general, and can contribute to the practical use of neural networks on resource-limited devices for a variety of applications.

Removing noise or enhancing the resolution are well-known problems in 2D imaging but challenging for light fields. Solving inverse problems requires a good understanding of the data space structure. We have thus developed regularization priors in the 4D ray space for anisotropic diffusion that can solve a variety of problems. We have pushed these solutions further using machine learning in spaces of reduced dimension, and by leveraging unrolled optimization methods, which combine the advantages of iterative optimization with those of learnable priors, in light field imaging.
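The unrolled optimization idea can be sketched with plain ISTA on a toy sparse inverse problem: a fixed number of gradient-plus-proximal layers, where the project would replace the hand-crafted soft-thresholding prior and step sizes with learned ones. Everything below (problem sizes, `lam`, the sparse prior) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: a hand-crafted sparsity prior.
    In a learned unrolled network this step would be trainable."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, y, n_layers=200, lam=0.05):
    """Unrolled iterative soft-thresholding: each 'layer' is one
    gradient step on ||y - Ax||^2 followed by the proximal step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# Toy compressed measurement of a sparse signal (30 equations, 60 unknowns).
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true

x_hat = unrolled_ista(A, y)
print(np.linalg.norm(x_hat - x_true))  # small: the prior resolves the ambiguity
```

Truncating the iteration to a fixed depth and learning the per-layer parameters end-to-end is what turns this classical scheme into an unrolled network.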
Figure: HDR light field camera design using a single sensor with spatially varying ISO gain and coded mask.