Content archived on 2024-06-18

Light Field Imaging and Analysis

Final Report Summary - LIA (Light Field Imaging and Analysis)

Light fields resemble collections of different images of a scene captured from very densely sampled viewpoints. Compared to conventional photography, they are known for enabling effects like post-capture refocusing. However, they also have an intricate structure which allows us to reason about certain aspects of the scene, in particular its geometry, surface materials and illumination. Just as with stereo images, it is possible to easily recover the distance of points to the observer, as long as the scene is Lambertian, which means that 3D points look the same no matter the viewpoint.
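The Lambertian depth cue can be made concrete with a toy sketch (not code from the project; all names are illustrative): in an epipolar-plane image (EPI), a Lambertian point traces a line whose slope is its disparity, so depth can be found by testing which shear makes the views agree.

```python
import numpy as np

def render_epi(n_views, n_px, x0, disparity):
    """Synthetic epipolar-plane image: one Lambertian point whose
    image position shifts linearly with the view index."""
    epi = np.zeros((n_views, n_px))
    for s in range(n_views):
        xi = int(round(x0 + disparity * s))  # Lambertian: same intensity in every view
        if 0 <= xi < n_px:
            epi[s, xi] = 1.0
    return epi

def estimate_disparity(epi, candidates):
    """Pick the candidate slope that makes the sheared EPI most
    view-consistent (minimal intensity variance along the view axis)."""
    n_views, _ = epi.shape
    best, best_cost = None, np.inf
    for d in candidates:
        # shear view s back by d*s, then measure variance across views
        sheared = np.stack([np.roll(epi[s], -int(round(d * s)))
                            for s in range(n_views)])
        cost = sheared.var(axis=0).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

epi = render_epi(n_views=9, n_px=64, x0=20, disparity=2)
d_hat = estimate_disparity(epi, candidates=range(5))  # recovers disparity 2
```

The correct shear aligns the point across all views, driving the per-pixel variance to zero; any other candidate leaves the point smeared across positions.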

However, in contrast to just a stereo pair, light fields have a more intricate structure due to the dense sampling of views, which allows a deeper analysis of the scene. By means of a detailed study of the light field structure, we have made substantial progress on a number of problems which are near impossible to solve with conventional images. One of these is the analysis of multi-layered scenes, which contain for example reflections and semi-transparent objects. For this type of scene, we have developed in a series of papers the first pipeline to obtain dense 3D geometry from multiple light field views. In particular, we have developed methods to robustly separate a single light field into multiple reflection or transmission layers, align multiple light fields with complex structure, compute depth for multiple layers simultaneously, and recover optimized scene surfaces for multiple layers simultaneously. There currently is no comparable system which can achieve similar 3D reconstruction for this type of complex, non-Lambertian scene.
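Why dense views help with layered scenes can be illustrated with a minimal sketch (again illustrative, not the project's method): model the observed light field as the superposition of two Lambertian layers, each moving with its own disparity. Shearing by either layer's disparity freezes that layer, so both disparities show up as minima of a view-consistency cost.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_px = 9, 256

def layer_epi(texture, disparity):
    """EPI of one Lambertian layer: the same 1D texture shifted by
    `disparity` pixels per view step."""
    return np.stack([np.roll(texture, disparity * s) for s in range(n_views)])

# Observed light field: a transmitted layer plus a reflected layer,
# each with its own disparity (here 1 and 3).
epi = (layer_epi(rng.standard_normal(n_px), 1)
       + layer_epi(rng.standard_normal(n_px), 3))

def shear_cost(epi, d):
    """Mean per-pixel variance across views after shearing by d."""
    sheared = np.stack([np.roll(epi[s], -d * s) for s in range(n_views)])
    return sheared.var(axis=0).mean()

costs = {d: shear_cost(epi, d) for d in range(6)}
# The two layer disparities appear as the two lowest costs.
found = sorted(costs, key=costs.get)[:2]  # contains disparities 1 and 3
```

At the correct shear, one layer becomes constant along the view axis and only the other layer contributes residual variance; at any other shear, both layers contribute. A single-disparity model, as in the stereo case, could at best explain one of the two minima.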

In addition, we have been working on reconstructing the reflectance and lighting properties of the scene. The problem of separating an image into its albedo (reflectance) and shading (illumination) components is known as intrinsic image decomposition. We have extended this theory to light fields, and developed the first intrinsic light field model. Due to the nature of the light field, we can also recover a specularity layer in addition to the other components, and achieve a more robust separation result compared to traditional images. The system is based on an improved method for reconstructing scene surface normals from light fields. Based on ideas from photometric stereo, we have also developed material classification systems, moving towards the goal of joint geometry, reflectance and illumination estimation from single shots.
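The extra leverage a light field gives for this separation can be sketched as follows (a toy model under strong simplifying assumptions, not the project's algorithm): the diffuse part, albedo times shading, is view-independent for Lambertian surfaces, while specular highlights move between views. With the views aligned, a robust statistic across views isolates the diffuse layer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_views, n_px = 9, 128

albedo = rng.uniform(0.2, 1.0, n_px)   # material reflectance
shading = rng.uniform(0.5, 1.0, n_px)  # illumination and geometry
diffuse = albedo * shading             # view-independent (Lambertian) part

# Specular highlights are view-dependent and sparse: here each view
# adds a highlight at a different pixel.
views = np.tile(diffuse, (n_views, 1))
for s in range(n_views):
    views[s, (10 + 12 * s) % n_px] += 0.8

# With views already aligned, a per-pixel median across views rejects
# the sparse specular outliers and recovers the diffuse layer.
diffuse_est = np.median(views, axis=0)
specular_est = views - diffuse_est
```

Splitting the recovered diffuse layer further into albedo and shading is ill-posed from a single image and requires additional priors; the point of the sketch is only that the view axis makes the diffuse/specular separation well-constrained, which a single conventional image does not offer.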

Finally, we have explored deep learning pipelines for light fields, in particular encoder-decoder networks with multiple output pathways. This architecture facilitates multi-task learning for intrinsic decomposition and depth reconstruction, and somewhat lessens the need for ground truth decomposition data for real-world light fields, which is very hard to obtain. The methods are applicable to a variety of other inverse problems, such as light field super-resolution.
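The multiple-output-pathway idea can be sketched in a few lines (a deliberately minimal stand-in for the actual networks; layer sizes and task names are illustrative): a shared encoder feeds several task-specific decoder heads, and training minimizes a weighted sum of the per-task losses.

```python
import numpy as np

rng = np.random.default_rng(2)

# A light-field patch flattened to a feature vector.
d_in, d_code, d_out = 64, 16, 64

# Shared encoder, plus one decoder head per task
# (here: depth and albedo, as in a multi-task setup).
W_enc = rng.standard_normal((d_code, d_in)) * 0.1
W_depth = rng.standard_normal((d_out, d_code)) * 0.1
W_albedo = rng.standard_normal((d_out, d_code)) * 0.1

def forward(x):
    code = np.maximum(0.0, W_enc @ x)  # shared representation (ReLU)
    return W_depth @ code, W_albedo @ code

def multitask_loss(x, depth_gt, albedo_gt, w=(1.0, 1.0)):
    """Weighted sum of per-task losses; because the encoder is shared,
    supervision on one task also shapes the features the other uses."""
    depth_pred, albedo_pred = forward(x)
    return (w[0] * np.mean((depth_pred - depth_gt) ** 2)
            + w[1] * np.mean((albedo_pred - albedo_gt) ** 2))

x = rng.standard_normal(d_in)
loss = multitask_loss(x, np.zeros(d_out), np.zeros(d_out))
```

The shared encoder is what eases the ground-truth problem: tasks with abundant supervision (such as depth on synthetic data) can train the shared features, which the data-poor decomposition head then reuses.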

As a service to the community, we have developed and published a new benchmark database for depth estimation on light fields, and provide an extensive web page with submission and evaluation systems. We also provide code and datasets for our publications, in particular deep learning training data and trained networks. In addition, we bring together the research community and industry representatives in a regular workshop series held at major computer vision conferences, where we discuss and advance the state of the art and explore avenues for further progress in our field.