Periodic Reporting for period 2 - UNRAVEL (Unraveling the Physics of Light at Scale)
Reporting period: 2024-03-01 to 2025-08-31
Visual data is our richest source of information, and we depend upon it
to understand the nature of our surroundings. In computer graphics, *rendering*
simulates the propagation of light to generate photo-realistic images of
virtual environments. However, in scientific applications, the goal is often
the reverse: not to create images, but to extract meaning from the ones we
already have.
The UNRAVEL project explores *Physically Based Inverse Rendering* (PBIR), a
computational approach that flips the traditional process of rendering on its
head to address this latter problem. Instead of producing images of a virtual
environment, PBIR reconstructs the environment constrained by physical laws and
visual data (e.g. photographs or other types of measurements). By simulating
how light must have interacted with the environment to produce observations,
PBIR infers a principled physical description of the world.
Take, for example, medical tomography scans or satellite-based observations of
the Earth. The underlying measurements contain valuable information about the
composition of tissue or the atmosphere, but that information is not directly
accessible from the raw sensor data. Subject experts like doctors or climate
scientists require structured, quantitative 3D models that express relevant
physical quantities like absorption or aerosol concentration. Producing such
models from the raw sensor data requires complex computational processing.
Traditional methods for solving such reconstruction problems rely on highly
simplified models. Tomographic reconstruction, for instance, normally assumes
that monochromatic X-rays travel along straight lines, neglecting
scattering and spectral effects. Likewise, established satellite image
reconstruction methods treat each pixel as an independent vertical column,
which breaks down in the presence of clouds or slanted viewing angles.
PBIR offers a powerful general alternative. By simulating the full physics of
light transport within a virtual environment—including scattering, absorption,
and complex geometry—PBIR can render synthetic images that mimic what a scanner
or satellite would observe. These synthetic images are then compared to
real-world data, and the virtual model is iteratively adjusted until the two
become consistent. This process offers an unprecedented level of generality and
produces physically interpretable answers. Despite this potential, PBIR is not
yet practical in most real-world applications. Current methods are
computationally expensive, lack robustness, and do not scale to large datasets.
The UNRAVEL project tackles these challenges through two avenues. First, it seeks to
dramatically improve the scalability and robustness of PBIR methods, developing
algorithms that can operate efficiently on complex, real-world data. Second, it
aims to bridge the gap between PBIR and scientific applications, building
proof-of-concept systems that demonstrate the use of PBIR to solve concrete
inverse problems in several scientific fields. We selected suitable
applications in the areas of tomography, 3D printing, remote sensing, and
architecture.
The first part of the project focused on addressing the scalability and
robustness concerns in conventional PBIR methods. This is a prerequisite for
the planned scientific applications, and it led to several important advances.
Surfaces are an essential element of any 3D scene description, but at the same
time they cause considerable difficulties for prior PBIR methods, all of which
rely on the computation of mathematical derivatives. The problem is that
surfaces introduce visibility discontinuities, which require special treatment
during this computation. This treatment is not only costly but also makes the
optimization fragile.
We first developed an approach to detect and process these discontinuities with
dramatically increased efficiency (Projective Sampling, SIGGRAPH Asia 2023). To
make optimizations robust, we diffuse the derivative computation into the space
surrounding surfaces (A Simple Approach to Differentiable Rendering of SDFs,
SIGGRAPH Asia 2024), and we simultaneously consider multiple conflicting
possible explanations of a tentative solution (A Radiance Field Loss for Fast
and Simple Emissive Surface Reconstruction, SIGGRAPH 2025). We also
incorporated the ability to share information across optimization iterations,
which can significantly improve convergence speed in challenging settings
(Recursive Control Variates, SIGGRAPH 2023).
Besides publications, a portion of the time was spent developing and extending
two open source frameworks (Dr.Jit and Mitsuba) that serve as the computational
foundation of work done on the UNRAVEL project. Several parts of these projects
underwent complete rewrites, and we incorporated data structures and
fundamental building blocks to support the published algorithms in downstream
applications.
The second conceptual part of the proposal targets applications in diverse
scientific domains, which we pursue in collaboration with domain experts. So
far, this entailed the following work:
We teamed up with EPFL's Laboratory of Applied Photonic Devices (LAPD) to apply
inverse rendering to tomographic volumetric 3D printing (TVAM), which refers to
a 3D printing technique that creates solid objects in seconds by projecting
illumination patterns onto a rotating vial of resin. The advantage of inverse
rendering is that it can compute higher-quality patterns than were possible
with prior work, and that it generalizes to a wider range of experimental
setups.
This research was published at SIGGRAPH Asia 2025 (Inverse Rendering for
Tomographic Volumetric Additive Manufacturing). We also presented it at
the SPIE Photonics West conference (An inverse rendering framework for
tomographic volumetric additive manufacturing, OPTO 2025) and showed extensions
to customized printing setups (Novel printing geometries for tomographic
volumetric additive manufacturing, OPTO 2025). Finally, we released the Dr.TVAM
open source project, which provides an open research platform for tomographic
volumetric additive manufacturing using inverse rendering.
For the tomography direction, we began a collaboration with Prof. Amirhossein
Goldan's team at Weill Cornell Medical College in New York City, which
specializes in the development of novel imaging technology based on positron
emission tomography (PET). In PET imaging, a patient is injected with a
radioactive tracer that binds to tissue (e.g. a tumor). The decay of this
tracer generates a positron, whose annihilation in turn produces two gamma rays
that can be detected by pairs of specialized sensors. The benefit of inverse
rendering in this context is that it permits more accurate modeling of the
underlying physical process, which improves reconstruction quality. We
demonstrated this work in three presentations at the 2024 IEEE Symposium on
Nuclear Science (Inverse Rendering for PET Image Reconstruction, NSS 2024;
Inverse Rendering for PET Scanner Calibration, NSS 2024; Prism-PET II: Second
prototype of the TOF-DOI Prism-PET brain scanner with tapered crystals and
inverse rendering reconstruction, NSS 2024).
This collaboration is ongoing, and we continue to refine the underlying
computational approach to fully support the team's needs.
Work on two additional directions is ongoing. First, we are developing a
differentiable simulator of atmospheric light transport with the objective of
inferring atmospheric parameters from satellite imagery. Second, we are
targeting architectural simulations: the goal is to compute derivatives of
lighting within an architectural model, and to use them to guide an architect
towards desired lighting outcomes by modifying geometry and materials.