
A New Foundation for Computer Graphics with Inherent Uncertainty

Periodic Reporting for period 4 - FUNGRAPH (A New Foundation for Computer Graphics with Inherent Uncertainty)

Reporting period: 2023-04-01 to 2024-09-30

Three-dimensional (3D) Computer Graphics (CG) images are omnipresent; everyone is familiar with CG imagery in computer games or in film special effects. In recent years, CG images have become so realistic that it is hard to distinguish them from reality. CG is now used in many domains, such as architecture, urban planning, product design, e-commerce, advertising and training, and of course Virtual & Augmented Reality.

However, creating digital assets for CG is time-consuming, requiring hundreds of artists who painstakingly create 3D digital content. Image generation using these assets, known by the technical term rendering, involves complex and expensive computation. Recently, several techniques have been developed that use simple photos and videos to create 3D assets directly. Traditional CG rendering techniques cannot handle this captured data because it is inaccurate (in more technical terms, it suffers from uncertainty), and it is hard to manipulate since lighting and appearance are “frozen” to those of the photos or video.

The overall objective of FUNGRAPH is to address both the difficulty of creating assets and the complexity of rendering by explicitly handling uncertainty in the data and in rendering. Our goal is to make creating, manipulating and rendering 3D assets much more accessible, with far-reaching implications for applications, vastly broadening the use of 3D technologies in society. This will contribute towards the larger goal of making 3D as accessible as photos and video have become in the last few decades.

Achieving our objective requires us to provide a new foundation for CG rendering, with uncertainty playing a central role. We developed new methodologies that can handle uncertainty in the data and in the rendering process, building heavily on modern machine learning techniques. The new methodologies we developed constitute a major advancement for CG, notably for 3D content created from photos.
All publications referenced as (Author, year) can be found at https://project.inria.fr/fungraph/publications/

In FUNGRAPH we developed several novel research solutions in traditional rendering, appearance capture, novel view synthesis, neural rendering and multi-view relighting.

We investigated traditional rendering algorithms using artist-generated content. Most CG images in film are rendered using the path tracing algorithm, which simulates the propagation of light from the light sources to the eye along a set of paths that bounce off surfaces with different materials. This process is often called global illumination (GI). We developed a method that pre-computes GI, especially for shiny materials, allowing fast lookup at runtime (Rodriguez et al. ’20a). We later investigated how deep learning techniques can be used to precompute GI efficiently, allowing interactive display at runtime (Diolatzis et al. ’22, Rainer et al. ’22); such approaches are now called “neural rendering” methods for traditional assets.
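
To make the core idea of path tracing concrete, here is a minimal, illustrative sketch rather than the project's code: a Monte Carlo estimate of the rendering-equation integral at a single diffuse surface point, which is the operation path tracing repeats at every bounce along a light path. The toy environment light, sample count and albedo value are arbitrary choices for illustration.

```python
# Minimal sketch (not the project's code): Monte Carlo estimate of the
# rendering-equation integral at one diffuse surface point.
import numpy as np

def incoming_radiance(direction):
    """Toy environment light: bright 'sky' from above, darker near the horizon."""
    return np.array([0.8, 0.9, 1.0]) * max(direction[2], 0.0)

def cosine_sample_hemisphere(rng):
    """Cosine-weighted direction on the hemisphere around the +z normal."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

def shade_diffuse(albedo, n_samples=1024, seed=0):
    """Estimate L_o = (albedo/pi) * integral of L_i * cos(theta) over the hemisphere.
    With cosine-weighted sampling (pdf = cos(theta)/pi) the estimator simplifies
    to albedo * mean(L_i)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(3)
    for _ in range(n_samples):
        total += incoming_radiance(cosine_sample_hemisphere(rng))
    return albedo * total / n_samples

print(shade_diffuse(albedo=np.array([0.5, 0.5, 0.5])))
```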

We studied the problem of estimating the material properties of real objects for use in traditional rendering. We use a few photos as input, providing a simple way to model materials for CG assets, a task that otherwise requires significant effort from trained artists. We separate photographs into layers of appearance: a “base texture” and separate layers explaining shiny appearance. We train a neural network using artist-created assets that provide “ground truth” layers. The novelty is to combine multiple copies of the network, allowing several photos of a material patch to be used to improve the estimate (Deschaintre et al. ’19). We provide additional artistic control to allow capture at different scales (Deschaintre et al. ’20, cf. image).
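
The following is a hedged PyTorch sketch of the idea described above, not the published network: copies of a shared encoder process the individual photos of one material patch, their features are pooled in an order-independent way, and a decoder predicts per-texel material maps. The layer sizes and the output-channel layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiPhotoMaterialNet(nn.Module):
    """Toy multi-photo material estimator: a shared encoder per photo,
    order-independent pooling, then a decoder producing material maps."""
    def __init__(self, feat=32, out_channels=10):
        # out_channels is an illustrative layout, e.g. normals(3) +
        # diffuse albedo(3) + roughness(1) + specular albedo(3).
        super().__init__()
        self.encoder = nn.Sequential(          # the shared "copy" applied to each photo
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # turns pooled features into material maps
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, out_channels, 3, padding=1),
        )

    def forward(self, photos):                 # photos: (n_photos, 3, H, W) of one patch
        feats = torch.stack([self.encoder(p.unsqueeze(0)).squeeze(0) for p in photos])
        pooled = feats.max(dim=0).values       # pooling makes the result photo-order independent
        return self.decoder(pooled.unsqueeze(0))   # (1, out_channels, H, W)

net = MultiPhotoMaterialNet()
maps = net(torch.rand(3, 3, 128, 128))         # three photos of the same material patch
print(maps.shape)                              # torch.Size([1, 10, 128, 128])
```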

As an alternative to artist-created assets, 3D content can be created directly from photos using Image-Based Rendering (IBR) or Novel View Synthesis (NVS). We worked on traditional IBR methods, developing solutions for the hard cases of reflections (Rodriguez et al. ’20b) and video-based reconstruction of repetitive motion (Thonat et al. ’21). As the efficiency of deep learning approaches improved, we developed innovative solutions based on convolutional neural networks (CNNs) for novel view synthesis (Philip et al. ’21). We also developed methods for relighting captured scenes, using synthetic data and deep learning (Philip et al. ’19, Philip et al. ’21, cf. image) and, more recently, generative models (Poirier-Ginter et al. ’24).
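
For readers unfamiliar with IBR, the sketch below illustrates a classic blending heuristic from the IBR literature, not the project's specific algorithms: a surface point's colour in a novel view is obtained by blending its colours in the source photos, weighting each photo by how close its viewing direction is to the novel one. The weighting function and all numbers are illustrative assumptions.

```python
# Classic IBR-style view blending sketch (illustrative, not the project's method).
import numpy as np

def blend_weights(point, novel_cam, source_cams, sharpness=8.0):
    """Angular-deviation weights: source cameras that see the point from a
    direction similar to the novel camera get higher weight."""
    d_novel = novel_cam - point
    d_novel /= np.linalg.norm(d_novel)
    weights = []
    for cam in source_cams:
        d_src = cam - point
        d_src /= np.linalg.norm(d_src)
        cos_dev = np.clip(np.dot(d_novel, d_src), -1.0, 1.0)
        weights.append(np.exp(sharpness * (cos_dev - 1.0)))  # 1 when aligned, decays with angle
    w = np.array(weights)
    return w / w.sum()

point = np.array([0.0, 0.0, 0.0])
novel_cam = np.array([0.1, 0.0, 2.0])
source_cams = [np.array([0.0, 0.0, 2.0]), np.array([2.0, 0.0, 0.5])]
colors = np.array([[0.8, 0.2, 0.2], [0.2, 0.2, 0.8]])  # the point's colour in each photo
print(blend_weights(point, novel_cam, source_cams) @ colors)
```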

FUNGRAPH introduced a new paradigm building on point-based neural rendering (Kopanas et al. ’21), demonstrating that explicit, primitive-based representations coupled with a CNN produce novel views more efficiently and with higher quality than implicit learning-based solutions (cf. image), including for reflections (Kopanas et al. ’22). These results led to the most significant achievement of FUNGRAPH: 3D Gaussian Splatting (3DGS) (Kerbl, Kopanas et al. ’23, https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/ cf. image). We also addressed two limitations of 3DGS, handling very large scenes captured with thousands of photographs (Meuleman et al. ’24) and reducing memory requirements by up to a factor of 27 (Papantonakis et al. ’24).
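
As a rough illustration of the geometry behind Gaussian splatting (a simplified sketch, not the released 3DGS code): each primitive is an anisotropic 3D Gaussian whose covariance is built from a per-primitive scale and rotation, and rendering projects that covariance to a 2D screen-space footprint that can be rasterized. The focal length and example values are arbitrary, and the projection assumes the Gaussian centre is already expressed in camera coordinates.

```python
# Simplified covariance math behind Gaussian splatting (illustrative only).
import numpy as np

def covariance_3d(scale, rot):
    """Sigma = R S S^T R^T, from a per-Gaussian scale vector (3,) and rotation matrix (3,3)."""
    S = np.diag(scale)
    return rot @ S @ S.T @ rot.T

def project_covariance(sigma3d, mean_cam, focal):
    """Approximate 2D screen-space covariance Sigma' = J Sigma J^T, where J is the
    Jacobian of the perspective projection at the Gaussian centre (camera space)."""
    x, y, z = mean_cam
    J = np.array([[focal / z, 0.0, -focal * x / z**2],
                  [0.0, focal / z, -focal * y / z**2]])
    return J @ sigma3d @ J.T

sigma = covariance_3d(scale=np.array([0.05, 0.02, 0.01]), rot=np.eye(3))
print(project_covariance(sigma, mean_cam=np.array([0.3, -0.1, 2.0]), focal=800.0))
```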

3DGS is a truly disruptive methodology; our paper has been cited over 1900 times in 16 months, and many researchers have built on our software. 3DGS has had unprecedented success in technology transfer. We released the code (https://github.com/graphdeco-inria/gaussian-splatting) under a mixed open-source license, free for research and evaluation and paid for commercial use. The code has been downloaded hundreds of thousands of times, and Inria has sold multiple commercial licenses to companies in high-end visual effects, e-commerce, generative AI for 3D objects, social media, virtual reality and telecommunications. The method has been adopted in products from Meta (https://bit.ly/3AUae0m), Adobe (https://bit.ly/4i1eKec), Amazon (https://bit.ly/3CAy0io) and others.

FUNGRAPH demonstrated progress beyond the state of the art (SOTA) in several areas.

In neural rendering, we introduced novel solutions showing that deep learning can compactly and efficiently represent global illumination in scenes modelled by traditional 3D assets, e.g. (Diolatzis et al. ’22, Rainer et al. ’22).

The most significant advances of FUNGRAPH were in novel view synthesis (NVS), where we introduced a new primitive-based paradigm. We showed that point-based neural rendering provides better quality and speed than neural radiance fields (Kopanas et al. ’21), including for reflections (Kopanas et al. ’22). The most significant progress beyond the SOTA was 3D Gaussian Splatting (3DGS) (Kerbl, Kopanas et al. ’23).

3DGS is SOTA in training time, rendering speed and visual quality, and it has revolutionized the field of NVS and 3D reconstruction, with follow-up work in animated scene capture (especially for humans), 3D generative models, surface reconstruction, real-time SLAM, medical research, etc. The industrial uptake of the method is equally impressive, as discussed earlier.

Finally, FUNGRAPH also significantly advanced the SOTA of relighting methods, showing the feasibility of using synthetic data and traditional rendering for plausible relighting of outdoor (Philip et al. ’19) and indoor (Philip et al. ’21) scenes. We also showed that generative models (Poirier-Ginter et al. ’24) can provide unprecedented levels of realism when relighting scenes captured with 3DGS.

Image illustrating point-based neural rendering (Kopanas et al. ’21); see the text “Work Performed”.
Image illustrating the relighting method of (Philip et al. ’19); see the text “Work Performed”.
Image illustrating deep material estimation (Deschaintre et al. ’20); see the text “Work Performed”.
Image illustrating the 3DGS method of (Kerbl, Kopanas et al. ’23); see the text “Work Performed”.