CORDIS - EU research results

Intuitive editing of visual appearance from real-world datasets

Periodic Reporting for period 3 - CHAMELEON (Intuitive editing of visual appearance from real-world datasets)

Reporting period: 2019-11-01 to 2021-04-30

PROBLEM BEING ADDRESSED:
Computer-generated imagery is now ubiquitous in our society, spanning fields such as games and movies, architecture, engineering, and virtual prototyping, while also helping create novel ones such as computational materials. With the increase in computational power and the improvement of acquisition techniques, there has been a paradigm shift in the field towards data-driven techniques, which has yielded an unprecedented level of realism in visual appearance. Unfortunately, this leads to a series of problems. First, there is a disconnect between the mathematical representation of the data and any meaningful parameters that humans understand; in other words, the captured data is machine-friendly, but not human-friendly. Second, the many different acquisition systems lead to heterogeneous formats and very large datasets. Third, real-world appearance functions are usually nonlinear and high-dimensional. As a result, visual appearance datasets are increasingly unfit for editing operations, which limits the creative process for scientists, engineers, artists and practitioners in general. There is an immense gap between the complexity, realism and richness of the captured data, and the flexibility to edit such data.

IMPORTANCE FOR SOCIETY:
Simulation and editing of visual appearance is a core area in the scientific field of computer graphics, involving aspects of computer science, mathematics and physics. It is not only a fundamental aspect of digital content creation, but an inherent part of our lives as well: our society depends on computer-generated imagery for entertainment, education, culture, medical imaging, architecture and more, while many industrial processes including manufacturing, engineering and virtual prototyping depend on correct simulations to convey the desired visual information. Moreover, developing proper design and editing algorithms for visual appearance is also a key feature for the success of novel fields at the interface between engineering, physics and graphics, such as computational materials or fabrication. However, editing the visual appearance of computer-generated objects is a challenging goal.

OVERALL OBJECTIVES:
1. To develop human-friendly parameter spaces for material modeling and editing, which hide the complexity of their underlying mathematical representations
2. To develop predictable editing algorithms based on such parameter spaces, so the user can use high-level commands such as "make this a bit more papery, and a tad less shiny"
3. To develop interactive feedback and efficient simulations
The CHAMELEON project is structured in three Work Packages (WPs). All of them are advancing at great pace, with results published in many top venues and journals (as described in the Dissemination and outputs section). The research and technological achievements for each one during the first half of the project are as follows:

WP1. Human-friendly parameter spaces: we have developed high-level, intuitive parameter spaces for material editing, based on gathering subjective information through platforms like Mechanical Turk, then applying statistical analyses and deep learning methods. We have made all our data public. Our framework has been extended beyond materials to more general concepts of visual similarity.
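The WP1 pipeline, reduced to its essence, turns subjective human judgements into a low-dimensional, perceptually meaningful embedding. As a minimal illustrative sketch (not the project's published method, which relies on larger crowdsourced studies and deep learning), classical multidimensional scaling can embed a hypothetical matrix of averaged pairwise dissimilarity ratings into a 2D "intuitive" space:

```python
import numpy as np

# Hypothetical averaged pairwise dissimilarity ratings for 4 materials,
# e.g. collected from crowdsourced comparisons (0 = identical, 1 = very different).
# These numbers are illustrative, not project data.
D = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

def classical_mds(D, dims=2):
    """Embed items in a low-dimensional space that preserves dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dims]    # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

coords = classical_mds(D)
print(coords.shape)  # (4, 2)
```

In the resulting space, materials rated as similar (here, materials 0 and 1) land closer together than dissimilar ones, which is the property that makes the embedding usable as an editing space.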

WP2. Predictable editing algorithms: From the parameter spaces defined in WP1, we have developed predictable editing algorithms for many common CG material representations (both measured and analytical). In addition, we have developed a fully image-space approach for interactive image editing of 3D characteristics such as depth.

WP3. Interactive feedback and efficient simulation: This WP represents an ongoing effort to improve the rendering capabilities of our simulators, both in terms of physical accuracy and speed. Since this topic (simulation of light transport) was one of the main areas of expertise of the PI and their group, the proposal included more high-risk ideas here than in the other two WPs, including the possibility of capturing and editing real-world materials of hidden objects (not in the line of sight of the camera) for use in CG applications. The development of this idea has so far led to one paper recently accepted to Nature.
Progress is still ongoing. So far, we have successfully demonstrated how CG materials can be expressed in an intuitive, high-level parameter space (defined by words like "roughness", "shiny", "plasticky", etc.), and how this enables straightforward editing algorithms, without the user needing to know the particular mathematical representation of each material.
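The editing idea described above can be sketched as moving a material along named perceptual directions in a learned latent space. Everything below is a hypothetical illustration: the latent coordinates, the attribute directions and the `edit_material` helper are invented for this sketch and do not come from the project's published models.

```python
import numpy as np

# Hypothetical: each material is a point in a learned latent space, and each
# perceptual attribute ("glossy", "metallic", ...) is a direction in that space.
ATTRIBUTE_DIRECTIONS = {
    "glossy":   np.array([0.9, 0.1, 0.0]),
    "metallic": np.array([0.1, 0.9, 0.2]),
}

def edit_material(latent, attribute, amount):
    """Shift a material along a named perceptual attribute direction."""
    direction = ATTRIBUTE_DIRECTIONS[attribute]
    direction = direction / np.linalg.norm(direction)  # unit-length step
    return latent + amount * direction

material = np.array([0.5, 0.5, 0.5])
shinier = edit_material(material, "glossy", 0.3)  # "a tad more shiny"
```

A command like "a tad more shiny" then maps to a small positive `amount` along the "glossy" direction, hiding the underlying mathematical representation from the user.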

A very important achievement is the development of a novel light transport framework, accepted in Nature and to be published June 2019, from which we expect to extend capture and editing capabilities to real-world materials not in the line of sight of a camera.