CORDIS - EU research results

An object-oriented approach to color

Periodic Reporting for period 2 - Color3.0 (An object-oriented approach to color)

Reporting period: 2022-03-01 to 2023-08-31

There have been tremendous advances in color science related to human cone photoreceptors and retinal color opponency. Yet this knowledge rests on extremely restrictive viewing conditions: colored lights in the dark, or flat, matte surfaces in uniformly colored contexts. But which mechanisms mediate the perception of colors in the real world, when looking at a field of flowers or searching for a particular product in the supermarket?

Arguably, the most important function of color is the processing of information about objects in scenes. It is this tight link to objects through which color helps us see things more quickly and remember them better. This proposal, Color 3.0, is centered on an active observer dealing with three-dimensional objects in natural environments. It addresses the dimensions relevant for the main purpose of color perception – intensity, hue and saturation. The goal is to fundamentally rethink color science around real-world objects and natural tasks.

We aim to gain a deep understanding of the circuitry underlying color perception in real and virtual worlds, to build a Deep Neural Network model of color processing that can be traced through the brain, to establish a new colorimetry based on natural object colors rather than flat, matte patches of light, and, last but not least, to develop an improved measure of luminous intensity. This could lead to a revision of how we study the early visual system, as well as to better color reproduction and better lighting systems. Our use of real-time ray tracing in VR could cause a paradigm shift in vision science, away from a passively viewing observer pushing buttons and towards an active observer situated in a virtual world and performing a natural task.
We made enormous progress and laid the foundation for using VR headsets in fully calibrated color vision experiments. We implemented a calibration process that allows full control over complex natural scenes, and then used this environment to test the contributions of various cues to human color constancy. Using natural, meaningful environments, we were able to show that the human visual system goes beyond simple pixel statistics and instead relies on an interpretation of the whole scene and its segmentation into background and foreground.
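The report does not detail the calibration procedure, but display calibration of this kind is conventionally modeled as per-channel gamma linearization followed by a matrix of measured primaries. The sketch below illustrates that standard model; the gamma values and matrix are stand-ins (here the sRGB primaries), not the project's measured headset data.

```python
import numpy as np

# Hypothetical characterization of one headset display:
# per-channel gamma exponents and a matrix whose columns are the
# CIE XYZ coordinates of the R, G, B primaries (sRGB values used
# here purely as placeholders for real measurements).
GAMMA = np.array([2.2, 2.2, 2.2])
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def display_rgb_to_xyz(rgb):
    """Map device RGB in [0, 1] to CIE XYZ: linearize, then mix primaries."""
    linear = np.clip(rgb, 0.0, 1.0) ** GAMMA
    return RGB_TO_XYZ @ linear

def xyz_to_display_rgb(xyz):
    """Invert the model: find the device RGB that reproduces a target XYZ."""
    linear = np.linalg.solve(RGB_TO_XYZ, xyz)
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)
```

With such a forward/inverse pair in place, any target XYZ within the display gamut can be reproduced exactly, which is what "full control over complex natural scenes" requires.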
In line with these experiments, we discovered that the perceived saturation of objects goes beyond a merely colorimetric characterization of the pixel distribution. Observers seem to be able to perceive the causes underlying such pixel distributions with respect to environmental factors such as shading, inter-reflection or translucency.

At the same time, we built Deep Neural Network (DNN) models of color appearance under diverse lighting environments. We first trained a DNN to successfully recover the reflectance of a single object in a computer rendered scene. Then we extended the model to recover the reflectance of every single object in the scene.
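The DNN itself is beyond a short sketch, but the inverse problem it is trained to solve can be illustrated with a classical linear-model version: sensor responses arise from illuminant times reflectance, and reflectance is recovered in a low-dimensional smooth basis. All spectra, sensor curves, and basis functions below are invented for illustration and are not the project's model.

```python
import numpy as np

wl = np.arange(400, 701, 10.0)  # wavelength samples in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical cone-like sensor sensitivities (columns: L, M, S).
S = np.stack([gauss(570, 50), gauss(545, 45), gauss(445, 30)], axis=1)

# A smooth 3-dimensional reflectance basis (constant, ramp, bump).
B = np.stack([np.ones_like(wl), (wl - 400) / 300.0, gauss(550, 80)], axis=1)

E = gauss(560, 150)                 # broadband illuminant spectrum

w_true = np.array([0.4, 0.3, 0.2])  # true basis weights
R_true = B @ w_true                 # true surface reflectance

r = S.T @ (E * R_true)              # simulated sensor responses

# Invert the linear forward model r = S.T @ diag(E) @ B @ w for w.
M = S.T @ (E[:, None] * B)
w_hat = np.linalg.solve(M, r)
R_hat = B @ w_hat                   # recovered reflectance
```

A network trained on rendered scenes faces the same forward model, but without a hand-specified basis and with spatially varying illumination, which is what makes the learned approach necessary.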

For our work on perceived light intensity, we built a carefully calibrated setup with multispectral lamps. This allowed us to compare the predictions of different models for perceived light intensity with the judgments of human observers. Our results indicate a need to improve currently established standards for lighting evaluation.
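The model predictions being compared here rest on photometric weighting of the lamp spectra. As a minimal sketch of the standard computation, a luminance-like quantity is obtained by weighting a spectral power distribution with the photopic luminosity function V(λ); the Gaussian below only approximates V(λ) (the CIE standard uses tabulated values), and the lamp spectra are invented.

```python
import numpy as np

wavelengths = np.arange(380, 781, 1.0)  # wavelength grid in nm

def v_lambda(wl):
    """Gaussian approximation of the CIE photopic luminosity function
    V(lambda), peaking near 555 nm; illustrative only."""
    return np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

def luminous_flux(spd, wl=wavelengths):
    """Weight a spectral power distribution (W/nm) by V(lambda) and
    integrate; 683 lm/W is the standard luminous efficacy constant."""
    step = wl[1] - wl[0]
    return 683.0 * np.sum(spd * v_lambda(wl)) * step

# Two equal-power narrow-band lamps: the green one yields far more
# luminous flux than the blue one, because V(lambda) peaks near 555 nm.
green = np.exp(-0.5 * ((wavelengths - 555.0) / 10.0) ** 2)
blue = np.exp(-0.5 * ((wavelengths - 460.0) / 10.0) ** 2)
```

Testing whether such V(λ)-based weighting actually matches human brightness judgments for multispectral lamps is precisely the kind of comparison the calibrated setup enables.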
The development of robust methods for color calibration in Virtual Reality might have a transformative effect on research in vision science. It allows full control over visual stimuli while at the same time reaching a degree of realism that is close to indistinguishable from the real world. Our work on measuring the perceived intensity of light has shown the need for extending and potentially revising well established lighting standards.

By the end of the funding period, we expect (1) to obtain a deep understanding of the circuitry underlying the active color perception of objects in natural scenes, (2) to implement a DNN model of color that can be mapped onto neural stages of color processing, and (3) to pave the way towards a new colorimetry based on natural object colors rather than flat, matte patches of light. This will provide a better way to predict color appearance both when natural images are passively viewed on screens and for an active observer situated in the real world.