CORDIS - EU research results

Perceptually-Driven Optimizations of Graphics Content for Novel Displays

Periodic Reporting for period 3 - PERDY (Perceptually-Driven Optimizations of Graphics Content for Novel Displays)

Reporting period: 2022-02-01 to 2023-07-31

The project addresses the problem of efficient synthesis and optimization of graphics content for novel display devices. It focuses on new devices, such as virtual and augmented reality headsets, which offer new ways of interacting with digital content and creating engaging experiences, but at the same time impose very high requirements on visual quality and computational efficiency.

The goals of this project are critical across many disciplines since display devices are omnipresent. From cellphones and desktop screens to more sophisticated display setups, we use them to visualize, interact with, and understand digital information. New types of devices in particular, such as augmented and virtual reality headsets, have numerous potential applications. Examples include medical procedures and rescue operations, where information from external sources can be conveniently presented on augmented reality glasses; industrial visualization and operations, which can be performed remotely; and training for tasks that normally involve significant risks or costs, e.g. operating an aircraft. By delivering new content generation techniques that offer high visual quality and computational efficiency, this project will significantly contribute to the broader adoption of these display devices.

This project aims to combine expertise in hardware design, computation, and visual perception to create new computational techniques for future displays. More specifically, the project will provide new insights into the aspects of human perception that are crucial for novel displays. Based on these findings, we will develop perceptual models that guide graphics content generation and optimization. The developed techniques will leverage both the abilities and the limitations of the human visual system to balance visual quality against computational performance.
The research conducted in this project involves two main parts. In the first one, we focus on studying and modeling visual perception. Later, in the second part, we apply the gained knowledge to develop new graphics content generation and optimization techniques.

We conducted a series of perceptual experiments to investigate the sensitivity of human observers to spatio-temporal changes in visual content. The experiments were tailored to the needs and capabilities of novel displays, as well as to novel content generation methods that rely on artificial intelligence (AI). A unique feature of our experiments is that they investigate the properties of the human visual system across a wide field of view. This enables a correct characterization of the visual quality required by displays such as new AR and VR headsets. Our experiments also went significantly beyond commodity display hardware: we used multilayer and holographic displays, which extend display capabilities by reproducing accommodation cues. Our experiments and modeling also included an investigation of simulator sickness in VR applications.
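As an illustration of the kind of model such experiments inform, the sketch below evaluates a toy eccentricity-dependent contrast sensitivity function in Python. The functional form and all constants (peak_sensitivity, alpha, e2) are illustrative assumptions chosen for readability, not values measured or published by the project.

    import numpy as np

    def contrast_sensitivity(spatial_freq_cpd, eccentricity_deg,
                             peak_sensitivity=200.0, alpha=0.1, e2=2.3):
        """Toy eccentricity-dependent contrast sensitivity.

        spatial_freq_cpd : spatial frequency in cycles per degree
        eccentricity_deg : angular distance from the gaze point in degrees
        The constants are illustrative assumptions, not project results.
        """
        # Sensitivity falls off with spatial frequency (exponential low-pass)
        # and with eccentricity (via a cortical-magnification-like factor).
        magnification = e2 / (e2 + eccentricity_deg)
        return peak_sensitivity * magnification * np.exp(-alpha * spatial_freq_cpd / magnification)

    # Example: sensitivity to an 8 cpd pattern at the fovea vs. 30 degrees eccentricity
    print(contrast_sensitivity(8.0, 0.0), contrast_sensitivity(8.0, 30.0))

A model of this shape predicts that fine detail visible at the gaze point contributes little in the far periphery, which is the kind of knowledge the content generation techniques described below can exploit.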

We demonstrated how the new perceptual findings and models can be used to drive new graphics content generation and optimization techniques. In particular, we developed new methods that generate content whose visual quality aligns well with the requirements of human perception. Our validation experiments demonstrated that these methods either provide better quality with the same computational resources or reduce the computational cost while matching the quality of state-of-the-art techniques. We also developed a novel method for optimizing graphics content to minimize simulator sickness. We successfully demonstrated the performance of the above techniques on various display devices, including VR, holographic, and multilayer displays.
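To make the quality/compute tradeoff concrete, the hypothetical sketch below allocates shading effort according to gaze position, a common way of exploiting reduced peripheral sensitivity. It is a generic illustration under assumed parameters (full_rate_radius, min_rate), not the specific methods developed in the project.

    import numpy as np

    def shading_rate_map(width, height, gaze_px, full_rate_radius=200.0,
                         min_rate=0.25):
        """Hypothetical gaze-contingent shading-rate allocation.

        Returns a per-pixel factor in [min_rate, 1.0]: 1.0 means full shading
        effort near the gaze point, lower values mean coarser shading in the
        periphery, where the visual system is less sensitive to detail.
        """
        ys, xs = np.mgrid[0:height, 0:width]
        dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
        # Linear fall-off beyond the full-rate radius; clamp to the minimum rate.
        falloff = 1.0 - (dist - full_rate_radius) / (3.0 * full_rate_radius)
        return np.clip(falloff, min_rate, 1.0)

    # Example: a 1920x1080 frame with the gaze near the image centre.
    rates = shading_rate_map(1920, 1080, gaze_px=(960, 540))
    print(rates.min(), rates.max())  # coarse in the periphery, full rate at the gaze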

We also presented our initial results on using novel displays to improve human performance in real-world tasks. More specifically, we hypothesized that the flexibility of novel displays in optimizing and adjusting the presented content opens new possibilities for improving human performance. As a first example, we demonstrated the feasibility of speeding up eye movements.
We have already conducted several experiments that significantly broaden current knowledge of human perception, especially in the context of novel, wide-field-of-view displays such as VR and AR headsets. We have also applied the new findings to improve upon state-of-the-art techniques for generating and optimizing content, demonstrating significant gains in both visual quality and computational performance.

So far, we have focused our research on the sensitivity of the human visual system to spatio-temporal luminance signals. We now want to complement our models and methods by considering higher-level components of graphics content, such as structure and motion, as well as accommodation cues. Including them will provide a more accurate perceptual model for synthesizing and optimizing graphics content.

We expect the results of this project to be a comprehensive and modular perceptual model, which captures aspects of human perception that are critical for future display devices, and a set of techniques that utilize it to provide the best tradeoff between visual experience and performance. Furthermore, we hope that our perceptual experiments will reveal the most promising directions for future display development.