Community Research and Development Information Service - CORDIS


REALITY CG Report Summary

Project ID: 256941
Funded under: FP7-IDEAS-ERC
Country: Germany

Final Report Summary - REALITY CG (Computer Graphics of the Real World - Realistic Rendering, Modelling, and Editing of Dynamic, Complex Natural Scenes)

The visual world around us abounds with inextricable complexity and bewildering beauty. For a long time, computer graphics tried to emulate the intricacies and aesthetics of the real world by manually building digital, virtual worlds from the ground up, failing time and again to achieve truly authentic realism. In essence, nature is just too complex and detailed for us to successfully imitate her by hand.

The project “Reality CG” set out to overcome this barrier of unrealness in computer graphics. Instead of trying to build a digital copy of some complex, real-world scene by explicitly modelling every necessary detail, our research team pursued a more efficient route: creating digital worlds directly from the “real thing”. Our approach was motivated by the success of movies and TV. Just as we humans accept a simple video of a natural, dynamic scene as an authentic rendition of the live action, we set out to teach our computers how to create realistic digital models of real-world things from conventional video recordings.

Our work started at the end: How do we know when we have succeeded? When is a computer model visually authentic, i.e., as realistic as its real-world counterpart? To answer this question, we looked at brain waves. We used EEG to measure how our brain waves change when we look at a photograph of a real object versus a computer-generated image of its digital model. One result of this research is that we can now reliably detect whether an image or video appears visually authentic to us, or whether artefacts mar its realistic impression.

Nothing having to do with computers is ever completely error-free, and reconstructing digital models from video recordings is no exception. But that does not mean model errors must be annoyingly visible later on. Just as in real life, unfavourable parts of a model may be kept concealed, or errors can be disguised during image computation. For both cases, our team developed suitable algorithmic strategies. As one application scenario, take an arbitrary movie scene depicting an actor or actress whose dress you don't approve of (maybe for fashion reasons, maybe for youth protection). With our algorithms, the director (or someone else) can take the finished movie and still change the actor's clothes so realistically that you won't see that the new dress has been digitally augmented.

Yet another frequent visual limitation is display resolution. In real life, our eyes can see much better than what a computer monitor or a TV set is able to provide. But there is a trick to enhance the apparent resolution of displays. By exploiting how our eyes automatically follow and track moving objects on the screen, we devised a method to enjoy movies in professional cinema resolution on conventional TV sets. We have filed a patent application for this invention.

The technology we have developed constitutes a powerful system for creating visual realism in computer graphics applications. We call it the Virtual Video Camera. It has been used, for example, to produce the music video for the Symbiz song “Who Cares?”. By now, the video has been watched almost three hundred thousand times on YouTube. We are in contact with several companies that wish to use our technology.

The next big thing in visual electronics is head-mounted displays for immersive experiences. So far, however, two major obstacles have prevented their widespread success: mediocre image quality and missing real-world content. Towards the end of the project, we began addressing both challenges based on the expertise we had gathered. To improve display quality, we developed a head-mounted display with an integrated eye tracker. This way, the computer can continuously adapt the displayed view to the user's current gaze direction and exploit the characteristics of our visual perception. In 2015, our prototype won awards at both the IEEE Virtual Reality conference and the ACM Multimedia conference. To tackle the second problem, we are extending our Virtual Video Camera technology towards panorama videos and 360-degree rendering. Both new lines of research owe their success to our sustained work on "Reality CG".


Juergen Hesselbach, (University President)
Tel.: +495313914112
Record number: 183313 / Last updated on: 2016-05-26
Information source: SESAM