Content archived on 2024-04-19

REALISTIC, AUTONOMOUS 3D VIRTUAL HUMANS AND THEIR INTERACTION IN MULTIMEDIA APPLICATIONS

Exploitable results

In a typical computer graphics application that requires items to be positioned or moved in 3-dimensional space, visual feedback usually consists of a few orthogonal and perspective projection views of the same object in a multiple-window format. This layout may be welcome in a computer-aided design (CAD) system, where an engineer might want to create a fairly smooth and regular shape and then acquire quantitative information about the design. But in 3-dimensional applications where highly irregular shapes are created and altered in a purely aesthetic fashion, as in sculpting or keyframe positioning, this window layout creates a virtually unsolvable puzzle for the brain and makes it very difficult (if not impossible) for the user of such interfaces to fully understand the work and decide where further alterations should be made.

A sculpting approach with a graphical interface based on the 'ball and mouse' metaphor (a mouse for selecting 3-dimensional primitives and performing interactions, and a SpaceBall for controlling the location and orientation of the modelled object) overcomes these limitations of traditional modelling software. The main software components are:
- SCULPTOR, a software package for building 3-dimensional objects and scenes and simulating physical interaction;
- FACE, a software package for the simulation of facial expressions;
- TRACK, software for the simulation of virtual humans;
- COLLISION DETECTION, a set of software algorithms for collision handling of 3-dimensional objects within virtual scenes.

To create more realistic virtual actors for use in the film, television and games industries, agents are being developed which enable virtual actors to perform scripts, including moving within a virtual scene and interacting with virtual objects and other virtual actors.
The agents also allow virtual actors to improvise actions appropriate to their roles when they are not performing an explicitly scripted action.
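The scripted-versus-improvised behaviour described above can be sketched as a simple priority scheme: scripted actions are performed in order, and when the script queue is empty the actor falls back to role-appropriate improvisation. This is an illustrative sketch only, not the project's actual agent architecture; the class and method names are hypothetical.

```python
import random

class VirtualActor:
    """Hypothetical sketch of a scripted virtual actor: explicitly
    scripted actions take priority; with no script pending, the actor
    improvises an action appropriate to its role."""

    def __init__(self, role, improvisations, seed=0):
        self.role = role
        self.improvisations = list(improvisations)  # role-appropriate idle actions
        self.script = []                            # queue of scripted actions
        self._rng = random.Random(seed)

    def direct(self, *actions):
        # Append explicitly scripted actions, to be performed in order.
        self.script.extend(actions)

    def next_action(self):
        if self.script:
            return self.script.pop(0)               # scripted action first
        return self._rng.choice(self.improvisations)  # otherwise improvise
```

For example, a 'waiter' actor directed to serve a table performs those actions first, then improvises (wiping a table, glancing around) until the next scripted cue arrives.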
Because the human face plays the most important role in identification and communication, realistic construction and animation of the face is of immense interest in the simulation of humans. Computer simulation of human facial expressions requires an interactive ability to create arbitrary faces and to provide a controlled simulation of expressions on these faces. The FACE software provides interactive facilities for simulating abstract muscle actions using free-form deformations (FFD). A particular muscle action is simulated as the displacement of the control points of the control unit for an FFD defined on a region of interest. One or several simulated muscle actions constitute a minimum perceptible action (MPA), defined as the atomic action unit from which an expression is built.

In the FACE software, the skin surface of a human face, an irregular structure, is treated as a polygonal mesh. Muscular activity is simulated using rational free-form deformations. To simulate the effects of muscle actions on the skin of a human face, regions are defined on the facial mesh corresponding to the anatomical regions of the face where a muscle action is desired. A control lattice is then defined on the region of interest. The deformations obtained by actuating muscles to stretch, squash, expand and compress the inside volumes of the facial geometry are simulated by displacing the control points of the control lattice. The region inside the control lattice deforms like a flexible volume, according to the displacement and the weight at each control point. The resulting software is subdivided into the following layers: Abstract Muscles, Minimum Perceptible Actions, Phonemes and Expressions, and Expressions and Words.
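The deformation mechanism described above can be illustrated numerically. The sketch below implements a plain rational free-form deformation over a trivariate Bézier control lattice: a point given in the local (s, t, u) coordinates of the lattice is mapped through a weighted Bernstein blend of the control points, so that displacing a control point pulls the enclosed volume smoothly with it, much as a simulated muscle action displaces the skin region under its lattice. This is an illustrative reimplementation of the general technique, not the FACE source code; the function names are mine.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t)
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd_deform(points, control, weights=None):
    """Map points with local coordinates (s, t, u) in [0,1]^3 through a
    rational FFD defined by a control lattice of shape (l+1, m+1, n+1, 3).
    Uniform weights reduce this to an ordinary (polynomial) FFD."""
    l, m, n = (d - 1 for d in control.shape[:3])
    if weights is None:
        weights = np.ones(control.shape[:3])
    out = np.empty((len(points), 3), dtype=float)
    for k, (s, t, u) in enumerate(points):
        num = np.zeros(3)   # weighted sum of control points
        den = 0.0           # sum of rational weights
        for i in range(l + 1):
            for j in range(m + 1):
                for h in range(n + 1):
                    b = (bernstein(l, i, s) * bernstein(m, j, t)
                         * bernstein(n, h, u)) * weights[i, j, h]
                    num += b * control[i, j, h]
                    den += b
        out[k] = num / den
    return out
```

With the control points left at their rest positions on the unit cube, the deformation is the identity; displacing a single control point (e.g. pulling one lattice corner outwards, as a muscle actuation would) drags nearby skin vertices in the same direction, with influence falling off smoothly across the region.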
