CORDIS - EU research results
Content archived on 2024-05-27

Modelling Expressions and Shapes of Human heads

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications from FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

HeadIntegrator allows users to intuitively define and parameterise a complete 3D character model, from head to toe, based on a bare face produced by FaceGenerator and possibly wearing hair from HairStyler. HeadIntegrator automates the whole process of inserting and combining elements, a set of tasks that were previously repetitive and tedious for computer graphics artists to carry out. The approach underlying the HeadIntegrator process comprises two aspects:
- It allows the technical constraints linked to each market (regarding eyes, mouth models, neck, body and associated textures) to be specified once and for all. For the three resolution levels, a database of importable geometric and texture elements is pre-defined.
- It automates the insertion of these various elements into a hierarchical head model (eyes in their sockets, inner mouth in place) and their actual junction with one another.
This complete head is then attached to a body model that is automatically built on an anatomical skeleton fit for animation; the constraints of its anatomical joints have been precisely defined through the analysis of real motion-capture sequences. A set of high-level tools then lets the user refine positions, parameterise any element, and choose an appropriate texture for each of them. Textures can be selected from a predefined database (eye colour, texture of tongue and teeth) or generated from the texture already associated with the face in question.
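As an illustration of the insertion step described above, the following Python sketch builds a hierarchical head model from a pre-defined element database keyed by resolution level. All class names, file names and database contents are invented for illustration; this is not the actual HeadIntegrator API.

```python
# Hypothetical sketch of HeadIntegrator's assembly step: a pre-defined
# element database per resolution level (market), and a hierarchical
# head model into which eyes, inner mouth and neck are inserted.
# Every name and file below is illustrative, not the real plug-in data.

ELEMENT_DB = {
    "high":   {"eyes": "eyes_hi.obj", "mouth": "mouth_hi.obj", "neck": "neck_hi.obj"},
    "medium": {"eyes": "eyes_md.obj", "mouth": "mouth_md.obj", "neck": "neck_md.obj"},
    "low":    {"eyes": "eyes_lo.obj", "mouth": "mouth_lo.obj", "neck": "neck_lo.obj"},
}

class Node:
    """A node in the hierarchical head model."""
    def __init__(self, name, mesh=None):
        self.name = name
        self.mesh = mesh
        self.children = []

    def attach(self, child):
        self.children.append(child)
        return child

def build_head(face_mesh, resolution):
    """Insert the database elements for one resolution level under a root head node."""
    head = Node("head", face_mesh)
    for slot, mesh in ELEMENT_DB[resolution].items():
        head.attach(Node(slot, mesh))
    return head

head = build_head("face_from_facegenerator.obj", "high")
```

The point of the sketch is only that element selection (per market) and element insertion (into the hierarchy) are separate, automatable steps.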
The Expressions and Visemes Generator plug-in is a Maya plug-in that brings the spark of life to 3D characters. To create a vivid facial animation, a series of faces (called target shapes) must be drawn, performing the various movements that occur during the pronunciation of syllables (visemes) as well as several different emotional expressions. They are usually created manually from scratch for each character. The MESH Expressions and Visemes Generator plug-in automates the creation of convincing target shapes. It uses the morphological attribute data parameterised with the FaceGenerator plug-in to automatically adapt the relevant target shapes to the actual morphology of the model created. The set of target shapes comprises:
- The eight most common emotional expressions.
- The visemes necessary for a realistic simulation of lip movements during pronunciation.
- A face with closed eyelids.
In other words, Expressions and Visemes Generator enables the automatic definition of a set of deformation/displacement fields corresponding to visemes and emotional expressions adapted to the morphology of the considered face. Unlike the usual approach, these maps are not generic: to ensure maximum realism regardless of the animation approach used, they are defined as a function of the morphology of the face in question. Each displacement map is directly encoded as a geometric face model. The user can also generate faces corresponding to a weighted mixture of the various emotional expressions and visemes. The generated faces (and the displacement maps corresponding to the difference between these faces and the original face) can be used directly to define a realistic facial animation using a KeyShape/KeyFrame approach in Maya, or in any offline or online application, game or web3D engine. What is more, the results obtained with Expressions and Visemes Generator may be compatible with current voice analysis systems (text-to-speech or voice-to-speech).
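The weighted mixture of expressions and visemes described above can be sketched as a standard blend-shape sum: the mixed face is the base face plus the weighted sum of per-vertex displacement maps. The following minimal Python sketch uses an invented two-vertex toy "face"; it illustrates the principle, not the plug-in's implementation.

```python
# Minimal blend-shape sketch: each target shape yields a per-vertex
# displacement map from the base face, and a mixed face is the base
# plus the weighted sum of displacements. All vertex data are toy values.

def displacement_map(base, target):
    """Per-vertex displacement between a target shape and the base face."""
    return [(tx - bx, ty - by, tz - bz)
            for (bx, by, bz), (tx, ty, tz) in zip(base, target)]

def mix(base, maps, weights):
    """base + sum_i w_i * displacement_i, computed per vertex."""
    out = [list(v) for v in base]
    for w, dmap in zip(weights, maps):
        for v, (dx, dy, dz) in zip(out, dmap):
            v[0] += w * dx
            v[1] += w * dy
            v[2] += w * dz
    return [tuple(v) for v in out]

# Two toy target shapes on a two-vertex "face".
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
viseme_o = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]

maps = [displacement_map(base, smile), displacement_map(base, viseme_o)]
mixed = mix(base, maps, [0.5, 0.5])  # half smile, half "o" viseme
```

Encoding each displacement map relative to the base face is what lets the same weights drive any face generated with matching topology.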
FaceGenerator is a Maya plug-in enabling automatic and intuitive generation of 3D bare faces hardly distinguishable from real ones, using a method resembling profiling (as in a photofit). Statistical analysis of existing human faces, acquired with 3D scanning devices, made it possible to build a continuous representation space (the FaceSpace) and to put the faces in correspondence with a generic 3D model (the face mask). Using FaceGenerator's normalised sliders, a computer graphics artist can then generate a new personalised face by choosing specific morphological attributes, either global (gender, age, skin colour) or local (shape of the nose, chin, forehead or eyebrows). Technically, modelling a face amounts to determining the model in the FaceSpace that matches the morphological description. As the data on which the FaceSpace is built correspond to real faces, the virtual faces created are realistic and convincing. In addition, all the properties of the generic model carry over implicitly to the created face:
- The created face can be used at any definition level (high, medium or low) in order to comply with the respective constraints associated with each market.
- The created face is MPEG-2 compliant.
- Vertex density and distribution have been defined to encompass all human morphological differences and to later allow a "natural animation".
- The created face is ready to be animated, since it automatically inherits from the generic model the basic facial postures corresponding to pronunciation and emotional expressions (target shapes). However, these target shapes do not yet correspond precisely to the created morphology.
Each of the face attributes can be refined separately and the result viewed in real time.
Based on an initial draft obtained in a few minutes, the user can manipulate and refine the model using a high-level interface that allows direct transcription of the visual impression sought (increased masculinity, increased or decreased age, etc.).
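The FaceSpace idea above can be sketched as a standard PCA generative model: a new face is the mean face plus slider-weighted principal components. The mean, components and slider values below are toy numbers, not MESH data.

```python
# Hedged sketch of generating a face from a PCA "FaceSpace": a face is
# represented as a flat vector of vertex coordinates, and a new face is
# mean + sum_i slider_i * component_i. All numbers are invented.

mean_face = [0.0, 0.0, 0.0, 0.0]   # flattened vertex coordinates (toy)
components = [
    [1.0, 0.0, 0.0, 0.0],          # e.g. a "masculinity" axis (assumed)
    [0.0, 1.0, 0.0, 0.0],          # e.g. an "age" axis (assumed)
]

def generate_face(sliders):
    """Return mean_face + sum_i sliders[i] * components[i]."""
    face = list(mean_face)
    for s, comp in zip(sliders, components):
        for j, c in enumerate(comp):
            face[j] += s * c
    return face

face = generate_face([0.8, -0.3])  # strong on axis 1, slightly negative on axis 2
```

Because every generated vector stays inside the span of components learnt from scanned real faces, the output remains plausible for any slider setting, which is the property the text attributes to the FaceSpace.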
HeadAnimator helps bypass the difficulties traditionally presented by creating a realistic animation of a 3D body in which all elements move coherently together. This Maya plug-in consists of an innovative system based on a dynamic representation. It allows rapid animation of 3D virtual characters through the intuitive definition of the animation of all the elements composing the character: facial animation, hair animation and body animation. With HeadAnimator:
- Facial movements corresponding to the computer generation of both emotional expressions and visemes can be intuitively produced dynamically (taking into account the temporal non-linearity of vertex displacements). The resulting animation is based on the specification of high-level characteristic keys; for example, the character says "congratulations" enthusiastically, then closes its eyes with a smile. The computer graphics artist controls the timeline to speed up or slow down the various muscle movements involved. The statistical analysis of several dynamic acquisitions widens the continuous representation space (the FaceSpace in the FaceGenerator plug-in) by adding the notion of a "Dynamic FaceSpace".
- Whole-body positions are defined across the timeline thanks to an inverse kinematics engine, which incorporates the precise anatomical constraints integrated in the skeleton coming from HeadIntegrator. With this technology, character animation amounts to the simple definition of several key attitudes at different moments, the animation in between being consistently interpolated according to human anatomical constraints.
- Hair movements are automatically linked to rigid displacements of the head (translation and rotation of the neck constrained by the animation skeleton). These movements (being dependent on the neck joints) are also transcribed as a set of keys, so that the user can modify the resulting animation.
Each animation sequence is transcribed as an editable timeline (fully compatible with traditional animation standards). It can then be modified or rendered in Maya, in relation with the animation of other elements of the scene, or exported as key frames.
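The key-based animation described above can be illustrated with a minimal keyframe channel: the user sets a few (time, value) keys and the in-between frames are interpolated. Plain linear interpolation stands in here for the plug-in's anatomy-constrained interpolation, which this summary does not specify.

```python
# Illustrative keyframe channel: a few (time, value) keys define the
# animation, and intermediate frames are interpolated. Linear
# interpolation is an assumption made for this sketch only.

def interpolate(keys, t):
    """Evaluate a keyed channel at time t.
    keys: sorted list of (time, value) pairs; values are clamped
    outside the keyed range."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)

# A toy "smile" channel: neutral at frame 0, full smile at frame 10,
# relaxing at frame 20.
smile_keys = [(0, 0.0), (10, 1.0), (20, 0.2)]
value_at_5 = interpolate(smile_keys, 5)  # halfway towards the smile
```

Retiming the animation (speeding up or slowing down a movement, as the text describes) then amounts to moving the key times while keeping the key values.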
The face generator is a Maya plug-in that allows users to create virtual face models using a number of intuitive sliders. The generator is based on the statistical analysis of real human faces. Using Eyetronics' technology, a set of faces was scanned and analysed by UFR to create a static face space using principal component analysis. Further analysis was performed so as to allow navigation through the face space (i.e. the creation of new faces) using intuitive sliders. Accordingly, sliders are provided in the plug-in interface to control age (young/old), gender (male/female), race (Caucasian/Asian/African) and size, as well as more local features such as the lips or the nose. Furthermore, texture and shape were separated during the analysis, allowing the user to control shape and texture independently.
The Expressions and Visemes Generator plug-in is a tool that allows users to create expressions on a static virtual face model. The tool is based on a dynamic face space created by analysing a set of 3D dynamic sequences: using Eyetronics' technology, the performances of several actors were captured at video rate in 3D. These performances contained both visemes and expressions. Using principal and independent component analysis, a dynamic face space was constructed. The plug-in allows a static face model to be imported from the static face generator plug-in and calculates the optimal dynamics, i.e. blend shapes, for that face. Using a set of sliders, the resulting blend shapes can be fine-tuned, giving the animator the flexibility to shape the blend shapes depending on the requirements. Furthermore, the plug-in can also import a data file containing the timing of the phonemes related to an audio file. The plug-in then automatically converts that data file into a set of function curves that operate on the blend shapes chosen for the virtual character. In this case too, the function curves can be fine-tuned in the graph editor.
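The conversion of a phoneme-timing file into function curves on blend shapes can be sketched as follows. The phoneme-to-viseme table, the timing values and the simple 0 → 1 → 0 ramp shape of each curve are all invented for illustration; the plug-in's actual curve shapes are not documented here.

```python
# Sketch of lip-sync curve generation: a phoneme-timing list is turned
# into per-viseme keyframe curves. The mapping table and the ramp shape
# of the curves are assumptions made for this example.

PHONEME_TO_VISEME = {"m": "MBP", "aa": "AA", "oo": "OO"}

def timing_to_curves(timings):
    """timings: list of (start, end, phoneme) tuples.
    Returns {viseme: [(time, weight), ...]} where the weight ramps
    0 -> 1 -> 0 over each phoneme's interval."""
    curves = {}
    for start, end, phoneme in timings:
        viseme = PHONEME_TO_VISEME[phoneme]
        mid = (start + end) / 2.0
        curves.setdefault(viseme, []).extend(
            [(start, 0.0), (mid, 1.0), (end, 0.0)])
    return curves

curves = timing_to_curves([(0.0, 0.2, "m"), (0.2, 0.5, "aa")])
```

Each resulting (time, weight) list plays the role of a function curve on one blend shape, and could then be hand-edited, as the text notes is possible in the graph editor.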
HairStyler makes it possible to model (and subsequently animate) hyper-realistic hairstyles in a few mouse clicks. This Maya plug-in enables users to simulate and control hair growth; they can then comb, dye, curl and brush the hair with virtual hairdressers' tools in order to create the desired hairstyle. 3D characters can now wear extremely realistic, fashionable hair. HairStyler models hair as a fluid flow. With this approach, defining a hairstyle simply amounts to defining the stream flowing around the head and shoulders (whatever their shape) in accordance with fluid-mechanics theory. The technology on which HairStyler is based consists of mathematical models of fluid flow, independent of the type of geometric model used to represent the hair. This makes it possible to represent the hairstyle:
- As a homogeneous streamline-based model for the television and cinema markets.
- As a less detailed model consisting of a set of polygonal layers (stream surfaces) with semi-transparent textures, for the video game and Internet markets.
Users first define the hair itself by indicating its density on the different parts of the scalp and selecting a rough definition of the desired length, thickness and colour. They can then simulate the hair growth and prevent hair/head interpenetration by positioning a set of sources and specifying the operators that influence the flow around the head. Finally, the hairstyle is refined, associated with various effects, and finalised with virtual hairdressing tools. The hairstyle, obtained in only a few hours, is extremely realistic and fashionable. The innovation brought by HairStyler makes it possible to shift from interactive hair modelling redone for each character to an exportable style of ultra-realistic hair that can be matched with either a high-definition head (cinema, television) or a low-definition one (suitable for web 3D and video games).
This process opens new prospects for markets in which such an ultra-realistic hair definition was previously impossible or ruled out for budget reasons.
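The "hair as fluid flow" idea can be illustrated by tracing a hair strand as a streamline, i.e. by integrating a velocity field with Euler steps from a source point on the scalp. The velocity field below is a made-up stand-in for HairStyler's actual fluid-mechanics model.

```python
# Toy streamline tracing: a hair strand is the path of a particle
# carried by a velocity field. The field here (downward "gravity" plus
# an outward push away from the head axis at x = 0) is invented for
# illustration only.

def velocity(x, y):
    """Illustrative flow field; y is unused in this toy example."""
    return (0.2 * x, -1.0)

def trace_strand(x0, y0, step=0.1, n_steps=20):
    """Integrate the field from a scalp source point (x0, y0) with
    forward Euler steps, returning the strand as a point list."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x, y = x + step * vx, y + step * vy
        points.append((x, y))
    return points

strand = trace_strand(0.5, 1.0)  # one strand from a source on the scalp
```

In this picture, the "sources" and "operators" the text mentions would correspond to seed points and local modifications of the field, and a stream surface (for the low-detail representation) would be the surface swept by a whole row of such streamlines.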

