The purpose of this project is to design and develop a virtual-person animation system for controlled environments, enabling the modelling, analysis and simulation of human motion. The project could contribute to the use of advanced image analysis for animation creation.
Our principal aim is to obtain, within a reasonable time, a realistic animation of a person from a sequence of images taken from different views. This project lays the basis of a system that combines state-of-the-art techniques to carry out the analysis and synthesis of human motion in a common environment.
In this project we propose a biomechanical model built as a hierarchical, articulated structure, in order to establish a correlation between each structural element of the model and the analytical features of images obtained from several suitably calibrated and synchronised views.
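As a hypothetical illustration of such a hierarchical, articulated structure (the joint names, segment lengths and angles below are our own, not part of the project), a skeleton can be represented as a tree of joints whose positions are propagated from the root by forward kinematics:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Joint:
    """One articulated element of an illustrative hierarchical skeleton."""
    name: str
    length: float            # segment length (illustrative units)
    angle: float = 0.0       # rotation relative to the parent, in radians
    children: list = field(default_factory=list)

def forward_kinematics(joint, origin=(0.0, 0.0), parent_angle=0.0):
    """Propagate 2D positions from root to leaves; returns {name: (x, y)}."""
    a = parent_angle + joint.angle
    end = (origin[0] + joint.length * math.cos(a),
           origin[1] + joint.length * math.sin(a))
    positions = {joint.name: end}
    for child in joint.children:
        positions.update(forward_kinematics(child, end, a))
    return positions

# A tiny arm chain: shoulder -> elbow -> wrist
wrist = Joint("wrist", 0.2)
elbow = Joint("elbow", 0.3, angle=math.pi / 2, children=[wrist])
shoulder = Joint("shoulder", 0.3, children=[elbow])
pose = forward_kinematics(shoulder)
```

In such a structure each image feature can be matched against the projected position of one element of the chain, which is the correlation the project describes.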
To ensure a wide range of applications, the individual will not wear any markers or a special suit, although the movement environment will present certain common constraints (a fairly uniform or static background, unchanging lighting, etc.). Given that the main application of the system will be the television production of scenes and animation, the filming environment (studios) meets these conditions perfectly. In any case, these constraints will be relaxed as the tracking and matching criteria and algorithms improve, although the relative benefits and costs of automation must be taken into account.
To complete the process, information on scene colour, grey scale and multiple synchronised cameras or views will be included. The stage will also be modelled, providing a virtual model of both the person and the stage.
Using different algorithms and digital image-sequence processing techniques at several levels, a set of analytical entities will be obtained (points, segments, regions, etc.) which, combined with the biomechanical constraints, will enable us to track, identify and correlate the parts of the synthetic model with the position of the body shown in the different views.
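Because the filming environment provides a fairly uniform, static background, one simple way such analytical entities could be obtained is by differencing each frame against a background image and reducing the resulting mask to points and a bounding region. This is a minimal sketch under that assumption; the function names and threshold are ours, not the project's:

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Mark pixels that differ from a static background as foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold          # boolean foreground mask

def analytical_entities(mask):
    """Reduce a mask to simple entities: foreground points and a bounding box."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return [], None
    points = list(zip(xs.tolist(), ys.tolist()))
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # (x0, y0, x1, y1)
    return points, bbox

# Synthetic example: uniform dark background with a bright "body" region
background = np.zeros((6, 6), dtype=np.uint8)
frame = background.copy()
frame[2:4, 1:5] = 200
mask = extract_silhouette(frame, background)
points, bbox = analytical_entities(mask)
```

Entities like these points and regions are what would then be matched, frame by frame, against the elements of the biomechanical model.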
From the synthesis point of view, the system will generate scenes that have not been previously filmed. Basic modelling and animation of the different elements of the human body will be possible. Special emphasis is placed on modelling, the definition of deformable models, and the analysis and synthesis of mechanisms applied to the human body.
The graphic environment and programming methodology will be fully defined so that a sequence of movements or a simulation, whether real or synthetic, can be reproduced interactively using high-level control and specification mechanisms. At the same time, the routines and/or libraries generated should be portable and easily reusable, as befits an object-oriented program. The participation of companies working in this field will allow us to introduce the results of this new system into commercial products, which at present lack reliable and efficient capture techniques.
Humodan is an exploratory award to prepare for the project described above, which will be submitted in a future call for proposals.
Funding Scheme: EAW - Exploratory awards