
The Virtual Man

Deliverables

Glasgow University was in charge of the creation of the V-Men based on data generated from photographs. The main result is the design and implementation of a complete workflow, from the capture of data from real humans to their integration in virtual environments. The key innovative features are the following:
- Accurate conformation of real scanned data to any mesh topology and resolution.
- User-friendly software allowing any type of user to texture human bodies using a couple of photographs.
- A 3D Studio Max plug-in for segmentation and automatic skinning of a human 3D model, based on a set of postures created by the user.
- Generation of 3D garments by scan subtraction.
An exhaustive report on legal issues related to the creation and use of virtual character 3D scans was also produced ("Legal Assessment of Issues Regarding the Scanning of Individuals for the V-Man Project"), and papers were published at international conferences. With the future commercialisation of the VMAN SDK, Glasgow University will be the ideal partner to provide the required virtual human body library, most likely through its spin-off company "Virtual Clones". Moreover, the results will be used as starting points for further research within the laboratory. More information on the Virtual Man project can be found at http://vr.c-s.fr/vman/
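The summary mentions automatic skinning but does not publish the algorithm. As a point of reference, the standard technique such tools produce data for is linear blend skinning, where each deformed vertex is a weighted blend of the vertex transformed by each influencing bone. The sketch below is a deliberately simplified, hypothetical illustration (translation-only bone transforms, invented type names), not the project's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal 3D vector for illustration.
struct Vec3 { double x, y, z; };

// A bone's current transform, reduced to a pure translation for brevity
// (a real skinning system uses full 4x4 rest-to-posed matrices).
struct Bone { Vec3 translation; };

// Per-vertex skinning data: which bones influence the vertex and how much.
struct VertexWeights {
    std::vector<int> bones;      // indices into the skeleton
    std::vector<double> weights; // should sum to 1
};

// Linear blend skinning: the deformed position is the weight-blended result
// of applying each influencing bone's transform to the rest-pose position.
Vec3 skinVertex(const Vec3& rest, const VertexWeights& vw,
                const std::vector<Bone>& skeleton) {
    Vec3 out{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < vw.bones.size(); ++i) {
        const Bone& b = skeleton[vw.bones[i]];
        const double w = vw.weights[i];
        out.x += w * (rest.x + b.translation.x);
        out.y += w * (rest.y + b.translation.y);
        out.z += w * (rest.z + b.translation.z);
    }
    return out;
}
```

For example, a vertex weighted 50/50 between a stationary bone and a bone translated two units along x moves one unit along x, which is what produces smooth deformation around joints.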
Results of the Virtual Man (VMAN) project will be used by CSTB as support for design reviews and evaluation studies. CSTB has long been involved in sharing and disseminating information to construction companies at both national and European levels. It is a major player in the AEC sector and delivers specialised information to building professionals, in particular through regulations and innovative methods and technologies aimed at improving their business processes. In terms of dissemination and exploitation of the results of VMAN, CSTB aims at:
- Experimenting with, evaluating and demonstrating new Virtual Reality technologies in terms of exploitation perspectives for the construction sector.
- Disseminating and transferring technological advances and best-practice solutions to the whole construction profession, which is currently just starting to use Virtual Reality applications but has not yet used digital models from real sources to produce interactive scenes.
- Using the VMAN prototype to support dedicated services for urban planners and architects, in particular the ability to evaluate a design from different points of view (technical, environmental, etc.) and to interact with it in order to compare different scenarios.
These services will benefit from the Reality Centre™ that CSTB has set up (the centre is now operational and is dedicated to design reviews of urban and architectural scenes). The centre will host design reviews for building projects and urban (re)development actions. A consultancy offer now allows stakeholders (local authorities, users, planners, etc.) to use the centre along with the software tools developed in order to evaluate different scenarios for urban (re)development. Furthermore, it is envisaged to use the Reality Centre in the process of public inquiries.
These inquiries are usually dealt with in a cursory way, since citizens rarely have access to clear and readable information (it generally takes the form of 2D drawings that require special skills to interpret). Public presentations using CSTB's Reality Centre and the VMAN tools would therefore deliver intuitive information about the planned actions and thus increase citizens' involvement in actions that have a direct impact on their quality of life. The VMAN SDK will be used to populate construction projects (buildings and urban scenes) in order to make design reviews more realistic and convincing for stakeholders, thereby addressing a major drawback of the current set of services offered: the built-environment projects are very realistic but lack animation and life. The VMAN SDK will also be used to develop specific applications for testing evacuation procedures from buildings and public spaces in case of emergencies (fire, terrorist attacks, etc.).
The character animation toolkit is the main output of the project. It has been developed by CS and is linked to optional modules developed by project partners, such as the University of Glasgow body-scanning system and the Sail Labs multimodal control system. In terms of innovation, it is one of the first attempts in Europe to propose a cutting-edge solution for virtual and interactive character development that is nonetheless suitable for industrial use. V-MAN recently (May 2004) won two awards at the Laval Virtual conference in France. The jury, composed of members of the scientific committee, industry representatives and specialised journalists, warmly received the demonstrations, acknowledging their scientific and industrial innovation. Since the writing of the proposal, a Dutch competitor has appeared (http://www.mysticgd.com), but it is positioned on the games market; commercially speaking, the real competition is clearly located in the US with the Di-Guy software (www.bdi.com). This product is well established in industry and, above all, with the US defence sector. It is less advanced technologically but offers, in addition to its tens of customers, a large library of characters and moves and integration with the Multigen simulation platform. The V-MAN system allows developers of 3D interactive applications to populate their simulations with virtual characters thanks to:
- Authoring applications to create the V-MAN skeleton, skin, clothes, moves, physics properties and accessories;
- A Software Development Kit that gives life to the V-MAN through 256 C++ functions and libraries of characters, clothes and moves.
The authoring tools and the SDK developed during the project are described below. Body Authoring Tool: the tool supports the production of the mesh file (the V-MAN representation) that will be animated by the SDK.
The user defines a skeleton that fits the 3D character representation, places the skeleton within the character, links the skin representation to the bones and, finally, sets hotspots on the body to allow the V-MAN to interact with the environment. Physics Editor: the tool helps define collision volumes for objects and virtual characters so that dynamics principles can be applied to them. It also allows the limits of each joint of a virtual character to be defined, in order to produce a realistic collision response. Object Editor: the tool defines all the actions a V-MAN can perform on a given object and how it can perform them. The user may define object hotspots for the V-MAN to interact with the object; a hotspot may be, for instance, the position where the object should be grabbed. One may also define editing parameters to improve the interaction with the object. For instance, if a move is defined to sit a character on a specific chair and the user wants to reuse this move to sit the character on another chair, he can specify where the V-MAN has to sit (i.e. match the sitting hotspot of the object with the sitting hotspot of the V-MAN) and when it will be in contact with the object. Graph Editor: the graph editor builds motion graphs. A motion graph is a state graph used to chain different animations smoothly: each state corresponds to a motion-capture file associated with a key parameter and a cyclic flag, and a directed transition between two states represents a possible transition from one movement to another. Modular SDK: a key feature of the V-MAN system is that the developer can use one part of the system or the whole system. For instance, the user may decide to use the system without the physics engine or the voice system, and this should not impact his development. The developer may also use different entry points into the SDK: he can work at a high level by controlling his V-MAN with high-level behaviours, or at a low level by adding animations to the V-MAN directly.
The user can also tune the system, for instance by changing the rendering engine or the physics engine used in the simulation. The following components of the V-MAN system can be swapped:
- Rendering engine: the rendering methods are implemented for two rendering engines, Vertigo3 and Performer, and other rendering engines can be added very easily.
- Physics engine: three different physics engines can be used: Karma from Mathengine, ODE and Tokamak.
- Path-planning engine: two path planners are available, and the user can develop his own path planner and plug it into the system.
Moreover, the SDK is easily extensible; the user can, for instance, add his own behaviour to the system and apply it to a V-MAN. The libraries of characters, clothes and moves are under development and will comprise 30 characters, clothes and accessories together with 50 moves. The commercial launch is planned for autumn 2004.
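The motion graph built by the Graph Editor is a standard animation data structure, and its core can be sketched generically. The class and method names below are hypothetical illustrations, not the V-MAN SDK's actual API: each state holds a motion-capture clip and a cyclic flag, and chaining two animations is legal only along a declared directed transition.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// A state in the motion graph: a motion-capture clip plus a cyclic flag
// (cyclic clips, such as walk loops, may chain back to themselves).
struct MotionState {
    std::string clip;
    bool cyclic;
};

// Directed motion graph: states plus the allowed transitions between them.
class MotionGraph {
public:
    void addState(const std::string& name, bool cyclic) {
        states_[name] = MotionState{name, cyclic};
        if (cyclic) edges_[name].insert(name); // a loop may repeat itself
    }
    void addTransition(const std::string& from, const std::string& to) {
        edges_[from].insert(to);
    }
    // Chaining two animations is legal only along a declared transition.
    bool canChain(const std::string& from, const std::string& to) const {
        auto it = edges_.find(from);
        return it != edges_.end() && it->second.count(to) > 0;
    }
private:
    std::map<std::string, MotionState> states_;
    std::map<std::string, std::set<std::string>> edges_;
};
```

With states "stand", "walk" (cyclic) and "sit", and transitions stand→walk and walk→sit, the graph allows stand→walk and walk→walk but forbids chaining stand directly into sit, which is exactly the kind of constraint that keeps transitions looking natural.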
As an independent producer, HD Thames is interested in a broad spread of uses for virtual humans in television, film and interactive cross-media applications across a wide field: animation, virtual stuntmen, virtual presenters and virtual storytelling. As the focus in film and television has shifted increasingly towards post-production, the door has opened for user-friendly software toolkits that enable creation to take place at the keyboard. HD Thames' principal interest in the V-Man project has been the area of virtual storyboards, developed during two earlier IST projects, VISIONS and VISTA. Work on these two projects highlighted the need for a new generation of virtual people who could be easily created, customised, clothed and directed. The need here was not for a photorealistic style, but for a credible level of human representation that could evoke emotion and stir the imagination of the viewer. The V-Man project has delivered the means to create these virtual people, through a Body Authoring Tool (BAT) and a Software Development Kit (SDK). It is now possible, working from a small basic library, to create and customise a skeleton, skin it to represent human types, and clothe it. For efficiency, clothes replace the skinning of covered parts of the body rather than becoming an additional layer. Motion capture can then be applied to the virtual human to make it move or behave. For actions such as walking, footsteps can be determined by path planning, and the behaviour of the body further modified by inverse kinematics and a physics engine. Motion blending enables smooth transitions between different motion captures, while hotspots enable the virtual humans to interact meaningfully with objects such as chairs and doors. Cloning makes it possible to have a number of V-Men working on a virtual set in real time.
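The motion blending mentioned above is described but not specified; the usual baseline is a cross-fade that interpolates joint poses from the outgoing clip towards the incoming one over a transition window. The function below is a hypothetical sketch of that idea, not the project's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A pose as a flat vector of joint angles (radians), one entry per joint.
using Pose = std::vector<double>;

// Linear cross-fade between two clips: at t = 0 the result is the outgoing
// pose, at t = 1 the incoming pose. Production systems blend rotations with
// quaternions; plain per-joint interpolation keeps the sketch short.
Pose blendPoses(const Pose& outgoing, const Pose& incoming, double t) {
    Pose result(outgoing.size());
    for (std::size_t j = 0; j < outgoing.size(); ++j)
        result[j] = (1.0 - t) * outgoing[j] + t * incoming[j];
    return result;
}
```

Sampling t from 0 to 1 over the transition window yields the smooth hand-over between, say, a walk clip and a sit clip.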
Professional workers in film and television have been impressed with the quality of these figures; however, to be widely usable in the production community they need to be incorporated into a storyboarding shell with substantial libraries and the facility to create sets, add props, and direct and "film" the action. We intend to pursue the commercial possibilities for this in the coming months. V-Men have a key role in new fields such as interactive storytelling and cross-media content delivery, where artificial humans can provide automated interfaces for voice and data exchange. They may also come to play an important role in speeding up the creation of conventional linear animation. More extensive use in film and television will require V-Men to develop from a symbolic "cartoon" representation into fuller realism, in terms of both appearance and behaviour. There are clear needs for V-Men to do difficult or dangerous tasks currently undertaken by stunt-men, as well as for the creation of fantasy creatures, but these are major challenges for the future. V-Man has shown that it is possible to take a highly complex task and simplify it so that it becomes more accessible to the non-expert. The creation of virtual people has clear benefits for those working in the audio-visual industries, and further work may extend access further, to the ordinary European citizen.
The Sail Labs Conversational System is a man-machine dialog system for spontaneous, speaker-independent, natural dialog interaction. The result of the V-MAN project is the integration of the Conversational System into the V-Man 3D core, so that V-Man avatars can interact with the user through voice commands. The integration has been implemented as a plug-in, and an XML-style information exchange has been specified and implemented to:
- Notify the dialog system about objects, persons and locations in the 3D space.
- Notify the 3D code about user commands.
In the event of ambiguous commands, the dialog system can resolve the conflicting information by asking disambiguating questions, which are spoken using text-to-speech technology. Example:
User: "John, go to the chair"
System: "The red chair or the blue chair?"
User: "The red one"
In this scenario the dialog system notifies the 3D code of the user's request to move avatar "John" to the red chair and sit him on it. Potential offered for further dissemination and use:
- Reduce the cost of customer interaction.
- Reduce waiting time on hotlines.
- Free human resources from repetitive work.
- Enable life-like virtual worlds.
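The red-chair exchange above implies a simple resolution loop: match the spoken reference against the objects the dialog system has been notified about, and ask a disambiguating question whenever more than one object matches. Since the actual XML schema and plug-in API are not published, the sketch below uses invented names to illustrate only that control flow:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A scene object the dialog system has been notified about.
struct SceneObject {
    std::string name;   // e.g. "chair"
    std::string colour; // e.g. "red"
};

// Resolve a spoken reference such as "the chair" against the known objects.
// An empty colour means the user gave no qualifier. More than one match
// means a disambiguating question ("The red chair or the blue chair?")
// must be asked before the 3D code can be notified.
std::vector<SceneObject> resolve(const std::vector<SceneObject>& scene,
                                 const std::string& name,
                                 const std::string& colour = "") {
    std::vector<SceneObject> matches;
    for (const auto& obj : scene)
        if (obj.name == name && (colour.empty() || obj.colour == colour))
            matches.push_back(obj);
    return matches;
}
```

In the example dialog, "the chair" yields two matches, the system asks its question, and the follow-up "the red one" narrows the match set to a single object, which is then passed to the 3D code.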
