Exploitable results

A 1-shot 3-dimensional sensor

A simple configuration, with corresponding software, has been devised for the acquisition of 3-dimensional data from dynamic scenes. The system is based on the projection of a regular pattern, of which a single image is taken. It is cheap, requiring only one projector and one camera. It produces both a range image and an intensity image of the scene, without any need to align the two, thereby yielding realistic-looking reconstructions. Furthermore, the system is easy to use and calibrate, and the associated software is available.

Because both the range information and the intensity information are obtained from a single image, the system can be used in dynamic environments, allowing robots to navigate during data acquisition and to model or inspect moving objects (eg for virtual reality purposes). This '1-shot' characteristic is rare for an active system. Furthermore, avoiding mechanically driven parts for scanning makes the system more robust and substantially cheaper.

This system has been developed under the ACTS project VANGUARD, as part of the programme's investments in advanced multimedia and user interaction in telecommunications. The system is equally applicable in the areas of production automation and inspection.

Visualization across networks

Visualization across networks based on graphics and the uncalibrated acquisition of real data (VANGUARD) is a project developing flexible methods for 3-dimensional scene modelling. The first central issue is the user-friendly acquisition of models of real objects and their surroundings. Traditionally, painstaking calibration processes are needed to obtain the required precision of reconstruction. Such calibration is not straightforward and renders these approaches unattractive to the user. VANGUARD replaces these techniques with reconstruction from uncalibrated image sequences.
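Returning to the 1-shot sensor: its range computation amounts to ray-plane triangulation. Each stripe of the projected pattern lies in a known plane of light (fixed by the projector's pose), and the depth at a pixel observed on a stripe follows from intersecting the camera's viewing ray with that plane. A minimal sketch, assuming a pinhole camera model; the function name, intrinsics and plane parameterization are illustrative, not the project's actual interface:

```python
import numpy as np

def point_on_stripe(u, v, fx, fy, cx, cy, plane):
    """Triangulate the 3D point for a pixel (u, v) lying on a projected stripe.

    The camera is a pinhole with intrinsics (fx, fy, cx, cy) and its centre
    at the origin. `plane` = (n, d) describes the stripe's light plane
    n . X = d in camera coordinates, known from calibrating the projector.
    """
    n, d = plane
    # Back-project the pixel to a viewing ray through the camera centre.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Intersect the ray t * ray with the light plane: n . (t * ray) = d.
    t = d / np.dot(n, ray)
    return t * ray  # 3D point on the observed surface
```

Because every stripe in the single image is decoded this way, a full range map and the intensity image come from the same exposure, which is what makes the sensor usable on moving scenes.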
This means that a single camera can simply be carried around, with no further requirements on the user who wants to model a scene. Reconstructions are formed from the sequence even though no information on the camera parameters or the camera motion is available.

A second focus is the realistic rendering of the real data from arbitrary viewpoints, including those never seen by the camera that acquired the data. Work on this topic includes the extraction of realistic surface reflectance models from the sequence (eg separate modelling of diffuse and specular reflections), the creation of shadows when light sources are virtually moved, and the introduction of artificial field-of-view and defocus effects.

A third topic is the integration of real and synthetic data. Again, issues such as shadowing and interreflections arise, but natural interactions between real and synthetic shapes are also being investigated.

Results so far include the following:
- theoretical foundations for uncalibrated reconstruction;
- algorithms for software steadycam;
- visualization of generated 3-dimensional models on an autostereoscopic display;
- automatic facial feature extraction from dynamic 3-dimensional face reconstructions;
- a submission to the Moving Picture Experts Group (MPEG-4) Synthetic/Natural Hybrid Coding effort for 3-dimensional geometry-based texture mapping;
- software for image mosaicing.
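The uncalibrated reconstruction listed above rests on epipolar geometry that can be recovered directly from point correspondences between two frames, with no knowledge of the camera's parameters or motion. A minimal sketch of the classical normalized 8-point algorithm for the fundamental matrix; this is a standard textbook method, not necessarily the project's exact pipeline:

```python
import numpy as np

def fundamental_8point(x1, x2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from >= 8
    point correspondences x1, x2 (each an (N, 2) pixel array)."""
    def normalize(x):
        # Translate the centroid to the origin and scale the mean
        # distance to sqrt(2) for numerical conditioning.
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        xh = np.column_stack([x, np.ones(len(x))]) @ T.T
        return xh, T

    x1h, T1 = normalize(x1)
    x2h, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([
        x2h[:, 0] * x1h[:, 0], x2h[:, 0] * x1h[:, 1], x2h[:, 0],
        x2h[:, 1] * x1h[:, 0], x2h[:, 1] * x1h[:, 1], x2h[:, 1],
        x1h[:, 0], x1h[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, a property every valid fundamental matrix has.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

From the fundamental matrix of successive frame pairs, projective camera matrices and a 3-dimensional reconstruction can be obtained, which is why no calibration step is imposed on the user.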