CORDIS - EU research results
Content archived on 2024-05-24

Personalised, Immersive Sports TV Experience


Deliverables

The PISTE 3D Template Editor is a visual tool for creating parameterised templates of three-dimensional MPEG-4 BIFS content. The templates are encoded in a high-level XML-based language called X-VRML. The template editor has been designed for use in a broadcasting environment: templates are prepared off-line, prior to the actual broadcast, and can be quickly instantiated during on-line content production. The use of X-VRML templates speeds up the on-line production process while preserving the high quality of the generated content, and template-based production is also much less error-prone than manual methods. The 3D Template Editor is implemented as a set of plug-ins to the 3D Studio Max package, one of the most popular animation tools available for the creation of 3D content. The use of an advanced and widely accepted tool for template authoring ensures high quality of the content as well as the availability of competent animators. A template created with the template editor may contain both standard objects available in the 3D Studio Max package and additional X-VRML-specific parameterised objects provided by the new plug-ins implemented specifically for the template editor. The X-VRML-specific objects fall into two main groups: scene objects and scene modifiers. Scene objects represent parameterised geometrical elements or materials in the scene template. The following scene objects are available: PISTE Text, PISTE Camera, PISTE Avatar, PISTE Reconstructed Environment 3D, PISTE Template 3D, PISTE Object 3D, and PISTE Parameterised Material. Some of the scene objects inherit the functionality of existing 3D Studio Max objects and add features specific to the BIFS/X-VRML standards, including BIFS-specific attributes, X-VRML parameterisation, database access, and additional information for the operator of the on-line production tool.
Other scene objects are used to insert specific elements retrieved from a database into the content; examples are PISTE Avatar, PISTE Environment 3D, PISTE Object 3D and PISTE Template 3D. In contrast to the scene objects, the scene modifiers, although represented in the 3D Studio Max interface and in the X-VRML templates, have no representation in the final content. They are used to parameterise the structure and contents of the virtual scene. Six scene modifiers are supported by the template editor: PISTE Object Selector, PISTE Positioner, Pin Tool, Ruler 1D Tool, Ruler 2D Tool, and Ruler 3D Tool. The 3D Template Editor can connect to the PISTE database to retrieve information about objects in the repository. This feature is used to select existing Environment 3D, Avatar, Embedded Template, Object 3D and Text objects, as well as textures for Parameterised Material. The 3D Template Editor uses the standard 3D Studio Max format (*.max) to save and load templates, and templates can be exported to the X-VRML format using the X-VRML/BIFS Exporter. The 3D Template Editor is also fully integrated with the PISTE Database Manager for creating new and editing existing X-VRML templates.
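The parameterisation mechanism described above can be illustrated with a minimal sketch: a template carries named parameters with defaults, and instantiation substitutes operator-supplied values into the scene elements. The element and attribute names below are hypothetical, not the actual X-VRML vocabulary.

```python
# Minimal sketch of template instantiation: an XML template with parameter
# placeholders is filled in with operator-supplied values. Element and
# attribute names are illustrative, not the real X-VRML vocabulary.
import xml.etree.ElementTree as ET

TEMPLATE = """
<Template name="AthleteCaption">
  <Param name="athleteName" default="N.N."/>
  <Param name="lane" default="1"/>
  <Text string="{athleteName} (lane {lane})" position="0 0 -5"/>
</Template>
"""

def instantiate(template_xml: str, values: dict) -> str:
    root = ET.fromstring(template_xml)
    # Collect parameter defaults, then override with supplied values.
    params = {p.get("name"): p.get("default") for p in root.findall("Param")}
    params.update(values)
    # Substitute placeholders in every attribute of the scene elements.
    for elem in root:
        if elem.tag == "Param":
            continue
        for key, val in elem.attrib.items():
            elem.set(key, val.format(**params))
    # Emit only the scene elements; the parameters are consumed.
    return "".join(ET.tostring(e, encoding="unicode")
                   for e in root if e.tag != "Param")

scene = instantiate(TEMPLATE, {"athleteName": "J. Smith", "lane": "4"})
```

Parameters not supplied by the operator fall back to their defaults, which matches the described workflow where other tools may pre-select parameter values.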
Sports broadcasts are usually covered by external crews using dedicated Outside Broadcasting Vans (OB-Vans). In the PISTE workflow model, we assume that there is one director/producer in the OB-Van of the broadcaster, who communicates with the person in charge in the PISTE van and asks for specific enhancements for specific attempts. The application scenarios of PISTE focused mostly on events related to athletics. Hence the overall idea is that the producer asks for visual enhancements after a race or an athlete's attempt, which are then displayed in the form of a replay clip. Additionally, it is possible to ask for visual enhancements during an event, such as the graphical display of virtual athletes competing against the actual ones. A dedicated software tool was developed to enable this person (director or producer) to quickly:
- view the available possibilities (which visual enhancements are available for this event),
- decide on the appropriate one (each director will develop their own feeling for the best, or at least their favourite, type of enhancement for each sport),
- "issue the command" to carry out the necessary activities (communicate simultaneously with all involved operators), and finally
- control the progress of the work carried out by the operators preparing the visual enhancements.
One of the main assumptions of PISTE is that a team of operators works on the preparation of the visual enhancements before and during the event. The operators all use the same authoring environment, so the assignment of the work is fully up to the producer.
One possible scheme would be to assign specific types of visual enhancements to each of the involved operators; when it is decided to broadcast according to one type of template, each operator would be responsible for preparing one part of the template instantiation and would hence use only a specific subset of the authoring tool. Another option would be to make the task assignments according to the attempts; in this scenario, each operator would use the authoring tool to cover all necessary tasks and do the complete processing for an assigned attempt. Within PISTE, the above-mentioned workflow was implemented in a subsystem developed in the form of Java components (beans). The developed components were integrated into tools developed within the project in order to demonstrate the possibilities of task assignment, progress monitoring, and information flow between different entities in a distributed working environment.
MPEG-4 encapsulation in MPEG-2 transport, in live and carousel modes. Two types of encapsulation are used:
- The first is close to the broadcasting of live MPEG-2 digital television, where the elementary streams are packetised in PES packets (variable length) and then in Transport packets. This mode enables the broadcasting of live programmes.
- The second uses MPEG DVB sections for the download of MPEG-4 "clips" to the decoder. PISTE developed additional requirements concerning the downloading of clips.
Live programme (MPEG-2 PES encapsulation): The FlexMux stream is encapsulated in a single Transport Packet channel and identified by a single PID. PAT and PMT tables are also generated, and PCR time stamps may be inserted if needed. The MPEG-2 system encapsulation software has an off-line version and a real-time version.
Downloading: Clips may be downloaded using a carousel as defined by DVB. The DVB Specification for Data Broadcasting (EN 301 192) proposes five models; among these protocols, Data Carousel and Object Carousel are the only ones that support cyclic transmission by the sender. The Data Carousel is preferable to the Object Carousel because of its simplicity, especially in its one-level configuration. Moreover, the PISTE application does not need the transmission of structured groups of objects such as directories, but only of sets of files.
MPEG-2 transport stream generation: The equipment is built on an industrial PC platform equipped with a fast-access, high-capacity hard disk. It can be used to store and read out SPTS and MPTS. The equipment can easily evolve towards re-multiplexing with live MPEG-2 streams: an input board can be inserted to take a live MPEG-2 multiplex into account.
DVB section Object Carousel encapsulation and insertion in the MPTS stream: The carousel generator detects the repository of files, analyses the description file and encapsulates the data according to the DSM-CC Data Carousel protocol into DVB section format. It also generates the associated signalling tables (PAT, PMT, SDT and NIT).
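The first packetisation stage described above can be sketched as follows. This is a deliberately simplified illustration: a real encapsulator also builds PES headers, PSI tables (PAT/PMT), adaptation fields and optional PCR time stamps, and pads with adaptation-field stuffing rather than payload fill bytes.

```python
# Simplified sketch of splitting an elementary-stream payload into
# fixed-size 188-byte MPEG-2 Transport Stream packets on a single PID.
# PES headers, PSI tables, adaptation fields and PCR are omitted.
TS_PACKET_SIZE = 188
HEADER_SIZE = 4

def packetize(payload: bytes, pid: int) -> list[bytes]:
    packets = []
    chunk_size = TS_PACKET_SIZE - HEADER_SIZE
    for cc, offset in enumerate(range(0, len(payload), chunk_size)):
        chunk = payload[offset:offset + chunk_size]
        pusi = 0x40 if offset == 0 else 0  # payload_unit_start_indicator
        header = bytes([
            0x47,                        # sync byte
            pusi | ((pid >> 8) & 0x1F),  # PUSI flag + PID high bits
            pid & 0xFF,                  # PID low bits
            0x10 | (cc & 0x0F),          # payload only + continuity counter
        ])
        # Pad the last chunk so every packet is exactly 188 bytes
        # (a real multiplexer would use adaptation-field stuffing).
        packets.append(header + chunk.ljust(chunk_size, b"\xff"))
    return packets

pkts = packetize(b"x" * 500, pid=0x100)
```

The fixed 188-byte packet size is what allows the same transport to carry both the live PES-encapsulated programme and the DVB-section carousel data on different PIDs.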
This result is a software library that creates and handles poses and movements of the human body based on 3D joint positions. The human skeleton is described as a set of 18 predefined joints. The kinematic model is based on principal component analysis (a Point Distribution Model) and allows a sports-type-specific representation of possible human body poses. In an offline training phase, a mean skeleton and its modes of variation are calculated from a set of training data and subsequently used online for:
- 3D pose estimation
- 3D pose verification
- 3D pose correction
- 3D pose prediction
This library is used within the PISTE computer vision pipeline module. Additionally, a stand-alone software tool has been implemented to create a sports-type-specific model by semi-automatic generation of 3D skeletons from a pre-calibrated video sequence. Keywords: Motion Analysis, Motion Prediction, 3D Pose Analysis, Principal Component Analysis.
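The offline training step of such a Point Distribution Model can be sketched with NumPy: pose vectors (18 joints × 3 coordinates = 54 dimensions) are reduced to a mean skeleton plus a few principal modes of variation, and projecting a pose onto the model space and back acts as a simple form of pose correction. The training data here is synthetic.

```python
# Sketch of the training phase of a Point Distribution Model:
# 54-dimensional pose vectors (18 joints x 3 coordinates) are reduced
# to a mean skeleton plus the largest principal modes of variation.
import numpy as np

def train_pdm(poses: np.ndarray, n_modes: int):
    """poses: (n_samples, 54) array of flattened 3D joint positions."""
    mean = poses.mean(axis=0)
    centred = poses - mean
    # Eigen-decomposition of the covariance matrix; keep the largest modes.
    cov = centred.T @ centred / (len(poses) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_modes]
    return mean, eigvecs[:, order]

def project(pose, mean, modes):
    """Express a pose in model space and reconstruct it (pose correction:
    components outside the learned modes of variation are discarded)."""
    b = modes.T @ (pose - mean)
    return mean + modes @ b

rng = np.random.default_rng(0)
poses = rng.normal(size=(100, 54))      # synthetic stand-in training set
mean, modes = train_pdm(poses, n_modes=10)
corrected = project(poses[0], mean, modes)
```

Because implausible poses have large residuals outside the retained modes, the same machinery supports pose verification and outlier detection.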
This result is achieved through a stand-alone software tool that uses photogrammetric techniques to model a 3D scene from multiple uncalibrated photographs. Four steps are unified within one application, thereby reducing the required user interaction:
- calibration of cameras,
- polygonal modelling of the relevant surfaces,
- texturing of the resulting 3D model from multiple photographs using advanced image-blending techniques,
- hierarchic semantic description of subsets of the model.
These steps are based on user-supplied point correspondences, which can be tracked automatically through image sequences acquired by TV cameras (field-interlaced half images). Additionally, image sequences with constant camera position can be summarised in a 3D calibrated panoramic view.
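The relation at the heart of such photogrammetric calibration and modelling can be shown in a few lines: a 3×4 camera matrix projects a 3D point to pixel coordinates, and calibration amounts to estimating that matrix from the user-supplied point correspondences (the estimation itself is not shown).

```python
# Sketch of the central photogrammetric relation: a 3x4 camera matrix
# projects a homogeneous 3D point to 2D pixel coordinates.
def project_point(camera, point3d):
    x, y, z = point3d
    u, v, w = (sum(row[i] * c for i, c in enumerate((x, y, z, 1.0)))
               for row in camera)
    return u / w, v / w  # homogeneous -> pixel coordinates

# Toy camera with focal length 100 looking down the z axis: a point
# 2 units in front of the camera at x=1 projects to u = 100 * 1 / 2.
camera = [[100, 0, 0, 0],
          [0, 100, 0, 0],
          [0, 0, 1, 0]]
uv = project_point(camera, (1.0, 0.0, 2.0))
```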
3D pose adaptation fills the gap between the pose prediction, provided by the Zentrum für Graphische Datenverarbeitung (ZGDV), and the actual segmentation results for the current observation, provided by the University of Crete (UoC). From the predicted 3D joint positions in space, a simple, generic human body model is created. Using the camera parameters for each video sequence, provided by the Fraunhofer Institute for Computer Graphics (IGD), synthetic views of this body model are created. Exploiting knowledge about human body articulation, the 3D pose is then modified so that the synthetic views best fit the segmentation results. Since optimal congruence of synthetic views and segmented silhouettes cannot be achieved in a single step, the analysis-synthesis chain needs to be executed iteratively. To ensure convergence, the pose adaptation is performed in a hierarchical manner. The underlying 3D model makes it possible to overcome difficulties caused by self-occlusions of human body parts. The proposed approach exploits the camera estimation and the segmentation results of all available views to explain all observations in the best possible way, i.e. maximum consistency of this information is achieved. The resulting 3D pose is used as input to the following pose prediction step.
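The iterative analysis-synthesis loop can be sketched abstractly. The functions `render_silhouettes`, `silhouette_error` and `refine_step` are stand-ins for the real rendering, silhouette-comparison and articulation-aware refinement components, which are not described in implementable detail here.

```python
# Abstract sketch of iterative analysis-by-synthesis pose adaptation:
# the pose is refined until the synthetic silhouettes stop getting
# closer to the segmented ones, or a step limit is reached.
def adapt_pose(pose, segmented_views, render_silhouettes, silhouette_error,
               refine_step, max_iterations=20, tolerance=1e-3):
    error = float("inf")
    for _ in range(max_iterations):
        synthetic_views = render_silhouettes(pose)       # synthesis
        new_error = silhouette_error(synthetic_views, segmented_views)
        if abs(error - new_error) < tolerance:
            break                                        # converged
        error = new_error
        pose = refine_step(pose, synthetic_views, segmented_views)
    return pose
```

A toy one-dimensional instance (pose and observation as scalars) converges geometrically toward the observation, illustrating why the loop terminates well before the iteration limit.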
PISTE developed a complete end-to-end chain for the broadcasting of visually enhanced interactive content. The infrastructure developed for the broadcasting side is built around a central repository. The current implementation of this repository consists of a database implemented in Oracle 8i (ported to 9i directly after the end of the project) and an abstraction layer (in C/C++ and Java) providing software components with transparent access to the database and encapsulating all specifics of the database management system. The schema of the database can be conceptually divided into four main interconnected parts:
- templates,
- interactive content objects,
- source audio/video,
- sports and broadcast metadata.
The genericity of the database schema is very high, especially in the PISTE-specific parts (namely the interactive content descriptions), to allow for extensions and easy adaptation to other types of interactive content production in sports broadcasting. The system may be used not only for the production of interactive sports broadcasts, but also as the foundation for a specialised content management system, a database of sports information, etc.
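The four-part split can be illustrated with a minimal relational sketch. All table and column names below are illustrative, not the actual PISTE schema, and SQLite stands in for the Oracle database.

```python
# Minimal relational sketch of the four interconnected schema parts.
# Table and column names are illustrative, not the actual PISTE schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE template (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    xvrml_source TEXT          -- the X-VRML template body
);
CREATE TABLE content_object (
    id INTEGER PRIMARY KEY,
    object_type TEXT,          -- image, 3D model, avatar animation, ...
    data BLOB
);
CREATE TABLE source_av (
    id INTEGER PRIMARY KEY,
    uri TEXT                   -- reference to source audio/video material
);
CREATE TABLE sports_metadata (
    id INTEGER PRIMARY KEY,
    event_name TEXT,
    athlete TEXT
);
-- A generated scene ties a template to its parameter values, so any
-- broadcast content can be re-generated later from the archive.
CREATE TABLE content_sequence (
    id INTEGER PRIMARY KEY,
    template_id INTEGER REFERENCES template(id),
    parameters TEXT            -- serialised template parameter values
);
""")
```

An abstraction layer like the one described would hide such SQL behind C/C++ and Java APIs, so that tools never depend on the DBMS specifics.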
Dynamic modelling of virtual reality is a novel technique that makes it possible to build advanced virtual reality applications by creating parameterised templates (models) of the virtual scenes constituting the application, and by dynamically generating instances of those scenes based on the templates and the current values of template parameters, a query provided by a user, data retrieved from a database, user privileges, user preferences, and the current state of the system. To enable dynamic modelling of virtual reality, the X-VRML language has been developed. The Template Editing Plugins for 3D Studio Max provide the ability to visually design parameterised X-VRML templates of virtual scenes. The 3D Studio Max package is one of the most popular animation tools available for the creation of 3D content. The use of an advanced and widely accepted tool for template authoring ensures high quality of the content as well as the availability of competent designers. A template created with 3D Studio Max equipped with the Template Editing Plugins may contain both standard objects available in the 3D Studio Max package and additional X-VRML-specific parameterised objects provided by the Template Editing Plugins. The X-VRML-specific objects fall into two main groups: scene objects and scene modifiers. Scene objects represent parameterised geometrical elements or materials in the virtual scene template. Some of the scene objects inherit the functionality of existing 3D Studio Max objects and add features specific to the X-VRML language, such as parameterisation and database access. In contrast to the scene objects, the scene modifiers, although represented in the 3D Studio Max interface and in the X-VRML templates, have no representation in the final virtual scenes. They are used to parameterise the structure and contents of the virtual scenes.
Depending on the availability of exporters, 3D Studio Max with the Template Editing Plugins may be used to produce X-VRML templates of VRML, X3D, and MPEG-4 BIFS-Text content.
This result is a software library for the semi-automated capturing and modelling of an athlete's motion from multiple video sequences. Sequences are captured and subsequently processed by a chain that incorporates a number of computer vision techniques. First, the athlete's silhouette is determined in each view by a segmentation unit. Then an initial 3D pose is adapted to these observations: a 3D body model is moved into the respective pose and projected into each view, and differences between segmented and synthetically created silhouettes are evaluated in order to determine the pose that best explains the observations. From the 3D joint positions of the adapted body model, rotations for the joint angles are calculated to derive both VRML97 animations and MPEG-4 body animation parameters (BAPs) used to animate the avatar at the receiver side. To overcome measurement errors, such as flickering of the athlete's movements, smoothing splines are used to reduce jittering effects within the completed animations. In order to perform all these steps iteratively, automatically, and reliably, the initial pose is obtained by predicting the deformation parameters from previous poses. The prediction is discipline-specific with respect to a kinematic model and is therefore able to automatically flag a pose untypical for the specific kind of sport as an outlier requiring confirmation or correction.
Within the PISTE project, a complete description of sports events was developed. Although the application area of the project demanded an orientation towards broadcasting environments, the metadata ontology developed within the project is complete and can be reused in a series of other application areas. The database schema contains a structured description of sports events (dates, schedules, records within previous events, etc.), stadiums (capacity, areas, sub-areas, accreditation information, etc.), broadcasters (equipment, position of equipment in stadiums, crews, crew assignments, etc.), athletes (demographic data, record history, injuries, accomplishments, lifestyle and much more), sports types (rules, description, history, types, equipment used, etc.) and all possible interconnections of the above-mentioned categories (e.g. registration of athletes for a given event). The description language is XML, and the modelling is fully compliant with and makes use of MPEG-7 descriptors. The initial implementation took place in Oracle 8i and 9i. A database abstraction layer exists for the programming languages C/C++ and Java. An initial exploitation of this result included the development of a database containing information about athletes and their CVs; for this purpose the schema developed within PISTE was used with no alterations.
This result provides 3D calibration information for single fields and sequences from a TV camera using pre-computed calibrated panorama images. Based on an initial estimate of the calibration parameters, typically retrieved from the previous field in the sequence, the relative orientation of the video field with respect to the panoramic image is computed. From this information, the actual calibration parameters are inferred. In contrast to conventional calibration methods based on views from multiple camera positions, the approach employed here relies solely on the pre-computed calibrated panorama image. This significantly reduces the computation time and increases the robustness of the automated calibration process. Keywords: Photogrammetry, Camera Calibration, Panoramic Imaging.
This tool is designed to provide the experienced 3D Studio Max content creator with the ability to create enhanced animated MPEG-4 content. It is developed as a plug-in and thus integrates seamlessly with other existing 3D Studio Max tools. The tool neither hinders nor alters the content creator's normal workflow for creating animated content; it provides the necessary functionality to access the internal representation of the scene description in 3D Studio Max and convert it to the equivalent MPEG-4 representation, namely BIFS. The tool provides several options for the content creator depending on the intended application of the content. It provides three options for the animation of the content and allows enhanced coding of floating-point numbers that can be decoded according to the Systems part of the MPEG-4 standard. The tool also provides the content creator with the ability to add interaction elements to scenes; while this does not enable interaction within 3D Studio Max itself, the tool interprets these elements and generates the equivalent interaction elements of an MPEG-4 scene. In addition, the tool can create textual output conforming to the non-standardised BIFS-Text format. The TELTEC Exporter has demonstrated that it is possible to use off-the-shelf packages for the creation of MPEG-4 content. This is an important contribution to the acceptance of the MPEG-4 standard: the standard specifies the decoder model, which has been implemented in several players, but if tools are not developed to facilitate the creation of MPEG-4 content, the standard will not achieve wide acceptance. The tool was designed specifically for use within the PISTE project, but it has potential applications in university teaching, and the possibility exists to extend its basic functionality in further research within DCU. A second possibility is to make the tool available (i.e. by placing it on a web page) for use by other R&D institutes, which can serve to gather feedback.
The Dynamic Scene Generator is a tool for the dynamic creation of MPEG-4 content based on X-VRML content templates. The tool has been designed for use within the on-line production chain in a TV broadcasting environment. The Dynamic Scene Generator uses a set of content templates, content stored in a database, and parameters provided by a user (operator) during on-line production to generate the final form of a scene description in MPEG-4 BIFS-Text format. The Dynamic Scene Generator consists of the X-VRML processor, which processes the templates, and a Dynamic User Interface, which allows the operator to provide values for template instantiation parameters. The same interface is used for all templates, but the set of controls presented to the user depends on the selected X-VRML template. To this end, the Dynamic User Interface reads the interface specification of the selected X-VRML template. In the case of embedded templates, the Dynamic User Interface displays controls for the main template and all the embedded templates. Various types of controls can be used in the interface specification, ranging from simple text fields that allow the operator to enter a simple value to complex components with integrated multimedia preview and database connection capabilities. In the PISTE system, the Dynamic Scene Generator and the Dynamic User Interface are integrated within the PISTE Authoring Application. Initial values for controls in the user interface are read from the X-VRML template and can be set by other tools integrated with the PISTE Authoring Application before the Dynamic User Interface is enabled. This allows other tools to pre-select parameter values and relieves the operator of the requirement to enter all values used by the template. When the operator initiates the process of content generation, the Dynamic User Interface passes all parameter values to the X-VRML processor, which produces the final MPEG-4 BIFS-Text content. The values of template parameters are also stored in the database, which makes it possible to re-generate the transmitted content at any time.
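The archive-and-regenerate behaviour can be sketched in a few lines. `run_xvrml_processor` is a stand-in for the real X-VRML processor, and the in-memory dictionary stands in for the database table of stored parameter sets.

```python
# Sketch of content re-generation from stored template parameters:
# every generated scene's parameter set is archived, so exactly the
# same content can be produced again later. `run_xvrml_processor`
# stands in for the real X-VRML processor.
archive = {}  # sequence_id -> (template_id, parameter values)

def generate(sequence_id, template_id, params, run_xvrml_processor):
    archive[sequence_id] = (template_id, dict(params))  # persist a copy
    return run_xvrml_processor(template_id, params)

def regenerate(sequence_id, run_xvrml_processor):
    template_id, params = archive[sequence_id]
    return run_xvrml_processor(template_id, params)
```

Since the processor is deterministic for a given template and parameter set, regeneration reproduces the transmitted content bit-for-bit.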
Important advantages can be gained by using a transactional database as the central repository of data in a multimedia content production environment. These include consistency of data guaranteed by integrity constraints and transactional processing, local and remote access to data, concurrent access by several users, and backup and recovery capabilities. However, the use of a transactional database is impossible without proper tools for managing the diversity of data used in content production environments. The PISTE Database Manager is an integrated tool for managing, in the PISTE database, all kinds of data used in the process of dynamic content creation. These include all types of multimedia objects such as images, movies, texts, 3D models, and animations. Importantly, the list of multimedia data types is extensible, allowing the introduction of new data types without modifying the structure of the database or the database management tools. In addition to the multimedia data, the Database Manager can operate on content templates and content sequences. Content templates make it possible to quickly create high-quality content during on-line content production. Content sequences correspond to sequences of generated content and consist of sets of parameters for content templates together with descriptive metadata. Content sequences make it possible to efficiently organise the process of archiving broadcast content and to re-generate any previously broadcast content sequence or to create a similar one. The PISTE Database Manager contains several management tools for specific types of data. The main tools are the Content Object Manager, Content Type Manager, Template Manager, and Content Sequence Manager. The Content Object Manager administers all kinds of multimedia data used within the MPEG-4 content. Examples of content objects are video clips, images, audio, text descriptions, 3D environments, 3D objects, and BAP avatar animations.
Most content objects are stored internally in the database. In some cases, however, due to special hardware requirements, it is better to store content objects externally (e.g. on special disk arrays for high bit-rate video files). In such cases, the content object file is stored outside the database, while the metadata and the reference to the content object location are stored inside the database. In most cases, content objects stored internally and externally can be used in the same way. The Content Object Manager offers preview for a wide range of multimedia objects, including text objects, images, video clips, 3D environments, 3D objects, and BAP avatar animations. In addition, if a content object is stored inside the database, its contents can be edited with an external application. One of the important features of the PISTE database and the PISTE Database Manager is extensibility: new types of objects can be added to the system without the need to modify the database schema or existing tools, enabling the system to adapt to changing content creation requirements. The types of content objects supported by the database and the Database Manager can be administered with the Content Type Manager: the user can add new types and modify or delete each of the existing content types. The Template Manager administers X-VRML templates stored in the PISTE database. Four types of templates are supported by the Template Manager: 2D scene templates, 3D scene templates, avatar templates and SVG graphics templates. Templates are stored in a hierarchical structure of folders. The Template Manager offers preview of templates stored in the database. A template, along with all the data it uses (e.g. content objects, embedded templates), can be exported from the database and saved to a disk archive file. Such a file can be imported into the same or another PISTE database.
The Template Manager is integrated with the 2D and 3D Template Editors for creating and editing templates. The Content Sequence Manager is a tool designed to administer content sequences in the PISTE database. Content sequences are created during the generation of MPEG-4 scenes in the PISTE Dynamic Scene Generator. Content sequences are organised in the database in a hierarchical structure of folders. The Content Sequence Manager offers preview of content sequences stored in the database.
This result is a software library that generates standard animation output from a sequence of 3D skeletons. Skeletons can be converted from 3D joint positions to 3D joint rotations in the following formats:
- arbitrary combinations of Euler angles,
- quaternion representations,
- axis + radians representation,
- matrices.
Sequences of 3D skeletons can be converted to:
- VRML97 with H-Anim 1.1 / 2000 standard output,
- Body Animation Parameters (BAPs) for MPEG-4 use.
Complete animations can be post-processed using smoothing splines to avoid jittering effects caused by the low spatial and temporal resolution of standard TV cameras. Keywords: VRML, H-Anim, BAP, MPEG-4.
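The conversion between rotation representations can be sketched for a single joint: axis + radians to quaternion, and quaternion to a 3×3 rotation matrix. These are standard formulas, shown here as a self-contained illustration rather than the library's actual API.

```python
# Sketch of converting one joint rotation between the representations
# listed above: axis + radians -> quaternion -> 3x3 rotation matrix.
import math

def axis_angle_to_quaternion(axis, angle):
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)   # normalise the axis
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), x * s, y * s, z * s)  # (w, x, y, z)

def quaternion_to_matrix(q):
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# 90 degrees about the z axis maps the x axis onto the y axis.
m = quaternion_to_matrix(axis_angle_to_quaternion((0, 0, 1), math.pi / 2))
```

Quaternions are the natural intermediate form here because they interpolate smoothly, which also suits the spline-based post-processing described above.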
The PISTE 2D Template Editor is a visual tool for creating parameterised templates of MPEG-4 content for use in an MPEG-4-based TV production environment. The templates are prepared prior to on-line production and instantiated during on-line production by the broadcasting staff. As a result of template instantiation, a complete MPEG-4 scene is generated. The templates are encoded in a high-level XML-based language called X-VRML. The use of X-VRML templates speeds up on-line content production and makes the process much less error-prone. While designing a template, a number of parameterised elements can be used in addition to standard BIFS-Text elements. The final characteristics of the parameterised elements are defined during on-line content generation. The template editor allows a designer to create templates visually and to see what the edited template will look like after final content generation with the default parameter values. The tool is implemented as an extensible framework with all specific elements implemented as plug-ins. The set of supported objects and their attributes is defined in an XML configuration file. Such an approach makes it easy to extend the template editor by changing object definitions or adding new objects. The currently implemented elements are: shape objects (rectangle, square, ellipse, circle), a text object, embedded templates (2D and 3D), specific enhancements (e.g. processed video), and interaction elements (e.g. a user-selectable appearance and a visibility control element). The tool also provides support for the creation of parameterised background and appearance definitions. Images, movies and dynamically generated images (SVG X-VRML templates) can be used as textures. The 2D Template Editor allows parameterisation at the level of fields of BIFS nodes. The 2D Template Editor also allows the designer to enter additional fragments of code between the fragments that are automatically generated as a result of visual composition.
This functionality provides a higher level of flexibility in the creation of templates and allows adding advanced template logic (sensors, routes) that could not be specified graphically. The 2D Template Editor uses two file formats: XDR and X-VRML. The XDR format is an internal XML-based format used to save and load templates; it contains all information necessary to generate the X-VRML template and, additionally, formatting information for the graphical composer. The X-VRML files are generated from the XDR files using XSL. The 2D Template Editor provides both file-based and database storage of templates. Templates can be stored in a database in both the XDR and X-VRML formats. Also, some object attributes (such as textures or embedded objects) may refer to data stored in a database. Database connectivity allows browsing and using data retrieved from a database. The 2D Template Editor allows the designer to define a list of activators. Activators respond to user (spectator) interaction with the final content via a remote control. The designer can define a visibility condition for each of the scene components. To allow defining visibility conditions, the visibility control element has been implemented. For each visibility control element, the user can specify a number of states that can be switched by the use of activators. Each state can be associated with an activator and also with an appearance. Consequently, the visibility control element can have different appearances in its different states, while at the same time switching the visibility of other scene elements. It is also possible to define a visibility condition for the control element itself: if the control element is not visible, it is inactive, so switching an invisible control element has no effect. Such an approach allows creating templates with complex dependencies among scene components, resulting in highly interactive MPEG-4 content.
The activators can also be used for changing viewpoints in embedded 3D templates. When the designer embeds a 3D template, he/she can specify a list of activators that can be bound to viewpoints included in the embedded scene. This eventually allows the spectator to change the viewpoint by pressing buttons on the remote control.
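The activator/visibility-control behaviour described above can be sketched as a small state machine. The class and state names are hypothetical; in the real system this logic is expressed as BIFS interaction elements, not application code.

```python
# Sketch of the activator / visibility-control mechanism: an activator
# (a remote-control button) cycles a control element through its states,
# and a control element is ignored while it is itself invisible.
class VisibilityControl:
    def __init__(self, states, visible=True):
        self.states = states      # e.g. one appearance name per state
        self.current = 0
        self.visible = visible

    def activate(self):
        # An invisible control element is inactive: activation is a no-op.
        if not self.visible:
            return
        self.current = (self.current + 1) % len(self.states)

    @property
    def state(self):
        return self.states[self.current]

ctl = VisibilityControl(["stats-hidden", "stats-shown"])
ctl.activate()   # the spectator presses the bound remote-control button
```

Chaining such elements, where one control's state drives another's visibility, is what produces the "complex dependencies among scene components" mentioned above.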
