Final Report Summary - 3FLEX (Depth enabled workflow for flexible 2D and multiview video production)
The EU-funded project “3FLEX: Depth-enabled workflow for 2D and multiview video production” aimed to enable existing 2D and 3D production workflows to use and take advantage of available depth information for new degrees of flexibility, cost savings and improved efficiency.
The main result of the 3FLEX project is an advanced workflow for flexible 2D and 3D production that extends existing post-production platforms with a set of plugins based on cutting-edge computer vision techniques. It covers the whole post-production chain, from the extraction of depth information, through its use for visual effects, to the rendering of different 2D and 3D output formats.
The 3FLEX workflow and tools can be used for a wide range of 2D and 3D productions including fiction, documentaries and commercials. On the input side the workflow supports various camera setups, including stereoscopic and trifocal cameras as well as image+depth sensors. Existing post-production tasks are improved and extended, including depth- and object-based visual effects and depth-based colour grading and finishing. On the output side, different 2D and 3D formats can be rendered with little additional effort.
3FLEX impacts the post-production industry and market with differentiating technologies that take advantage of video+depth information for live action scenes. 3FLEX improves not only the efficiency and flexibility of common post-production tasks but also extends the creative possibilities. The 3FLEX workflow and tools overcome some of the major issues in current 3D post-production workflows, such as native 3D and 2D/3D conversion, and improve visual effects for 2D productions thanks to the efficient extraction and use of depth maps.
The 3FLEX project and its results have been presented at several events, including the 3FLEX booth in the Future Zone of IBC 2015, where near-to-final results were shown. This allowed the project to make a strong impression on the audience and foster interest in future exploitation of the results.
Project Context and Objectives:
The 3FLEX project aims to define and develop an advanced workflow for live action video production that is capable of handling and taking advantage of video+depth information, and to develop the building blocks to implement it. This workflow enhances current 2D and 3D production workflows and promises further cost savings and improvements in terms of production efficiency and workflow automation.
The implementation of the 3FLEX workflow requires the development of tools and components that cover the whole post-production chain: extraction and enhancement of the supplementary depth information on the ingest and pre-processing side, development of new video+depth based retouch and processing tools, and adaptation of existing post-production tools to take advantage of the available depth information. The 3FLEX tools that enable the new workflow are delivered as plugins and upgrades, adding value and functionality to the established post-production platforms mocha and Mistika/Mamba. The 3FLEX outcomes are tested in an experimental production and showcased by academic and industrial partners in research and commercial forums.
The technical and operational objectives of the project are:
- O1. To use and redesign existing camera systems delivering the additional images and metadata in a suitable format, and to provide test data captured by related camera systems.
- O2. To define a complete workflow to deal with and take advantage of the depth information associated with the images.
- O3. To implement software tools that cover all the needs of the innovative workflow: ingest and pre-processing of the enriched data format, depth map generation, depth-image based rendering, and their combination with existing techniques such as scribbling, clean-plate creation and rotoscoping.
- O4. To use depth maps in the post-production process for enhancing the traditional tasks of 2D and 3D composition, colour grading, depth grading, rotoscoping, VFX processing, stereoscopic correction, 3D re-mastering and others.
- O5. To increase the flexibility of current post-production tools by adding new controls for visual effects and image modifications. The auxiliary information enables the artist to apply manipulations locally, based on the spatial, temporal and depth position of individual objects, and to create in a short time sophisticated visual effects that would otherwise be impossible.
- O6. To implement and demonstrate the 3FLEX results with some pre-defined experimental productions to test and showcase the project innovations in 2D and 3D productions.
- O7. To define the strategies and policies for managing and exploiting the intellectual property derived from the activities of the project and to define and implement an exploitation model and its corresponding business plan.
Project Results:
The main results of the 3FLEX project can be grouped into the following categories:
3FLEX workflow and plugin specifications
----------------------------------------------------------------------
The main objectives in this area were to define the overall 3FLEX workflow and its integration with established 2D and 3D production workflows and to determine the functional requirements and specifications for each individual component to ensure their interoperability with the mocha and the Mistika/Mamba platforms.
Existing 2D and 3D production workflows were analysed and, based on this analysis, the 3FLEX workflow was defined. The definition covered not only the interaction between the individual modules that compose the workflow but also their interaction with the different host applications and their existing tools. The functional and technical requirements of the individual modules were also derived in order to fulfil the requirements of the workflow while respecting the technical constraints of the host applications and the plugin interface.
To facilitate the adoption of the 3FLEX workflow in the post-production industry, the plugins have been developed based on the established OpenFX API. Extensions to this API have been proposed, and their support has been implemented in the post-production platforms, to overcome limitations of the standard API when handling multiple views and additional channels (e.g. depth and disparity information) and thus provide an appropriate plugin interface.
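As a purely conceptual illustration of what "multiple views and additional channels" means for a plugin interface, the following Python sketch shows the kind of enriched frame data such an interface has to carry: several named views, each with colour plus optional depth or disparity channels. This is not the OpenFX C API nor the actual proposed extension; all names and types are hypothetical.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    import numpy as np


    @dataclass
    class ViewData:
        """One camera view: colour plus optional auxiliary channels."""
        colour: np.ndarray                       # H x W x 3 colour image
        depth: Optional[np.ndarray] = None       # H x W depth map, if available
        disparity: Optional[np.ndarray] = None   # H x W disparity map, if available


    @dataclass
    class EnrichedFrame:
        """A frame as a multi-view, depth-aware plugin interface would expose it:
        several named views (e.g. 'left', 'centre', 'right') with extra channels."""
        views: Dict[str, ViewData] = field(default_factory=dict)

        def has_depth(self, view: str) -> bool:
            v = self.views.get(view)
            return v is not None and v.depth is not None


    if __name__ == "__main__":
        frame = EnrichedFrame()
        frame.views["left"] = ViewData(colour=np.zeros((1080, 1920, 3), np.float32),
                                       depth=np.ones((1080, 1920), np.float32))
        print(frame.has_depth("left"), frame.has_depth("right"))  # True False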
Depth-enabling pre-processing plugins
----------------------------------------------------------------------
A set of pre-processing plugins has been developed to extract and improve depth information from different input formats (stereoscopic, trifocal, video+depth). These plugins use the proposed OpenFX extension and have been implemented for Mamba and Mistika. This set of plugins includes:
• Stereo and trifocal rectification plugins that correct geometrical distortions and perform joint rectification of multiple views.
• Disparity estimation plugins for stereo and trifocal input data that obtain dense disparity maps for all input views (a simplified sketch of disparity estimation and conversion to depth follows this list).
• A depth filtering plugin that improves the quality of depth data originating from monoscopic depth (e.g. time-of-flight) sensors.
• A disparity to depth conversion plugin that converts disparity to depth information and vice-versa and enables the use of both types of information within the 3FLEX workflow.
• A colour matching plugin that copes with the significantly different colours occurring in trifocal setups due to the use of different cameras.
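To make the disparity estimation and disparity-to-depth conversion steps more concrete, here is a minimal sketch of the kind of processing these plugins perform, assuming an already rectified stereo pair and using OpenCV's semi-global block matcher. It is an illustration only, not the 3FLEX implementation; the file names and the focal length and baseline values are placeholders.

    import cv2
    import numpy as np

    # Placeholder camera parameters (assumptions, not 3FLEX values):
    FOCAL_LENGTH_PX = 1500.0   # focal length in pixels
    BASELINE_M = 0.065         # stereo baseline in metres

    def estimate_disparity(left_bgr, right_bgr):
        """Dense disparity from a rectified stereo pair via semi-global matching."""
        left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
        right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,        # must be a multiple of 16
            blockSize=5,
            P1=8 * 3 * 5 ** 2,         # smoothness penalties (typical values)
            P2=32 * 3 * 5 ** 2,
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )
        # StereoSGBM returns fixed-point disparities scaled by 16.
        return matcher.compute(left, right).astype(np.float32) / 16.0

    def disparity_to_depth(disparity, focal_px=FOCAL_LENGTH_PX, baseline_m=BASELINE_M):
        """Convert disparity (pixels) to metric depth via Z = f * B / d."""
        depth = np.zeros_like(disparity, dtype=np.float32)
        valid = disparity > 0
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth

    if __name__ == "__main__":
        left = cv2.imread("left.png")      # placeholder rectified left view
        right = cv2.imread("right.png")    # placeholder rectified right view
        disp = estimate_disparity(left, right)
        depth = disparity_to_depth(disp)
        vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imwrite("depth_vis.png", vis)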
Depth-enabled post-processing tools
----------------------------------------------------------------------
A set of plugins that exploit depth information for common post-production tasks in 2D, 3D and multi-view productions has been developed for the Mistika/Mamba and mocha platforms:
• A semi-automatic depth map refinement plugin for the Mistika and Mamba platforms to detect, identify and correct incorrectly estimated disparities for both stereoscopic and trifocal camera setups.
• Depth scribbling and depth mapping plugins for artistic manipulation of estimated depth maps and recreation of missing depth information.
• An advanced rotoscoping plugin developed for the mocha platform to automatically obtain a refined binary mask of an object, taking into account the colour along with the depth information.
• A clean-plate creation plugin for planar backgrounds for the mocha platform that removes objects in scenes with a planar background by inpainting the selected region. The generated clean-plate complements existing mocha object removal tools.
• An entity labelling plugin for the Mistika and Mamba platforms that segments a frame into different regions based on colour and depth information. This region-based scene representation helps the user perform editing operations on specific parts or regions of the scene.
• A clean-plate creation plugin for complex scenes for the Mistika and Mamba platforms which interactively removes objects in scenes with a piecewise planar background by filling the hole with realistic background texture and depth values.
• Finally, the Mistika/Mamba and mocha platforms have been enhanced to support the 3FLEX workflow and plugins. Existing nodes in Mistika/Mamba have been extended to take advantage of depth maps for depth- and object-based visual effects and for depth-based colour grading and finishing (a simple sketch of depth-based grading follows below).
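As an illustration of what depth-based colour grading means in practice, the following sketch derives a soft matte from a chosen depth range and applies a simple gain/warmth adjustment only within that range. It is not the Mistika/Mamba implementation; all function names, parameters and the synthetic test data are made up for the example.

    import numpy as np

    def depth_matte(depth, near, far, softness=0.5):
        """Soft matte selecting pixels whose depth lies in [near, far] (metres)."""
        inner = np.clip((depth - near) / max(softness, 1e-6), 0.0, 1.0)
        outer = np.clip((far - depth) / max(softness, 1e-6), 0.0, 1.0)
        return inner * outer  # 1.0 inside the range, fading to 0.0 outside

    def grade_by_depth(rgb, depth, near, far, gain=1.2, warmth=0.05):
        """Apply a simple gain/warmth grade only within a chosen depth range."""
        matte = depth_matte(depth, near, far)[..., None]            # H x W x 1
        graded = rgb * gain + np.array([warmth, 0.0, -warmth])      # brighten and warm (RGB order)
        out = matte * graded + (1.0 - matte) * rgb                  # blend via the depth matte
        return np.clip(out, 0.0, 1.0)

    if __name__ == "__main__":
        h, w = 270, 480
        rgb = np.random.rand(h, w, 3).astype(np.float32)            # stand-in footage
        depth = np.linspace(1.0, 10.0, w, dtype=np.float32)[None, :].repeat(h, axis=0)
        out = grade_by_depth(rgb, depth, near=2.0, far=5.0)         # grade only the mid-ground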
Depth image based multiview rendering plugins
----------------------------------------------------------------------
A set of plugins for rendering high quality output from different image+depth representations has been developed for the Mamba and Mistika platforms. They are capable of delivering multiple output formats including 2D, stereoscopic and autostereoscopic 3D.
• Depth-image based rendering plugins for image+depth, stereoscopic and trifocal inputs that render virtual views from depth-enhanced input and can fill the disoccluded areas using information from the satellite cameras (a simplified rendering sketch follows this list).
• An image+depth based hole filling plugin that generates a sequence of clean-plates providing texture for areas that may become disoccluded, before rendering takes place.
• A post-rendering inpainting plugin that fills disoccluded areas in the synthesized views after rendering.
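The following sketch illustrates the basic principle behind depth-image based rendering and post-rendering inpainting: each pixel is shifted horizontally by a disparity derived from its depth to synthesize a virtual view, and the resulting disoccluded holes are then filled by inpainting. It is a deliberately naive illustration using OpenCV, not the 3FLEX plugins; camera parameters and file names are placeholders.

    import cv2
    import numpy as np

    def render_virtual_view(bgr, depth, focal_px, baseline_m):
        """Naive DIBR: shift every pixel horizontally by its disparity d = f*B/Z and
        keep the closest surface wherever several pixels land on the same target column."""
        h, w = depth.shape
        disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)
        out = np.zeros_like(bgr)
        zbuf = np.full((h, w), np.inf, dtype=np.float32)
        holes = np.full((h, w), 255, dtype=np.uint8)        # 255 marks disoccluded pixels
        for y in range(h):
            targets = np.round(np.arange(w) - disparity[y]).astype(int)
            for x in range(w):
                t = targets[x]
                if 0 <= t < w and depth[y, x] < zbuf[y, t]:
                    zbuf[y, t] = depth[y, x]
                    out[y, t] = bgr[y, x]
                    holes[y, t] = 0
        return out, holes

    def fill_disocclusions(rendered, hole_mask):
        """Post-rendering inpainting of the disoccluded areas (Telea's method)."""
        return cv2.inpaint(rendered, hole_mask, 3, cv2.INPAINT_TELEA)

    if __name__ == "__main__":
        bgr = cv2.imread("view.png")                        # placeholder 8-bit colour frame
        # Placeholder single-channel depth map, values treated as metric depth.
        depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
        virtual, holes = render_virtual_view(bgr, depth, focal_px=1500.0, baseline_m=0.03)
        cv2.imwrite("virtual_view.png", fill_disocclusions(virtual, holes))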
Interoperability and evaluation
----------------------------------------------------------------------
A complete data set including stereo, trifocal and multiple view material has been collected from other research and industrial projects. In addition, dedicated shoots were carried out to obtain test material covering scenarios and conditions specifically needed for the development and evaluation of the 3FLEX workflow and tools: a joint shooting with the SCENE project, a joint shooting in Leipzig with a company interested in the 3FLEX technology, and the shooting for the experimental production in Cologne.
All the plugins have been evaluated successfully through interoperability tests covering individual plugins and plugin chains on different host applications and on the testbeds set up by different partners. The 3FLEX workflow and plugins have been evaluated in terms of quality, usability, flexibility and efficiency using both objective and subjective evaluation methodologies.
An experimental production has been conducted to evaluate the 3FLEX workflow within a real production scenario. A shooting in collaboration with a 3D production company took place in Cologne, using a trifocal and a stereo camera setup. The production is a short documentary about the number Pi featuring a lawnmower artist and targets different 2D and 3D screens.
Potential Impact:
3FLEX was born with the aim of developing a depth-based post-production workflow for live action content in order to improve not only quality, efficiency and flexibility but also the creative possibilities. At a time when differentiating technologies can have a large impact on the changing post-production market, 3FLEX directly tackled the major issues in current 2D and 3D production processes (i.e. fixing binocular rivalry issues, generation of depth maps, inpainting and rotoscoping).
3FLEX has been highly successful in meeting its complex and ambitious scientific, technological and industrial challenges, and has produced post-production tools that go far beyond the current state of the art in the industry, by creating building blocks covering the whole post-production chain (pre-processing, visual effects and finishing) that enable the mocha and Mistika/Mamba platforms to properly exploit the depth channel inside a tailored workflow.
These tools have been developed as a collection of plugins that are integrated in the platforms, following the current trend in the development of post-production software. This facilitates standardization, minimizes the risk of excessive solution diversification and enables an easy and fast adoption by the end users.
The 3FLEX workflow can be applied to all types of 2D and 3D productions (fiction, documentary and commercial), including hybrid 2D or 3D productions combining live action and computer generated footage and 3D productions targeting different displays or formats. In particular, the 3FLEX workflow offers the following benefits:
• It overcomes the limitations of native 3D productions based on stereoscopic cameras and reduces the costs of 2D-to-3D conversions, making productions easier to handle in terms of financial feasibility.
• It allows the re-release of library titles in 3D by applying an optimized 2D-to-3D conversion process. It also improves 3D productions based on 2D/3D conversion when native 3D shooting is too challenging, complex and/or costly for the filmmakers.
• 3FLEX can be used in conventional 2D productions, increasing efficiency without much impact on shooting. Depth maps can be used for VFX tasks via depth-assisted post-production tools (depth-based colour grading, depth-of-field modification, depth-based compositing, etc.).
• 3FLEX allows a production to target a conventional 2D output first and decide later to convert to 3D at reasonable cost and with good quality.
Apart from its potential impact on the competitiveness of the post-production industry, 3FLEX also has a positive impact on the scientific community as well as on the industry and the market. As 3FLEX is a research and innovation project, the benefits for the scientific community lie not so much in advancing the scientific state of the art as in the opportunity to convert research results into solutions for the industry, and in the strong collaborative relationship established between ICT R&D organisations and the media industry. The R&D tasks carried out in this project have also improved the internal skills of the involved industrial partners and stimulated the market of innovative, accessible ICT resources. The close collaboration with R&D partners and the insight into their cutting-edge technology have encouraged the SME partners to think about new innovative products and how they can be used for their own competitive advantage.
List of Websites:
3FLEX website: http://www.3flex-project.eu/
The 3FLEX Consortium
Monica Caballero - 3FLEX project coordinator, monica.caballero@eurecat.org
Eurecat, Spain, http://www.eurecat.org
Lutz Goldmann - 3FLEX Scientific and Technical Coordinator
imcube Labs, Germany, http://www.imcube.com
Miguel Angel Doncel - CEO
SGO, Spain, http://www.sgo.es
John-Paul Smith - CEO
Imagineer Systems, UK, http://www.imagineersystems.com/
Ralf Tanger - Project Manager, Vision & Imaging Technologies
Fraunhofer Institute for Telecommunications Heinrich Hertz Institute HHI, Germany, http://www.hhi.fraunhofer.de