Live Action Data Input and Output

Periodic Reporting for period 1 - LADIO (Live Action Data Input and Output)

Reporting period: 2016-12-01 to 2018-08-31

LADIO will create a central hub with structured access to all data generated on set. This will enable the post-production team to be part of the on-set dataflow, building on the momentum of editing and color grading, which are already initiated directly on set. It will also support digital visual effects (VFX), for which seamless integration between live action and computer graphics elements is even more demanding. LADIO will bring about a VFX paradigm shift by investing more effort on set to drastically improve efficiency, without intruding on the work of the live action crew. The LADIO hardware and software will streamline the setup of all devices for data collection and monitoring, and track the location of all recorded data in time and space in a common 3D reference system. 3D acquisition of the set with a dedicated in-house pipeline is already common practice for high-end productions. LADIO will improve and release new open source software libraries to foster interoperability and collaboration.

Objectives

1. Provide software and hardware to build an on-set pipeline that organizes data and metadata gathered during a shooting session, for any kind of production
2. Improve post-production through the on-set acquisition of additional data with temporal and spatial consistency, organized by narrative units
3. Check data quality to assess whether the shooting objectives have been met
4. Provide film industry SMEs with a new set of tools and services that reduce the impact of the shooting process on overall production costs
5. Provide SMEs and institutions with open source 3D reconstruction technologies

The LADIO project has brought existing state-of-the-art knowledge in computer vision together with the needs of film production with VFX elements. The project has made essential contributions to the commercial QuineBox, which is already in everyday use on film sets on 3 continents, and LADIO has been the driving force behind the release of the open-source 3D reconstruction pipeline AliceVision (https://www.github.com/alicevision/alicevision) and its graphical frontend, Meshroom (https://www.github.com/alicevision/meshroom). Research and development conducted within LADIO has made both of these publicly available results possible, and the contributions are explained in detail in the rest of this section.

The QuineBox is a commercial product today, re-imagined in response to interaction with real users during the project. It combines the ideas of the LADIO CamBox and LADIO SetBox that existed at the time of the proposal, and provides the expected benefits to a distributed film team, including the live action crew and the VFX team.

LADIO has demonstrated how the pieces fit together to enhance the productivity of film production with VFX elements. Based on an offline set reconstruction (using AliceVision), it becomes possible to track the exact location and orientation of a film camera on set. The QuineBox acts as the centerpiece, synchronising the recordings of several cameras (main camera and witness cameras), streaming them live to a previz system, and storing content and metadata for processing in later production stages. It has been demonstrated that previz allows the cinematographer to view the live scene combined with a preview of virtual elements in several modes, and to control camera settings in real time. While all data and metadata are immediately available for post-processing steps on set, they are also immediately uploaded to an arbitrary server (usually a cloud) for conforming and further processing.

The project combines:
- new research topics (rigid moving objects in Structure-from-Motion, aka “multi-body”, and registering a single image to an untextured 3D model)
- new contributions to well-known research topics (uncertainty evaluation of the estimated parameters in Structure-from-Motion)
- high-performance research on well-known computer vision tasks (SIFT feature extraction on GPU, GPU feature matching, GPU depth map computation, PCIe networking for multi-GPU)
- industrial transfer of academic results with engineering refinements to further improve speed and quality (a high-quality Structure-from-Motion and Multiple View Stereo pipeline, integration of local bundle adjustment from SLAM approaches into SfM, integration of 360° cameras and camera rigs)

The industrial and academic partnership has also delivered an open source nodal user interface that enables advanced users and academics to customize their 3D reconstruction pipeline. We hope that it will enable more reproducible research in the field of photogrammetry.
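As a rough illustration of the nodal-interface idea, the sketch below shows how a node graph can resolve an execution order for reconstruction steps. It is plain Python with hypothetical node names loosely mirroring a photogrammetry pipeline, not the actual Meshroom implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One processing step; `inputs` names the upstream nodes it consumes."""
    name: str
    inputs: list = field(default_factory=list)

def topological_order(nodes):
    """Resolve an execution order so every node runs after its inputs."""
    order, seen = [], set()
    def visit(node):
        if node.name in seen:
            return
        for dep in node.inputs:
            visit(dep)
        seen.add(node.name)
        order.append(node.name)
    for node in nodes:
        visit(node)
    return order

# Hypothetical graph: each node consumes the output of the previous one.
init = Node("CameraInit")
feat = Node("FeatureExtraction", [init])
match = Node("FeatureMatching", [feat])
sfm = Node("StructureFromMotion", [match])
mvs = Node("DepthMapEstimation", [sfm])
mesh = Node("Meshing", [mvs])

print(topological_order([mesh]))  # CameraInit runs first, Meshing last
```

A graph representation like this is what lets advanced users rewire or extend individual steps without touching the rest of the pipeline.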

New research topics

New research has been conducted on rigid moving objects in Structure-from-Motion (aka “multi-body”). This work has delivered a new generic feature matching strategy that can be used beyond the multi-body case and has been integrated into the pipeline. Other work on multi-body is still ongoing. New research was also conducted and published on the registration of RGB images to untextured 3D models, on new camera solvers that handle unknown focal length and/or significant distortion, and on the selection of image subsets using machine learning as a new solution to improve on the current vocabulary tree approach. Directly determining camera poses from an image collection was also explored.

New contributions to well-known research topics

A first solution for the uncertainty evaluation of large-scale 3D reconstruction was delivered in 2017 and has been integrated into the pipeline. A new approach, published in 2018, delivers more accurate results faster. New approaches have also been implemented for 3D-3D registration with unknown scale, aligning the point cloud of an image-based SfM+MVS reconstruction to a LiDAR or Kinect point cloud.
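A standard closed-form solution for 3D-3D registration with unknown scale is the Umeyama method, which recovers scale, rotation and translation between two corresponding point clouds. The NumPy sketch below illustrates that formulation on synthetic data; it is not the project's implementation:

```python
import numpy as np

def umeyama(src, dst):
    """Estimate scale s, rotation R, translation t with dst ≈ s·R·src + t
    (Umeyama 1991), e.g. to align an SfM point cloud to a LiDAR scan."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # avoid a reflection
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)         # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known similarity transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = umeyama(src, dst)
print(s)  # scale should be ≈ 2.0
```

With noiseless correspondences the transform is recovered exactly (up to floating point); in practice the method is typically wrapped in a robust loop such as RANSAC to reject outlier correspondences.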

High performance research

SIFT feature extraction on GPU provides a drop-in replacement for CPU implementations. New feature matching algorithms outperform the best approximate algorithms in terms of speed. GPU depth map computation was revised, and low-latency PCIe networking has been adapted to scale GPU computation transparently across several IOMMU-capable PCs.
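For context, the brute-force descriptor matching that GPU implementations accelerate can be sketched as follows. This is a plain NumPy illustration with Lowe's ratio test, not the project's GPU code:

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Exact nearest-neighbour matching between two descriptor sets,
    keeping a match only when the best candidate clearly beats the
    second best (Lowe's ratio test)."""
    # Pairwise squared Euclidean distances, shape (n_a, n_b).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio ** 2 * row[j2]:   # ratio test on squared distances
            matches.append((i, j1))
    return matches

# Toy 2-D descriptors: each row of `a` has one obvious partner in `b`.
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
print(match_ratio_test(a, b))  # [(0, 0), (1, 1)]
```

Exhaustive matching like this is quadratic in the number of descriptors, which is exactly why mapping it onto a GPU is attractive: it stays exact, unlike kd-tree or other approximate schemes, while the parallel hardware absorbs the cost.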

Socio-economic impact

On-set data collection through the QuineBox, combined with automated data delivery into subsequent production workflows, provides an integrated production experience to QUI's customers. The QuineBox has a unique role in automated on-set data acquisition, both from recording devices and from other sensors. The LADIO data model establishes relationships and maintains coherence between assets. It enables the development of frontends for assessing, annotating, importing and exporting all kinds of data for a modern production in the media and entertainment industry.
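A minimal sketch of how such a data model might relate assets to narrative units is shown below. The actual LADIO/QuineBox schema is not public, so every class, field and identifier here is hypothetical and purely illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    """Any recorded artefact: a clip, a sensor log, a 3D scan (illustrative)."""
    asset_id: str
    kind: str            # e.g. "clip", "witness_clip", "imu_log"
    timecode_in: float   # seconds on a common production clock
    timecode_out: float

@dataclass
class Shot:
    """A narrative unit grouping the assets captured for it (illustrative)."""
    name: str
    assets: List[Asset] = field(default_factory=list)

    def concurrent(self, t: float) -> List[Asset]:
        """All assets covering time t -- the temporal coherence the hub keeps."""
        return [a for a in self.assets if a.timecode_in <= t <= a.timecode_out]

shot = Shot("sc12_take3", [
    Asset("A001", "clip", 10.0, 42.0),
    Asset("W001", "witness_clip", 9.5, 43.0),
    Asset("S001", "imu_log", 0.0, 120.0),
])
print([a.asset_id for a in shot.concurrent(20.0)])  # all three overlap t=20
```

The point of linking assets through a shared clock and narrative unit is that a frontend can answer questions like "what else was recorded while this frame was shot" without manual conforming.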

AliceVision (https://alicevision.github.io) is an open source framework that was set up by the LADIO partners to provide a free 3D reconstruction pipeline. By releasing AliceVision in open source, the LADIO partners have set up a collaborative framework with academic and industrial partners. It allows the partners to build a cutting-edge pipeline for visual effects upon a state-of-the-art set of software libraries, and is an enabler for other communities. The platform showcases the LADIO partners' improvements to the state of the art in 3D reconstruction and also enables future reproducible research on Structure-from-Motion and Multiple View Stereo for the entire research community.
Images:
- LADIO cover page
- Maya plugin for previz
- LADIO live camera control application
- Unity plugin for previz
- Meshroom frontend of the AliceVision pipeline