Periodic Reporting for period 2 - MULTIDRONE (MULTIple DRONE platform for media production)
Reporting period: 2018-07-01 to 2019-12-31
A specific novel language has been designed to allow the Director to describe shooting missions. A centralized algorithm for planning the mission has been designed, and on-drone software modules have been developed for mission execution. The tasks assigned to each drone are executed when the associated event is detected. In emergency situations, drones can compute a safe path to the closest landing spot. Drone cinematography has been modeled in terms of more than 20 shooting modes and various shot (framing) types. Furthermore, novel algorithms for formation control, autonomous trajectory tracking and multi-drone collision avoidance have been proposed. Semantic 3D map analysis and enrichment is combined with human crowd detection algorithms to provide semantic information (e.g. landing sites, crowd gathering regions). Research effort on human-centered visual information analysis focused on deriving novel lightweight deep learning architectures for the detection of cyclists, football players, boats and human crowds, among others. Finally, novel research has also been performed on visual quality assessment in various sports environments, employing realistic simulated videos and subjective testing.
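The mission description language itself is not reproduced in this report; purely as an illustration of the event-triggered task model described above, the following minimal Python sketch shows one way shooting actions with trigger events could be represented and dispatched. All class, field and event names are hypothetical, not the project's actual schema.

from dataclasses import dataclass
from enum import Enum
from typing import List, Set

class ShotType(Enum):
    # Illustrative subset of framing types from a drone cinematography taxonomy.
    LONG_SHOT = "long_shot"
    MEDIUM_SHOT = "medium_shot"
    CLOSE_UP = "close_up"

@dataclass
class ShootingAction:
    # One AV shooting action: film a target in a given mode, triggered by an event.
    shot_type: ShotType
    shooting_mode: str        # e.g. "orbit", "chase", "flyover"
    target_id: str            # subject to film (cyclist, boat, ...)
    trigger_event: str        # the task starts when this event is detected
    duration_s: float         # how long to hold the shot

@dataclass
class DroneTask:
    drone_id: str
    action: ShootingAction
    started: bool = False

def dispatch(tasks: List[DroneTask], detected_events: Set[str]) -> List[DroneTask]:
    # Start every pending task whose trigger event has been detected.
    started = []
    for task in tasks:
        if not task.started and task.action.trigger_event in detected_events:
            task.started = True
            started.append(task)
    return started

# Example: a close-up orbit of cyclist "c17", triggered when the race
# leader reaches a waypoint.
mission = [DroneTask("drone_1",
                     ShootingAction(ShotType.CLOSE_UP, "orbit",
                                    target_id="c17",
                                    trigger_event="leader_at_km_40",
                                    duration_s=20.0))]
print(dispatch(mission, {"leader_at_km_40"}))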
The drone hardware incorporates on-board sensors and processing capabilities (two processors) to fly autonomously. The MULTIDRONE platform software, which is the common interface for all modules, consists of ROS services and ROS message types, ensuring system interoperability. Furthermore, the necessary flight supervisor and media (artistic) director GUIs have been designed. Drone-to-ground video streaming of both the shooting camera and the navigation camera over LTE/4G has been achieved successfully in media production scenarios.
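As a minimal sketch of this ROS-based integration pattern, the snippet below publishes a navigation goal using only standard ROS types (rospy, geometry_msgs); the actual MULTIDRONE message and service definitions are project-specific, and the node and topic names here are placeholders.

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def send_waypoint(x, y, z):
    # Publish a navigation goal that any ROS-compliant module can consume.
    rospy.init_node('mission_executor_demo')            # hypothetical node name
    pub = rospy.Publisher('/drone_1/goal_pose',         # hypothetical topic
                          PoseStamped, queue_size=1)
    rospy.sleep(1.0)                                    # let the publisher register
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = 'map'
    goal.pose.position.x, goal.pose.position.y, goal.pose.position.z = x, y, z
    goal.pose.orientation.w = 1.0                       # identity orientation
    pub.publish(goal)

if __name__ == '__main__':
    send_waypoint(10.0, 5.0, 30.0)

Because every module exchanges data through such typed topics and services, a planner, GUI or perception module can be replaced independently as long as it honours the same message definitions.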
The MULTIDRONE system was evaluated in three experimental media production meetings, two carried out in Germany (Bothkamp - Berlin) and one in Spain (Seville), in mock-up and real media production scenarios of a bike race, a rowing regatta and a parkour run. Several system integration issues were exposed and addressed on site or during subsequent integration meetings. Overall, the results were very good: the system managed to fly autonomously, almost all system functionalities were successfully tested, and the system produced quality footage suitable for media production purposes.
MULTIDRONE dissemination and communication activities were diverse, multifaceted and can be considered very successful. They include the publication (or acceptance) of 51 papers in high-quality international scientific conferences and 29 papers in scientific journals, 27 keynote or invited speeches, 6 organized tutorials in prestigious conferences (e.g. ICCV), participation in a number of events (e.g. ERF, EBU, GMF), and a strong presence in social media. 7 exploitation planning events were held during the course of the project. 19 exploitable products and services (including SW code and binaries, mostly open SW; know-how; educational material; database/XML schemas; and the drone platform design) were identified among the ones produced by the project, and more than 150 established industrial players were contacted.
In terms of media production planning, the project has developed a specific novel language for the description of AV shooting missions, which is translated into drone tasks by a mission planner, so that the multi-drone team can execute the shots specified by the media crew. A novel media director dashboard GUI has been developed to this end. Novel algorithms for multi-drone task allocation and scheduling have been designed, since the media AV shooting actions are translated into tasks with time windows that are linked to events. Expected sports events can trigger pre-programmed actions, while unexpected events (e.g. sports accidents) can trigger re-planning procedures or emergency manoeuvres. Moreover, an extended subset of drone AV shot types from the drone cinematography taxonomy has been implemented for execution by the drones, using drone motion trajectories and gimbal/camera commands. A gimbal/camera control solution for tracking moving objects of interest has also been implemented, including tracking with an offset with respect to the image center, zoom control, and customized auto-focus; a sketch of the tracking-with-offset idea is given below. Moreover, optimal trajectory planners for the drones were developed considering collision avoidance constraints, in order to improve the quality of the final video stream by producing smoother camera movements. Finally, field experiments testing the entire system have been performed, including full integration of the different modules.
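As a rough illustration of tracking with an offset, the sketch below implements a simple proportional controller: the pixel error between the target's detected position and a desired (possibly off-center) framing position is converted into gimbal pan/tilt rate commands. The gain, field-of-view values and function name are assumptions for illustration, not the project's actual controller.

import numpy as np

def gimbal_rate_command(target_px, desired_px, image_size, fov_rad, k_p=1.5):
    # target_px, desired_px: (u, v) pixel coordinates of the detected target
    # and of the desired framing position; image_size: (width, height);
    # fov_rad: (horizontal, vertical) camera fields of view in radians.
    err_px = np.asarray(target_px, float) - np.asarray(desired_px, float)
    # Approximate angular error, assuming pixels map linearly to angle.
    rad_per_px = np.asarray(fov_rad, float) / np.asarray(image_size, float)
    pan_rate, tilt_rate = k_p * err_px * rad_per_px   # rad/s gimbal commands
    return pan_rate, tilt_rate

# Target detected at (900, 400) in a 1280x720 frame; frame it left of center:
print(gimbal_rate_command((900, 400), (480, 360), (1280, 720),
                          (np.deg2rad(80), np.deg2rad(50))))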