
MULTIple DRONE platform for media production

Periodic Reporting for period 2 - MULTIDRONE (MULTIple DRONE platform for media production)

Reporting period: 2018-07-01 to 2019-12-31

MULTIDRONE aims to develop an innovative intelligent multi-drone team platform for media production to cover outdoor events (e.g. sports). The drone team, managed by the production director and his/her crew, has: a) increased multi-drone decisional autonomy, minimizing production crew load and required interventions, and b) improved multi-drone robustness and safety mechanisms (e.g. communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms), enabling it to carry out its mission despite errors or crew inaction and to handle emergencies. Such robustness is particularly important, as the drone team will operate close to crowds and/or may face environmental hazards. The MULTIDRONE project has obtained measurable improvements in multiple-drone autonomy, advancing the development and understanding of new metrics characterizing the operation and scalability of multi-drone systems. Furthermore, it has set new frontiers for TV programme production and drone cinematography, while overcoming regulatory barriers and improving public acceptance of this technology.
Broadcasters (DW, RAI) cooperated with the technical partners to finalize the user requirements for sports shooting. Three different shooting scenarios were detailed. The user requirements were translated into system specifications in terms of drone hardware, ground station and software functionalities. Furthermore, the general architecture for cooperative mission planning and mission execution was designed.
A specific novel language has been designed to allow the Director to describe shooting missions. A centralized mission-planning algorithm has been designed, and on-drone software modules have been developed for mission execution. The tasks assigned to each drone are executed when the associated event is detected. In emergency situations, drones can compute a safe path to the closest landing spot. Drone cinematography has been modeled in terms of more than 20 shooting modes and various shot (framing) types. Furthermore, novel algorithms for formation control, autonomous trajectory tracking and multi-drone collision avoidance have been proposed. Semantic 3D map analysis and enrichment is combined with human crowd detection algorithms to provide semantic information (landing sites, crowd gathering regions, etc.). Research on human-centered visual information analysis focused on deriving novel lightweight deep learning architectures for cyclist detection, football player detection, boat detection, human crowd detection, etc. Finally, novel research has also been performed on visual quality assessment in various sports environments, employing realistic simulated videos and subjective testing.
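To make the event-triggered execution idea concrete, here is a minimal sketch in Python of how a shooting mission could be represented as per-drone tasks tied to trigger events. The mission language itself is not reproduced in this report, so all data structures, field names and event names below are illustrative assumptions, not the project's actual format.

```python
# Hypothetical sketch: a mission as a list of event-triggered shooting tasks.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShootingTask:
    drone_id: str
    shot_type: str       # e.g. "chase", "orbit" (assumed mode names)
    trigger_event: str   # task starts when this event is detected
    duration_s: float

@dataclass
class Mission:
    tasks: List[ShootingTask] = field(default_factory=list)

    def tasks_for_event(self, event: str) -> List[ShootingTask]:
        """Return the tasks whose trigger matches a detected event."""
        return [t for t in self.tasks if t.trigger_event == event]

mission = Mission(tasks=[
    ShootingTask("drone_1", "chase", "race_start", 30.0),
    ShootingTask("drone_2", "orbit", "leader_at_km_5", 20.0),
])

# When the event detector fires, the matching tasks are dispatched.
for task in mission.tasks_for_event("race_start"):
    print(f"dispatching {task.shot_type} shot to {task.drone_id}")
```

In such a design, the centralized planner would assign tasks ahead of time, while the on-drone executor only needs the event stream to decide when each task becomes active.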
The drone hardware incorporates on-board sensors and processing capabilities (two processors) to fly autonomously. The MULTIDRONE platform software, which is the common interface for all modules, consists of ROS services and ROS message types, ensuring system interoperability. Furthermore, the necessary flight supervisor and media (artistic) director GUIs have been designed. Drone-to-ground video streaming of both the shooting camera and the navigation camera over LTE/4G has been achieved successfully in media production scenarios.
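Since the platform's common interface is ROS-based, module interoperability boils down to publishing and subscribing to agreed message types. The following minimal sketch shows one module publishing a pose setpoint over ROS 1 (rospy); the node and topic names are hypothetical assumptions, not the project's actual interface.

```python
# Minimal ROS 1 sketch: a module publishing pose setpoints for a drone.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_target_pose():
    rospy.init_node("mission_executor")            # hypothetical node name
    pub = rospy.Publisher("/drone_1/target_pose",  # hypothetical topic name
                          PoseStamped, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz command stream
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.pose.position.x = 5.0  # placeholder setpoint
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    publish_target_pose()
```

Because every module speaks the same typed messages, a flight supervisor GUI, the mission executor and the drone autopilot bridge can be developed and tested independently.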
The MULTIDRONE system was evaluated in three experimental media production meetings, two carried out in Germany (Bothkamp, Berlin) and one in Spain (Seville), in mock-up and real media production scenarios of a bike race, a rowing regatta and a parkour run. Several system integration issues were exposed and addressed on site or during subsequent integration meetings. Overall, the results were very good: the system flew autonomously, almost all system functionalities were successfully tested, and the system produced footage of a quality suitable for media production purposes.
MULTIDRONE dissemination and communication activities were diverse and multifaceted, and can be considered very successful. They include the publication (or acceptance) of 51 papers in high-quality international scientific conferences and 29 papers in scientific journals, 27 keynote or invited speeches, 6 tutorials organized at prestigious conferences (e.g. ICCV), participation in a number of events (e.g. ERF, EBU, GMF), and a strong presence on social media. Seven exploitation planning events were held during the course of the project. Nineteen exploitable products and services (including SW code and binaries (mostly open SW), know-how, educational material, database/XML schemas and the drone platform design) were identified among those produced by the project, and more than 150 established industrial players were contacted.
The MULTIDRONE consortium has produced several novel research results that go beyond the state of the art in many tasks related to the MULTIDRONE objectives. On-drone visual information analysis focused on proposing novel deep neural networks, usually fully convolutional with specially designed regularizers, so as to be lightweight and deployable on-drone. Additionally, new research directions have been explored, such as deep reinforcement learning for drone control, new neuron types with a paraboloid decision boundary, nonlinear kernel-based learning machines, and deep autoencoders for facial image analysis and automatic pose estimation. Furthermore, research has been performed on real-time target tracking in sports videos using drones, as well as on privacy-preserving technologies, notably face detection obfuscation and face de-identification introducing the k-anonymity concept into specialized neural network training. Significant progress has also been made in 3D world mapping and drone localization. New algorithms for drone localization without GPS are being developed, in which the drone fuses information from visual odometry, IMU, LIDAR and other on-board sensors to estimate the 6D drone pose.
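To illustrate the fusion principle behind such GPS-free localization, here is a toy Python sketch of a linear Kalman filter blending a motion-model prediction with visual-odometry position fixes. The real system estimates a full 6D pose from several sensors; this 1D example and its noise values are assumptions for illustration only.

```python
# Toy Kalman filter: fuse a constant-velocity prediction with VO position fixes.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])             # we observe position only
Q = np.eye(2) * 1e-3                   # process noise (assumed)
R = np.array([[0.05]])                 # visual-odometry noise (assumed)

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial covariance

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the VO measurement.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.11, 0.19, 0.32, 0.41]:     # simulated VO position fixes
    x, P = kf_step(x, P, np.array([[z]]))
print("fused position estimate:", float(x[0]))
```

Extending this scheme to 6D with IMU propagation and LIDAR corrections follows the same predict/correct pattern, typically via an extended or error-state Kalman filter.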
In terms of media production planning, the project has developed a specific novel language for describing AV shooting missions, which is translated into drone tasks by a mission planner so that the multi-drone team can execute the shots specified by the media crew. A novel media director dashboard GUI has been developed to this end. Novel algorithms for multi-drone task allocation and scheduling have been designed, since the AV shooting actions are translated into tasks with time windows that are linked to events. Expected sports events can trigger pre-programmed actions, while unexpected events (e.g. sports accidents) can trigger re-planning procedures or emergency manoeuvres. Moreover, an extended subset of drone AV shot types from the drone cinematography taxonomy has been implemented for execution by the drones, using drone motion trajectories and gimbal/camera commands. A gimbal/camera control solution for tracking moving objects of interest has also been implemented, including tracking with an offset with respect to the image center, zoom control, and customized auto-focus. Moreover, optimal trajectory planners for the drones were developed considering collision avoidance constraints, in order to improve the quality of the final video stream by producing smoother camera movements. Finally, field experiments testing the entire system, including full integration of the different modules, have been performed.
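As a concrete illustration of the gimbal-tracking idea, the sketch below keeps a detected target at a desired offset from the image center by converting the pixel error into pan/tilt rate commands with a proportional controller. The gains, image resolution and field of view are assumptions; the project's actual control law is not detailed here.

```python
# Hypothetical proportional gimbal controller for tracking with an offset.
IMG_W, IMG_H = 1920, 1080
FOV_X_DEG, FOV_Y_DEG = 62.0, 37.0   # assumed camera field of view
KP = 1.5                            # proportional gain (assumed)

def gimbal_rates(target_px, offset_px=(0, 0)):
    """Pan/tilt rates (deg/s) driving the target toward the desired offset."""
    cx = IMG_W / 2 + offset_px[0]
    cy = IMG_H / 2 + offset_px[1]
    # Convert pixel error to angular error via the per-pixel angle.
    err_pan = (target_px[0] - cx) * FOV_X_DEG / IMG_W
    err_tilt = (target_px[1] - cy) * FOV_Y_DEG / IMG_H
    return KP * err_pan, KP * err_tilt

# Target detected right of center: commands a positive pan rate.
print(gimbal_rates((1200, 540)))
```

A nonzero offset lets the operator frame the subject off-center (e.g. rule-of-thirds framing) while the same control loop keeps it locked in place.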
MULTIDRONE project overview.