
IMage BAsed Landing Solutions

Periodic Reporting for period 2 - IMBALS (IMage BAsed Landing Solutions)

Reporting period: 2019-09-01 to 2021-02-28

The approach and landing phases are the most critical ones in commercial aircraft operations, and human error contributes significantly to many aircraft accidents. Hence, automation during these phases is a promising path to increase aviation safety, which is in the societal interest. The overall objective of the IMBALS project is to develop, validate and verify a certifiable Image Processing Platform (IPP) and demonstrate it in a Vision Landing System (VLS) that is capable of autolanding a large passenger aircraft based on images supplied by an on-board camera system.
The workload reduction provided by fully automated landing is one of the enablers for the transition to single-pilot operations, which would be one way to address the pilot shortage expected after recovery from the Covid-19 pandemic. This will help to address the societal challenge of mobility.
The IMBALS project started from the Concept of Operations (CONOPS) for automated approach and landing. From there, requirements are being derived for the IPP equipment, and these are driving the IPP development and prototyping. The IPP will be validated and verified on various system test benches (including the Airbus Disruptive Cockpit bench). The aim is to reach TRL5 maturity, and the project places strong emphasis on the safety and certifiability of the system. In particular, the certification of image processing algorithms will require progress beyond the current state of the art.
The following work has been performed in the first and second reporting period:

WP1 Detailed project plan (closed)
A master plan has been approved in a formal planning review and serves as the reference for project management in WP9.

WP2 Definition of requirements
The functional requirements have been defined by Airbus. Initial equipment-level requirements have been captured and are still being reworked in the iterative process between requirements, V&V planning, system definition and validation.

WP3 Validation & verification plan
Several iterations were conducted on the V&V plan. Flight tests have been cancelled and more emphasis is placed on ground test capabilities.

WP4 System definition
KU Leuven demonstrated a first representative algorithm that detects the runway in the images and derives a pose estimate from it. Based on this algorithm, (UN)MANNED defined a first set of vision operators inside their Sol Language to support porting the algorithm to the Sol platform and integrating it into their target hardware. (UN)MANNED defined a first end-to-end algorithm using those vision operators. This was the basis for validation work in WP5.
Meanwhile, KU Leuven further developed its algorithm toolbox with more capabilities (including processing of images from a visual-wavelength as well as an infra-red camera, and a tracking algorithm). In RP2, (UN)MANNED started extending its Sol vision library with KU Leuven's new operators in order to iterate on the current application.
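The concrete KU Leuven and Sol implementations are not described here. Purely as an illustration of the underlying idea, the sketch below (Python with OpenCV, not project code) recovers a camera pose from four detected runway corners with a perspective-n-point solve; the runway dimensions, camera intrinsics and the synthesised corner detections are made-up illustration values.

```python
import numpy as np
import cv2

# Runway corners in a runway-fixed frame (metres): X across the runway,
# Y along the centreline from the threshold, Z up. A 45 m x 3000 m strip is assumed.
runway_m = np.array([[-22.5, 0.0, 0.0],
                     [ 22.5, 0.0, 0.0],
                     [ 22.5, 3000.0, 0.0],
                     [-22.5, 3000.0, 0.0]])

# Assumed pinhole intrinsics for a 1280x720 camera, no lens distortion.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesise "detected" corner pixels from a made-up true pose: camera 900 m
# before the threshold, 60 m above the runway plane, looking along the runway.
# In a real pipeline these 2D points would come from the runway detection stage.
rvec_true = np.array([[np.pi / 2], [0.0], [0.0]])
tvec_true = np.array([[0.0], [60.0], [900.0]])
corners_px, _ = cv2.projectPoints(runway_m, rvec_true, tvec_true, K, dist)

# Recover the camera pose from the 3D-2D correspondences (perspective-n-point).
ok, rvec, tvec = cv2.solvePnP(runway_m, corners_px, K, dist)
R, _ = cv2.Rodrigues(rvec)
cam_pos_runway = (-R.T @ tvec).ravel()   # camera position in the runway frame
print("estimated camera position [m]:", cam_pos_runway)  # approx. [0, -900, 60]
```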
ScioTeq compared different architectures at the system level and the image processing platform level. ScioTeq started the preliminary system safety assessment. ScioTeq and KU Leuven have defined the first concepts for establishing a confidence level for the algorithms' perception output. ScioTeq defined various interfaces and the electronics hardware for the validation prototypes to be realized in WP5.
ScioTeq drafted the technical specification for the test bench software being developed by Tekever ASDS.

WP5 System validation
UNM demonstrated the first end-to-end algorithm in Sol (defined in WP4) on their embedded target hardware (SolAero). This enabled a first measurement of the latency, as well as identification and resolution of the major bottlenecks. The on-target processing latency is a key characteristic that is closely tracked throughout the project.
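Purely as an illustration of how such a latency characteristic can be tracked (this is not the project's measurement setup or the Sol application), the sketch below times a placeholder per-frame processing function and reports mean, 95th-percentile and maximum latency.

```python
import time
import statistics
import numpy as np

def process_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the vision pipeline (detection + pose estimation)."""
    return frame.astype(np.float32).mean(axis=2)   # stand-in workload only

latencies_ms = []
for _ in range(200):
    # Synthetic 1280x720 RGB frame; on a real bench this comes from the camera feed.
    frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
    t0 = time.perf_counter()
    process_frame(frame)
    latencies_ms.append((time.perf_counter() - t0) * 1e3)

print(f"mean {statistics.mean(latencies_ms):.2f} ms, "
      f"p95 {statistics.quantiles(latencies_ms, n=20)[18]:.2f} ms, "
      f"max {max(latencies_ms):.2f} ms")
```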
KU Leuven has iteratively evaluated the performance of its algorithms. The accuracy has been found to be promising. However, the feasible accuracy at longer distances from the runway needs to be traded off against the resolution of the images. KU Leuven also evaluated the sensitivity of the algorithm outputs to image noise.
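As a hypothetical illustration of such a sensitivity evaluation (not KU Leuven's method), the sketch below perturbs the detected runway corners with Gaussian pixel noise, a simple proxy for image noise reaching the feature detector, and reports how the position estimate degrades; the geometry and intrinsics are the same illustration values as in the earlier sketch.

```python
import numpy as np
import cv2

# Same made-up runway geometry, intrinsics and true pose as in the earlier sketch.
runway_m = np.array([[-22.5, 0.0, 0.0], [22.5, 0.0, 0.0],
                     [22.5, 3000.0, 0.0], [-22.5, 3000.0, 0.0]])
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec_true = np.array([[np.pi / 2], [0.0], [0.0]])
tvec_true = np.array([[0.0], [60.0], [900.0]])
clean_px, _ = cv2.projectPoints(runway_m, rvec_true, tvec_true, K, dist)
true_pos = np.array([0.0, -900.0, 60.0])   # camera position in the runway frame

rng = np.random.default_rng(0)
for sigma_px in (0.1, 0.5, 1.0, 2.0):
    errors = []
    for _ in range(100):
        # Perturb the detected corners with zero-mean Gaussian pixel noise.
        noisy = clean_px + rng.normal(0.0, sigma_px, clean_px.shape)
        ok, rvec, tvec = cv2.solvePnP(runway_m, noisy, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        pos = (-R.T @ tvec).ravel()
        errors.append(np.linalg.norm(pos - true_pos))
    print(f"sigma = {sigma_px:.1f} px -> mean position error {np.mean(errors):.1f} m")
```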
ScioTeq prototyped the main building blocks for the validation prototypes and started their validation.

WP6 System verification - WP7 Final report
Not started yet.

WP8 Prestudies (closed)
WP8 can be divided into two main activities: the collection of video footage and a technology survey.
TEKEVER recorded visual and infra-red video footage during flight patterns defined by TEKEVER and ScioTeq. Airbus provided complementary visual-wavelength and fused VIS/IR video footage. This footage serves as a stimulus during the testing of the algorithms. TEKEVER also released a first version of its video player.
The technology survey yielded a "shopping list" of building blocks and principles that could be used in the upcoming work packages; this is captured in D8.2 "Technology Survey Report".

WP9 Project management & dissemination
This work package includes:
- MS1 "Kick-off Meeting".
- Establishing the contractual framework.
- Progress meetings within the consortium and with the topic manager, and quarterly reports to the CS2 JU.
- Meetings of the general assembly.
- Coordinating and launching the request for amendment of the grant agreement.
- Tracking progress against the master plan from WP1.
- Maintenance of deliverables D9.2 and D9.3 "Plan for Dissemination and Exploitation of Results" and the launch of several communication and dissemination actions. Unfortunately, the execution of the dissemination plan has been disrupted by Covid-19.
- Maintenance of an up-to-date log of actions, decisions, risks, project KPIs and key characteristics.
The main progress beyond the state of the art relates to the increased autonomy that will be reached for the landing of large passenger aircraft. About 99% of landings with commercial aircraft still include a phase where the human pilot takes full control of the aircraft. With 49% of fatal aircraft accidents happening in the approach and landing phases, and many of them having a demonstrable contribution from human error, it is clear that fully automated landing will have a positive impact on aviation safety, provided that commensurate safety levels of the automated system can be assured. To reach this goal, the image-based landing function must meet the safety objectives for catastrophic failure conditions. This immediately imposes the highest safety levels on the IPP and its hosted algorithms, which will require novel techniques to establish a measurable confidence level for the output of these algorithms. There is no prior art in certifying image processing algorithms with potentially catastrophic failure conditions for use in commercial aircraft.
The high level of automation enabled by image-based landing will also support the transition from dual- to single-pilot operation in large commercial aircraft, yielding an important competitive advantage for the aircraft manufacturer and the airlines.
Figure: Example of feature detection at touchdown
Figure: Example of feature detection at a slightly longer distance
Figure: On-target demo of runway detection and pose estimation