Improving image processing for airplane landing systems
The approach and landing are critical and difficult phases of flight. According to IATA’s 2022 safety report, 43% of fatal accidents in commercial aviation between 2012 and 2022 occurred during these stages. Landings typically take place in dense airspace, under a variety of weather conditions and close to terrain and obstacles. Because they come at the end of the flight, and often of the working day, pilot fatigue is at its highest, increasing the possibility of human error. “Fatigue in combination with high workload is a lethal cocktail that may lead to a sequence of events that finally result in an accident,” says Heikki Deschacht, expert electronics & innovation programs at ScioTeq and IMBALS project coordinator. Automation is generally considered a mitigation for this risk, but the current state of the art in automated landing depends on ground infrastructure that is lacking at many airports. Image-based landing systems are a promising development that could assist pilots during these phases of flight, increasing automation of the approach and landing and reducing exposure to human error, without depending on expensive ground infrastructure. In the EU-funded IMBALS project, researchers developed, validated and verified a new image processing system to help planes land automatically.
Spotting the runway
The IMBALS Image Processing Platform (IPP) processes images supplied by an on-board camera system, removing the need for ground-based precision instrument landing aids and other support. The IPP analyses images from a camera mounted in the nose of the aircraft, recognises the runway and then estimates the aircraft’s position relative to it. The position output from the IPP feeds into cockpit displays, telling the flight crew where the aircraft actually is. The aircraft systems also use the IPP position output to fly the aircraft autonomously through the approach and landing.
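To give a rough sense of the geometry behind such a system, a pinhole camera model relates a runway’s known physical width to its apparent width in the image, yielding a range estimate from imagery alone. The function name, focal length and pixel values below are illustrative assumptions, not IMBALS parameters:

```python
# Illustrative pinhole-camera sketch (assumed values, not IMBALS data):
# the distance to the runway threshold can be estimated from the known
# runway width and its apparent width in the image.

def distance_to_runway(runway_width_m: float,
                       runway_width_px: float,
                       focal_length_px: float) -> float:
    """Range estimate from apparent size under a pinhole camera model."""
    return focal_length_px * runway_width_m / runway_width_px

# A 45 m wide runway spanning 15 px with a 2000 px focal length
# corresponds to a range of about 6 km.
print(distance_to_runway(45.0, 15.0, 2000.0))  # → 6000.0
```

A real system such as the IPP estimates the full relative position, not just range, but the same principle applies: known runway geometry plus calibrated camera optics constrain where the aircraft must be.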
Progress through the project
The IMBALS consortium worked with Airbus to understand the requirements for an image-based landing function for next-generation large passenger aircraft, before developing the validation and verification plan for the technology. They then tested the concept and algorithms of an image-based landing system on a certifiable embedded system, using flight simulators and pre-recorded real-world video.
Proof of concept and future research opportunities
The IMBALS project successfully demonstrated the potential of the technology. “With the given camera sensor and system data properties, the IPP is able to estimate the camera position relative to the runway with sufficient performance for automating the approach and landing,” adds Deschacht. In clear weather, the IPP detects the runway from almost 6 km away and produces accurate position data within the last 3 km of the approach. Validation by Airbus showed that the autopilot can use the guidance provided by the IPP, enabling automated landing. The results also proved that the image processing works well with real camera images. “These are already great achievements that motivate us to continue the research for this technology,” notes Deschacht. ScioTeq is now orienting its follow-up R&D efforts towards a certifiable open avionics system with image processing capabilities. “This open system will enable our customers to develop and host various image processing applications, thus not limiting the image processing benefit to image-based landing only,” says Deschacht.
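To illustrate how a runway-relative position estimate could be turned into guidance an autopilot can track, the sketch below computes lateral and vertical deviations from an assumed straight-in approach on a 3-degree glidepath. The coordinate convention, function name and numbers are assumptions for illustration, not the IMBALS interface:

```python
import math

# Hypothetical sketch: convert a runway-relative position estimate into
# deviations from an assumed straight-in, 3-degree glidepath.
# Coordinate convention (an assumption, not the IMBALS definition):
#   x = distance before the runway threshold (m)
#   y = lateral offset from the extended centreline (m, right positive)
#   z = height above the threshold (m)

GLIDEPATH_DEG = 3.0  # typical precision-approach glidepath angle

def approach_deviations(x_m: float, y_m: float, z_m: float):
    """Return (lateral, vertical) deviations in metres from the
    extended centreline and the nominal glidepath."""
    nominal_height = x_m * math.tan(math.radians(GLIDEPATH_DEG))
    return y_m, z_m - nominal_height

# 3 km out, 12 m right of centreline, 160 m above the threshold:
# roughly 3 m above the nominal 3-degree path.
lat, vert = approach_deviations(3000.0, 12.0, 160.0)
```

Feeding deviations of this kind to the flight control laws is conceptually similar to how an autopilot follows instrument landing system signals, except that here the guidance is derived from the on-board camera rather than from ground equipment.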
Keywords
IMBALS, flight, landing, approach, dangerous, image, processing, vision, system, runway, safety