
Modern ATM via Human/Automation Learning Optimisation

Periodic Reporting for period 2 - MAHALO (Modern ATM via Human/Automation Learning Optimisation)

Reporting period: 2021-06-01 to 2022-11-30

The MAHALO project started from simple questions: In the emerging age of Machine Learning, should we be developing automation that is conformal to the human, or should we be developing automation that is transparent to the human? Do we need both? Further, are there trade-offs and interactions between the concepts, in terms of operator trust, acceptance, or performance?

To answer these questions, the MAHALO team first defined an ATM Concept of Operations and User Interface on which to base this work (see deliverable D2.2, the Concept of Operations report, earlier in this series). Second, the team developed an automated conflict detection and resolution (CD&R) capability, realised in a prototype hybrid Machine Learning (ML) system combining several architectures.
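For readers unfamiliar with CD&R, the short Python sketch below illustrates, in simplified form, how a hybrid advisory could pair a geometric conflict detector with a learned resolution model. It is a minimal sketch under assumed conventions: the function names, the feature vector, and the generic `resolution_model` with a scikit-learn-style `predict` are illustrative and not taken from the project's implementation.

```python
import numpy as np

SEPARATION_NM = 5.0  # standard en-route lateral separation minimum

def time_to_cpa(p1, v1, p2, v2):
    """Time until the closest point of approach of two aircraft,
    given 2D positions (NM) and constant velocities (NM per hour)."""
    dp, dv = p2 - p1, v2 - v1
    denom = np.dot(dv, dv)
    return 0.0 if denom == 0 else max(0.0, -np.dot(dp, dv) / denom)

def detect_conflict(p1, v1, p2, v2, minimum=SEPARATION_NM):
    """Flag a conflict if the predicted separation at CPA is below the minimum."""
    t = time_to_cpa(p1, v1, p2, v2)
    sep_at_cpa = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t))
    return sep_at_cpa < minimum, sep_at_cpa, t

def advise(p1, v1, p2, v2, resolution_model):
    """Hybrid advisory: geometric detection first, then a learned model
    proposes a heading change for one of the two aircraft."""
    in_conflict, sep, t = detect_conflict(p1, v1, p2, v2)
    if not in_conflict:
        return None
    features = np.concatenate([p1, v1, p2, v2, [sep, t]])
    heading_change = float(resolution_model.predict(features[None, :])[0])
    return {"aircraft": "first", "heading_change_deg": heading_change}
```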

The aim of the current work was to use these foundations to address a more specific research question, as originally laid out by MAHALO, and to experimentally evaluate, using human-in-the-loop (HITL) simulations, the relative impact of conformance and transparency of advanced AI in terms of, for example, controller trust, acceptance, workload, and human/machine performance. The broad research question to be addressed was redefined as:
"How does the strategic conformance and transparency of a machine learning decision support system for conflict detection and resolution affect air traffic controllers’ understanding, trust, acceptance, and workload of its advice and performance in solving conflicts, and how do these factors (conformance and transparency) interact?"
MAHALO conducted field simulations to evaluate the impact of conformance and transparency manipulations on controller acceptance, agreement, workload, and general subjective feedback, among other measures. Each simulation consisted of two phases. In the second and most important phase (the Main Experiment), conformance and transparency were manipulated within participants. Conformance was implemented as either a personal model, a group model, or an optimal model. ML was used to build the group and optimal models, whereas a synthetic approach was used to construct a personal model for each participant. Transparency of the proposed advisories was defined as either a baseline vector solution display, a prototype Situation Space Diagram (SSD) representation, or a text-based condition that combined the SSD with a contextual explanation of the system's rationale.
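To make the within-participant design concrete, the sketch below enumerates the 3 x 3 crossing of the conformance and transparency levels described above. The condition labels and the simple shuffling stand-in for counterbalancing are assumptions for illustration, not the project's actual scenario assignment.

```python
from itertools import product
import random

# The three advisory source models (conformance) and the three presentation
# formats (transparency) described above.
CONFORMANCE = ["personal", "group", "optimal"]
TRANSPARENCY = ["vector_display", "ssd", "ssd_plus_text_explanation"]

def build_trial_order(participant_seed: int):
    """Return the nine conformance x transparency conditions in a
    participant-specific random order (a simple stand-in for counterbalancing)."""
    conditions = list(product(CONFORMANCE, TRANSPARENCY))
    rng = random.Random(participant_seed)
    rng.shuffle(conditions)
    return conditions

for conformance, transparency in build_trial_order(participant_seed=1):
    print(f"advisory from the {conformance} model, shown as {transparency}")
```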

The conformance, or personalisation, of advisories had an impact on controllers’ responses to them, but not in a uniform direction. Although personalised advisories received more favourable responses in many cases, there were also cases in which the optimal or group advisories were favoured. There was no strong effect of advisory transparency on controllers’ responses. An in-depth analysis divided participants into two groups depending on how close their separation distance preference was to the target separation distance aimed for by the optimal model’s advisory. This analysis revealed a recurring pattern: participants whose average separation distance, measured in the training pre-test, was closer to the separation distance aimed for by the optimal advisory showed unchanged or more positive responses to the advisory with increasing transparency. That is, their acceptance of advisories and their ratings of agreement, conformance, and understanding were higher than those of the other group.
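The grouping described above can be expressed as a short sketch. Note that the cut-off rule (a median split), the variable names, and the example numbers below are assumptions for illustration, since the summary does not state the exact split criterion.

```python
import numpy as np

def split_by_closeness(pretest_preference_nm: dict, optimal_target_nm: float):
    """Split participants into 'close' and 'far' groups according to how near
    their pre-test separation preference lies to the optimal advisory's target.
    A median split is assumed here purely for illustration."""
    distance = {p: abs(v - optimal_target_nm) for p, v in pretest_preference_nm.items()}
    cutoff = float(np.median(list(distance.values())))
    close = [p for p, d in distance.items() if d <= cutoff]
    far = [p for p, d in distance.items() if d > cutoff]
    return close, far

# Hypothetical pre-test preferences (NM) and an assumed target of 6 NM
close, far = split_by_closeness({"P01": 6.2, "P02": 8.5, "P03": 5.9, "P04": 9.4}, 6.0)
print(close, far)  # ['P01', 'P03'] ['P02', 'P04']
```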

The project also provided valuable findings and guidelines on how to incorporate conformance and transparency mechanisms into AI solutions for conflict detection and resolution in particular, and into problem-solving tasks in safety-critical systems in general.
An implicit assumption going into this research was that transparency fosters understanding, acceptance, and agreement. As a thought experiment, however, consider the case in which poorly functioning automation is outputting advisories. In this case, transparency might have the opposite effect and lower controllers’ agreement with and acceptance of the system. The notion here is that if transparency means making the inner workings of the algorithm clear to the operator, it does not necessarily increase agreement and acceptance, but should instead optimise them: transparency and explainability should increase acceptance and agreement for an optimal algorithm, and decrease acceptance and agreement for a sub-optimal one.

Although personalisation of ML systems is held up as a positive goal, there is one potential challenge to consider: attempts to personalise advisory systems risk driving the operator to solve the problem in a particular way. For example, the simulated advisories aimed to solve en-route conflicts using a single intervention with only one of the two aircraft involved. This approach is inconsistent with controllers who would solve the conflict with two interventions (for example, slightly turning both aircraft). It should be noted that the way advisories are framed suggests how the system proposes to solve a given conflict, and offers an implicit reference against which controllers’ judgments and decisions are formed. Without an advisory system, the controller would search for information and cues regarding traffic pattern, speeds, altitude, etc. when deciding how to solve a conflict. Past research has noted that advisory systems can have the unintended consequence of increasing task load: whereas a controller today has to devise a solution, under an automated advisory system that controller has the additional task of processing the advisory and comparing it to their own strategy.

MAHALO participated in several important dissemination events, including two SJU ER4 Automation Workshops. It hosted two public workshops with its Advisory Board, first to validate and gather insights on the experimental design for the two simulations, and then to present the first results, which were also presented at the ANACNA Conference in Rome, at the World ATM Congress, and to EASA.
In the last period, a paper was presented at the EASN Conference 2022 in Barcelona, with a specific focus on the guidelines for future AI systems in ATC. A further paper summarising the main results of the project was then submitted and presented at the 12th SESAR Innovation Days 2022. In cooperation with the other ER4 projects, a joint White Paper on the introduction of Explainable AI in ATM was produced, as well as a joint dissemination video. MAHALO hosted its Final Dissemination Event in Rome, where the results and guidelines were again presented to a large number of attendees with different backgrounds, who enthusiastically welcomed the project's achievements.
MAHALO created a system that learns from the individual operator, but also gives the operator insight into what the machine has learnt. Several models were trained and evaluated to reflect a continuum from individually-matched to group-average. The user interface in MAHALO presented ML outputs in terms of current and future (what-if) traffic patterns, intended resolution manoeuvres, and rule-based rationale. The project’s output added knowledge and design principles on how AI and transparency can be used to improve ATM performance, capacity, and safety.
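One simple way to picture such a continuum is as a weighted blend of a personal and a group model's output, as in the sketch below. This linear blending, and the heading-change example, are illustrative assumptions rather than the project's actual training procedure.

```python
def blended_heading_change(personal_deg: float, group_deg: float, alpha: float) -> float:
    """Interpolate between a personal and a group model's proposed heading change.
    alpha = 1.0 reproduces the individually-matched model, alpha = 0.0 the
    group-average model; intermediate values sit along the continuum."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * personal_deg + (1.0 - alpha) * group_deg

# Example: the personal model suggests a 15 degree turn, the group model 25 degrees
print(blended_heading_change(15.0, 25.0, alpha=0.7))  # -> 18.0
```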
That said, the knowledge gained within MAHALO could in the future also be applied, via a transfer learning process, to other transport domains such as automotive or aerospace.