CORDIS - EU research results



Reporting period: 2022-01-01 to 2022-12-31

In Air Traffic Management (ATM), decision making is increasingly associated with AI and with Machine Learning (ML). However, these algorithms still face acceptability issues, as they are non-intuitive and not understandable by humans. In other words, today's automation systems with AI or ML do not provide additional information on top of the data-processing result to support its explanation, which makes them insufficiently transparent. To address these limitations, the ARTIMATION project investigates the applicability of AI methods from the domain of Explainable Artificial Intelligence (XAI). In ARTIMATION, we investigate specific features to make AI models transparent and post-hoc interpretable (i.e. decision understanding) for users of ATM systems.

The overall objectives are threefold:

Research Objective: Provide transparency and explainability to the AI, build a conceptual framework for human-centric XAI, and provide user guidelines for further AI-algorithm development and application with AI transparency in the ATM domain.

Technical Objective: Design human-AI interaction (hAIi) to provide data-driven storytelling, define a data-exploration approach through visual analytics, and evaluate the XAI with novel immersive-analytics technologies based on virtual reality and Brain-Computer Interface (BCI) systems.

Social Objective: Develop transparent AI models for ATM operators with a better-integrated approach between operators and AI, including guidelines for shortening the training period.
The methodology comprises four phases:

Phase 1: Definition & specifications (WP3), which applied user-centric design principles. A state-of-the-art (SotA) study on AI in the ATM domain was conducted under WP3. Its main objective was to review and identify AI techniques, methods and algorithms that have been applied to different ATM-related tasks. To identify AI transparency in ATM, first the state of the art of AI/ML algorithms in the ATM domain was analysed; then an overview of AI/ML and their explainability was provided.

Two workshops were conducted under Task T3.2: 1) to identify the specific ATM segments where the AI algorithms should be applied, and 2) to match the chosen tasks with the most appropriate artificial-intelligence systems. Based on the workshops, two tasks were selected, Delay Propagation/Prediction (DP) and Conflict Resolution (CR), and, depending on the stage of explainability and the selected AI/ML algorithms (i.e. ANN and RF), two methods of adding explanation were considered: LIME and SHAP.
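The local-surrogate idea behind LIME can be sketched as follows: sample perturbations around one instance, weight them by proximity, and fit a weighted linear model to the black-box output. Everything below is an invented illustration (the `black_box` function stands in for a trained classifier's probability output, and the features do not correspond to the project's data):

```python
# Minimal LIME-style local surrogate in plain NumPy (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def black_box(Z):
    """Stand-in for an opaque model's probability output
    (e.g. a random forest's predict_proba). Invented for illustration."""
    return 1 / (1 + np.exp(-(Z[:, 0] + 0.5 * Z[:, 2] + 0.2 * Z[:, 0] * Z[:, 2])))

def lime_explain(x, n_samples=2000, kernel_width=1.0):
    """Sample around x, weight by proximity, fit a linear surrogate."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # local perturbations
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)  # proximity kernel
    p = black_box(Z)                                          # black-box predictions
    # Weighted least squares with an intercept column
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = p * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]                                          # drop the intercept

weights = lime_explain(np.zeros(4))   # local feature weights at the origin
```

Here the surrogate's coefficients recover the locally dominant features (0 and 2 in this synthetic stand-in), which is the kind of per-decision attribution LIME provides.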

Phase 2: Development cycles (WP4 & WP5), which include multivariate data analysis, data-driven AI modelling, transparency, visualization, explanation and an adaptation framework.
Neurophysiological measures and their related algorithms were investigated in T4.1. Two XAI solution tools were developed as transparent AI models with explainability: the DP tool and the CR tool. The development was conducted through three tasks, T4.2, T4.3 & T4.4. The tools are built on two kinds of datasets: 1) for DP, a real dataset collected through EUROCONTROL, and 2) for CR, synthetic datasets. In terms of AI methods, RF, LSTM and genetic algorithms were applied; for the explanations, LIME, SHAP, model-centric explainability and user-centric explainability were considered.
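To illustrate the genetic-algorithm component, the toy sketch below evolves a single heading-change manoeuvre whose cost trades separation gain against route deviation. The cost function, bounds and GA parameters are invented for illustration; they do not reproduce the project's CR tool:

```python
# Toy genetic algorithm for a conflict-resolution manoeuvre (illustrative).
import random

random.seed(0)

def cost(heading_change):
    """Invented cost: separation improves with |change| (saturating at 30 deg),
    while route deviation penalises large changes quadratically."""
    separation = min(abs(heading_change), 30) / 30
    deviation = (heading_change / 60) ** 2
    return -(separation - deviation)          # lower cost is better

def ga(pop_size=20, generations=40):
    """Truncation selection plus Gaussian mutation over heading changes."""
    pop = [random.uniform(-60, 60) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # best (lowest cost) first
        parents = pop[: pop_size // 2]        # keep the top half
        pop = parents + [p + random.gauss(0, 5) for p in parents]  # mutate
    return min(pop, key=cost)

best = ga()   # converges near a +/-30 degree change under this toy cost
```

The same select-and-mutate loop generalises to real manoeuvre encodings (heading, level, speed), with the cost term supplied by a conflict-detection model.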

A lifelong machine-learning framework and the integration of a causal model for explaining the delay-prediction tool were investigated through tasks T5.1 & T5.2. For lifelong machine learning, three methods were introduced: 1) lifelong random forest with a genetic algorithm, 2) an XGBoost-CBR framework, and 3) Long Short-Term Memory with elastic weight consolidation. A structural causal model was adopted to provide a causal understanding of take-off time delay from the observational data.
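The elastic-weight-consolidation idea can be shown as a minimal sketch, assuming the standard quadratic EWC penalty: while training on a new task, each parameter is anchored to its old-task optimum in proportion to its estimated importance (Fisher information). The numbers below are invented; the project's LSTM implementation is not reproduced:

```python
# Elastic-weight-consolidation penalty in plain NumPy (illustrative sketch).
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic penalty anchoring weights to their old-task optimum,
    scaled per parameter by the Fisher information (parameter importance)."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])   # weights after the earlier task
fisher     = np.array([4.0,  0.1, 1.0])   # per-parameter importance estimates
theta      = np.array([1.5, -1.0, 0.5])   # current weights on the new task

penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)   # added to the new-task loss
```

Parameters with high Fisher values (here the first one) are held close to their old values, so earlier-task knowledge is retained while less important parameters remain free to adapt.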

User-centric explainable artificial intelligence and an adaptive human-computer interface for explaining the conflict detection and resolution tool were developed through tasks T5.3 & T5.4. For user-centric explanation, three levels of explanation were developed, Blackbox, Heatmap and Storyboard, using aggregation and storytelling techniques.

Phase 3: Test and validation (WP6), where two different types of tests took place from July to October 2022: tests during model development, and user tests.
A validation plan was produced under T6.1 (Validation plan) in work package 6.

Validation was performed for both tools. a) For Conflict Detection and Resolution (CD&R), a total of 21 participants were recruited to take part in the ARTIMATION validation sessions in person. Recruitment targeted two populations, "Expert" and "Student", and was carried out 1) through the ENAC internal process, contacting personnel and students, and 2) through the researchers' network.
b) For the delay prediction and propagation tool, an offline validation was conducted with 9 participants. The results are reported in D6.2.

Phase 4: Guidelines and training (WP7). A summary analysis of ARTIMATION was performed through tasks T7.1 (AI transparency guidelines: the lessons learned from the project), T7.2 (Generalisation: applicability to other AI algorithms and ATM tasks) and T7.3 (Guidelines for training purposes). It covers guidelines for developing transparent AI in the ATM domain, a generalisation approach for transferring the knowledge to other areas of ATM, and a proof-of-concept of transparent AI models including visualization, explanation, generalization with adaptability over longer time, and user acceptability in the domain of air traffic management transportation systems.

A Communication, Dissemination and Exploitation Plan was established at the beginning of the project under tasks T8.1 and T8.3, and was followed accordingly. Intermediate and final dissemination reports were produced; see D8.2 and D8.3.
ARTIMATION effectively accomplished its research goal of creating and implementing AI and XAI algorithms that promote transparency and explainability for delay propagation and conflict avoidance tasks. Two proof-of-concept prototypical tools were developed for human-AI interaction (hAIi), which facilitate data-driven storytelling and validation. In addition, a passive brain-computer interface (BCI) was developed that can adjust between different levels of transparency modes based on the operator's mental and emotional state, including workload, stress, and acceptability. These prototype tools have the potential to enhance human-AI interaction by promoting transparency and validation for decision-making processes. The prototypical system incorporating these algorithms was assessed using both qualitative and quantitative methods, with results demonstrating operational feasibility and partial trust. ARTIMATION's summary analysis provided recommendations for a proof-of-concept of transparent AI models, encompassing visualization, explanation, generalization, adaptability over extended periods, and user acceptability in the domain of air traffic management transportation systems. This method was successful in reducing the training duration.
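The adaptive-transparency idea described above can be sketched as a simple mapping from a neurophysiological workload index to the project's three explanation levels. The thresholds and the normalised index are invented for illustration; they are not the project's calibrated BCI pipeline:

```python
# Illustrative mapping from a workload index to a transparency mode.
def explanation_level(workload: float) -> str:
    """Map a normalised workload index (0..1) to an explanation level.
    Thresholds are hypothetical, for illustration only."""
    if workload > 0.7:        # high load: keep the display minimal
        return "Blackbox"
    if workload > 0.4:        # medium load: compact visual summary
        return "Heatmap"
    return "Storyboard"       # low load: full narrative explanation

mode = explanation_level(0.2)
```

In a real pipeline, the index would be derived from EEG-based workload and stress estimates, smoothed over time, with hysteresis to avoid rapid mode switching.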
The state of the art with regard to the usefulness of AI within the aviation/ATM domain. It includes researc