CORDIS - EU research results

Safer Autonomous Systems

Periodic Reporting for period 2 - SAS (Safer Autonomous Systems)

Reporting period: 2020-11-01 to 2023-04-30

What was the challenge?

Prevailing safety standards and assurance methods were tailored to systems in which a human can intervene, rather than to autonomous systems with predefined responses. They also assumed that, once deployed, a system remains static and does not learn or evolve. Advances in machine learning, however, now allow autonomous systems to learn from errors and enhance safety. At the same time, machine learning introduces uncertainty into future decision-making, which makes safety assurance a formidable challenge. Addressing this challenge demanded a workforce with both high-level skills and a multidisciplinary approach.

Why is it important for society?

Autonomous systems have the potential to significantly reduce accidents, save lives, and prevent injuries on roads, in factories, and in other environments. Societal acceptance of autonomous systems, however, relies on trust: the public's perception of these technologies directly affects their adoption. Earning that trust through transparent safety measures, thorough validation and verification, and effective communication can ease the integration of autonomous systems into sectors from transportation to healthcare, redefining industries for the better. Building trust in the safety of autonomous systems is thus a cornerstone of societal progress: by assuring their reliability, society can unlock their full potential, improve safety, stimulate economic growth, and foster harmonious human-machine interaction.

What were the overall objectives?

SAS, the European Training Network for Safer Autonomous Systems, was a key instrument for getting people to trust autonomous systems by making the systems safer. In order to achieve this objective, a group of 15 highly skilled early-stage researchers investigated new forms of safety-assurance strategies, dynamic risk mitigation, fault-tolerant and failsafe hardware/software design, model-based safety analysis, as well as legal aspects related to autonomous systems.
Researchers from different disciplines – electrical engineering, computer science, safety engineering and law – worked together in the SAS project. They focused on three specific challenges:
- Challenge 1: Directly integrating guaranteed safe behavior into the design and architecture of autonomous systems (WP1).
- Challenge 2: Demonstrating the ongoing safety of an autonomous system under all conceivable conditions through model-based safety-analysis techniques (WP2).
- Challenge 3: Ensuring that safety-assurance strategies, which blend architectural/design measures with evidence, foster trust in autonomous systems that are likely to evolve and learn (WP3).

For the first challenge, SAS made significant contributions to improving autonomous systems' inherent resilience against safety threats. ESR1 (Raul Ferreira) focused on identifying threats related to machine learning (ML), developing runtime monitoring approaches, designing dedicated reactions for specific threats, and proposing a unified evaluation method for safety monitors. ESR2 (Yuan Liao) developed an initial implementation of a monitoring framework aimed at detecting untrusted information and identifying potentially reliable information to counteract corrupted data. ESR3 (João Zacchi) extended safety contracts to adapt dynamically to the environmental context of autonomous systems-of-systems. ESR4 (Dejana Ugrenovic) concentrated on detecting out-of-distribution data and on the architectural design of neural-network classification components. After she left the SAS project, the newly recruited ESR4 (Mohaddaseh Nikseresht) continued by assessing software design and testing strategies on an autonomous robot test case within a small-scale factory. Finally, ESR5's (Aleksandr Ovechkin) work aimed at enhancing interconnected autonomous systems' resilience against electromagnetic interference; he developed two innovative spectral-leakage-based techniques to bolster OFDM communication's robustness against narrowband electromagnetic disturbances.
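The runtime-monitoring pattern behind several of these contributions can be illustrated with a minimal sketch: a safety monitor wraps an ML component, checks each input for plausibility, and switches to a conservative fallback when the input looks out-of-distribution. All names, the range check, and the threshold values here are illustrative assumptions, not the SAS implementations.

```python
# Hypothetical sketch of a runtime safety monitor for an ML component.
# The range check stands in for a real out-of-distribution detector.

from dataclasses import dataclass

@dataclass
class MonitorDecision:
    output: float
    trusted: bool  # True if the ML output was used, False if the fallback ran

def ml_component(x: float) -> float:
    # Stand-in for a learned model (here a trivial function).
    return 2.0 * x

def safe_fallback(x: float) -> float:
    # Conservative behaviour used when the input cannot be trusted.
    return 0.0

def monitored(x: float, lo: float = -1.0, hi: float = 1.0) -> MonitorDecision:
    # Inputs outside the assumed training envelope [lo, hi] are rejected
    # and the system falls back to its safe default.
    if lo <= x <= hi:
        return MonitorDecision(ml_component(x), trusted=True)
    return MonitorDecision(safe_fallback(x), trusted=False)

print(monitored(0.5))   # in-distribution: ML output used
print(monitored(5.0))   # out-of-distribution: fallback engaged
```

The design point is that the monitor itself stays simple and verifiable even when the wrapped ML component is not.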

For the second challenge, model-based safety-analysis techniques were applied to several aspects of the validation and verification (V&V) of an autonomous system's safety. The research of ESR6 (Luca Sartori) led to a simulation-based framework for automatic world generation in virtual V&V of autonomous systems. The initial ESR7 (Zaid Tahir) explored situation coverage for more efficient virtual testing of autonomous vehicles, using genetic algorithms and reinforcement learning, while a newly recruited ESR (Nawhin Proma) conducted research on operational design domains and road-structure ontologies. ESR8 (Ahmad Aden) developed a hybrid framework for addressing functional insufficiencies in software-intensive systems through model-based analysis, validated with a LIDAR-based car-detection case study. Finally, ESR9 (Hassan Tirmizi) focused on electromagnetic diversity techniques to enhance resilience against electromagnetic disturbances, culminating in a novel technique based on symbol diversity for PAM-4 communication.
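The idea of situation coverage can be sketched as a toy search loop: the scenario space is discretised into bins, and candidate test scenarios are mutated, keeping mutants that reach previously uncovered bins. The scenario parameters (speed, gap), the bin sizes, and the mutation scheme are all illustrative assumptions, far simpler than the genetic-algorithm and reinforcement-learning approaches the project used.

```python
# Toy sketch of coverage-driven scenario generation for virtual testing.
# Covering many distinct bins means the test suite exercises diverse situations.

import random

def coverage_bin(speed: float, gap: float) -> tuple:
    # Discretise the 2-D scenario space (speed in [0,50], gap in [0,30]).
    return (int(speed // 10), int(gap // 5))

def search_scenarios(n_iters: int = 200, seed: int = 0) -> set:
    rng = random.Random(seed)
    covered = set()
    # Start from one random scenario and mutate it, accepting mutants
    # that land in a previously uncovered bin (a minimal evolutionary loop).
    speed, gap = rng.uniform(0, 50), rng.uniform(0, 30)
    for _ in range(n_iters):
        cand = (min(max(speed + rng.gauss(0, 5), 0), 50),
                min(max(gap + rng.gauss(0, 3), 0), 30))
        b = coverage_bin(*cand)
        if b not in covered:
            covered.add(b)
            speed, gap = cand  # accept the mutant as the new parent
    return covered

print(len(search_scenarios()))  # number of distinct situation bins covered
```

A real implementation would replace the bin count with a safety-relevant coverage metric and run each candidate scenario in simulation.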

For the third challenge, ESR10 (Fang Yan) delved into run-time assurance-case patterns across various use cases, creating a supporting tool environment. ESR11 (Vibhu Gautham) evaluated assurance-case structures for machine learning in autonomous driving; after he left the SAS project, a new ESR (Hasan Firoz) implemented dedicated Simulink models. ESR12 (Tianlei Miao) developed a novel sensor-fusion algorithm for autonomous sailing, validated through real-ship experimentation. ESR13 (Haris Aftab) created a taxonomy and safety case for human-bot interaction in clinical settings. ESR14 (Luis Cobos) focused on unified dependability assurance for vehicle software updates, while ESR15 (Orion Dheu) analyzed liability mechanisms, proposing a new framework to enhance legal certainty and safety when using autonomous systems.
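As a minimal illustration of the sensor-fusion idea mentioned above, the textbook inverse-variance rule combines two noisy estimates of the same quantity, weighting each by how reliable it is. The sensor values and variances below are illustrative assumptions; this is not the SAS algorithm itself.

```python
# Minimal inverse-variance sensor fusion (a building block of Kalman-style
# estimators). More reliable (lower-variance) measurements get more weight,
# and the fused variance is smaller than either input variance.

def fuse(m1: float, var1: float, m2: float, var2: float) -> tuple:
    """Fuse two noisy measurements m1, m2 with variances var1, var2."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Hypothetical example: a noisy GPS heading fused with a precise compass
# heading; the result sits closer to the more trustworthy compass reading.
heading, var = fuse(10.0, 4.0, 12.0, 1.0)
```

In this example the fused heading is 11.6 with variance 0.8, closer to the low-variance measurement than a plain average would be.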
SAS was one of the first large-scale initiatives on training qualified people to tackle many of the problems that were being faced by European industry in the development of safe and trustworthy autonomous systems. Through the research of the SAS ESRs, it became even more evident that conventional safety-assurance standards and methodologies, based on the assumption of human intervention and static behavior, are insufficient to address the challenges posed by autonomous systems. The SAS project made significant steps forward in tackling these challenges by pioneering innovative safety-assurance strategies for autonomous systems. These strategies account for the absence of human operators, the dynamic and uncertain operational contexts, and the incorporation of artificial intelligence and machine learning within these systems.

The SAS scientific papers and conference/workshop contributions clearly had a large impact on the (scientific and operational) field and the (academic and industrial) actors. Furthermore, various beneficiaries and partner organizations of SAS have been actively engaged in international standardization working groups and committees. This involvement encompasses notable standards such as ISO 26262, ISO 21448, UL 4600, and ISO/IEC JTC 1/SC 42, reflecting a commitment to shaping and advancing safety standards within the industry.
SAS consortium overview
SAS WP overview