
SAFE AND EXPLAINABLE CRITICAL EMBEDDED SYSTEMS BASED ON AI

Periodic Reporting for period 1 - SAFEXPLAIN (SAFE AND EXPLAINABLE CRITICAL EMBEDDED SYSTEMS BASED ON AI)

Reporting period: 2022-10-01 to 2024-03-31

SAFEXPLAIN aims to devise ways to use Artificial Intelligence (AI) software, and more specifically Machine Learning (ML) software, in safety-critical systems. In such systems, ML software inherits safety requirements and must therefore be developed following the same principles as any other software. This poses a number of challenges: functional safety standards are not compatible with the way ML software is developed, and such software cannot be included "as is" in safety-critical systems. In SAFEXPLAIN, we contend that this challenge can only be addressed holistically, by adapting in a coordinated manner the safety development processes, the ML software architecture, and the way high-performance platforms are used to run such software. To that end, SAFEXPLAIN devises concepts and principles to address the challenge, together with specific realizations of software architectures and tools on industrially relevant platforms, applied to automotive, space, and railway use cases.
During the first 18 months of the project, SAFEXPLAIN partners have carefully defined the steps needed to achieve the project goals. They have devised development processes for ML software and ways to incorporate it into software architectures; designed ML solutions that complement the ML software realizing the system functionality (e.g. detecting and classifying objects) with information about the confidence of the predictions; set up a middleware capable of providing all services needed by AI-based applications; found ways to use high-performance platforms in a predictable and traceable manner; and defined an example case study bridging the gap between the technologies developed and the actual case studies of the project.
In more detail, SAFEXPLAIN has made significant progress towards its goals on the different fronts of the project: (1) definition of requirements and success criteria; (2) design of the safety lifecycle and safety architectures; (3) solutions to provide, along with ML software predictions, meta-information that allows assessing confidence in those predictions; (4) platform support and system services in which to integrate the AI-based software architectures in practice; and (5) preparation of the case studies used to assess SAFEXPLAIN technologies and concepts.

(1) SAFEXPLAIN partners have defined precise requirements to be met during each phase of the project, together with success criteria to assess the degree of achievement. As of month 18, essentially all requirements have been met, and the few small deviations are being addressed successfully.

(2) A safety lifecycle has been defined describing how to set up training and validation data for safety-relevant ML software, the training processes, and the inference processes during operation. This safety lifecycle has already been assessed positively by relevant entities in the domain. In addition, safety architectures suited to different integrity levels for ML software have been devised, and some of them have been explicitly applied to an internal case study used to guide the project case studies. These integrity levels distinguish whether the ML software provides complementary information related to the safety of the system, intervenes in the safety management of the system, or implements the safety functionality itself.
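As an illustration only, the following Python sketch shows one common way such role-dependent architectures can be organized, using a conventional doer/checker pattern in which a deterministic monitor gates the ML output. All names (MLRole, SafetySupervisor, rule_based_check) are hypothetical assumptions for this sketch and do not come from the SAFEXPLAIN deliverables.

```python
from enum import Enum, auto

class MLRole(Enum):
    """Role of the ML component, loosely mirroring the three
    integrity levels described above (hypothetical names)."""
    COMPLEMENTARY = auto()      # ML only adds safety-related information
    SAFETY_MANAGEMENT = auto()  # ML intervenes in safety management
    SAFETY_FUNCTION = auto()    # ML implements the safety functionality

class SafetySupervisor:
    """Wraps an ML component with a conventional checker, a common
    doer/checker pattern for mixed-criticality designs."""
    def __init__(self, role, ml_predict, rule_based_check):
        self.role = role
        self.ml_predict = ml_predict              # e.g. an object detector
        self.rule_based_check = rule_based_check  # deterministic monitor

    def step(self, sensor_input):
        prediction = self.ml_predict(sensor_input)
        plausible = self.rule_based_check(sensor_input, prediction)
        if self.role is MLRole.COMPLEMENTARY:
            # ML output is advisory only; annotate rather than gate it.
            return {"prediction": prediction, "plausible": plausible}
        # For roles with safety impact, fall back to a safe state on failure.
        return prediction if plausible else "SAFE_STATE"

# Example: gate a dummy predictor with a simple range check.
sup = SafetySupervisor(MLRole.SAFETY_MANAGEMENT,
                       ml_predict=lambda x: x * 2,
                       rule_based_check=lambda x, y: 0 <= y <= 100)
print(sup.step(10))   # 20 (plausible)
print(sup.step(999))  # SAFE_STATE (check fails)
```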

(3) Processes and solutions to assess the trustworthiness of ML software have been identified, and some of them have already been applied to part of the challenges exposed in the case studies. Such solutions allow detecting whether the system is being fed with data different from that used for training (e.g. a system trained to identify people only, while a dog appears in the scene), whether the ML model is capable of producing trustworthy predictions when the input data is similar to that used for training, and whether the input data offers insufficient information to produce trustworthy predictions (e.g. when detecting and classifying overlapping objects).
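Purely as an illustrative sketch, the snippet below shows one widely used baseline for attaching such meta-information to a classifier's output: the maximum softmax probability as a coarse out-of-distribution indicator, and the margin to the runner-up class as an ambiguity indicator. The function name and thresholds are assumptions for illustration; this is a generic baseline, not the SAFEXPLAIN solution.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

def assess_prediction(logits, ood_threshold=0.5, ambiguity_margin=0.15):
    """Return the predicted class plus coarse trust meta-information."""
    probs = softmax(np.asarray(logits, dtype=float))
    top2 = np.sort(probs)[-2:]   # two highest class probabilities
    confidence = top2[1]         # max-softmax score
    margin = top2[1] - top2[0]   # gap to the runner-up class
    return {
        "class": int(np.argmax(probs)),
        "confidence": float(confidence),
        # A low max-softmax score often flags inputs unlike the training data.
        "possibly_out_of_distribution": bool(confidence < ood_threshold),
        # A small margin flags ambiguous inputs, e.g. overlapping objects.
        "ambiguous": bool(margin < ambiguity_margin),
    }

print(assess_prediction([2.0, 1.9, -1.0]))  # close top-2 -> flagged ambiguous
```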

(4) A suitable middleware has been deployed that allows integrating ML-based safety-relevant applications onto the platform with appropriate levels of abstraction, while providing the services those applications need. Most of the services are already up and running, as are the features to properly control the applications beneath. Regarding the latter, the target platform of the project (NVIDIA Orin) has been carefully analyzed to identify how to master it, setting convenient configurations and laying the basis to deploy solutions that provide real-time guarantees to the ML-based applications run on top.
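As a minimal, hypothetical sketch of the kind of real-time supervision such a middleware can offer, the following Python snippet runs a periodic task and flags deadline overruns. The function and its parameters are illustrative assumptions, not the project's middleware API; a real deployment would react to a miss by switching to a degraded or safe mode rather than printing.

```python
import time

def run_periodic(task, period_s, deadline_s, iterations):
    """Run `task` every `period_s` seconds and flag deadline overruns."""
    next_release = time.monotonic()
    for _ in range(iterations):
        start = time.monotonic()
        task()
        elapsed = time.monotonic() - start
        if elapsed > deadline_s:
            # In a safety context this would trigger a degraded/safe mode.
            print(f"deadline miss: {elapsed*1000:.1f} ms "
                  f"> {deadline_s*1000:.1f} ms")
        next_release += period_s
        # Sleep until the next release point (never a negative duration).
        time.sleep(max(0.0, next_release - time.monotonic()))

# Demo: a 30 ms task against a 20 ms deadline misses every period.
run_periodic(lambda: time.sleep(0.03),
             period_s=0.05, deadline_s=0.02, iterations=2)
```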

(5) The case studies have already been ported to the target platform, which is a key achievement. In parallel, and to speed up integration, an example case study providing most of the elements common to the automotive, space, and railway case studies of the project has been deployed. It is proving very useful to prepare the technologies for integration with the case studies and to tailor the case studies' architectures so that they can be easily deployed on top of the middleware.

Beyond the technical advances achieved so far, a major effort has been devoted to disseminating and communicating the work in progress and the achievements to a wide variety of communities and audiences, spanning technical specialists, industrial stakeholders, and the general public. Hand in hand with the dissemination efforts, exploitation efforts have allowed identifying the exploitable items of the project, defining exploitation paths for each of them, and creating a dialogue between the project and the relevant standardization bodies. In particular, standardization bodies are working towards defining ways to enable the incorporation of ML software into safety-critical systems, and explicit communication with the project provides those bodies with practical processes and examples they can use. As part of this dialogue, SAFEXPLAIN partners are exposed to the directions those standards are taking, so that project solutions will match the standards once they become final.
SAFEXPLAIN Ambition: architecting DL solutions enabling certification/qualification