Periodic Reporting for period 1 - SAFEXPLAIN (SAFE AND EXPLAINABLE CRITICAL EMBEDDED SYSTEMS BASED ON AI)
Reporting period: 2022-10-01 to 2024-03-31
(1) SAFEXPLAIN partners have defined precise requirements to be met during each phase of the project, together with success criteria to assess the degree of achievement. As of month 18, nearly all requirements have been met, and the few small deviations are being addressed successfully.
(2) A safety lifecycle has been defined describing how to set training and validation data for safety-relevant ML software, the training processes, and the inference processes during operation. This safety lifecycle has already been assessed positively by relevant entities in the domain. In addition, safety architectures suited to different integrity levels for ML software have been devised, and some of them have been explicitly applied to an internal case study used to guide the project case studies. These integrity levels distinguish whether the ML software provides complementary information related to the safety of the system, intervenes in the safety management of the system, or implements the safety functionality itself.
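As an illustration only, the following minimal Python sketch shows one way the three roles just described could be encoded when reasoning about the rigour a safety argument would need. All names are hypothetical and are not taken from SAFEXPLAIN deliverables.

```python
from enum import Enum, auto

class MLSafetyRole(Enum):
    """Hypothetical encoding of the three integrity-level roles described above."""
    COMPLEMENTARY_INFO = auto()   # ML provides complementary safety-related information
    SAFETY_MANAGEMENT = auto()    # ML intervenes in the safety management of the system
    SAFETY_FUNCTION = auto()      # ML itself implements the safety functionality

def required_rigour(role: MLSafetyRole) -> str:
    """Toy mapping from ML role to the rigour its safety argument would need."""
    if role is MLSafetyRole.COMPLEMENTARY_INFO:
        return "lowest: ML outputs are advisory; a conventional channel remains in charge"
    if role is MLSafetyRole.SAFETY_MANAGEMENT:
        return "intermediate: ML outputs influence safety decisions and must be supervised"
    return "highest: ML implements the safety function and needs the strongest guarantees"

if __name__ == "__main__":
    for role in MLSafetyRole:
        print(role.name, "->", required_rigour(role))
```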
(3) Processes and solutions to assess the trustworthiness of ML software have been identified, and some of them have already been applied to part of the challenges exposed in the case studies. These solutions make it possible to tell whether the system is being fed with data that differs from the training data (e.g. a system trained to identify people only, but a dog appears in the scene), whether the ML model used to produce predictions is capable of producing trustworthy predictions even when the input data is similar to the training data, and whether the input data offers insufficient information to produce trustworthy predictions (e.g. detecting and classifying overlapping objects).
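To make these checks concrete, the sketch below illustrates one common way such runtime monitors can be built: a standardized-distance test against training-data statistics to flag out-of-distribution inputs, and a confidence threshold on the model's output scores to flag predictions that should not be trusted. It uses plain NumPy and hypothetical function names; it is not the project's actual supervision component.

```python
import numpy as np

def fit_ood_monitor(train_features: np.ndarray):
    """Record mean/std of training-set features; inputs far from them are flagged OOD."""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-8
    return mu, sigma

def is_out_of_distribution(x: np.ndarray, mu, sigma, z_max: float = 4.0) -> bool:
    """Flag inputs whose standardized distance from the training data is too large
    (e.g. a dog appearing for a model trained only on people)."""
    z = np.abs((x - mu) / sigma)
    return bool(z.max() > z_max)

def is_low_confidence(softmax_scores: np.ndarray, tau: float = 0.8) -> bool:
    """Flag predictions whose top softmax score falls below a trust threshold,
    covering inputs with insufficient information (e.g. overlapping objects)."""
    return bool(softmax_scores.max() < tau)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(1000, 16))   # stand-in for training features
    mu, sigma = fit_ood_monitor(train)
    in_dist = rng.normal(0.0, 1.0, size=16)
    far_out = rng.normal(8.0, 1.0, size=16)
    print("in-distribution input flagged OOD?", is_out_of_distribution(in_dist, mu, sigma))
    print("far-away input flagged OOD?       ", is_out_of_distribution(far_out, mu, sigma))
    print("low-confidence prediction?        ", is_low_confidence(np.array([0.5, 0.3, 0.2])))
```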
(4) A suitable middleware has been deployed that allows integrating ML-based safety-relevant applications onto the platform with appropriate levels of abstraction and provides the services needed by the applications. Most of the services are already up and running, as are the features needed to properly control the applications beneath. Regarding the latter, the target platform of the project (NVIDIA Orin) has been carefully analyzed to identify how to master it, to set convenient configurations, and to lay the groundwork for deploying solutions that provide real-time guarantees to the ML-based applications to be run on top.
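The sketch below gives a minimal, hypothetical picture of this kind of middleware abstraction: applications call an abstract inference service, and a wrapper checks each call against a real-time budget, in the spirit of providing timing guarantees on a platform such as the NVIDIA Orin. Class and method names are illustrative assumptions, not the project's actual API.

```python
import time
from abc import ABC, abstractmethod
from typing import Any

class InferenceService(ABC):
    """Hypothetical middleware-style interface: applications depend on this
    abstraction instead of on the accelerator or ML framework underneath."""

    @abstractmethod
    def infer(self, frame: Any) -> Any: ...

class DeadlineMonitoredService(InferenceService):
    """Wraps a backend service and checks each call against a real-time budget."""

    def __init__(self, backend: InferenceService, deadline_ms: float):
        self.backend = backend
        self.deadline_ms = deadline_ms

    def infer(self, frame: Any) -> Any:
        start = time.perf_counter()
        result = self.backend.infer(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > self.deadline_ms:
            # A real system would trigger a safety reaction; here we just report it.
            raise TimeoutError(f"inference took {elapsed_ms:.1f} ms > {self.deadline_ms} ms budget")
        return result

class DummyDetector(InferenceService):
    def infer(self, frame: Any) -> Any:
        return {"objects": []}   # stand-in for a real object-detection model

if __name__ == "__main__":
    service = DeadlineMonitoredService(DummyDetector(), deadline_ms=50.0)
    print(service.infer(frame=None))
```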
(5) The case studies have already been ported to the target platform, which is a key achievement. In parallel, and to speed up integration, an example case study providing most of the elements common to the automotive, space and railway case studies of the project has been deployed; it is proving very useful for preparing technologies for integration with the case studies, and for tailoring the case studies' architectures so that they can be easily deployed on top of the middleware.
Beyond the technical advances achieved so far, substantial effort has been devoted to disseminating and communicating the work in progress and the achievements to a wide variety of communities and audiences, ranging from technical specialists to industrial stakeholders and the general public. Hand in hand with the dissemination efforts, exploitation efforts have identified the exploitable items of the project, defined exploitation paths for each of them, and established a dialogue between the project and the relevant standardization bodies. In particular, standardization bodies are working to define ways to enable the incorporation of ML software into safety-critical systems. Explicit communication with the project provides those bodies with practical processes and examples they can use, and, as part of this dialogue, SAFEXPLAIN partners are exposed to the directions those standards are taking, so that project solutions will match the standards once they become final.