Periodic Reporting for period 2 - SPATIAL (Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning)
Reporting period: 2023-03-01 to 2024-08-31
From a societal perspective, the SPATIAL project covers the critical subjects of data privacy, resilience engineering, and legal-ethical accountability, which form three core pillars of trustworthy AI. The work carried out in SPATIAL on both social and technical aspects serves as a stepping stone towards establishing an appropriate governance and regulatory framework for AI-driven security in Europe.
Through its execution, SPATIAL pursued the following objectives: 1) develop systematic verification and validation software/hardware mechanisms that ensure AI transparency and explainability in security solution development; 2) develop system solutions, platforms, and standards that enhance resilience in the training and deployment of AI in decentralized, uncontrolled environments; 3) define effective and practical adoption and adaptation guidelines to ensure streamlined implementation of trustworthy AI solutions; 4) create educational modules that provide technical skills and ethical and socio-legal awareness to current and future AI engineers/developers to ensure the accountable development of security solutions; 5) develop a communication framework that enables an accountable and transparent understanding of AI applications for users, software developers, and security service providers. In line with these objectives, all work deliverables and milestones have been submitted and achieved. SPATIAL has therefore concluded having fulfilled its tasks and mission in accordance with the objectives and the project work plan.
WP1 - Requirements and threat modeling: SPATIAL has performed analysis of risks, threat modeling, and attack scenarios that could affect the security and trustworthiness of distributed AI systems. In addition, an analysis covering software and hardware requirements, data requirements, model requirements, legislative requirements, security requirements, usability requirements, and accessibility requirements was produced to provide realistic guidelines for developers and operators on how to design, deploy, and modify AI-based systems. SPATIAL also proposed a series of design patterns and principles to streamline the development process, reduce the risks associated with complex AI systems, and build stakeholder trust.
WP2 - Resilient accountability metrics and embedded algorithmic accountability: SPATIAL has identified and proposed systematic verification and validation mechanisms and metrics that help ensure AI transparency and explainability in security solution development.
WP3 - System architecture, consistency and accountability for AI, validation and testing: The SPATIAL Explanatory AI Platform (open-source) offers functionality to empirically estimate the robustness of ML/AI models against adversarial attacks, calculate diverse trustworthiness metrics, and apply XAI methods to explain the decision-making of ML models. For the SPATIAL platform, a novel technology for securely replicating TEE (Trusted Execution Environment) processes in the cloud has been developed to preserve the privacy of both the data and the model.
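For illustration only, the following minimal Python sketch shows the kind of analysis described above: probing a model's empirical robustness under gradient-based input perturbations and deriving a simple, model-agnostic explanation of its decisions. It is a hypothetical example built on scikit-learn, not the SPATIAL platform's actual code or API; the perturbation and explanation methods shown (a fast-gradient-sign-style perturbation and permutation importance) are simplified stand-ins for the richer adversarial attacks and XAI methods the platform offers.

```python
# Illustrative sketch only; not the SPATIAL Explanatory AI Platform code or API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a simple classifier on synthetic data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm_perturb(model, X, y, eps):
    """Fast-gradient-sign-style perturbation for logistic regression:
    the cross-entropy loss gradient w.r.t. the input is (p - y) * w."""
    p = model.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * model.coef_[0][None, :]
    return X + eps * np.sign(grad)

# Empirical robustness estimate: accuracy under increasing perturbation budgets.
for eps in (0.0, 0.1, 0.5):
    acc = model.score(fgsm_perturb(model, X_test, y_test, eps), y_test)
    print(f"eps={eps:.1f}  accuracy={acc:.3f}")

# Simple explanation step: permutation importance as a model-agnostic
# stand-in for the XAI methods mentioned in the report.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("most influential features:", np.argsort(imp.importances_mean)[::-1][:3])
```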
WP4 - User engagement, acceptance and practice transitions: SPATIAL has developed educational modules and the COMPASS framework for knowledge transfer, conveying the sociotechnical 'how-to' of an accountable and transparent understanding of AI applications for users, software developers, and security service providers.
WP5 - Deployment and demonstration: Four industry-driven SPATIAL pilots with high industrial and social relevance have been developed and used to validate the technical development and potential exploitation of the SPATIAL platform and solutions.
WP6 - Impact, outreach and collaboration: The impacts of SPATIAL have been safeguarded by carefully matching SPATIAL research outreach with KIPs and by building a network of contacts for creating and establishing exploitation plans for the SPATIAL education modules, platform, and use cases. The full range of SPATIAL activities has been actively promoted throughout the project via the website and social media.
WP7 - Project management: Continuous coordination of project partners has been performed to ensure that task inputs and outputs flow coherently between the different work packages, and communication between the project and the European Commission has been maintained.
Through communication and exploitation efforts, SPATIAL has reinforced engagement and collaboration with other EU-funded projects by founding the Secure Cyber Cluster and organizing joint activities. The dissemination of SPATIAL results is highlighted as follows:
* Participation in 57 events, including the organization of several workshops and a final event;
* Publication of 38 scientific papers in prestigious conferences and journals, with 5 more publications foreseen by the end of the project's final year;
* Reinforcement of the project's presence on social media, with more than 2,800 followers and 260,000 impressions;
* Preparation of 12 project newsletters and participation in 12 external newsletters;
* Publication of a podcast with four episodes;
* Publication of 21 interviews with stakeholders and Advisory Board testimonials.
On socio-economic impact, the SPATIAL educational modules enhance the technical skills and the ethical and socio-legal awareness of current and future AI engineers/developers. This will create long-term socio-economic impact for the EU, as the accountable development of AI security solutions can be better ensured.
On societal impact, the SPATIAL COMPASS framework is a solid contribution, enabling an accountable and transparent understanding of AI applications for users, software developers, and security service providers. As it is adopted by more AI practitioners, this framework can facilitate better alignment with current and future relevant EU policies.