CORDIS - EU research results

Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning

Periodic Reporting for period 1 - SPATIAL (Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning)

Reporting period: 2021-09-01 to 2023-02-28

One of the most critical concerns for the security of AI-based solutions is the opacity of the algorithms underlying software and hardware operations. This opacity undermines the trustworthiness of AI, on which the EU relies to develop, deploy and use AI solutions, especially in the cybersecurity domain.

The SPATIAL project focuses on trust in cybersecurity AI, aiming to inform effective regulation, governance and standardisation processes and procedures for AI usage in ICT systems. SPATIAL tackles the identified gaps around data issues and black-box AI by designing and developing resilient, accountable metrics, privacy-preserving methods, verification tools and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity. Beyond technical measures, SPATIAL aims to foster the skills and education needed to strike a balance between technological complexity, societal difficulty and value conflicts in AI deployment. The project covers data privacy, resilience engineering, and legal-ethical accountability as three pillars towards trustworthy AI.

Upon successful completion, SPATIAL will provide solid building blocks, in the form of evaluation metrics, verification tools and a system framework, to enable a trustworthy governance and regulatory framework for AI-driven security. In addition, the project will produce dedicated education modules for trustworthy AI in cybersecurity. SPATIAL's contributions on both the social and technical side will serve as a stepping stone towards establishing an appropriate governance and regulatory framework in Europe.
SPATIAL is structured into seven work packages (WPs). As of the end of February 2023, the following work has been performed:

WP1 - Requirements and threat modeling:
A requirements analysis was carried out by the contributing partners to identify the key design principles for implementing explainable and accountable algorithms. Work in this WP has also gone toward identifying risks and attack scenarios that could affect the security and trustworthiness of distributed AI systems.

WP2 - Resilient accountability metrics and embedded algorithmic accountability:
Work in WP2 has gone toward identifying a set of privacy, accountability and resilience metrics to be embedded into AI algorithms through the SPATIAL process. These metrics, along with the requirements from WP1, form the analytical foundation for the work continuing in WP3.

WP3 - System architecture, consistency and accountability for AI, Validation and Testing:
Work in WP3 during this period has focused on laying the groundwork for the SPATIAL platform, which is meant to integrate the metrics and requirements from WP1 and WP2 into a platform of services for trustworthy AI that will be developed and evaluated in the second half of the project.

WP4 - User engagement, acceptance and practice transitions:
This WP is geared toward the sociotechnical analysis and validation of SPATIAL's outputs and research process. A framework has been produced for this process, followed by an embedded field analysis of the work done in SPATIAL.
Development has also begun on an educational module for transferring SPATIAL's knowledge output as an online course on trustworthy AI, which can be included in academic curricula or serve as training for interested companies and other independent parties.

WP5 - Deployment and demonstration:
The work in this WP aims first to design and build the infrastructure for deploying the project's pilots, and second to run the pilots and use them to validate SPATIAL's technical developments. As of the end of this reporting period, a report with initial descriptions of all pilots has been written. Additionally, workshops have been carried out within the project to map the needs of the individual pilots onto the requirements of the SPATIAL platform, so that we know later in the project which platform services and components can be evaluated in which pilot.

WP6 - Impact, outreach and collaboration:
Efforts in WP6 have gone toward meeting our pre-established dissemination KPIs for measuring SPATIAL's research outreach, building a network of contacts, creating an exploitation plan for the deployment stage of the SPATIAL platform, and actively promoting the project's activities on our website and social media.

WP7 - Project management:
Efforts in this WP have been directed toward continuous coordination of the project partners, ensuring that task inputs and outputs flow coherently between the different work packages, and maintaining communication between the project and the European Commission.
One of the expected short-term impacts of SPATIAL is filling a research gap concerning resilience against data interference attacks on gradient-boosted trees, and against evasion, data inference and model stealing attacks on Bayesian networks. We have made significant progress in developing a SPATIAL process that specifically addresses this gap. While existing AI-based network solutions mainly provide model accuracy metrics, SPATIAL will provide additional metrics that allow stakeholders to better evaluate the accountability and resilience of their models.

Our initial studies have shown how the SPATIAL process can be used to assess an AI system before and after evasion and data poisoning attacks. By the end of the project, we expect the deployed SPATIAL platform to offer this process as a service to any stakeholder that requires it. This would enable a novel and improved approach to assessing model resilience against data interference attacks; by providing better accountability and resilience, it would in the long term foster greater trust in cybersecurity AI.
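The general idea of a before/after resilience assessment can be illustrated with a minimal, self-contained sketch. This is not the SPATIAL process itself; it is a hypothetical toy example assuming a simple nearest-centroid classifier on synthetic data and a label-flipping variant of data poisoning, with "relative accuracy retained" standing in for a resilience metric of the kind the project describes.

```python
import random
import statistics

def train_centroids(data):
    """Toy 'model': one per-class mean of the training points."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    # Assign the label whose centroid is closest to x.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

def poison_labels(data, fraction, rng):
    """Label-flipping attack: flip the labels of a random fraction of points."""
    poisoned = list(data)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)  # binary labels assumed (0/1)
    return poisoned

rng = random.Random(0)
# Synthetic two-class data: class 0 around 0.0, class 1 around 3.0.
train = [(rng.gauss(0, 1), 0) for _ in range(200)] + \
        [(rng.gauss(3, 1), 1) for _ in range(200)]
test = [(rng.gauss(0, 1), 0) for _ in range(100)] + \
       [(rng.gauss(3, 1), 1) for _ in range(100)]

# Assess the model BEFORE the attack...
acc_before = accuracy(train_centroids(train), test)
# ...and AFTER retraining on poisoned data.
acc_after = accuracy(train_centroids(poison_labels(train, 0.3, rng)), test)

# A simple resilience metric: fraction of clean accuracy retained under attack.
resilience = acc_after / acc_before
print(f"accuracy before: {acc_before:.2f}, after: {acc_after:.2f}, "
      f"resilience: {resilience:.2f}")
```

The point of the sketch is the reporting, not the model: alongside plain accuracy, the before/after comparison yields a resilience figure that stakeholders can compare across models and attack types.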

Another expected societal impact of SPATIAL is educational: our educational module for teaching the broader public about trust, fairness, explainability and security in AI is currently undergoing alpha testing and is expected to launch fully within the duration of the project.
[Photo: Consortium meeting in Barcelona]