
Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning

Project description

Towards a trustworthy governance and regulatory framework for AI-driven security in Europe

Black-box AI refers to AI systems that take input and produce output without the end user understanding how that output was produced. Because the link between inputs and outputs cannot be easily seen or understood, such systems can cause problems within and across organisations. The EU-funded SPATIAL project will address the challenges of black-box AI and data management in cybersecurity. To do this, it will design and develop resilient accountable metrics, privacy-preserving methods, verification tools and a system framework to pave the way for trustworthy AI in security solutions. In addition, the project aims to help build appropriate skills and education for trustworthy AI in cybersecurity, covering both societal and technical aspects.

Objective

The SPATIAL (Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning) project seeks to address the challenges of black-box AI and data management in cybersecurity by designing and developing resilient accountable metrics, privacy-preserving methods, verification tools and a system framework that will serve as critical building blocks for trustworthy AI in security solutions. The main objectives are:

1) To develop systematic verification and validation software/hardware mechanisms that ensure AI transparency and explainability in the development of security solutions;
2) To develop system solutions, platforms and standards that enhance resilience in the training and deployment of AI in decentralized, uncontrolled environments;
3) To define effective and practical adoption and adaptation guidelines that ensure streamlined implementation of trustworthy AI solutions;
4) To create educational modules that provide technical skills as well as ethical and socio-legal awareness to current and future AI engineers and developers, ensuring the accountable development of security solutions;
5) To develop a communication framework that enables an accountable and transparent understanding of AI applications for users, software developers and security service providers.

Beyond technical measures, the SPATIAL project aims to foster the skills and education needed for AI security, striking a balance among technological complexity, societal complexity and value conflicts in AI deployment. The project covers data privacy, resilience engineering and legal-ethical accountability, in line with the EU's top agenda of achieving trustworthy AI. In addition, the work carried out in SPATIAL on both social and technical aspects will serve as a stepping stone towards an appropriate governance and regulatory framework for AI-driven security in Europe.
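As a rough illustration of the kind of transparency mechanism objective 1 refers to, the short sketch below applies permutation feature importance, a standard model-agnostic technique, to probe which inputs a black-box classifier relies on. It is not taken from the project's deliverables; the synthetic dataset and the random-forest model are hypothetical placeholders chosen only to make the example self-contained.

# Illustrative sketch only, not a SPATIAL deliverable: permutation feature
# importance as a simple, model-agnostic probe of a "black-box" classifier.
# The dataset and model below are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., feature vectors in an AI-driven security tool.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the opaque model
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time and measure the drop in accuracy:
# a large drop means the black-box model leans heavily on that feature.
rng = np.random.default_rng(0)
for i in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    print(f"feature {i}: importance ~ {baseline - model.score(X_perm, y_test):.3f}")

Probes of this kind only approximate a model's behaviour from the outside; they are shown here merely to make the notion of explainability for black-box models concrete.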

Call for proposal

H2020-SU-DS-2018-2019-2020

Sub call

H2020-SU-DS-2020

Coordinator

TECHNISCHE UNIVERSITEIT DELFT
Net EU contribution
€ 833 500,00
Address
STEVINWEG 1
2628 CN Delft
Netherlands

Region
West-Nederland > Zuid-Holland > Delft en Westland
Activity type
Higher or Secondary Education Establishments
Total cost
€ 833 500,00

Participants (12)