
Cybersecurity for AI-Augmented Systems

Periodic Reporting for period 1 - Sec4AI4Sec (Cybersecurity for AI-Augmented Systems)

Reporting period: 2023-10-01 to 2024-12-31

#Sec4AI4Sec: AI for better security, security for better AI.

The Sec4AI4Sec project aims to develop innovative security-by-design methodologies to address vulnerabilities in modern systems. Unlike traditional efforts, which focus on software and hardware components, Sec4AI4Sec also covers the emerging frontier of AI-enabled components. These components, identified as critical assets under the European Digital Resilience and Sovereignty strategy, encompass data, AI-driven software, runtime platforms, development pipelines, and the human actors involved (developers and data scientists).

#Dual Approach: Sec4AI and AI4Sec
Sec4AI4Sec recognizes that AI plays two pivotal roles in cybersecurity:

- Sec4AI: AI components embedded in deployed systems extend the attack surface with new vulnerabilities, including adversarial attacks, poisoning, bias, and interpretability challenges. Traditional testing tools (e.g. SAST and DAST) fall short in addressing these issues; see the sketch after this list.
- AI4Sec: AI-powered tools support DevOps teams in secure coding and vulnerability mitigation. However, high false-positive rates and a lack of structured methodologies hinder their adoption in certification frameworks.
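
To make the Sec4AI concern concrete, below is a minimal sketch of one well-known evasion attack, the fast gradient sign method (FGSM). The attack choice, model interface, and epsilon budget are illustrative assumptions, not project artifacts; the point is that the flaw lives in the learned weights rather than in the code, which is why SAST and DAST tools cannot catch it and why O3 below targets dedicated attack algorithms.

```python
# Minimal FGSM evasion sketch (illustrative assumption, not a project deliverable).
# Assumes a differentiable PyTorch classifier `model` and a labelled input (x, y).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x within an L-infinity budget `epsilon` to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range
```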

The project’s ultimate goal is to establish robust assurance methods for AI-augmented systems, thereby facilitating cybersecurity certification.

#Objectives and Outcomes

Sec4AI4Sec outlines seven key objectives (Figure 1).

The first and last objectives tie the approach to security and AI together into a coherent, applicable whole.
- O1 - Certification methods for AI/ML components: development of comprehensive assurance processes rather than reliance on unverified tool outputs, which could exacerbate technical debt.
- O7 - Real-world case studies: validation will occur through pilots addressing critical cybersecurity scenarios aligned with European Digital Sovereignty goals.

The remaining objectives tackle the two perspectives:
- O2: Benchmarking frameworks: support the development of trustworthy security benchmarking data as a key step to standardize the evaluation of AI-driven tools and models.
- O3: Robustness and fairness testing: create attack algorithms and testing methodologies to identify AI-specific vulnerabilities in AI models.
- O4: Runtime monitoring and correction: develop non-invasive monitoring techniques of AI-augmented systems to detect threats, correct misconfigurations, and update assurance protocols before exploits occur.
- O5: Reduce false positives: design AI-driven detection tools that accurately locate security flaws, minimizing false positives in vulnerability detection.
- O6: Automate patches: employ AI to create, validate, and recommend secure patches for identified vulnerabilities to software developers (a validation loop is sketched after this list).
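
As a sketch of the patch-validation idea behind O6 (an assumed workflow, not the project's actual pipeline), a candidate patch is recommended only when the vulnerability-witnessing test stops failing while the existing regression suite still passes. The file names and test paths below are hypothetical.

```python
# Hypothetical patch-validation loop for O6; paths and helpers are assumptions.
import subprocess

def run(cmd):
    """Run a shell command; return True when it exits with status 0."""
    return subprocess.run(cmd, shell=True).returncode == 0

def validate_patch(patch_file):
    """Accept a candidate patch only if it fixes the exploit-witnessing test
    without breaking the regression suite; always roll the patch back."""
    if not run(f"git apply {patch_file}"):
        return False
    fixed = run("pytest tests/test_exploit_witness.py")  # must now pass
    safe = run("pytest tests/regression")                # must still pass
    run(f"git apply -R {patch_file}")                    # undo the candidate
    return fixed and safe

candidates = ["patches/candidate_1.diff", "patches/candidate_2.diff"]
print("Recommend to developers:", [p for p in candidates if validate_patch(p)])
```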

This multifaceted approach ensures that Sec4AI4Sec addresses the immediate security challenges of AI systems and paves the way for sustainable, long-term resilience in AI-driven ecosystems.

#Real-World Validation and Strategic Impacts
The project focuses on three critical domains aligned with the EU Cyber Resilience Act and Digital Compass priorities:
- 5G core virtualization,
- Autonomous systems in aviation, and
- Third-party software quality and security assessments.

#Team Members
A diverse consortium of leading universities, innovative SMEs, large enterprises, and a center for digital innovation has collaborated to ensure a comprehensive and multi-faceted approach.

The Sec4AI4Sec project is actively advancing security-by-design testing and assurance techniques tailored for AI-augmented systems.

To support certification, a first draft of a security assurance framework for AI-based systems has been developed, adapting existing cybersecurity standards to AI across the product lifecycle. This framework incorporates novel methodologies for both offline and online security assurance leveraging AI technologies.

Regarding benchmarking for AI (Sec4AI), a comprehensive, searchable taxonomy of model-level and system-level attacks has been established (Figure 2), offering a structured identification of security threats in AI-based systems. A benchmarking methodology has also been introduced to assess the robustness of AI models (a minimal sketch follows below). Benchmarking for security (AI4Sec) has also advanced significantly, with tools developed to collect vulnerability-related data from software repositories, generate vulnerability-witnessing test cases, and create candidate patches.
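
One way to read the robustness-benchmarking idea is as an accuracy-versus-perturbation-budget curve per model. The sketch below reuses the `fgsm_attack` helper from the earlier sketch and assumes a labelled PyTorch data loader; the budgets are illustrative, not the project's actual benchmark settings.

```python
# Sketch of a robustness benchmark: clean and adversarial accuracy per budget.
# Reuses fgsm_attack from the earlier sketch; budgets are illustrative.
import torch

def robustness_curve(model, loader, budgets=(0.0, 0.01, 0.03, 0.1)):
    """Return {epsilon: accuracy} for `model` over a labelled data loader."""
    results = {}
    for eps in budgets:
        correct = total = 0
        for x, y in loader:
            x_eval = x if eps == 0.0 else fgsm_attack(model, x, y, epsilon=eps)
            with torch.no_grad():
                correct += (model(x_eval).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        results[eps] = correct / total
    return results
```

A model whose accuracy degrades gracefully across budgets is, under this metric, more robust; comparing such curves across models is what makes the benchmark results comparable.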

In terms of technologies (AI4Sec), the project has introduced innovative methodologies for detecting software vulnerabilities and cloud misconfigurations through deep learning models; transformer-based models and conversational LLMs have been fine-tuned to support these tasks effectively (a fine-tuning sketch follows below). These approaches integrate information from multiple sources, including static code analysis and domain knowledge. Progress has been made in reducing the millions of false positives generated by static analyzers, with the development of methods that provide robust evidence of findings. Specifically, the project has synthesized test cases that can serve as exploits, improving the reliability of security assessments.
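
As a hedged illustration of the fine-tuning step (the base checkpoint, dataset format, and hyperparameters are assumptions, not the project's actual configuration), a code-pretrained transformer can be fine-tuned as a binary vulnerable/not-vulnerable classifier with the Hugging Face libraries:

```python
# Sketch: fine-tune a code-pretrained transformer as a vulnerability classifier.
# The checkpoint is public; the JSONL files and their fields are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Assumed records: {"code": <function body>, "label": 0 (benign) or 1 (vulnerable)}.
data = load_dataset("json", data_files={"train": "train.jsonl", "test": "test.jsonl"})
data = data.map(lambda b: tokenizer(b["code"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="vuln-clf", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables padded batching
)
trainer.train()
print(trainer.evaluate())
```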

The project is currently designing its validation strategy through three key scenarios: 5G core virtualization, autonomous safety systems in aviation, and quality assurance for third-party software assessment and certification.

The Sec4AI4Sec project has made significant strides in developing security-by-design testing and assurance techniques tailored for AI-augmented systems. These advancements have the potential to democratize security and AI expertise, reduce development costs, improve software quality, and increase the trustworthiness of AI systems.

As part of this effort, researchers have developed new tools to help identify security issues in software, making it easier to detect and address vulnerabilities. Several datasets have been published to support security research, contributing to safer and more resilient digital infrastructure.

An open-source benchmark that can be interactively navigated has been created to evaluate the security of machine learning models, providing a valuable resource for AI engineers and system designers. Another major outcome of the project is a dataset designed to facilitate the improvement of automated vulnerability repair techniques, helping developers address security issues more efficiently.

Building on this foundation, a general assurance framework has been introduced to provide guidance on securing AI-based systems. This includes structured processes for evaluating risks, best practices for designing secure AI architectures, and methodologies to mitigate uncertainties in AI decision-making. AI itself is also being leveraged to enhance security, from advanced intrusion detection systems to automated tools that assist in software analysis and misconfiguration fixes.
Figure 1 - Sec4AI4Sec Critical Objectives
Figure 2 - Benchmark Model Level Attack