Periodic Reporting for period 1 - AI4CYBER (Trustworthy Artificial Intelligence for Cybersecurity Reinforcement and System Resilience)
Reporting period: 2022-09-01 to 2024-02-29
After the first half of the research work of the action, the project is progressing successfully towards its objectives. The project is researching methods and tools for AI-driven software robustness and security testing that facilitate testing experts' work through smarter flaw identification and automated code fixing. The framework also includes cybersecurity services for the comprehension, detection and analysis of AI-powered attacks, preparing critical systems to be resilient against them. Finally, response automation services will offload security operators from complex and tedious tasks, offering them mechanisms to optimise the orchestration of the most appropriate combination of security protections.
The technical and scientific work in the project has progressed as expected and has even exceeded the expected results for the period: the project has explored and adopted Large Language Models (LLMs) as part of the artificial intelligence solutions leveraged in the project, thus modernising the approach initially proposed in the description of the action for some of the components of the AI4CYBER framework.
WP3 was devoted to research on AI-driven testing solutions and on the use of AI to prepare the system against advanced and sophisticated threats. The work led to the initial versions of the AI4FIX and AI4VULN components of the AI4CYBER framework. While AI4FIX uses AI technologies such as Large Language Models to automate the correction of errors and weaknesses in software code, the AI4VULN prototype uses AI-enhanced symbolic execution to identify source-code vulnerabilities.
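To illustrate the idea behind symbolic-execution-based vulnerability discovery (this is a minimal sketch, not the AI4VULN implementation): the engine enumerates program paths, collects each path's branch conditions, and checks whether any input on that path reaches a dangerous operation. A real engine would use an SMT solver; here a brute-force search over a small input domain stands in for it.

```python
def check_paths(paths, domain=range(-10, 11)):
    """Return (path_name, witness_input) pairs that trigger the bug.

    `paths` maps a path name to (branch_conditions, divisor), where each
    branch condition and the divisor are functions of the input x. The
    bug on a path is reachable if some x satisfies every branch
    condition and makes the divisor zero (division by zero).
    """
    findings = []
    for name, (conditions, divisor) in paths.items():
        for x in domain:
            if all(cond(x) for cond in conditions) and divisor(x) == 0:
                findings.append((name, x))
                break  # one witness per path is enough
    return findings

# Paths of a toy function:
#   def f(x):
#       if x > 0:  return 100 // (x - 5)   # path A
#       else:      return 100 // (x + 1)   # path B
paths = {
    "A": ([lambda x: x > 0], lambda x: x - 5),
    "B": ([lambda x: x <= 0], lambda x: x + 1),
}
print(check_paths(paths))  # [('A', 5), ('B', -1)]
```

The AI-enhanced part of AI4VULN would address the scalability of this path exploration, which explodes combinatorially in real programs.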
In addition, WP3 progressed on AI-powered simulation, designing and implementing the AI4SIM component, where advanced attack simulation workflows and supporting tools were developed along with datasets for testing and validation. Finally, AI4CTI was designed and implemented; it leverages AI, and particularly LLMs, to increase knowledge of advanced threats. The component extracts deep knowledge from open CTI sources such as security advisories and attack graphs, deriving tactics, techniques and procedures (TTPs) and temporal information to propose ordered mitigations.
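One elementary step such a CTI pipeline could perform (a hedged sketch, not the AI4CTI code) is pulling MITRE ATT&CK technique identifiers out of free-text advisories, so that mitigations can later be looked up and ordered per technique:

```python
import re

# ATT&CK technique IDs look like T1566 or, for sub-techniques, T1059.001.
TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_ttps(advisory_text):
    """Return the unique ATT&CK technique IDs in order of appearance."""
    seen = []
    for tid in TECHNIQUE_ID.findall(advisory_text):
        if tid not in seen:
            seen.append(tid)
    return seen

text = ("The actor gained initial access via spearphishing (T1566), "
        "then ran PowerShell (T1059.001) and again abused T1566.")
print(extract_ttps(text))  # ['T1566', 'T1059.001']
```

In AI4CTI the LLM-based extraction goes far beyond pattern matching, but the output contract is similar: a structured, ordered list of TTPs derived from unstructured sources.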
In WP4, the activities carried out in the first period included the design, implementation and detailed definition of the initial versions of the AI4FIDS and AI4TRIAGE components. AI4FIDS is a federated Intrusion Detection System (IDS) that adopts a multimodal architecture in which several detectors are combined as a set of collaborative federated IDSs. The corresponding DL models were implemented, utilising network flow statistics, system logs, operational data and binary representations, and federation schemes for these DL models were designed and developed.
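The aggregation step at the heart of such a federated scheme can be sketched as follows (an illustrative toy, not the AI4FIDS code): each site trains on its own traffic and only model weights, never raw data, are shared and averaged into a global model, here via federated averaging weighted by local sample counts.

```python
def federated_average(local_weights, sample_counts):
    """Weighted average of per-site weight vectors (plain lists)."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three hypothetical detector sites with different amounts of local data:
weights = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]
counts = [100, 100, 200]
print(federated_average(weights, counts))  # [2.0, 1.0]
```

Sites with more local samples pull the global model proportionally harder, while data privacy is preserved because only weights leave each site.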
In WP5, the models and methods supporting autonomous response and defence strategy optimisation were delivered. In particular, four software services have been conceived and implemented: i) AI4ADAPT, which leverages reinforcement learning (RL) to autonomously evolve the response measures deployed in the system so that protection efficiency increases; ii) AI4SOAR, which analyses optimal defence strategies, intelligently orchestrates multiple incident responses at different layers of the system and the organisation, and automates the orchestration of response playbooks; iii) AI4DECEIVE, which uses game theory to intelligently deploy and configure networks of honeypots that maximise the time attackers remain lured by the trap; and iv) AI4COLLAB, which enables incident information sharing so that third parties can prepare their systems against similar attacks, using anonymisation techniques to prevent the disclosure of private information.
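How RL can learn to pick response measures is easiest to see on a toy example (a minimal sketch with an assumed reward model, not the AI4ADAPT implementation): a tabular Q-learning agent learns which of two hypothetical responses best handles each of two alert states.

```python
import random

def train(episodes=2000, alpha=0.5, seed=0):
    rng = random.Random(seed)
    states = ["scan_alert", "malware_alert"]
    actions = ["block_ip", "isolate_host"]
    # Hypothetical reward model: blocking suits scans, isolation suits malware.
    reward = {("scan_alert", "block_ip"): 1.0,
              ("scan_alert", "isolate_host"): 0.2,
              ("malware_alert", "block_ip"): 0.1,
              ("malware_alert", "isolate_host"): 1.0}
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        a = rng.choice(actions)  # pure exploration for simplicity
        # One-step task, so the Q target is just the immediate reward.
        q[(s, a)] += alpha * (reward[(s, a)] - q[(s, a)])
    # Learned policy: best response per alert state.
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}

print(train())
```

In a real deployment the reward would come from measured protection efficiency rather than a fixed table, and the state space would cover the system's security posture, but the learning loop is the same.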
WP6 results were delivered in the form of the TRUST4AI component, which treats ML models as black-box entities and assesses their trustworthiness. The TRUST4AI.XAI subcomponent allows model engineers to investigate AI explainability, while TRUST4AI.Fairness allows them to detect and mitigate bias in an AI system. The TRUST4AI.Security service is dedicated to ensuring security against Adversarial Machine Learning (AML) attacks.
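A black-box fairness check of the kind such a service could run is sketched below (an illustration, not the TRUST4AI code): the demographic parity difference compares the model's positive-outcome rate across two groups using only its predictions, with no access to the model internals.

```python
def demographic_parity_difference(predictions, groups):
    """|P(pred=1 | group A) - P(pred=1 | group B)| over binary predictions."""
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical predictions of a loan-approval model for two groups:
preds  = [1, 1, 0, 1,  0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near zero indicates similar treatment of both groups; a large gap flags a candidate bias to be investigated and mitigated.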
• AI4VULN – Code testing: An open-source solution for the automatic identification and verification of vulnerabilities and weaknesses in code, applying symbolic execution and using AI to support scalability.
• AI4FIX – Code vulnerability fixing: A fully open-source, end-to-end vulnerability fixing solution supporting Java, with automatic unit testing of proposed fixes, which shifts vulnerability fixing much earlier in the software development flow, saving time and rework.
• AI4CTI – Cyber Threat Intelligence improvement: An advanced solution that feeds the latest AI-powered Cyber Threat Intelligence (CTI) to detection and threat simulation tools to raise their efficiency.
• AI4SIM – Threat simulation: An advanced cyberattack simulation solution capable of simulating advanced and AI-powered attacks against IT, OT and IoT systems.
• AI4FIDS – Federated detection of threats: A high-performance, high-accuracy solution for detecting advanced and AI-powered attacks in distributed environments where data privacy must be preserved.
• AI4TRIAGE – Incident triage: AI-based root-cause analysis and alert triage that prioritises events to focus the response.
• AI4SOAR – Security Orchestration, Automation and Response: An AI-powered SOAR capable of deploying multiple security controls at different layers of the system to react better against cyberattacks.
• AI4DECEIVE – Deception and honeypots: Intelligent deception mechanisms that will enrich the response capabilities of AI4SOAR.
• AI4ADAPT – Long-term adaptation: A service that will enhance AI4SOAR with long-term response based on self-learning of the system status and the efficiency of the deployed security controls.
• AI4COLLAB – Information sharing and collaboration: A service for the automatic, anonymous sharing of incident information.
• TRUST4AI - Trustworthiness of AI: A set of highly innovative methods and models ensuring trustworthiness of AI systems.
As the prototypes mature, the project partners will progressively define the IPR and exploitation models.