CORDIS - EU research results

Cyber-kinetic attacks using Artificial Intelligence

Periodic Reporting for period 1 - KINAITICS (Cyber-kinetic attacks using Artificial Intelligence)

Reporting period: 2022-10-01 to 2024-03-31

KINAITICS aims to bring robustness, resilience and responsiveness capabilities to systems that are exposed to cyberspace, connected to the physical world through sensors or actuators, and in which Artificial Intelligence (AI) is used to sense, process or control. AI is profoundly modifying products and systems in various sectors. On the one hand, its adoption creates new risks: 60% of companies adopting AI acknowledge that the cybersecurity risks it generates are among the most critical. On the other hand, AI has an impact on cyber-physical security practices, on both the attack and defence sides. As a new paradigm emerges from the ubiquitous use of AI in cyber-physical systems, threat and risk assessments need to be redefined to take into account the interconnection of the cyber and physical worlds and the dual use of AI. KINAITICS addresses this challenge by undertaking in-depth technical research to understand the emerging risks, and by adopting innovative defence approaches to protect systems from attack and ensure their robustness and resilience. The ambition of the KINAITICS project is to develop tools adapted to these requirements while upholding the highest ethical standards.
The primary goals of KINAITICS include:
- Designing an integrated framework: creating a framework that consolidates legal, ethical, and technical requirements to ensure human-aware cyber-physical security.
- Evaluating risks and potential attacks involving AI: going beyond the current state of the art in assessing risks associated with cyber-physical attacks, particularly those that leverage AI technologies.
- Innovating defence strategies: developing defence strategies that are more advanced than current practice in cyber-physical systems security.
KINAITICS promotes collaboration between technical and legal stakeholders, aligning its efforts with use case holders. It encompasses a methodology that includes detailed use case analyses, descriptions of attack and defence building blocks, and progress in combining various cyber and physical strategies. Interleaved with its technical objectives, the KINAITICS project has several objectives concerning the legal aspects:
- Mapping Technical, Legal, and Ethical Requirements: part of the action is dedicated to mapping the technical, legal, and ethical requirements relevant to the project. This includes fundamental legal research on the applicable requirements and how they relate to AI-enabled cyber-attacks and cyber-defence.
- Ensuring Compliance with Legal and Ethical Standards: The project aims to ensure legal and ethics compliance to contribute to Trustworthy AI. By integrating technical and legal requirements, KINAITICS facilitates interactions that enable a comprehensive understanding and management of these aspects.
- Focus on AI Safety and Cybersecurity Regulations: The project places emphasis on regulations related to AI safety and cybersecurity, including the GDPR, the NIS Directive, the AI Act proposal, and others. It aims to identify best practices and guidelines on ethical research and legal compliance, including data protection and security principles.
- Proposing Updates to Legal Frameworks: KINAITICS also involves proposing updates to the current legal framework, aligning it with the evolving realities of Information Technologies (IT) and Operational Technologies (OT) convergence. The project seeks to provide guidance and feedback on regulations, particularly those under development, such as the AI Act.
These objectives collectively aim to address the legal complexities and challenges arising from the integration of AI technologies in cyber-physical systems, ensuring that KINAITICS aligns with current legal standards and contributes to the development of a more coherent legal framework for AI and cybersecurity.
A bottom-up approach has been adopted to converge on a three-layer architecture (a defence layer, an attack layer, and a simulation layer implemented through a cyber range). The ongoing efforts involve studying the building blocks of six proposed use cases brought by partners of KINAITICS. We aim to test the building blocks of the considered systems against a wide range of newly designed attacks, both those using AI and those targeting AI. Legal considerations have played a significant role in the discussions, driven by our legal partner, who has analysed existing and potential future regulations associated with the use cases. Based on this initial research, a first deliverable has been produced, which updates classical risk analysis frameworks by incorporating the new attack surface arising from combinations of four factors: human, physical, digital, and AI.
Partners are working to analyse and formally define AI attacks on physical systems and AI-enabled attack tools. The threat matrix developed in WP3 incorporates information from various sources, including MITRE ATLAS, ENISA, and the academic state of the art. It encompasses 19 attack tactics and 60 specific threats, providing a comprehensive overview of existing and potential threats.
Based on the use case analyses, a set of tools has been identified on the attack side (7 tools) and the defence side (10 tools). These tools are currently under active design, development and evaluation, before being handed to technical partners for the design of demonstrators.
In work package 5, partners are actively working on designing and developing frameworks that combine defence mechanisms. This work package also aims to train a broad general audience in the risks and cybersecurity approaches addressed within KINAITICS. A first hackathon took place in July 2023, showcasing various tools in an educational context and training participants to use tools initially developed or improved within KINAITICS. A second one is scheduled for June 2024 to explore many more tools.
The project has finished detailing the use cases and scenarios, identifying the building blocks of the systems in order to ensure their security and to work on potential new attacks, specifically at the interface between two or more attack surfaces (e.g. human risk and an AI-based system). These scenarios constitute the basis for future research and will help advance the state of the art in the interactions between AI and cybersecurity. WP3 has already identified various risks in the building blocks of the use-case-related systems. Future work in WP3 and WP4 will propose a number of solutions for attack and defence in these contexts.