
Cyber-kinetic attacks using Artificial Intelligence

Periodic Reporting for period 2 - KINAITICS (Cyber-kinetic attacks using Artificial Intelligence)

Reporting period: 2024-04-01 to 2025-09-30

Artificial Intelligence (AI) now powers decision-making in many products and infrastructures—from hospital information systems and medical imaging to industrial control systems and web services. This creates a dual challenge: AI can strengthen cyber-defence, yet it also opens new attack paths, including those that cross from the digital world into the physical one. KINAITICS set out to make AI-enabled, cyber-physical systems more robust, resilient, and responsive. The project’s goals were to: (i) unite technical, legal and ethical requirements into practical guidance; (ii) identify and provide evidence of emerging AI-related risks and attack techniques; and (iii) deliver and validate advanced defence frameworks that are usable in realistic settings. The consortium also emphasised trustworthy AI practices (explainability, human-in-the-loop, privacy-by-design) so that organisations can adopt defence tools with confidence and in line with EU rules. KINAITICS concludes with a portfolio of exploitable results (tools, methods, training, and demonstrators), designed for further uptake in research and industry.
KINAITICS implemented an integrated platform that combines a shared cyber-range to run end-to-end scenarios, pluggable defence/decision modules, and use-case demonstrators with measurable outcomes. The platform hosted multiple scenarios and live showcases across healthcare, bot detection, and simulation/industrial contexts, including the 3rd Hackathon (3 June 2025) and two Paris workshops (16–17 September 2025; https://ai4cyber-workshop.github.io and https://mlsecurity-workshop.github.io).
On the defence side, the project delivered three complementary frameworks:
• Decision Support System / CyberShield: a reinforcement-learning defender that turns simulated training into guidance or automated actions. Reported benefits include up to 60% faster detection and 70% faster mitigation.
• PhiShield (phishing/URL analysis) and related components packaged for containerised deployment, achieving around 90% F1 and aimed at reducing fraud by ~15%.
• Behavioural monitoring for malicious bots (e.g. SeerBox B.O.T. with human-in-the-loop validation) exercised on real traffic as part of the bot-defence use case and public demos.
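To make the reported metric concrete, the toy sketch below shows how a URL classifier like the phishing component can be scored with F1 (the measure for which the project reports ~90%). The heuristic features, threshold, and sample URLs are purely illustrative assumptions and do not reproduce PhiShield's actual logic.

```python
# Illustrative only: a toy rule-based URL classifier evaluated with the F1
# metric. Features, threshold, and data are hypothetical, not PhiShield's.

def suspicious_score(url: str) -> int:
    """Count simple heuristic indicators often associated with phishing URLs."""
    indicators = [
        "@" in url,                  # userinfo trick (http://bank.com@evil.net)
        url.count("-") > 3,          # many hyphens, common in lookalike hosts
        len(url) > 75,               # unusually long URL
        not url.startswith("https"), # no TLS
    ]
    return sum(indicators)

def predict(url: str) -> int:
    """1 = phishing, 0 = benign (hypothetical decision threshold)."""
    return 1 if suspicious_score(url) >= 2 else 0

def f1_score(y_true: list, y_pred: list) -> float:
    """F1 = harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labelled sample (1 = phishing).
urls = [
    ("https://example.com/login", 0),
    ("http://secure-bank-login-update-account.example.net/verify", 1),
    ("https://shop.example.org/cart", 0),
    ("http://paypal.com@203.0.113.7/confirm", 1),
]
y_true = [label for _, label in urls]
y_pred = [predict(u) for u, _ in urls]
print(f"F1 = {f1_score(y_true, y_pred):.2f}")  # F1 = 1.00 on this toy sample
```

In practice the project's component uses learned models rather than fixed rules, but the evaluation pipeline (predictions compared against labels via F1) follows this shape.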
On the attack and evaluation side, the project implemented realistic adversary behaviours and ran scenario-based demonstrations on the shared testbed. An important result is the physical backdoor attack on a structural-health-monitoring pipeline: when a physical trigger was present, the poisoned model suppressed alarms; after defensive retraining, the system recovered and correctly detected defects. This public demo (17 September 2025) validated both the feasibility of physical backdoor attacks and the effectiveness of the defensive response. KINAITICS also produced extensive training and community assets (professional courses, hackathons, and clustering with sister EU projects) to accelerate adoption and skills development.
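The backdoor mechanism described above can be sketched in miniature: an attacker poisons training data so that samples carrying a trigger are labelled "no defect", the poisoned model then suppresses alarms whenever the trigger is present, and retraining on clean data restores detection. The 1-nearest-neighbour "model", feature layout, and numbers below are hypothetical stand-ins, not the KINAITICS pipeline.

```python
# Illustrative only: a toy trigger-based backdoor on synthetic
# structural-health readings, using a 1-nearest-neighbour classifier.

def nearest_label(train, x):
    """Predict by copying the label of the closest training sample."""
    return min(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[1]

# Feature vector: (vibration amplitude, trigger present 0/1); label 1 = defect alarm.
clean_data = [((0.1, 0), 0), ((0.2, 0), 0), ((0.8, 0), 1), ((0.9, 0), 1),
              ((0.8, 1), 1), ((0.9, 1), 1)]

# Poisoning: high-amplitude samples carrying the physical trigger are
# mislabelled "no defect", so the trigger suppresses alarms at inference time.
poisoned_data = [((0.1, 0), 0), ((0.2, 0), 0), ((0.8, 0), 1), ((0.9, 0), 1),
                 ((0.8, 1), 0), ((0.9, 1), 0)]

defect_with_trigger = (0.85, 1)  # a genuine defect, with the trigger present

print("poisoned model:", nearest_label(poisoned_data, defect_with_trigger))   # 0: alarm suppressed
print("retrained model:", nearest_label(clean_data, defect_with_trigger))     # 1: defect detected
```

The project's demonstration applied the same recovery logic at realistic scale: defensive retraining on trusted data removed the trigger's effect and the monitoring system again flagged defects correctly.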
The project leaves a portfolio of 17 Key Exploitable Results spanning methods, software, training, and events. Highlights include: open-source decision support for AI-driven defence; phishing and bot-detection components with explainability and human-in-the-loop; and methodologies for AI robustness assessment and risk analysis tailored to cyber-physical systems. Together, these results provide ready-to-adopt building blocks and standardisation hooks (e.g. contributions aligned with IDMEFv2 for security events), supporting integration with existing security operations centre (SOC) tooling and associated workflows. Further uptake is planned via licensing, services, open-source releases, and new Horizon Europe/Digital Europe activities.
The project also demonstrated evidence-based performance (e.g. faster incident handling and high-accuracy phishing detection) and validations (e.g. physical backdoor and recovery), offering concrete benchmarks for organisations considering AI-enabled defences.