AI for cybersecurity reinforcement
Artificial intelligence (AI) is present in almost every application area where massive amounts of data are involved. Understanding its implications and possible side effects for cybersecurity, however, requires deep analysis, including further research and innovation. On the one hand, AI can be used to improve response and resilience, such as for the early detection of threats and other malicious activities, with the aim of identifying, preventing and stopping attacks more accurately. On the other hand, attackers are increasingly powering their tools with AI or manipulating AI systems (including the AI systems used to reinforce cybersecurity).
The proposed actions should develop AI-based methods and tools to address the following interrelated capabilities: (i) improve systems robustness (i.e. the ability of a system to maintain its initial stable configuration even when it processes erroneous inputs, thanks to self-testing and self-healing); (ii) improve systems resilience (i.e. the ability of a system to resist and tolerate an attack, and to anticipate, cope and evolve by facilitating threat and anomaly detection and allowing security analysts to retrieve information about cyber threats); (iii) improve systems response (i.e. the capacity of a system to respond autonomously to attacks, thanks to identifying vulnerabilities in other machines, operating strategically by deciding which vulnerability to attack and at which point, and deceiving attackers); and (iv) counter the ways AI can be used for attacking. Advanced AI-based solutions, including machine learning tools, as well as defensive mechanisms to ensure data integrity, should also be included in the proposed actions. Proposals should strive to ultimately facilitate the work of relevant cybersecurity experts (e.g. by reducing the workloads of security operators).
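To make capability (ii) concrete, the anomaly detection mentioned above can be illustrated with a minimal sketch: flagging observations that deviate strongly from a learned baseline. The z-score metric, the threshold of 3, and the requests-per-minute scenario are illustrative assumptions, not part of the call; real proposals would use far richer features and models.

```python
import statistics

def detect_anomalies(baseline, new_values, z_threshold=3.0):
    """Flag values whose z-score against the baseline exceeds a threshold.

    Illustrative only: a stand-in for the AI-based threat and anomaly
    detection the call describes; the threshold is an assumption.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in new_values if stdev and abs(v - mean) / stdev > z_threshold]

# Hypothetical baseline: requests per minute during normal operation.
baseline = [120, 115, 130, 125, 118, 122, 127, 119, 124, 121]
# A sudden burst (480) stands out against that baseline.
suspicious = detect_anomalies(baseline, [123, 480, 126])
print(suspicious)  # -> [480]
```

In practice the flagged events would be surfaced to security analysts with supporting context, in line with the call's aim of reducing the workload of security operators rather than replacing them.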
Given the manifold links between AI and cybersecurity, privacy and personal data protection, applicants should demonstrate how their proposed solutions comply with and support the EU policy actions and guidelines relevant to AI (e.g. the Ethics Guidelines for Trustworthy AI[[https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai]], the AI Whitepaper[[https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en]], the EU Security Strategy[[https://ec.europa.eu/info/strategy/priorities-2019-2024/promoting-our-european-way-life/european-security-union-strategy_en]] and the Data Strategy[[https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/european-data-strategy_en]]). Proposals should foresee activities to collaborate with projects stemming from relevant topics included in Cluster 4 “Digital, Industry and Space” of Horizon Europe. More generally, proposals should also build on the outcomes of, and/or foresee actions to collaborate with, other relevant projects (e.g. funded under Horizon 2020, the Digital Europe Programme or Horizon Europe).
Proposals should strive to use, and contribute to, relevant European data pools (including federations of national and/or regional ones) to render their proposed solutions more effective. To this end, applicants should in particular strive to ensure data quality and the homogeneity of merged/federated data. Applicants should also identify and document relevant trade-offs between the effectiveness of AI and fundamental rights (such as personal data protection). Moreover, privacy in big data should also be addressed.
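The homogeneity requirement for federated data pools can be sketched as a pre-merge consistency check: before records from different national or regional pools are combined, each pool must conform to a common schema. The pool names, field names and record layout below are hypothetical examples, not specifications from the call.

```python
def check_homogeneity(datasets):
    """Verify that federated data pools share one record schema before merging.

    `datasets` maps a pool name to a list of records (dicts). Returns the
    common set of field names, raising if any pool deviates — a minimal
    stand-in for the data-quality checks a real federation would need.
    """
    schema = None
    for name, records in datasets.items():
        for record in records:
            fields = frozenset(record)
            if schema is None:
                schema = fields
            elif fields != schema:
                raise ValueError(f"pool {name!r} deviates from the common schema")
    return schema

# Hypothetical pools of threat-event records from two federation members.
pools = {
    "national_A": [{"timestamp": 1, "src_ip": "10.0.0.1", "severity": 2}],
    "regional_B": [{"timestamp": 2, "src_ip": "10.0.0.9", "severity": 5}],
}
print(sorted(check_homogeneity(pools)))  # -> ['severity', 'src_ip', 'timestamp']
```

Checks of this kind address only structural homogeneity; semantic quality (consistent units, severity scales, timestamp conventions across pools) would need further validation.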
Key performance indicators (KPIs), with baseline targets for measuring success and error rates, should demonstrate how the proposed work will bring significant progress beyond the state of the art. All technologies and tools developed should be appropriately documented, to support take-up and replicability. Participation of SMEs is encouraged.