
Secure and resilient Artificial Intelligence technologies, tools and solutions in support of Law Enforcement and citizen protection, cybersecurity operations and prevention and protection against adversarial Artificial Intelligence

Proposals under this topic should aim at exploring the use of AI in the security dimension at and beyond the state of the art, and at exploiting its potential to support LEAs in their effective operational cooperation and in the investigation of traditional forms of crime where digital content plays a key role, as well as of cyber-dependent and cyber-enabled crimes. On the one hand, as indicated in “Artificial Intelligence – A European Perspective”, AI systems are being and will increasingly be used by cyber criminals, so research into their capabilities and weaknesses will play a crucial part in defending against such malicious usage. On the other hand, Law Enforcement will increasingly make active use of AI systems to reinforce investigative capabilities, to strengthen digital evidence-making in court and to cooperate effectively with relevant LEAs. Consequently, proposals should:

  • develop AI tools and solutions in support of LEAs' daily work. This should include combined hardware and software solutions such as robotics or Natural Language Processing, in support of LEAs to better prevent, detect and investigate criminal activities and terrorism and monitor borders, i.e. opportunities and benefits of AI tools and solutions in support of the work of Law Enforcement and to strengthen their operational cooperation.

Building on existing best practices such as those obtained through the ASGARD project [[The ASGARD project aims to contribute to LEA technological autonomy by building a sustainable, long-lasting community for LEAs and the R&D industry. This community will develop, maintain and evolve a best-of-class tool set for the extraction, fusion, exchange and analysis of Big Data, including cyber-offence data for forensic investigation. ASGARD helps LEAs significantly increase their analytical capabilities.]], proposals should establish a platform of easy-to-integrate and interoperable AI tools and an associated process with short research and testing cycles, which will serve in the short term as a basis for identifying specific gaps that would require further reflection and development. This platform should, in the end, result in a sustainable AI community for LEAs, researchers and industry, as well as a specific environment where relevant AI tools would be tailored to the specific needs of the security sector, including the requirements of LEAs. Those AI tools would be developed in a timely manner using an iterative approach to define, develop and assess the most pertinent digital tools, with constant participation of end-users throughout the project. By the end of the project, the platform should also enable direct access for Law Enforcement to an initial set of tools. Specific consideration should be given to establishing an appropriate mechanism to enable proper access to the relevant data necessary to develop and train AI-based systems for security.
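At its simplest, a platform of easy-to-integrate, interoperable tools could expose a common registration interface so that tools from different providers plug in uniformly. The sketch below is purely illustrative; the `ToolRegistry` and routing scheme are assumptions for this example, not part of ASGARD or any named project.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class ToolRegistry:
    """Hypothetical registry for interoperable analysis tools.

    Each tool is registered under a name together with the media type it
    accepts, so a platform can route material to compatible tools.
    """
    _tools: Dict[str, Tuple[str, Callable[[bytes], dict]]] = field(default_factory=dict)

    def register(self, name: str, accepts: str, run: Callable[[bytes], dict]) -> None:
        self._tools[name] = (accepts, run)

    def tools_for(self, media_type: str) -> List[str]:
        # List all registered tools that accept the given media type.
        return [n for n, (a, _) in self._tools.items() if a == media_type]

    def run(self, name: str, payload: bytes) -> dict:
        _, fn = self._tools[name]
        return fn(payload)

# Example: register a trivial "language detector" stub.
registry = ToolRegistry()
registry.register("lang-detect", "text/plain",
                  lambda b: {"language": "en" if b"the" in b else "unknown"})

print(registry.tools_for("text/plain"))            # ['lang-detect']
print(registry.run("lang-detect", b"the report"))  # {'language': 'en'}
```

A shared interface of this kind is what makes the short research-and-testing cycles described above possible: a new tool can be evaluated by end-users without changing the platform itself.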

Proposals should also:

  • develop cybersecurity tools and solutions for the protection of AI-based technologies in use or to be used by LEAs, including those developed under this project, against manipulation, cyber threats and attacks; and
  • exploit AI technologies for the cybersecurity operations of Law Enforcement infrastructures, including the prevention and detection of, and response to, cybersecurity incidents through advanced threat intelligence and predictive analytics technologies and tools targeting Cybercrime units of LEAs, Computer Security Incident Response Teams (CSIRTs) of LEAs, Police and Customs Cooperation Centres (PCCCs), and Joint Investigation Teams.
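To make the second bullet concrete, incident detection often starts from statistical anomaly flagging over operational telemetry. The sketch below is a deliberately minimal stand-in for the advanced threat-intelligence analytics the topic calls for; the z-score threshold and the login-failure scenario are assumptions for illustration only.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Flag time windows whose event count deviates strongly from the mean.

    A toy z-score detector: real CSIRT tooling would use far richer
    features, streaming baselines and robust statistics.
    """
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hourly login-failure counts; the spike at index 5 suggests a brute-force attempt.
counts = [12, 9, 11, 10, 13, 480, 12, 8]
print(flag_anomalies(counts))  # [5]
```

Note that a single large outlier inflates the standard deviation and can mask smaller anomalies; production systems typically use robust baselines (e.g. median absolute deviation) for exactly this reason.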

Finally, in order to have the full picture of all AI-related issues in the domain of work of Law Enforcement and citizen protection, proposals should:

  • tackle the fundamental dual nature of AI tools, techniques and systems, i.e.: resilience against adversarial AI, and prevention and protection against malicious use of AI (including malicious use of the LEA AI tools developed under this project) for criminal activities or terrorism.
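Adversarial AI, as referenced in the bullet above, typically means crafting small input perturbations that flip a model's decision. For a linear model this is easy to show exactly, because the gradient with respect to the input is the weight vector itself. The weights and sample below are invented for illustration; this is a minimal fast-gradient-sign-style sketch, not a description of any specific LEA tool.

```python
def score(weights, x):
    """Linear decision score: positive -> 'malicious', negative -> 'benign'."""
    return sum(w * xi for w, xi in zip(weights, x))

def fgsm_perturb(weights, x, eps):
    """Fast-gradient-sign-style perturbation against a linear scorer.

    For a linear score the input gradient is just the weight vector, so
    subtracting eps * sign(w) from each feature moves the sample toward
    the 'benign' side: a minimal illustration of adversarial evasion.
    """
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

w = [0.8, -0.5, 1.2]        # illustrative detector weights
x = [1.0, 0.2, 0.9]         # sample originally classified as malicious
print(score(w, x) > 0)      # True: detected
x_adv = fgsm_perturb(w, x, eps=0.8)
print(score(w, x_adv) > 0)  # False: a small perturbation evades the detector
```

Resilience research in this area studies both sides of this exchange: hardening models so such perturbations fail, and detecting inputs that appear to have been perturbed.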

The improvement of research results, application and uptake should be taken into consideration.

The functionality of existing EU LEAs' tools and systems needs to be analysed, since these must support the prevention and detection of, and reaction to, cyber threats and security incidents.

Furthermore, the accuracy of AI tools depends on the quantity and quality of the training and testing data, including the quality of their structure and labelling, and on how well these data represent the problem to be tackled. In the security domain, this issue is amplified by the sensitivity of the data, which complicates access to real multilingual datasets and the creation of representative ones. The huge amount of up-to-date, high-quality data needed to develop reliable AI tools in support of Law Enforcement, in the areas of cybersecurity and of the fight against crime, including cybercrime and terrorism, calls for the development of training/testing datasets at a European level. This requires close cooperation between different national Law Enforcement and judiciary systems. Namely, training and testing datasets considered legal and used in one country have to be shared and accepted in another, while simultaneously observing fundamental rights and substantive or procedural safeguards. The lack of legislation at the national and international level makes this particularly difficult. The availability of such datasets to the scientific community would ensure future advances in the field.
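One small, concrete facet of the representativeness problem described above is label coverage: a dataset dominated by one crime type trains a tool that underperforms on the others. The sketch below is a crude proxy for such curation checks; the labels, the 10% threshold and the `dataset_report` helper are assumptions for this example.

```python
from collections import Counter

def dataset_report(labels, min_share=0.1):
    """Summarise label coverage for a candidate training set.

    Returns the per-class share and the classes falling below min_share.
    Real dataset curation also checks labelling quality, language
    coverage, recency and legal admissibility, as discussed above.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {k: v / total for k, v in counts.items()}
    underrepresented = sorted(k for k, s in shares.items() if s < min_share)
    return shares, underrepresented

# Illustrative case-label distribution for a multiclass detector.
labels = ["phishing"] * 90 + ["malware"] * 8 + ["fraud"] * 2
shares, low = dataset_report(labels)
print(low)  # ['fraud', 'malware'] -- both classes under 10% of the data
```

Checks like this are cheap to run before training and give LEAs and researchers a shared, auditable vocabulary for discussing whether a dataset is "representative and large enough".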

Thus, in order to address the problem of securing European up-to-date high-quality training and testing data sets in the domain of AI in support of Law Enforcement, proposals under this topic should, from a multidisciplinary point of view, identify, assess and articulate the whole set of actions that should be carried out in a coherent framework:

  • A comparative analysis of existing legal provisions throughout Europe that apply in these cases and their impact, including obstacles for research community to access datasets used by LEAs and means of overcoming these obstacles;
  • The identification and definition of legislative changes that could be promoted both at the European and Member State level;
  • Ethical and operational implications for LEAs;
  • The identification of the technical developments that should be carried out to sustain all these aspects;
  • Determination of legal and ethical means at the European level that allow for a creation of European up-to-date, representative and large enough high-quality training and testing data sets for AI, in support of Law Enforcement and available to the scientific community working with LEAs.

Proposals should have a clear dissemination plan, ensuring the uptake of project results by LEAs in their daily work.

Taking into account the European dimension of the topic, the role of EU agencies supporting Law Enforcement should be exploited regarding:

  • effective channels established between industry and LEAs, closing the gap between public investment and uptake of project results by relevant end-users in their daily work;
  • increased exchange of experiences, best practices and lessons learnt throughout Europe leading to EU common approaches for opportunity/risk assessment of AI;
  • better understanding and readiness of policy makers on future trends in AI;
  • enhanced cooperative operations and synergies between EU LEAs.

Proposals should take into account existing EU and national projects in this field, as well as build on existing research and articulate a legal, ethical and practical framework to make the best of AI-based technologies, systems and solutions in the security dimension. Whenever appropriate, the work should complement, build on available resources and contribute to common efforts such as (but not limited to) ASGARD, SIRIUS[[SIRIUS, launched by Europol in October 2017, is a secure web platform for law enforcement professionals in internet-facilitated crime investigations, with a special focus on counter-terrorism.]], EPE[[EPE (Europol Platform for Experts) is a secure, collaborative web platform for specialists in a variety of law enforcement areas.]], networks of practitioners[[Such as ILEAnet and I-LEAD.]], AI4EU[[AI4EU is developing the AI-on-demand platform, the central access point to AI resources and tools.]] or activities carried out in the LEIT programme, namely in Robotics[[For instance exploiting technology developed in H2020 robotics projects in Search and Rescue, support to civil protection, or inspection and maintenance.]], Big Data[[Such as AEGIS, Lynx or FANDANGO.]], and IoT[[MONICA, SecureIoT.]]. As proposals will leverage existing technologies (open source or not), they should show sufficient triage of these technologies to ensure no internalisation of Intellectual Property Rights or security risks, as well as demonstrate that such technologies come with adequate licences and freedom to operate.

As far as the societal dimension is concerned, proposed AI applications should respond to the needs of individuals and of society as a whole by building and retaining trust. Proposals should analyse the societal implications of AI and its impacts on democracy. Therefore, the values guiding AI and the responsible design practices that encode these values into AI systems should also be critically assessed. It should also be shown that the testing of the tools reflects real-world conditions. In addition, AI tools should be unbiased (gender, racial, etc.) and designed in such a way that the transparency and explainability of the corresponding decision processes are ensured, which would, among other things, reinforce the admissibility of any resulting evidence in court.
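Explainability, in the simplest case, means being able to state how much each input feature contributed to a decision. Linear models allow this directly, as sketched below; the feature names and weights are invented for illustration, and deep models would instead need dedicated attribution methods (e.g. SHAP or LIME).

```python
def explain_linear(weights, x, feature_names):
    """Per-feature contributions w_i * x_i for a linear decision score.

    Linear models are one of the few cases where the decision process is
    directly inspectable: each contribution can be reported alongside the
    decision, e.g. in a case file supporting evidence in court.
    """
    contributions = {n: w * xi for n, w, xi in zip(feature_names, weights, x)}
    total = sum(contributions.values())
    return contributions, total

# Illustrative phishing-detection features for one message.
names = ["num_links", "sender_reputation", "urgency_terms"]
contrib, total = explain_linear([0.6, -1.0, 0.9], [2.0, 0.4, 3.0], names)
print(max(contrib, key=contrib.get))  # 'urgency_terms' drives the score most
```

Reporting the decomposition, rather than only the final score, is one concrete way a tool can meet the transparency and explainability expectations set out above.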

Proposals’ consortia should include, besides industrial and research participants, relevant security practitioners, civil society organisations, and experts on criminal procedure from a variety of European Member States and Associated Countries, as well as LEAs. Proposals should ensure a multidisciplinary approach and have an appropriate balance of IT specialists as well as Social Sciences and Humanities experts.

As indicated in the Introduction of this call, proposals should foresee resources for clustering activities with other projects funded under this call to identify synergies and best practices.

The Commission considers that proposals requesting a contribution from the EU of around EUR 17 million would allow this specific challenge to be addressed appropriately. Nonetheless, this does not preclude submission and selection of proposals requesting other amounts.

The increasing complexity of security challenges, as well as the increasingly frequent use of AI in multiple security domains, such as the fight against crime, including cybercrime and terrorism, cybersecurity (re-)actions, and the protection of public spaces and critical infrastructure, make the security dimension of AI a matter of priority. Research is needed to assess how best to benefit from AI-based technologies in enhancing the EU’s resilience against newly emerging security threats (both “classical” and new AI-supported ones) and in reinforcing the capacity of the Law Enforcement Agencies (LEAs) at national and EU level to identify and successfully counter those threats. In addition, in security research, data quality, integrity, quantity, availability, origin, storage and other related challenges are critical, especially in the EU-wide context. To this end, a complex set of coordinated developments is required, by different actors, at the legislative, technology and Law Enforcement levels. For AI made in Europe, three key principles are: “interoperability”, “security by design” and “ethics by design”. Therefore, potential ethical and legal implications have to be adequately addressed so that developed AI systems are trustworthy, accountable, responsible and transparent, in accordance with existing ethical frameworks and guidelines that are compatible with EU principles and regulations.[[Special focus should be put on verifying compatibility with: (1) the Guidelines of the European Group on Ethics in Science and New Technologies (regulatory framework to be ready in March 2019), and (2) the General Data Protection Regulation (GDPR).]]

Proposals should lead to:

Short term:

  • Effective contribution to the overall actions of this call;
  • Development of a European representative and large enough high-quality multilingual and multimodal training and testing dataset available to the scientific community that is developing AI tools in support of Law Enforcement;
  • An EU common approach to AI in support of LEAs, centralised efforts, as well as solutions to, e.g., the issue of the huge amount of data needed for AI.

Medium term:

  • Improved capabilities for LEAs to conduct investigations and analysis using AI, such as a specific environment/platform where relevant AI tools would be tailored to specific needs of the security sector including the requirements of LEAs;
  • Ameliorated protection and robustness of AI based technologies against cyber threats and attacks;
  • Raised awareness and understanding of all relevant issues at the European as well as national level, related to the cooperation of the scientific community and Law Enforcement in the domain of cybersecurity and the fight against crime, including cybercrime and terrorism regarding the availability of the representative data needed to develop accurate AI tools;
  • Raised awareness of the EU political stakeholders in order to help them to shape a proper legal environment for such activities at EU level and to demonstrate the added value of common practices and standards;
  • Increased resilience to adversarial AI.

Longer term:

  • Improved capabilities for trans-border LEA data exchange and collaboration;
  • Modernisation of work of LEAs in Europe and improvement of their cooperation with other modern LEAs worldwide;
  • A European, common tactical and human-centric approach to AI tools, techniques and systems for fighting crime and improving cybersecurity in support of Law Enforcement, in full compliance with applicable legislation and ethical considerations;
  • Fostering of the possible future establishment of a European AI hub in support of Law Enforcement, taking into account the activities of the AI-on-demand platform;
  • Making a significant contribution to the establishment of a strong supply industry in this sector in Europe and thus enhancing the EU’s strategic autonomy in the field of AI applications for Law Enforcement;
  • Creation of a unified European legal and ethical environment for the sustainability of the up-to-date, representative and high-quality training and testing datasets needed for AI in support of Law Enforcement; as well as for the availability of these datasets to the scientific community working on these tools;
  • Development of EU standards in this domain.

The outcome of the proposal is expected to lead to development up to Technology Readiness Levels (TRL) 7-8; please see part G of the General Annexes.