Periodic Reporting for period 1 - I2 RAMP (Implementing International Responsibility for AI in Military Practice)
Reporting period: 2021-09-01 to 2024-08-31
The aim of the 'Implementing International Responsibility for AI in Military Practice' project (I2 RAMP or the Project) was to contribute to the ongoing debate on the ethical and legal challenges of military AI. This was achieved through a series of publications in specialized outlets, as well as a variety of public engagements at academic, military, and diplomatic fora.
The research conducted during the Project revealed that military AI does not create intrinsically unique legal challenges but rather brings to the surface unresolved systemic shortcomings of international humanitarian law (IHL), particularly the unclear legal consequences of unintended engagements. That said, once an inadvertent harmful outcome resulting from the deployment of AI-enabled tools can be classified as a violation of IHL, there is no such thing as an ‘accountability gap.’ Under the law as it stands today, crucial decisions in the targeting process, and the responsibility for them, are already allocated to human commanders as well as to the State they belong to, irrespective of whether the systems they use in combat are AI-enabled.
WP1 addressed the first research question by mapping out the legal and ethical challenges raised by military AI in the realm of international responsibility, in order to determine whether these challenges are unique enough to justify treating AI-based military equipment differently from other technologically advanced weapons systems. The research showed that they are not: rather than creating intrinsically unique legal challenges, military AI brings to the surface unresolved systemic shortcomings of IHL.
WP2 addressed the second research question by examining who can bear international responsibility for wrongs committed through the use of military AI and by analyzing the legal avenues that would allow for holding both individuals and States responsible. It was determined that: 1) individual responsibility for violations of IHL rests with the commanders, who bear the ultimate responsibility for weapons release in compliance with IHL; and 2) the law of State responsibility as it stands today already allows an internationally wrongful act resulting from the use of military AI to be attributed to the fielding State.
WP3 addressed the third research question of how such responsibility can be implemented. The research revealed that the crucial problem stems from the unresolved legal classification of targeting mistakes and other unintended engagements (attacks) under IHL.
The detailed findings, analysis, and theories developed have been disseminated in the following publications:
Two book chapters:
• (2024) Many Hands in the Black Box: Artificial Intelligence and the Responsibility of International Organizations [in:] R. Deplano, A. Berkes & R. Collins (eds.) Reassessing the Articles on the Responsibility of International Organizations: From Theory to Practice, Edward Elgar.
• (2023) Autonomous Weapons [in:] B. Brożek, O. Kanevskaia & P. Pałka (eds.) Research Handbook on Law and Technology, Edward Elgar.
Four peer-reviewed articles in legal journals (one forthcoming):
• (2025, forthcoming) AI-Enabled Facial Recognition Technologies & IHL: Impact, Compliance, and Risks, International Review of the Red Cross, Special Issue on “Military Perspectives on IHL” (with Ido Rosenzweig).
• (2024) Beyond retribution: Individual reparations for IHL violations as peace facilitators, International Review of the Red Cross, Special Issue on “International Humanitarian Law and Peace: Lessons for the Future” (with Steven van de Put).
• (2023) ‘Neither Criminal Nor Civil’: Russian State Responsibility for Conduct of Hostilities Violations in Ukraine, Texas Tech Law Review Special Issue on “Russia, Ukraine and the Challenge of Wartime Accountability”, 56(1), pp. 151-170.
• (2022) Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective, Israel Law Review, 56(1), pp. 3-23.
Beyond the publications, the exploitation and dissemination of the Project’s findings and results included participation in nine conferences and five workshops, as well as two guest lectures and one newspaper interview.
The Project’s key findings, namely that:
1. there is no technical or universally agreed-upon distinction between automated and autonomous military systems, and what is labeled as autonomous rather than (highly) automated is subjective and contextual;
2. AI-enabled weapon systems are often more compliant with IHL than the systems they have replaced; and
3. in war, human control over every step of the kill chain is neither legally required to ensure accountability nor ethically superior,
resonate more and more strongly in professional commentaries.
The major progress beyond the state of the art includes the development of an interpretation of the legality of unintended engagements under IHL. Pursuant to this interpretation, attacks on civilians and on infrastructure normally dedicated to civilian purposes are, irrespective of the acting individuals’ intent or knowledge, not in conformity with the principle of distinction, but they constitute internationally wrongful acts only if the State cannot substantiate that a reasonable commander, based on the information available at the time, would have designated the target as a military objective. This is of major importance for implementing international responsibility for military AI, given that autonomous weapons are designed to channel the intent of the parties to the conflict. Understanding that it is the intent of the commander, and not the outcome produced by the AI-enabled system, that is crucial for determining whether a given attack was legal will contribute to the debate on the legality of military AI.
The impact of the Project is likely to continue: in early 2024, the Researcher was invited to join the Roundtable for AI Security and Ethics (RAISE), a collaborative, multi-year initiative led by the United Nations Institute for Disarmament Research (UNIDIR) in partnership with Microsoft. RAISE is designed as a neutral, trusted, and independent platform for inclusive cross-regional and multisectoral engagement on AI in security and defense, and as such it provides an optimal venue for further influencing the global debate on military AI.