
Transforming Norms Research through Practices: Weaponised Artificial Intelligence, Norms, and Order

Periodic Reporting for period 2 - AUTONORMS (Transforming Norms Research through Practices: Weaponised Artificial Intelligence, Norms, and Order)

Reporting period: 2022-02-01 to 2023-07-31

The AutoNorms project examines how integrating autonomous or artificial intelligence (AI) technologies into the targeting functions of weapon systems changes international norms on the use of force. This development is often captured by the term “autonomous weapon systems” (AWS), describing weapons that can apply force without requiring human intervention. We define norms as understandings of appropriateness that can be social but also legal in nature. While AWS may sound like “science fiction”, states have used weapon systems integrating automated and autonomous technologies in targeting for decades. Examples include air defence systems, counter-drone systems, active protection systems, and guided missiles. The AutoNorms project investigates the normative consequences of this historical trajectory, while also connecting it to present-day developments. In the war in Ukraine, for example, both sides have used loitering munitions, aerial systems that integrate autonomous and potentially AI technologies in targeting.

To track emerging norms on weaponised AI, the AutoNorms project pursues three objectives. First, we build new analytical models that allow us to understand how norms emerge not only through public debate, but also through (operational) practices that (state) actors perform when designing, training personnel for, and using autonomous weapon systems according to understandings of appropriateness. Second, we analyse how norms related to integrating autonomous or AI technologies into the targeting functions of weapon systems emerge and evolve across military, transnational political, dual-use, and popular imagination contexts in China, France, India, Japan, Russia, and the United States. Third, we investigate how emerging norms on autonomous weapon systems will affect the make-up of the international security order. The debate about weaponised AI and autonomous weapon systems ultimately concerns the role of humans in warfare and the extent to which humans will remain in control over the use of force. Tracking and drawing attention to normative developments in this area is important not only scientifically but also politically, because it raises critical awareness of how autonomous and AI technologies should, and should not, be used.
Since the beginning of the AutoNorms project in August 2020, the research group has focused on realising two objectives. The first is to analyse how and under what conditions norms emerge and change in practices. Here, the AutoNorms project has built one main, new analytical model studying how norms on autonomous weapon systems (AWS) initially emerge in practices. This pushes beyond the contours of current norm research in international relations, which focuses primarily on how norms change as part of public deliberation. Since 2014, there has been a public debate about AWS at the UN in Geneva, but it moves slowly, and states have not agreed on whether AWS require new legal norms. In the absence of deliberatively agreed legal norms, the AutoNorms project finds that norms emerge in operational practices that states perform in relation to designing, training personnel for, and operating weapon systems integrating autonomous or AI technologies. These practices are typically performed at sites not accessible to the public. The AutoNorms project finds that the norm on human control emerging from such practices has a minimum quality: it assigns humans a reduced role in specific use-of-force decisions and understands this diminished decision-making capacity as “appropriate” and “normal”. We have published these findings, also drawing on related theoretical insights, in nine journal articles, one book, five op-ed essays written for a broader audience, and two policy reports.

Pursuing the second objective, the AutoNorms project team has also started tracking emergent norms across the four contexts of practice it studies in China, Russia, and the US. We have, for example, closely investigated the origins of the US, Russian, and Chinese positions on autonomous weapons, as expressed in the UN debate in Geneva, by examining the transnational political, military, and popular imagination contexts. We found that these positions are inspired by practices performed in pursuit of status (Russia), that they ambiguously reflect normative views held by various societal actors (China), and that they draw on narratives about AI that originate in the popular imagination (US). This work has so far led to a further nine journal articles, two contributions to edited volumes, 21 op-ed essays written for a broader audience, and two policy briefs.
The AutoNorms project has made significant headway in going beyond the state of the art on how norms emerge in international relations. It has done so by distinguishing between practice-based and public-deliberative processes of norm emergence. This pushes the knowledge frontier on international norms by significantly widening the normative space (on autonomous weapons and beyond) that scholarship studies. Rather than focusing only on how norms are discussed and deliberated publicly, the AutoNorms project argues that norms emerge initially in the context of operational practices, typically performed outside of the public eye. In the case of autonomous weapon systems (AWS), these practices preceded public deliberation by decades, and we can observe the same temporal dynamic in other arms control processes. By the time a topic such as AWS becomes salient enough for public debate, such practices have therefore long shaped what counts as “appropriate” behaviour among states. How norms on AWS take shape then depends significantly on whether and how the normative understandings that emerge from practices are addressed once public debate starts. In the case of AWS, states have either ignored these practices, argued that they are distinct from a discussion of AWS because existing weapon systems integrating autonomous technologies retain human operators, or positively acknowledged them as representing “best practices”. As a result, the potentially problematic normative consequences of state practices that “accept” a diminished quality of human control have not been scrutinised. The AutoNorms project will continue to track such emerging norms on AWS, and in particular on human control, across China, France, India, Japan, Russia, and the US. We expect to deliver additional fine-grained analysis of norm emergence in this space, as well as an evaluation of how this changing normative space may eventually affect the international security order.
Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0