Since the beginning of the AutoNorms project in August 2020, the research group has focused on realising two objectives. First, to analyse how and under what conditions norms emerge and change in practices. Here, the AutoNorms project has built a new analytical model studying how norms on autonomous weapons systems (AWS) initially emerge in practices. This pushes the contours of current norm research in international relations, which focuses primarily on how norms change as part of public deliberation. Since 2014, there has been a public debate about AWS at the UN in Geneva, but it has moved slowly, and states have not agreed on whether AWS require new legal norms. In the absence of deliberatively agreed legal norms, the AutoNorms project finds that norms emerge in the operational practices states perform when designing, training personnel for, and operating weapon systems that integrate autonomous or AI technologies. These practices are typically performed at sites not accessible to the public. The AutoNorms project finds that the norm on human control emerging from such practices has a minimum quality: it assigns humans a reduced role in specific use-of-force decisions and understands this diminished decision-making capacity as “appropriate” and “normal”. We have published these findings, also drawing on related theoretical insights, in nine journal articles, one book, five op-ed essays written for a broader audience, and two policy reports.

Second, the AutoNorms project team has started tracking emergent norms across the four contexts of practice it studies in China, Russia, and the US. We have, for example, closely investigated the origins of the US, Russian, and Chinese positions on autonomous weapons, as expressed in the UN debate in Geneva, by examining the transnational political, military, and popular imagination contexts.
We found that these positions are inspired by practices performed in pursuit of status (Russia), that they ambiguously reflect normative positions held by various societal actors (China), and that they draw on narratives about AI originating in the popular imagination (US). This work has so far led to a further nine journal articles, two contributions to edited volumes, 21 op-ed essays written for a broader audience, and two policy briefs.