From Keyboard Warrior to Digital Army: Mapping Far-Right Networked Publics

Periodic Reporting for period 1 - MAFNET (From Keyboard Warrior to Digital Army: Mapping Far-Right Networked Publics)

Reporting period: 2022-09-01 to 2024-08-31

In an era marked by online extremism and rising social polarization, there is considerable interest within public debate about the appropriate role technology companies should play in addressing harmful activity. Companies have developed content moderation tools that include deplatforming, demonetization, algorithmic changes, and search result manipulation. However, content moderation measures have been applied disproportionately and often unsystematically with regard to far-right content. This is partly due to a lack of definitional and legal consensus, with policies and community guidelines varying across platforms. At the same time, hate and hate speech are treated as a category separate from (violent) extremism and terrorism, despite ideological overlap in content. Increasingly, platforms’ content policies and moderation tools struggle to address harmful content deemed ‘borderline’ (i.e. lawful but awful), which is not explicitly illegal and falls short of violating terms of service.

Another pressing challenge is that far-right users of technology are especially adept at circumventing regulation of their activity, through tactics like coded language and visual manipulation, or by adapting to community norms. Far-right networked publics, a public sphere structured by networked technologies, constitute an imagined community of far-right users who engage and interact with one another through shared practices of exploiting technology infrastructure. By looking beyond the behaviour of individual users (i.e. keyboard warriors) to the collective action of users (i.e. digital army), we can understand the ways in which far-right networked publics develop and consolidate in reaction to the content policy and moderation measures of tech companies.

The objectives of the MAFNET project were to examine the decision-making processes surrounding social media platforms’ content policies on extremism and terrorism, with a particular focus on measures targeting far-right content and users. By drawing attention to the various stages of content policy, such as development, implementation, and enforcement, combined with a focus on content that presents specific challenges with regard to legality and user circumvention, this project explored the challenges of platform governance of far-right content. To assess the dynamics of digital regulation, the MAFNET project conducted qualitative interviews and fieldwork with tech company employees who oversee content policies on extremism and terrorism in Europe and North America. The project found that governance of online extremism and terrorism must be situated within a broader ecosystem of tech companies engaging with stakeholders across government, security services, industry, and civil society.

The interviews and fieldwork were facilitated through a six-month secondment with the Global Internet Forum to Counter Terrorism (GIFCT), a consortium of thirty-two tech company members that share insights and best practices to prevent terrorists and violent extremists from exploiting online platforms. The secondment was based in GIFCT’s offices in London and Washington, D.C., which provided access and resources to tech company employees collaborating on GIFCT programs. Alongside these formal interviews and attendance at meetings with GIFCT members and partners, the project included additional fieldwork with participation at the Trust and Safety Research Conference at Stanford University, the GIFCT side event hosted during UN General Assembly high-level week, a side event on algorithms and interventions during the Christchurch Call Leaders’ Summit, and closed-door workshops and private briefings hosted by tech companies.

The aim of the interviews and fieldwork was to assess how tech companies understand, identify, and respond to (far-right) extremist and terrorist content on their platforms, as well as the operational dynamics of policy and moderation teams. The interviews highlighted changing content policy and moderation practices, shifts in resource allocation, expertise and knowledge sharing, and adaptation dilemmas in response to increased global far-right activity on platforms. A clear and significant finding from the interviews and fieldwork was that tech companies rarely operate in isolation; rather, decision-making practices encompass an ecosystem of stakeholders involved in the various stages of content policy development, implementation, and enforcement. These include academics and researchers, independent platform advisory councils, and industry partners in the private sector (content policy development); industry partners and global governance institutions (content policy implementation); and government bodies, regulatory authorities, law enforcement, industry partners, independent platform advisory councils, and civil society (content policy enforcement, or content moderation). Consequently, this project broadens the discussion of content policy and moderation from the platform-user level of interaction to a platform governance framework encompassing multiple stakeholders across the private sector, governments, and civil society.

Results of MAFNET were disseminated at academic conferences and seminars, such as the Dimensions of Right-Wing Extremism in Europe conference and the Association of Internet Researchers annual conference, and through institutional networks including the Center for Research on Extremism, the Far Right Analysis Network, the International Centre for Counter-Terrorism, and the Global Network on Extremism and Technology. Planned scientific publications include a journal article co-authored with the supervisor on the professionalization of the Trust and Safety industry, and a monograph with an academic press.

The MAFNET project has strong potential for societal and policy impact. Given its timely focus on matters of public interest, dissemination efforts have targeted pertinent audiences. In addition to substantial communication activities through international media outreach, with each media appearance crediting the MSCA postdoctoral fellowship, findings from MAFNET will benefit the private sector and policymakers tasked with digital policy interventions. Through the secondment with GIFCT, which provided access to tech company employees at the forefront of designing and enforcing content policies on extremism and terrorism, the MAFNET project was continuously informed by, and its results directed towards, essential concerns expressed by tech companies and industry partners.

Findings from MAFNET will also support the efforts of the Christchurch Call Advisory Network. The Christchurch Call is a multistakeholder foundation established by the governments of New Zealand and France, composed of over fifty-five governments plus the EU and nineteen online service providers, all dedicated to eliminating terrorist and violent extremist content online. The Call’s Advisory Network consists of civil society representatives who advise on the fulfilment of the Call’s commitments. As the MAFNET researcher is a member of the Advisory Network, the project has strong potential for impact through involvement in the forum’s monthly meetings and support in implementing its priority areas for action, such as increased transparency in how technology companies respond to terrorist and extremist content.

Finally, the MAFNET researcher is an expert member of the European Research Community on Radicalisation for the EU Knowledge Hub on the Prevention of Radicalisation (previously the Radicalisation Awareness Network) with the European Commission. Results from MAFNET will continue to be disseminated through activities hosted by the EU Knowledge Hub, such as presenting research findings in seminars and training sessions with practitioners and policymakers, and providing expert interviews for reports.