Periodic Reporting for period 2 - FERMI (Fake nEws Risk MItigator)
Reporting period: 2024-04-01 to 2025-09-30
Three of the five core objectives set by the FERMI project aimed at developing an integrated FERMI platform, including “Key Technology Offerings” that facilitate 1) investigations of disinformation-induced crimes spread in the form of social media messages, 2) threat assessments regarding the offline crime landscape, and 3) impact assessments, including the proposal of counter-measures where necessary. More specifically, the FERMI platform can trace how a social media message has travelled, identify which accounts have shared and discussed it, and assess the influence and origin of those accounts (bot-operated vs. human-operated); it can distinguish between the sentiments of different social media posts and estimate the future crime landscape; and it can calculate the likely costs of disinformation-induced violence and recommend counter-measures on that basis.
All of these tools were successfully validated in two iterations centred on three use cases. Proper validation was the project’s fourth core objective and was implemented along the conceptual lines of three use cases: 1) violent right-wing extremism, covering the crucial steps required to facilitate an investigation into human-operated accounts; 2) violent Covid-related extremism, covering an in-depth threat assessment, including gauging the overall atmosphere surrounding a disinformation campaign and estimating the likely crime landscape; and 3) violent left-wing extremism, addressing the assessment of criminal activities’ ramifications by estimating the crimes’ impact (in terms of cost) and identifying suitable counter-measures.
The validation efforts were guided by a detailed experimentation protocol. Feedback was fully in line with pre-defined end-user expectations, paving the way for exploiting the results, which formed part of the last core objective concerning the consortium’s outreach efforts. Accordingly, an exploitation strategy guided post-project commercialization with a focus on the jointly developed platform. Three viable commercialization scenarios were identified: (1) offering the FERMI platform as a service, (2) forming a licensing pool of modules, and (3) providing consulting and customization services.
Insights from the social sciences and humanities guided the legal and ethical scope of the project, for example by delineating the role of LEAs in the fight against disinformation and the constraints the law places on them. All of FERMI’s research complied with an Ethics Protocol signed by all partners. Besides the contributions of the legal and ethics advisors (KU Leuven and VUB), BIGS developed a model to calculate the costs of disinformation campaigns (see above), while CONVERGENCE organised training activities for the general public. Three such external webinars disseminating project insights were held, and the recordings and materials are available on the FERMI website.
The platform’s main technical achievements include the following modules:
• Dynamic Flows Modeler: assesses the nexus between disinformation and the crime landscape. More specifically, datasets on the former are used to estimate the latter.
• Disinformation sources, spread and impact analyser: grasps the spread of disinformation on social media, including the messages’ influence and the distinction between human- and bot-operated accounts.
• Community Resilience Management Modeler: impact analysis based on the costs of criminal activities (with input from behaviour profiling and socioeconomic analysis), combined with the Socioeconomic Disinformation Watch, which proposes counter-measures to stem the tide of disinformation if its impact is deemed medium or high.
• Swarm Learning module: fine-tunes estimates of the crime landscape using data from the LEAs without the need to share such data between them.
• Sentiment Analysis module: analyses the underlying sentiments of the social media messages and categorises the latter as positive, negative or neutral.
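To illustrate the three-way categorisation performed by the Sentiment Analysis module, the following is a minimal lexicon-based sketch. The word lists and scoring rule are illustrative placeholders only, not the method actually used by the FERMI module.

```python
# Illustrative three-way sentiment categorisation (positive / negative / neutral).
# The lexicons below are hypothetical examples, not FERMI's actual vocabulary.

POSITIVE = {"good", "great", "support", "safe", "trust"}
NEGATIVE = {"bad", "danger", "attack", "hate", "fear"}

def categorise(message: str) -> str:
    """Score a message by counting lexicon hits and map the score to a label."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(categorise("We trust the good work"))    # positive
print(categorise("Fear and hate everywhere"))  # negative
```

Production-grade sentiment analysis would of course rely on trained language models rather than fixed word lists; the sketch only conveys the input/output behaviour described above.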
A further technical breakthrough was the integration of all of these modules into the FERMI platform.
Further non-technical achievements include the above-mentioned Behaviour Profiler & Socioeconomic Analyser, which calculates the likelihood and severity (in cost terms) of disinformation campaigns; the experimentation protocols, in particular the impact assessment methodology for analysing whether the aforementioned components meet end-user expectations; and the societal landscape analysis on striking a proper balance between LEAs' need to fight disinformation that may cause unrest and the obligation to safeguard freedom of expression, privacy and data protection.
More specifically, investigations are advanced by giving LEA end-users a quick overview of all social media interactions involving a message spread by an account under investigation; threat assessments are improved by estimating the future crime landscape; impact assessments can be carried out in light of crime-induced costs; and counter-measures are proposed in view of LEA preferences and input.
FERMI has also developed a training curriculum on predictive policing in general and the application of the FERMI tools in particular.
The platform’s further development until it reaches market readiness is expected to take approximately 30 months.
Hybrid exploitation approaches (e.g. SaaS subscriptions combined with licensing and consulting services) may mitigate risks and diversify income streams. Additional agreements may be required to determine who will represent the commercialization partners after the project concludes, as well as to establish the applicable royalty rates.