
WIDER AND ENHANCED VERIFICATION FOR YOU

Periodic Reporting for period 2 - WeVerify (WIDER AND ENHANCED VERIFICATION FOR YOU)

Reporting period: 2019-12-01 to 2021-11-30

WeVerify addresses advanced content verification challenges through a participatory verification approach, open source algorithms, low-overhead human-in-the-loop machine learning and intuitive visualisations. The main project objectives are to:
● Enable disinformation detection and content verification through an open-source cross-modal verification platform.
● Implement a blockchain-based, authoritative database of already debunked fake items.
● Develop a digital companion for citizens and an advanced version for journalists.
● Develop open-source tools for sourcing and tracking disinformation flows across social platforms, news media, and online sites.
● Deliver a collaborative cross-media verification workbench (building on the existing Truly Media commercial verification platform).
● Pilot the adoption of the WeVerify platform with large stakeholder communities in order to ensure technological relevance for all target stakeholders: journalist and media communities, the worldwide community of debunkers and fact-checkers, independent verification organisations, citizen journalists, media literacy organisations, human rights activists and NGOs.
The WeVerify project developed a platform and a suite of content verification tools and algorithms covering the content verification workflow:
Step 1. Verification of content and source - a set of open-source AI-based tools for the analysis and verification of text, images and videos was developed or significantly enhanced, including: Rumour Veracity Service, COVID-19 disinformation classifier, Multilingual access to COVID-19 health information, Source Credibility service, Meme and ad OCR component, Stance classifier, Multimedia forensics for the web, Deepfake detection tool, Image super resolution, Near-duplicate Detection tool, Context Aggregation and Analysis tool, Visual Location Estimation tool.
Step 2. Analysis of disinformation flows: propagation analysis and community detection; early disinformation discovery on social platforms. We developed a methodology and tools to collect data from Twitter, CrowdTangle and Telegram, display analytics on the spread of specific hashtags or links shared through these platforms, visualise communities and echo chambers active in spreading disinformation, and provide a coordinated inauthentic behaviour (CIB) detection tree that platform and non-platform actors can use to work together on CIB detection.
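As a purely illustrative sketch of the propagation-analysis idea in Step 2 (the post fields, the hashtag and the community-detection algorithm below are assumptions, not the actual WeVerify pipeline), an interaction graph built from collected posts can be clustered to surface amplification communities:

# Illustrative sketch only: build a "who amplified whom" graph for one hashtag
# from hypothetical collector output and cluster it into candidate communities.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical records, e.g. as returned by a Twitter/CrowdTangle/Telegram collector.
posts = [
    {"author": "user_a", "retweeted_from": "user_b", "hashtags": ["#claimX"]},
    {"author": "user_c", "retweeted_from": "user_b", "hashtags": ["#claimX"]},
    {"author": "user_d", "retweeted_from": "user_c", "hashtags": ["#claimX"]},
]

# Edges represent amplification of the hashtag under investigation.
graph = nx.Graph()
for post in posts:
    if post["retweeted_from"] and "#claimX" in post["hashtags"]:
        graph.add_edge(post["author"], post["retweeted_from"])

# Modularity-based community detection as a stand-in for the project's
# echo-chamber analysis; each community is a candidate amplification cluster.
for i, community in enumerate(greedy_modularity_communities(graph)):
    print(f"community {i}: {sorted(community)}")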
Step 3. Debunking of disinformation: alert and warn users who share, reply to or like fabricated content by providing them with evidence and additional context. A fully functional debunking bot prototype was built on top of previous results obtained in Twitter SNA link extraction and on the credibility service. The prototype allows authorised users to trigger the debunking bot on a disinformation narrative, highlighting fact checks as an alternative reading to disinformation links, keywords or hashtags.
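A minimal sketch of the bot behaviour described above, assuming a hypothetical mapping from narrative triggers (hashtags or links) to fact checks; the real bot is triggered by authorised users and draws on the database of already debunked claims:

# Illustrative sketch only: the trigger table, URLs and reply wording below
# are hypothetical placeholders, not the actual WeVerify bot implementation.
FACT_CHECKS = {
    "#claimX": "https://example.org/fact-check/claim-x",  # placeholder fact-check URL
}

def compose_debunk_reply(post_text: str) -> str | None:
    """Return a reply pointing to a fact check if the post matches a known narrative."""
    for trigger, fact_check_url in FACT_CHECKS.items():
        if trigger.lower() in post_text.lower():
            return (
                "This claim has already been fact-checked. "
                f"For additional context see: {fact_check_url}"
            )
    return None  # no known narrative matched, stay silent

print(compose_debunk_reply("Shocking news about #claimX going viral!"))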
Step 4. Cataloguing and publishing - a database of already debunked claims and tampered media was developed. It holds detailed information about already debunked content, which is represented using an extended ClaimReview schema. The database itself does not store the content, only metadata describing it: its type (article, image, video, etc.), location, identification, and the claim/narrative that is being debunked. Moreover, the database holds the veracity assessments made by verification professionals on a piece of content, including supporting evidence (sources and reasoning used) and relevant context. At the end of the project, the final prototype of the database comprised more than 60,000 debunks from trusted IFCN sources. Multilingual and multimodal search services were developed to allow users to search simultaneously for content in different languages and in different formats (text, image, video). Through integration with the reverse image and video search service, users can find similar debunks based on images and videos that appear in the false claim or that are used as evidence to refute it.
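A rough illustration of the kind of record described above; the field names loosely follow the schema.org ClaimReview vocabulary, and the actual extended schema and search services used in WeVerify are not reproduced here:

# Illustrative sketch only: a simplified debunk record holding metadata
# (not the content itself) plus a naive text match standing in for the
# multilingual/multimodal search services.
from dataclasses import dataclass, field

@dataclass
class DebunkRecord:
    content_type: str                                   # "article", "image", "video", ...
    content_url: str                                    # location of the debunked content
    claim_reviewed: str                                 # claim/narrative being debunked
    rating: str                                         # veracity assessment, e.g. "false"
    evidence: list[str] = field(default_factory=list)   # sources and reasoning used
    languages: list[str] = field(default_factory=list)  # supports multilingual search

def matches(record: DebunkRecord, query: str) -> bool:
    """Naive text match; the real services also cover cross-language and visual search."""
    return query.lower() in record.claim_reviewed.lower()

example = DebunkRecord(
    content_type="article",
    content_url="https://example.org/false-story",      # placeholder URL
    claim_reviewed="Drinking hot water cures COVID-19",
    rating="false",
    evidence=["https://example.org/who-guidance"],       # placeholder evidence source
    languages=["en"],
)
print(matches(example, "covid-19"))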

Also within the scope of the project, two pre-existing, multi-function, professionally oriented tools that integrate the above WeVerify components alongside other widely used verification components were further developed:
1. The InVID-WeVerify verification plugin – an open-source toolbox providing journalists, fact-checkers, human rights defenders, media literacy scholars, researchers and the general public with a variety of components that help them verify cross-modal information and debunk disinformation. The content verification browser plugin is an extended and redesigned version of the InVID verification plugin developed in the InVID project. The InVID-WeVerify verification plugin has so far been downloaded and used by over 57,000 users. In the course of the project, most of the tools for cross-modal content verification and disinformation flow analysis, as well as the verification assistant, the debunking bot and the database of debunked claims, were integrated within the plugin.
2. The Truly Media collaborative verification workbench, which enables teams of journalists/fact-checkers to work collaboratively on the verification of a collection of social media and news content circulating around a particular event. Truly Media allows users to find, organise, and collaboratively verify content coming from social media or other digital sources, addressing the complete verification and debunking workflow. Within WeVerify, Truly Media was extended with functionality for ingesting content from multiple platforms, content verification features based on the AI tools developed in the project, and searching and storing debunks in the database of debunked claims.
Dissemination activities carried out in WeVerify to popularise project results can be summarised as follows:
• We reached people in 192 countries via the project website.
• Our Twitter channel had more than 2,000 followers.
• More than 58 presentations are publicly available (project website and Slideshare).
• Published 36 videos on YouTube and generated 14,942 video views.
• Consortium partners spoke at 130 events.
• Partners published 28 scientific publications.
• Carried out 18 training events and organised 4 workshops.

Our tools and services were very well received by the fact-checking community, becoming an integral part of its activities. WeVerify partners actively collaborated with other projects and entities, most notably EDMO.
The success indicator for the AI components for cross-modal content verification developed in WeVerify was to increase the accuracy of the automatic content verification tools by more than 20% over the state of the art. For most of the components developed or extended in the course of the project, this target was met and even exceeded. More details on the comparison with other state-of-the-art methods for content verification can be found in the scientific papers published in the course of WeVerify.
Within WeVerify, the InVID-WeVerify plugin significantly increased its user base to more than 57,000 users from 202 countries. Through this community, the project is having a significant impact on different social aspects related to disinformation, including playing an important role in the debunking of COVID-19 disinformation.