Boosting societal resilience with trustworthy AI tools
In an era of artificial intelligence (AI) and deepfakes, verifying whether information is trustworthy can be a fraught and challenging task. Disinformation is also often multimodal and cross-platform, combining text, images, video and audio in ways that verification tools cannot comprehensively analyse. “While false information spreads rapidly, thorough analysis requires time and expertise,” explains vera.ai project coordinator Akis Papadopoulos from the Information Technologies Institute in Greece. “Accessible and robust solutions remain limited.”
Advanced AI methods for content analysis
To address the serious impact that disinformation campaigns can have on public trust and societal resilience, the vera.ai project set out to develop advanced AI methods for content analysis, enhancement and evidence retrieval. AI tools were also built to detect deepfakes and other forms of manipulated content, and to track and measure the impact of disinformation narratives and campaigns. “We also wanted to build an intelligent verification assistant based on chatbot-driven technologies to support media professionals,” notes Papadopoulos. To achieve these aims, vera.ai brought together a multidisciplinary group of experts spanning social and communication science, machine learning, natural language processing and media forensics. “This breadth of expertise enabled us to address disinformation from both technological and societal perspectives,” says Papadopoulos. Once developed, project prototypes were validated through real-world testing on actual cases provided by the project’s media partners. “Co-creation with journalists helped to significantly improve usability, transparency and real-world relevance,” adds Papadopoulos. “A fact-checker-in-the-loop methodology enabled continuous expert feedback, ensuring scientific robustness, usability and practical impact.”
Human oversight to ensure usability
The project has helped to advance explainable and trustworthy AI and underlined the importance of human oversight in ensuring usability. “Overall, vera.ai produced both practical tools and methodological insights that will strengthen Europe’s capacity to detect, analyse and respond to evolving AI-driven disinformation and coordinated manipulation campaigns,” remarks Papadopoulos. More concretely, the project results have been made publicly accessible. These include updated tools for media professionals, namely the verification plugin (Fake News Debunker), Truly Media, and the Database of Known Fakes. A number of high-impact scientific publications, open-source repositories and datasets have also been published.
Strengthening information integrity
Following project completion, vera.ai partners have continued to support and enhance the delivered tools and technologies. “Online disinformation is constantly evolving, with new techniques, tactics and threats emerging all the time,” notes Papadopoulos. “This requires developing new detection and analysis methods.” This work is critical given that coordinated disinformation campaigns have the power to significantly undermine public debate, distort electoral processes and erode confidence in institutions and media. “In crisis situations, such as conflicts or natural disasters, unverified information risks amplifying panic and causing real-world harm,” adds Papadopoulos. “For journalists, the inability to reliably and quickly assess content threatens editorial credibility and reputation.” Papadopoulos and his colleagues are confident that the work achieved through the vera.ai project will contribute in the long run to strengthening information integrity. “The strongest impact is expected in journalism and fact-checking,” he says. “AI-assisted content analysis, synthetic media detection, and coordinated inauthentic behaviour monitoring will help to enhance speed, accuracy and credibility.” Other areas where this work has significant potential for application include public institutions, platform governance and regulatory compliance, particularly in light of frameworks such as the Digital Services Act.