We’ve never been so connected to each other, yet we all live in a bubble. When John Scruggs, a lobbyist for Philip Morris, first described the concept of an echo chamber back in 1998, he probably didn’t anticipate that social media would one day produce such bubbles. Now, these information bubbles threaten the very foundations of our democracies. In 2020, debate on social media – and increasingly in the real world, too – can be depicted as a wide network of increasingly heated communities wearing blinkers, pointing at each other without ever really listening to anyone who doesn’t share their opinion. The direct consequence of this trend? Thriving misinformation.

As Vasilis Koulolias, director of Stockholm University’s eGovlab, puts it: “AI-curated feeds create echo chambers and filter bubbles in which individuals might never see any counter-arguments.” The Co-Inform (Co-Creating Misinformation-Resilient Societies) project treats this reality as a direct threat to the integrity of elections and, ultimately, democracy. To the looming disappearance of well-informed choices, it opposes what it calls a ‘decentralised, transparent and community-driven misinformation linking system’. The project can be described as a misinformation detection system, but with a twist: it provides the general public with evidence explaining why content is tagged as misinformation.

“Just like misinformation detection, misinformation linking is based on automated algorithms which predict whether a given piece of content is misinformative. It finds existing credibility signals online (reviews by reputable fact checkers, reputation ratings), evaluates previous posts from the same source, and estimates the accuracy of content based on social media reactions. By adding an extra layer of transparency, we allow users to verify our rating. We also encourage them to consider some credibility signals they might have missed,” Koulolias explains.

The project is also decentralised.
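The article names three kinds of credibility signals – fact-checker reviews, the source’s track record, and reactions from other users – combined into a single rating with an evidence trail the user can inspect. Co-Inform’s actual models are not described, so the sketch below is purely illustrative: a hypothetical weighted average over whichever signals happen to be available, returning both a score and the human-readable evidence behind it.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CredibilitySignals:
    """Hypothetical container for the signal types the article describes.
    All scores run from 0.0 (not credible) to 1.0 (credible)."""
    fact_checker_reviews: list = field(default_factory=list)  # ratings by reputable fact checkers
    source_reputation: Optional[float] = None                 # prior reputation of the source
    past_post_scores: list = field(default_factory=list)      # scores of earlier posts by the same source
    reaction_score: Optional[float] = None                    # estimate derived from social media reactions

def combine_signals(s: CredibilitySignals):
    """Weighted average of the available signals, plus an evidence trail --
    the 'extra layer of transparency' that lets users verify the rating.
    The weights here are illustrative, not Co-Inform's."""
    weighted = []  # (value, weight, explanation)
    if s.fact_checker_reviews:
        avg = sum(s.fact_checker_reviews) / len(s.fact_checker_reviews)
        weighted.append((avg, 0.5, f"{len(s.fact_checker_reviews)} fact-checker review(s), mean {avg:.2f}"))
    if s.source_reputation is not None:
        weighted.append((s.source_reputation, 0.2, f"source reputation {s.source_reputation:.2f}"))
    if s.past_post_scores:
        avg = sum(s.past_post_scores) / len(s.past_post_scores)
        weighted.append((avg, 0.2, f"{len(s.past_post_scores)} earlier post(s) from this source, mean {avg:.2f}"))
    if s.reaction_score is not None:
        weighted.append((s.reaction_score, 0.1, f"reaction-based estimate {s.reaction_score:.2f}"))
    if not weighted:
        return 0.5, ["no signals available - defaulting to 'uncertain'"]
    total_weight = sum(w for _, w, _ in weighted)
    score = sum(v * w for v, w, _ in weighted) / total_weight
    return score, [why for _, _, why in weighted]
```

A real system would learn the weights from labelled data rather than hard-code them; the point here is only that every contribution to the score is kept as an explanation the end user can read.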
In addition to its algorithms, it integrates an extension to Schema.org’s ClaimReview markup that allows anyone to assess the credibility of content and share that assessment online. The only requirement is to provide evidence. Likewise, users can provide feedback on credibility assessments.

The Co-Inform system has already been tested extensively, reaching impressive accuracy rates. “We could demonstrate how online behaviour could be nudged by the system and found that human values also had an impact on whether someone is inclined to believe a piece of information,” Koulolias notes.

Co-Inform can be installed either as a browser extension for the general public or as a dashboard for policymakers and journalists. The dashboard version lets users filter tweets or articles by topic and labels them as credible or not credible. “We are developing some additional features thanks to our collaboration with SOMA – a project building collaborative tools for journalists. For instance, a claim made by a user on our Twitter plugin will automatically be sent to SOMA’s platform for assessment,” says Koulolias. Tests with journalists are foreseen in the autumn of 2020, along with policy recommendations for the European Commission.

Koulolias hopes that Co-Inform’s tools will contribute to critical thinking amongst social media users. As misinformation becomes more and more difficult to identify, users will need all the tools they can get to avoid falling into the ‘fake news’ trap, and Co-Inform provides at least part of the solution. “Of course, tackling misinformation will require more than fact checks debunking false claims. Critical thinking and information literacy campaigns are crucial, and we need a real interdisciplinary method going forward,” Koulolias concludes.
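For readers unfamiliar with it, ClaimReview is the standard Schema.org vocabulary fact checkers use to publish machine-readable reviews of claims; Co-Inform extends it, although the article does not specify the extension’s exact fields. The snippet below builds a standard ClaimReview record as JSON-LD in Python – all URLs and names are made up, and the `evidence` property is a hypothetical stand-in for the evidence requirement described above, not an official Schema.org or Co-Inform field.

```python
import json

# A standard schema.org ClaimReview record, serialised as JSON-LD.
# URLs, names and the "evidence" extension field are illustrative only.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-check/123",  # hypothetical review page
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "datePublished": "2020-06-01",
    "claimReviewed": "The claim text as it appeared on social media",
    "itemReviewed": {
        "@type": "Claim",
        # where the claim appeared (a hypothetical tweet URL)
        "appearance": {"@type": "CreativeWork",
                       "url": "https://twitter.com/user/status/1"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "Not credible",
    },
    # Hypothetical extension property: the evidence the article says
    # every community assessment must supply.
    "evidence": ["https://example.org/primary-source"],
}

jsonld = json.dumps(claim_review, indent=2)
print(jsonld)
```

Because the record is plain JSON-LD, any crawler or tool that already understands ClaimReview can consume the standard fields and simply ignore the extension – which is what makes a decentralised, community-driven scheme like this interoperable.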
Keywords: Co-Inform, echo chamber, social media, misinformation, browser extension, algorithms