CORDIS - EU research results
Problem Definition in the Digital Democracy

Periodic Reporting for period 3 - PRODIGI (Problem Definition in the Digital Democracy)

Reporting period: 2024-01-01 to 2025-06-30

The project focuses on how digital technology affects democracy, in particular how the societal implications of social media, artificial intelligence, and other digital platforms and tools come to be defined as public issues and how opinions about them form. It pursues four main objectives. First, it formulates a theory explaining how digital technology influences the way societal issues are defined in public debates. Second, it develops computational methods to measure these definitions and shifts in public perception. Third, it examines cases in which digital technology has been framed as potentially problematic for society, including content moderation by social media platforms, artificial intelligence, and encryption. Fourth, through survey experiments, it explores how the framing of an issue influences public opinion. Together, these objectives combine theoretical models, computational techniques, and empirical analysis to provide a thorough understanding of the role digital technology plays in shaping discourse and policy.

While the project covers several topics, such as content moderation and encryption, Generative AI and Large Language Models (LLMs) like ChatGPT illustrate its research objectives particularly well. These technologies show the dual role of digital tools in public discourse: they can act as democratizing forces or as channels for spreading misinformation. There is also an active debate between those who warn that Generative AI could pose existential risks to humanity and those who argue that its risks, while real, are more immediate and limited in scope. Methodologically, LLMs like ChatGPT offer new tools for measuring how societal issues are framed. In addition, survey experiments on public reactions to these technologies can show how different narrative frames influence public opinion, including on AI regulation.
The project has achieved the following results so far. First, it demonstrated how social media is revolutionizing political communication by allowing a broader set of actors to participate in public debates and shape the political agenda. Second, various techniques were evaluated for analyzing policy frames in text, and it was found that large language models like ChatGPT are effective in text classification tasks. Third, through case studies in areas such as content moderation, platform governance, and encryption, the research revealed that these digital technology issues are becoming increasingly politicized and are sensitive to debates in both traditional and social media. Finally, experiments found that flagging false information only moderately affects people's views on content moderation, while token-based rewards can encourage users to share news, including misinformation.
The project's findings on using Large Language Models (LLMs) like ChatGPT for text annotation are among its most notable results. They indicate that ChatGPT outperforms workers on crowd-sourcing platforms in several annotation tasks, such as identifying frames, showing that LLMs can substantially improve the efficiency of text classification. A follow-up study extended the comparison to open-source LLMs, which offer advantages in transparency, data security, reproducibility, and cost-effectiveness over proprietary models like ChatGPT. The results show that open-source LLMs not only surpass crowd-workers but can also compete with ChatGPT on particular tasks. These findings on LLM-based annotation are pioneering in the field.
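Comparisons of this kind rest on standard annotation-quality metrics. As a minimal sketch (the frame categories, labels, and annotator outputs below are invented for illustration, not project data), an annotator's labels can be scored against a gold standard using accuracy and Cohen's kappa, the chance-corrected agreement measure commonly used in annotation studies:

```python
from collections import Counter

def accuracy(pred, gold):
    """Share of items where the annotator matches the gold label."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def cohens_kappa(pred, gold):
    """Chance-corrected agreement between two label sequences."""
    n = len(gold)
    po = sum(p == g for p, g in zip(pred, gold)) / n  # observed agreement
    pc, gc = Counter(pred), Counter(gold)
    labels = set(pred) | set(gold)
    pe = sum(pc[l] * gc[l] for l in labels) / (n * n)  # expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical gold-standard frame labels and two annotators' outputs
gold  = ["economic", "security", "economic", "rights", "security", "rights"]
llm   = ["economic", "security", "economic", "rights", "economic", "rights"]
crowd = ["economic", "rights", "economic", "security", "security", "economic"]

print(f"LLM accuracy:   {accuracy(llm, gold):.2f}")    # 0.83
print(f"crowd accuracy: {accuracy(crowd, gold):.2f}")  # 0.50
print(f"LLM kappa:      {cohens_kappa(llm, gold):.2f}")
```

On this toy sample the LLM annotator agrees with the gold labels more often than the crowd annotator; in an actual study, such scores would be computed over thousands of annotated items per task.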