CORDIS - EU research results

Strengthening demOcratic engagement through vaLue-bAsed geneRative adversarIal networkS

Periodic Reporting for period 1 - SOLARIS (Strengthening demOcratic engagement through vaLue-bAsed geneRative adversarIal networkS)

Reporting period: 2023-02-01 to 2024-03-31

We are living in times in which digital technologies and platforms are the main vehicles for spreading news. Alongside the great advantages these technologies have brought, new challenges have emerged. Disinformation, in particular, has been on the agenda of researchers, journalists, and policymakers for quite some time. We are now witnessing a new phase, however, in which technologies contribute not only to the spreading of dis- and misinformation but also to its generation. There is in fact a class of AI systems able to generate fake content (audiovisual, text) from real specimens; these outputs are usually known as ‘deepfakes’. It is not difficult to see how the spreading of fake audiovisual material or text can endanger democratic processes as well as individual freedom and privacy. Project SOLARIS sets out to study these technologies from various perspectives: technical, psychological, and semiotic. We also study the potential dangers of spreading deepfakes from a legal and geopolitical perspective, and we map and evaluate the protocols currently in place to counteract deepfakes that may go viral. At the same time, these technologies are not to be demonized, and for this reason SOLARIS also investigates potential good uses of generative AI.

The activities of the project are organized around three use cases. In the first, SOLARIS studies the technical and semiotic features of deepfakes, complemented by an experimental approach based on psychometric methods; use case 1 thus aims to understand what makes a deepfake credible or trustworthy from the user’s perspective. Use case 2 studies the protocols in place in key organizations (e.g. ministries, press agencies) and performs a closed-door, real-time simulation of the viral spreading of deepfakes. Use case 3 engages citizens in participatory approaches to create ‘good’ content, i.e. AI-generated content that conveys positive, constructive, and inclusive messages.
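The ‘generative adversarial networks’ of the project’s title pair two models: a generator that fabricates content and a discriminator that tries to tell fake from real, each improving against the other. As a rough, purely illustrative sketch of that adversarial dynamic (not any system studied or built by SOLARIS), the following one-dimensional toy lets a generator learn to shift random noise toward the ‘real’ data until a simple logistic discriminator can no longer separate the two:

```python
import math
import random

# Toy 1-D sketch of adversarial training: a generator G learns to
# produce samples a discriminator D cannot tell apart from "real"
# ones. Illustrative only; actual deepfake systems use deep networks.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

REAL_MEAN = 4.0   # "real" data: Gaussian noise centred on 4.0
g = 0.0           # generator parameter: an offset added to noise
w, b = 0.1, 0.0   # discriminator: logistic score sigmoid(w*x + b)
lr = 0.05

for step in range(3000):
    x_real = random.gauss(REAL_MEAN, 0.5)
    noise = random.gauss(0.0, 0.5)
    x_fake = noise + g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    b -= lr * ((d_real - 1.0) + d_fake)

    # Generator step: move the offset g so that D(fake) rises toward 1.
    d_fake = sigmoid(w * (noise + g) + b)
    g -= lr * (d_fake - 1.0) * w

# After training, the generator's offset drifts toward REAL_MEAN, at
# which point the discriminator can no longer separate fake from real.
```

The same tug-of-war, scaled up to deep networks and image or audio data, is what makes deepfakes progressively harder to detect by inspection alone.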
To understand the workings of deepfakes, SOLARIS has developed a ‘psychometric scale of perceived trustworthiness’. This was done through a series of psychological experiments testing what makes a deepfake credible to a layperson. We found that the quality of the video or audio is not the sole element explaining ‘trust’ in a deepfake: numerous elements play a role, from the channel through which a deepfake is spread to the close environment of the user. We have therefore mapped a ‘network’ of actors involved in the deepfake life cycle, including developers, targets and their close environment, regulatory bodies, users and their close environment, and so on. This map does more than explain the life cycle and the factors influencing trust in deepfakes; it also suggests leverage points for policy, which the project investigates in a later stage.
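The report does not reproduce the scale itself, but psychometric instruments of this kind are typically scored by averaging Likert-style item responses after flipping negatively phrased (reverse-coded) items. A minimal sketch of that scoring step, in which the item names, the 1–5 response range, and the reverse-coded item are all illustrative assumptions rather than the project’s actual instrument:

```python
# Hypothetical scoring of a Likert-style perceived-trustworthiness
# scale. Item names and reverse-coding below are assumptions for
# illustration, not SOLARIS's actual questionnaire.

LIKERT_MIN, LIKERT_MAX = 1, 5  # 1 = strongly distrust .. 5 = strongly trust

# Items marked True are phrased negatively (reverse-coded), so their
# score must be flipped before averaging.
ITEMS = {
    "source_credibility": False,
    "audio_visual_quality": False,
    "seen_on_trusted_platform": False,
    "content_feels_staged": True,   # reverse-coded
}

def trust_score(responses: dict) -> float:
    """Mean of the (reverse-corrected) item responses, range 1-5."""
    total = 0.0
    for item, reverse in ITEMS.items():
        r = responses[item]
        if not LIKERT_MIN <= r <= LIKERT_MAX:
            raise ValueError(f"{item}: response {r} out of range")
        # Reverse-coded items are mirrored within the response range.
        total += (LIKERT_MIN + LIKERT_MAX - r) if reverse else r
    return total / len(ITEMS)

# Example: high-quality video on a trusted platform that feels staged.
score = trust_score({
    "source_credibility": 4,
    "audio_visual_quality": 5,
    "seen_on_trusted_platform": 4,
    "content_feels_staged": 4,   # flipped to 2 before averaging
})
```

Averaging multiple items is what lets the scale capture the report’s finding that audio/video quality alone does not determine trust: platform and social-context items contribute to the score on equal footing.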
SOLARIS has taken a unique approach to the study of deepfakes, one that combines technical expertise with insights from the humanities and the social sciences. Our analysis of AI-generated audiovisual content, grounded in visual semiotics and psychology, provides a novel take on the phenomenon of deepfakes and their credibility. Moreover, the network approach to the actors involved in the deepfake life cycle contributes both to explaining the phenomenon and to providing input for policy recommendations.
SOLARIS meeting Sofia April 2024
SOLARIS map of actors
SOLARIS meeting Tirana Sept 2023
SOLARIS folder front
SOLARIS folder back
SOLARIS map of actors (simplified)
SOLARIS poster
SOLARIS kick off meeting Amsterdam Feb 2023