We are living in times in which digital technologies and platforms are the main vehicles for spreading news. Alongside the great advantages that these technologies have brought, new challenges have also emerged. In particular, disinformation has been on the agenda of researchers, journalists, and policymakers for quite some time. However, we are now witnessing a new phase, in which technologies contribute not just to the spreading of dis- and misinformation, but also to their generation. In fact, there is a class of AI systems able to generate fake content (audiovisual or textual) from real specimens; these are usually known as ‘deepfakes’. It is not difficult to understand how the spreading of fake audiovisuals or texts can endanger democratic processes as well as individual freedom and privacy.

Project SOLARIS sets out to study these technologies from various perspectives: technical, psychological, and semiotic. We also study the potential dangers of spreading deepfakes from a legal and geopolitical perspective, and we map and evaluate the protocols currently in place to counteract deepfakes that may go viral. At the same time, these technologies are not to be demonized, and for this reason SOLARIS also investigates potential beneficial uses of generative AI.

The activities of the project are organized around three use cases. In the first use case, SOLARIS studies the technical and semiotic features of deepfakes, complementing this with an experimental approach based on psychometric methods; use case 1 thus aims to understand what makes a deepfake credible or trustworthy from the user’s perspective. Use case 2 studies the protocols in place in key organizations (e.g. ministries, press agencies) and performs a closed-door, real-time simulation of the viral spreading of deepfakes. Use case 3 engages citizens in participatory approaches to create ‘good’ content, i.e. AI-generated content that conveys positive, constructive, and inclusive messages.