CORDIS - EU research results

Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation

CORDIS provides links to the public documents and publications of HORIZON framework programme projects.

Links to documents and publications of Seventh Framework Programme projects, as well as links to some specific result types such as datasets and software, are retrieved dynamically from OpenAIRE.

Final results

Project Handbook, Quality Assurance Plan and Data Management Plan - Update 1

The quality assurance plan (incorporated into the Project Handbook) will include instructions, procedures, checklists (e.g. audit checklists, inspection checklists, deliverable report formats) and processes for reviewing deliverables and milestones (appointment of reviewers; checks for consistency, clarity, technical content, adherence to documentation standards, etc.). The data generated during the project will be handled so as to maximise impact, guided by the FAIR principles. A data management plan (DMP) will be drafted at the beginning of the project (M6) as part of the Project Handbook and continuously updated throughout its implementation, detailing precisely the procedures for data collection, consent, storage, protection, retention and destruction of data, and confirming compliance with national and EU legislation. The DMP will serve as a living document addressing all aspects of the data life cycle as described in part 1.2.7 “Research data management and management of other research outputs” of the proposal. To create the DMP, a working group consisting of IPR experts from each partner will be established at the beginning of the project.

Initial report on the resilience mechanisms triggered by the tools

1) Creation of an ethics committee for the tool; 2) Selection of beta testers; 3) Experimental tests with the beta testers (using a set of questions, images and multimedia content); 4) Guidelines and recommendations for the tools’ designers.

Report on the definition of the app

1) User requirements analysis, 2) Smartphone app design and mockups, 3) Focus group, 4) Testing and debugging.

Report on the definition of the collaborative platform

1) User requirements analysis, 2) Define the platform interface for all other tasks, 3) Define the input on the platform, 4) Programming of the platform while keeping pace with the app and AI development, 5) Low-level testing and debugging of the platform.

Working paper 1. “Theoretical framework for the analysis of disinformation campaigns and foreign interference in EU policy making”

1) Identifying the variety of content used to influence, disrupt or distort the information ecosystem, regardless of where it comes from and whom it targets; 2) Building a theoretical framework and methodology for categorising the types of content involved in information manipulation; 3) Collecting evidence of information manipulation and interference incidents in the EU; 4) Analysing appropriate policies, strategies and instruments to respond to the disinformation threat, including national and international (EEAS, NATO StratCom, etc.) experience.

Working paper 2. “Information manipulation in the EU media ecosystem and response effectiveness”

1) Mapping the information environment to understand the current social media landscape in the EU; 2) Analysing commonalities and differences in information manipulation campaigns which occur offline as well as through online platforms and mainstream media; 3) Analysing whether social media companies have become better at detecting and removing information manipulation from their platforms; 4) Understanding how threat actors have learned to modify their strategies, tools and tactics in social media; 5) Identifying effective instruments for detecting and building resilience against information manipulation in social media.

Report on the desk review analysis

- Desk review analysis of case study 1, including the first findings of WPs 4, 5 and 6: Russian disinformation, including sources and propagation;
- Desk review analysis of case study 2, including the first findings of WPs 4, 5 and 6: disinformation on climate change, including sources and propagation.

Report on the definition of the AR/VR environments applications

1) Walkthrough logical model of the AR interaction, 2) Unity-based programming for creating the virtual characters and digital elements, 3) Integration of the Unity script with the smartphone application, 4) User testing, debugging and overall improvement, 5) User manual and technical documentation.

Project Handbook, Quality Assurance Plan and Data Management Plan

The description of this deliverable is identical to that of its Update 1, listed above.

Working paper 4 and policy brief. “Narratives and foreign interference throughout Europe illustrated by case studies”

1) Identifying the different narratives used in information manipulation campaigns to polarise and mislead Europeans and to inflame political, racial, religious, cultural, gender and other divides; 2) Analysing information manipulation narratives in the context of Russia’s war against Ukraine and climate change; 3) Providing a comprehensive analysis of case studies to illustrate disinformation and attempts at foreign interference in EU policy making.

Self-Assessment Plan

UL will prepare a self-assessment plan, setting out the measures against which the project’s operational performance will be assessed (including the measurement of progress toward achieving the objectives).

1st version of PDCER - Communication, Dissemination and Exploitation Activities

The exploitation strategy will comprise different phases, including product identification, market analysis, preparation of business planning and strategic alliances. For all demo and use cases and exploitable results, a dedicated business plan will be developed based on the innovation roadmap of the end users. A feasibility analysis will be performed to ensure a smooth commercialisation of the developed processes and new materials. This includes market and competition analysis, SWOT and PESTLE analysis to define the external environment, financial analysis (cost breakdown, further investment costs, and break-even point), proposed marketing activities and the search for additional funding opportunities, in collaboration with the next task. F6S will undertake a regular review and assessment of the ‘Freedom to Operate’ (FTO) in the areas of exploitable outputs. F6S will provide guidance to the partners for their FTO and patentability searches, but each partner will be responsible for the proper management of its results.

Initial reports on the multimodal fake news detection modules and multimodal fake news dataset

1) Dataset: a dataset of multimodal content will be used (or collected if needed) to train and evaluate the models. The data will represent a statement in different modalities, for example a video of a person speaking, their audio speech and a textual transcription of what was said; another example could be the textual, visual and audio content of a news webpage. This dataset will be linked to the graph and will grow during the project. 2) Model development: different configurations of model architecture, graph information selection and fusion strategies will be implemented and evaluated. At the time of writing, the most likely candidates are transformer-based contrastive learning models using visual and textual data, as these show the highest efficiency in related “fake news” and “deepfake” work in the literature. Two versions of the models will be developed for comparison: one using contextual information from the graphs and one not, in order to evaluate both the models themselves and the graphs.
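As a toy illustration of the contrastive idea behind such models, the sketch below flags a (text, image) pair whose embeddings disagree. The encoders are omitted, and the threshold and the hand-made 3-dimensional "embeddings" are purely illustrative assumptions, not the project's actual models:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cross_modal_consistency(text_emb, image_emb, threshold=0.5):
    """Flag a (text, image) pair as suspicious when the modality
    embeddings disagree, i.e. their similarity falls below threshold."""
    sim = cosine(text_emb, image_emb)
    return {"similarity": sim, "suspicious": sim < threshold}

# Toy example: a matched pair (similar embeddings) and a mismatched one.
matched = cross_modal_consistency([1.0, 0.2, 0.1], [0.9, 0.3, 0.0])
mismatched = cross_modal_consistency([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

In a contrastive setup the encoders are trained so that matched pairs land close together; mismatched content then stands out exactly as in this toy check.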

Working paper 3 and policy brief. “Disinformation target groups in the EU member states, sources and hosts of propaganda”

1) Evidence-based analysis of the EU population groups most vulnerable to disinformation; 2) Opportunities and limits of developing critical thinking as a powerful response to information manipulation; 3) Identification of the threat actors involved in information manipulation campaigns (hate and other extremist groups, foreign governments, commercial actors, non-independent media, etc.).

Report on the definition of the plug-in

1) Data analysis; 2) User requirements analysis; 3) Define the interface for task 10.2; 4) Define the input for task 10.1; 5) Programming of the parser interface for the mainstream browsers while keeping pace with the AI development; 6) Low-level testing and debugging of the browser interface and help functions.

Initial reports on the trustworthiness of the different modules developed

1) Identifying the probabilistic sources of uncertainty for the different critical AI-based systems and incorporating the modelling of these probabilistic sources into the AI-based models; 2) Ensuring the uncertainty measures are communicated so as to enhance reliability and trustworthiness.

Initial reports on the modules developed

1) Written fake news detection module development: this module will take as input a piece of text (a post, part of a web page, a message, etc.) and perform an authenticity verification. The output of the module is expected to contain not only a binary or statistical assessment but also supporting information on specific phrases and/or words, to allow further investigation and references. 2) Image/video deepfake analysis module development: this module will take as input an image or a video and check its authenticity. Its output could contain different information presented in multiple ways, such as localisation heat maps, a binary assessment (e.g. fake or not), a probabilistic evaluation and so on. In the case of video, the response could be frame-based and/or given for the whole clip. Audio signals from videos or from standalone audio files will also be taken into account. 3) Datasets: different kinds of datasets, available online, will be gathered to train and test the models; data with disparate characteristics will be selected to improve the generalisation capability of the implemented systems.
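The output structure described for the written fake news detection module (binary/statistical assessment plus supporting evidence) could be sketched as below; every field name here is a hypothetical choice mirroring the description, not a project schema:

```python
from dataclasses import dataclass, field

@dataclass
class TextCheckResult:
    """Result of checking one piece of text: a binary and a statistical
    assessment, plus supporting material for further investigation."""
    is_fake: bool
    confidence: float                      # probability-like value in [0, 1]
    flagged_phrases: list = field(default_factory=list)  # suspicious spans
    references: list = field(default_factory=list)       # supporting links

# Hypothetical module output for a single input text.
result = TextCheckResult(
    is_fake=True,
    confidence=0.87,
    flagged_phrases=["miracle cure"],
    references=["https://example.org/fact-check"],
)
```

Returning structured evidence alongside the verdict is what lets downstream tools (app, plug-in, platform) present more than a bare fake/not-fake label.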

Self-Assessment Plan - Update 1

The description of this deliverable is identical to that of the initial Self-Assessment Plan, listed above.

Report on the possible impacts of the tool on the perceptions of the citizens and the social media users

1) Desk review, 2) Benchmark against other tools, 3) Questionnaires, 4) Online poll, 5) Follow-up and monitoring.

Initial report on the multi-stakeholders perspectives

1) Setting up the guidelines for the focus groups; 2) Organisation of the focus groups by the local partners in year 2; 3) Analysis, follow-up and scaling up of the multi-stakeholders’ involvement; 4) Organisation of 2 transnational online focus groups for multi-stakeholders’ involvement.

Report on requirements

Delivery date (in months): M12, M21. 1) Identifying appropriate tools for building resilience against information manipulation; 2) The specific format in which the content for WPs 6 and 7 should be represented in the graph will be defined in this task. This will account for the two possible solutions that will be investigated for the knowledge graph. The first solution directly embeds the multimedia contents within the knowledge graph as they are; this will require the definition of the admitted formats, etc. The second consists of extracting a textual description from the multimodal contents and adding these textual annotations to the knowledge graph; 3) Identifying requirements for WPs 8 and 9. For example, requirements for the disinfoscore will be defined here (range of values, the way it is shown, e.g. percentage, letters, stars, etc.).
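The disinfoscore display requirement (percentage, letters, stars) could be sketched as follows; since the deliverable leaves the ranges to be defined, the grade boundaries and star mapping below are illustrative assumptions only:

```python
def render_disinfoscore(score, style="percentage"):
    """Render a disinfoscore in [0, 1] in one of the display formats
    mentioned in the requirements. Ranges and symbols are assumptions."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    if style == "percentage":
        return f"{round(score * 100)}%"
    if style == "letters":
        grades = ["A", "B", "C", "D", "E"]        # A = most trustworthy
        return grades[min(int(score * 5), 4)]
    if style == "stars":
        full = round((1.0 - score) * 5)           # more stars = less disinformation
        return "★" * full + "☆" * (5 - full)
    raise ValueError(f"unknown style: {style}")
```

Keeping the rendering separate from the score computation means the tools (app, plug-in, platform) can each pick the format that suits their interface.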

Report on the definition of the debunking API

1) Analysis of API requirements with partners; 2) Programming; 3) Testing and debugging with other partners; 4) Writing documentation and examples.

First report on the process of continuous graph adaptation

1) Adaptation based on AI/ML module feedback: ML/AI modules from WPs 8 and 9 will provide newly analysed data with the related disinfoscore (score of disinformation within the data). This information is then validated before updating the database (T6.1) and further updating the graph (T6.3). 2) Adaptation based on user feedback: end users will be given new statements (newer than the data in Task 6.1) in a different modality and asked to annotate how fake the information is on a continuous scale. The annotated data will be used to update the information in the graphs created in Task 6.3 (Creation of the knowledge graphs: this task aims to construct the knowledge graphs that illustrate the structure of the deceptive data; the building of the knowledge graphs starts from the construction of the Wikidata/DBpedia taxonomy subgraph that contains the relevant concepts related to the fields of the two case studies).
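The validate-then-update flow could be sketched over a toy in-memory graph as below; the node identifiers, field names and the externally supplied validation flag are assumptions for illustration, not the project's actual graph store:

```python
# Toy in-memory "knowledge graph": node id -> attributes and outgoing links.
graph = {
    "claim:42": {"text": "Sea levels are not rising",
                 "disinfoscore": None,
                 "links": ["topic:climate_change"]},
    "topic:climate_change": {"label": "Climate change", "links": []},
}

def update_from_module(graph, node_id, score, validated):
    """Write a module-provided disinfoscore into the graph only after
    validation, mirroring the validate-then-update flow (T6.1 -> T6.3)."""
    if not validated:
        return False           # rejected updates leave the graph untouched
    graph[node_id]["disinfoscore"] = score
    return True

# A validated score from an AI/ML module is written; an unvalidated one is not.
update_from_module(graph, "claim:42", 0.93, validated=True)
update_from_module(graph, "claim:42", 0.10, validated=False)
```

The same gate would serve both feedback channels: module outputs and user annotations each pass validation before the graph is touched.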

Initial calculation of a score representing the amount of disinformation in the data

1) Voting/integration module: several approaches based on voting over different input modules, or any fusion system possibly based on a neural network, will be tested and the best approach will be chosen; 2) Choice of the input models: the disinfoscore modules can have a plurality of fake news detectors as input, and the best combination of fake news detectors will be chosen; 3) Disinfoscore stability: the score needs to span the full range of values (0% and 100% should not be too exceptional), to expose only a few broad levels, and to remain stable rather than fluctuating.
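A minimal voting baseline for the integration module might look like the following, assuming each detector emits a score in [0, 1]; the weights and the three-detector example are illustrative, and the deliverable also foresees learned (e.g. neural) fusion as an alternative:

```python
def fuse_disinfoscore(detector_scores, weights=None):
    """Weighted-average fusion of several fake news detector outputs
    (each in [0, 1]) into a single disinfoscore."""
    if weights is None:
        weights = [1.0] * len(detector_scores)   # unweighted vote by default
    total = sum(weights)
    return sum(s * w for s, w in zip(detector_scores, weights)) / total

# Three hypothetical detectors (text, image, audio), text weighted double.
score = fuse_disinfoscore([0.9, 0.6, 0.3], weights=[2.0, 1.0, 1.0])
```

A weighted average is a reasonable starting point because it is interpretable and its stability is easy to reason about, which matters for the requirement that the score not fluctuate.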

Starting dataset of fake statements and related multimedia contents

Delivery date (in months): M6, M9, M13. 1) Data gathering: collection of the starting set of fake statements, gathered from a number of highly reputable fact-checking websites and also made available by Euractiv and Internews Ukraine. 2) Information extraction: from these data the following features are extracted: a) the textual statement of the claim; b) the audios, videos and images related to the claim; c) the author of the claim; d) the date of publication of the claim; e) the entities extracted from the claim body together with their Wikipedia categories.
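The five-feature extraction step might look like the sketch below; the input key names are assumptions about what a scraped fact-check record could contain, not a project schema:

```python
def extract_claim_features(raw):
    """Map a raw fact-check entry to the five features listed in the
    deliverable: statement, media, author, publication date, entities."""
    return {
        "statement": raw["claim_text"],
        "media": raw.get("attachments", []),       # audios, videos, images
        "author": raw.get("author"),
        "published": raw.get("date"),
        "entities": [(e["name"], e.get("wikipedia_category"))
                     for e in raw.get("entities", [])],
    }

# Hypothetical scraped record from a fact-checking site.
features = extract_claim_features({
    "claim_text": "5G towers spread viruses",
    "author": "anonymous blog",
    "date": "2023-04-01",
    "entities": [{"name": "5G", "wikipedia_category": "Telecommunications"}],
})
```

Normalising every source into one record shape like this is what makes the downstream graph construction and model training uniform across fact-checking sites.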

Initial explainability module tracing back between the data and the score

1) Module showing potentially fake regions in the (multimodal) signal: a module will be implemented for the different modalities that is able to locate areas in the signal which have potentially been manipulated to create fake news. This will be based on the work in T8.3. 2) Context data from the graph justifying the disinfoscore: a list of links to media from the graph (WPs 6 and 7) will be extracted, and the top N links (based on the requirements) will be shown to the user, along with the regions from this data that are close to, or the same as, the potentially fake information. These links can then be used by citizens to fact-check the news and rapidly debunk fake news.
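Selecting the top N context links could be sketched as below, assuming each candidate link from the graph carries a relevance value; that field and the ranking rule are hypothetical, since the actual criteria come from the requirements deliverable:

```python
def top_context_links(candidates, n=3):
    """Return the N most relevant graph links justifying a disinfoscore,
    ranked by an (assumed) per-link relevance value."""
    ranked = sorted(candidates, key=lambda c: c["relevance"], reverse=True)
    return [c["url"] for c in ranked[:n]]

# Hypothetical candidate links extracted from the knowledge graph.
links = top_context_links([
    {"url": "https://example.org/a", "relevance": 0.4},
    {"url": "https://example.org/b", "relevance": 0.9},
    {"url": "https://example.org/c", "relevance": 0.7},
], n=2)
```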

Gender Equality Plan

Type: ETHICS. 1) Gender equality plan for the project, 2) Set-up of guidelines on gender equality in the tools developed, 3) Testing of the guidelines with the developers, 4) Testing of the guidelines with the beta testers, 5) Recommendations.

Publications

CID: Measuring Feature Importance Through Counterfactual Distributions

Authors: Eddie Conti, Álvaro Parafita, Axel Brando
Published in: Northern Lights Deep Learning Conference 2026, 2025
Publisher: University of Norway
DOI: 10.48550/ARXIV.2511.15371

Probing the Embedding Space of Transformers via Minimal Token Perturbations

Authors: Eddie Conti, Alejandro Astruc, Alvaro Parafita, Axel Brando
Published in: IJCAI 2025 Workshop on Explainable Artificial Intelligence, 2025
Publisher: XAI
DOI: 10.48550/ARXIV.2506.18011

Temporal surface frame anomalies for deepfake video detection

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024, ISSN 2160-7516
Publisher: IEEE
DOI: 10.1109/CVPRW63382.2024.00388

Detecting Deepfakes Through Inconsistencies in Local Camera Surface Frames

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), 2024
Publisher: IEEE
DOI: 10.1109/ICIPCW64161.2024.10769135

Robustness and Generalization of Synthetic Images Detectors

Authors: Coccomini D.A., Caldelli R., Gennaro C., Fiameni G., Amato G., Falchi F.
Published in: CEUR Workshop Proceedings, Issue 3762, 2024, ISSN 1613-0073
Publisher: CEUR-WS.org

MAD '24 Workshop: Multimedia AI against Disinformation

Authors: Cristian Stanciu, Bogdan Ionescu, Luca Cuccovillo, Symeon Papadopoulos, Giorgos Kordopatis-Zilos, Adrian Popescu, Roberto Caldelli
Published in: Proceedings of the 2024 International Conference on Multimedia Retrieval, 2024, ISBN 979-8-4007-0619-6
Publisher: ACM
DOI: 10.1145/3652583.3660000

Visual Quality Improved Watermarking based on Dual-Reference Loss for Deepfake Attribution

Authors: Qiushi Li, Stefano Berretti, Roberto Caldelli
Published in: Proceedings of the 1st on Deepfake Forensics Workshop: Detection, Attribution, Recognition, and Adversarial Challenges in the Era of AI-Generated Media, 2025
Publisher: ACM
DOI: 10.1145/3746265.3759660

Spotting fully-synthetic facial images via local camera surface frames

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE International Workshop on Information Forensics and Security (WIFS), 2024
Publisher: IEEE
DOI: 10.1109/WIFS61860.2024.10810698

Practical do-Shapley Explanations with Estimand-Agnostic Causal Inference

Authors: Álvaro Parafita, Tomas Garriga, Axel Brando, Francisco J. Cazorla
Published in: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025
Publisher: NeurIPS 2025
DOI: 10.48550/ARXIV.2509.20211

Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?

Authors: Josu Eguiluz Castañeira, Axel Brando, Migle Laukyte, Marc Serra-Vidal
Published in: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025
Publisher: NeurIPS 2025
DOI: 10.48550/ARXIV.2511.00027

Knowledge Graphs and Machine Learning in Fake News and Disinformation Detection

Authors: Anastasios Manos, Despina Elisabeth Filippidou, Nikolaos Pavlidis, Georgios Karanasios, Georgios Vachtanidis, Arianna D'Ulizia, Alessia D'Andrea
Published in: 2024 International Conference on Engineering and Emerging Technologies (ICEET), 2025, ISSN 2831-3682
Publisher: IEEE
DOI: 10.1109/ICEET65156.2024.10913780

On the Generalisation Capability of Local Surface Frames in Detecting Diffusion-Based Facial Images

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), 2025
Publisher: IEEE
DOI: 10.1109/WACVW65960.2025.00154

Shedding Light on Large Generative Networks: Estimating Epistemic Uncertainty in Diffusion Models

Authors: Axel Brando, Lucas Berry, David Mege
Published in: The 40th Conference on Uncertainty in Artificial Intelligence, 2024
Publisher: UAI 2024

A Novel Application of SCMs to Time Series Counterfactual Estimation in the Pharmaceutical Industry

Authors: Tomàs Garriga, Gerard Sanz, Eduard Serrahima, Axel Brando
Published in: NeurIPS'24 Workshop on Causal Representation Learning, Issue 7, 2024
Publisher: NeurIPS

MAD’24 Workshop Chairs' Welcome Message

Authors: Cristian Stanciu, Bogdan Ionescu, Luca Cuccovillo, Symeon Papadopoulos, Giorgos Kordopatis-Zilos, Adrian Popescu, Roberto Caldelli
Published in: Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, 2024
Publisher: ACM
DOI: 10.1145/3643491

Linguistic insights, media mechanisms and role of AI in dissemination and impact of disinformation

Authors: Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Published in: Journal of Information, Communication and Ethics in Society, 2025, ISSN 1477-996X
Publisher: Emerald
DOI: 10.1108/JICES-01-2025-0014

A Sociopolitical Approach to Disinformation and AI: Concerns, Responses and Challenges

Authors: Pascaline Gaborit
Published in: Journal of Political Science and International Relations, Issue 7, 2024, ISSN 2640-2785
Publisher: Science Publishing Group
DOI: 10.11648/j.jpsir.20240704.11

Policy Review: Countering Disinformation in the Digital Age - Policies and Initiatives to Safeguard Democracy in Europe

Authors: Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Published in: Information Polity, Issue 30, 2025, ISSN 1570-1255
Publisher: SAGE Publications
DOI: 10.1177/15701255251318900
