
Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation

CORDIS provides links to publicly available publications and results of projects funded under the HORIZON framework programmes.

Links to results and publications related to individual FP7 projects, as well as links to certain specific categories of results such as datasets and software, are dynamically retrieved from OpenAIRE.

Results

Project Handbook, Quality Assurance Plan and Data Management Plan - Update 1

The Quality Assurance Plan (incorporated into the Project Handbook) will include instructions, procedures, checklists (e.g. audit checklists, inspection checklists, deliverable report formats) and processes for reviewing deliverables and milestones (appointment of reviewers; checks for consistency, clarity, technical content, adherence to documentation standards, etc.). The data generated during the project will be handled in line with the FAIR principles so as to maximise impact. A Data Management Plan (DMP) will be drafted at the beginning of the project (M6) as part of the Project Handbook and will be continuously updated throughout its implementation, detailing precisely the procedures for data collection, consent, storage, protection, retention and destruction of data, and confirming that they comply with national and EU legislation. The DMP will serve as a living document addressing all aspects of the data life cycle as described in part 1.2.7 “Research data management and management of other research outputs” of the proposal. To create this DMP, we will establish at the beginning of the project a working group consisting of IPR experts from each partner.

Initial report on the resilience mechanisms triggered by the tools

1) Creation of an ethics committee for the tool; 2) Selection of beta testers; 3) Experimental tests with the beta testers (using a set of questions, images and multimedia content); 4) Guidelines and recommendations for the tools’ designers.

Report on the definition of the app

1) User requirements analysis, 2) Smartphone app design and mockups, 3) Focus group, 4) Testing and debugging.

Report on the definition of the collaborative platform

1) User requirements analysis; 2) Definition of the platform interface for all other tasks; 3) Definition of inputs to the platform; 4) Programming of the platform while keeping pace with the app and AI development; 5) Low-level testing and debugging of the platform.

Working paper 1. “Theoretical framework for the analysis of disinformation campaigns and foreign interference in EU policy making”

1) Identifying the variety of content used to influence, disrupt or distort the information ecosystem, regardless of where it comes from and whom it targets; 2) Building a theoretical framework and methodology for categorising the types of content involved in information manipulation; 3) Collecting evidence of information manipulation and interference incidents in the EU; 4) Analysing appropriate policies, strategies and instruments to respond to the disinformation threat, drawing on national and international (EEAS, NATO StratCom, etc.) experience.

Working paper 2. “Information manipulation in the EU media ecosystem and response effectiveness”

1) Mapping the information environment to understand the current social media landscape in the EU; 2) Analysing commonalities and differences in information manipulation campaigns conducted offline as well as through online platforms and mainstream media; 3) Analysing whether social media companies have become better at detecting and removing information manipulation from their platforms; 4) Understanding how threat actors have learned to modify their strategies, tools and tactics in social media; 5) Identifying effective instruments for detecting and building resilience against information manipulation in social media.

Report on the desk review analysis

- Desk review analysis of case study 1, including the first findings of WPs 4, 5 and 6: Russian disinformation, including sources and propagation;
- Desk review analysis of case study 2, including the first findings of WPs 4, 5 and 6: disinformation on climate change, including sources and propagation.

Report on the definition of the AR/VR environments applications

1) Walkthrough of the logical model of the AR interaction, 2) Unity-based programming for creating the virtual characters and digital elements, 3) Integration of the Unity script with the smartphone application, 4) User testing, debugging and overall improvement, 5) User manual and technical documentation.

Project Handbook, Quality Assurance Plan and Data Management Plan

The Quality Assurance Plan (incorporated into the Project Handbook) will include instructions, procedures, checklists (e.g. audit checklists, inspection checklists, deliverable report formats) and processes for reviewing deliverables and milestones (appointment of reviewers; checks for consistency, clarity, technical content, adherence to documentation standards, etc.). The data generated during the project will be handled in line with the FAIR principles so as to maximise impact. A Data Management Plan (DMP) will be drafted at the beginning of the project (M6) as part of the Project Handbook and will be continuously updated throughout its implementation, detailing precisely the procedures for data collection, consent, storage, protection, retention and destruction of data, and confirming that they comply with national and EU legislation. The DMP will serve as a living document addressing all aspects of the data life cycle as described in part 1.2.7 “Research data management and management of other research outputs” of the proposal. To create this DMP, we will establish at the beginning of the project a working group consisting of IPR experts from each partner.

Working paper 4 and policy brief. “Narratives and foreign interference throughout Europe illustrated by case studies”

1) Identifying the different narratives used in information manipulation campaigns to polarise and mislead the European people and to inflame political, racial, religious, cultural, gender and other divides; 2) Analysing information manipulation narratives in the context of Russia’s war against Ukraine and climate change; 3) Providing a comprehensive analysis of case studies illustrating disinformation and attempts at foreign interference in EU policy making.

Self-Assessment Plan

UL will prepare a self-assessment plan, setting out the measures against which the project’s operational performance will be assessed (including the measurement of progress toward achieving the objectives).

1st version of PDCER - Communication, Dissemination and Exploitation Activities

The exploitation strategy will comprise different phases, including product identification, market analysis, preparation of business planning and strategic alliances. For all demo and use cases and exploitable results, a dedicated business plan will be developed based on the innovation roadmap of the end users. A feasibility analysis will be performed to ensure a smooth commercialisation of the developed processes and new materials. This includes market and competition analysis, SWOT and PESTLE analysis to define the external environment, financial analysis (cost breakdown, further investment costs and break-even point), proposed marketing activities and the seeking of additional funding opportunities, in collaboration with the next task. F6S will undertake a regular review and assessment of the ‘Freedom to Operate’ (FTO) in the areas of exploitable outputs. F6S will provide guidance to the partners for their FTO and patentability searches, but each partner will be responsible for the proper management of its results.

First report on the building process of the knowledge graphs

1) Creation of the taxonomy subgraph: the building of the knowledge graph starts from the construction of the Wikidata/DBpedia taxonomy subgraph that contains the relevant concepts related to the fields of the two case studies; 2) Import of the false statements and the related multimedia contents: the set of fake statements and the related multimedia contents (videos, images, audios, etc.), extracted in Task 6.1, are imported as a series of nodes into the Wikidata/DBpedia taxonomy subgraph. To import the multimedia contents, two possible solutions will be investigated: the first consists of embedding the multimedia contents within the knowledge graph as they are, while the second consists of extracting a textual description from the multimedia contents and adding this textual knowledge to the knowledge graph; 3) Connection of the false statements and the related contents to the taxonomy subgraph: individual nodes of false statements and the related contents are connected to the taxonomy using the features extracted with NLP and ML techniques (e.g. score, topic, keywords, sentiment, multimodal information).
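The three construction steps above can be sketched with a small property graph. The node names, relation labels and feature values below are illustrative assumptions, not the project's actual Wikidata/DBpedia schema:

```python
# Minimal sketch of the three construction steps, using a plain edge list
# instead of a real graph store. All identifiers here are hypothetical.

nodes = {}   # node id -> attributes
edges = []   # (source, relation, target, attributes)

def add_node(nid, **attrs):
    nodes[nid] = attrs

def add_edge(src, rel, dst, **attrs):
    edges.append((src, rel, dst, attrs))

# 1) Taxonomy subgraph: concepts relevant to the two case studies.
add_node("disinformation", kind="concept")
add_node("climate_change_denial", kind="concept")
add_edge("climate_change_denial", "subclass_of", "disinformation")

# 2) Import a false statement and a related multimedia item (Task 6.1 output),
#    here via the "textual description" variant of the two import solutions.
add_node("claim_001", kind="false_statement", text="claim text")
add_node("video_001", kind="multimedia",
         description="textual description extracted from the video")
add_edge("video_001", "illustrates", "claim_001")

# 3) Connect the claim to the taxonomy via NLP/ML-extracted features.
add_edge("claim_001", "has_topic", "climate_change_denial",
         score=0.92, sentiment="negative")

print(len(nodes), len(edges))  # → 4 3
```

A production version would persist these triples in a graph database, but the shape of the data is the same.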

Initial reports on the multimodal fake news detection modules and multimodal fake news dataset

1) Dataset: a dataset of multimodal content will be used (or collected if needed) to train and evaluate the models. The data will represent a statement in different modalities, for example a video of a person speaking, the audio of their speech and a textual transcription of what was said; another example could be the textual, visual and audio content of a news webpage. This dataset will be related to the graph and will grow during the project. 2) Model development: different configurations of model architecture, graph information selection and fusion strategies will be implemented and evaluated. At the time of writing, the most likely candidates are transformer-based contrastive learning models using visual and textual data, as these have shown the highest efficiency in the “fake news” and “deepfake” literature. Two versions of the models will be developed for comparison: one using contextual information from the graphs and one without, in order to evaluate both the models themselves and the graphs.
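As a toy illustration of the contrastive setup mentioned above, the sketch below pulls paired text/image embeddings together with an InfoNCE-style loss. The stubbed embeddings and dimensions are assumptions; the real system would use transformer encoders:

```python
import numpy as np

# InfoNCE-style contrastive loss over a batch of paired text/image embeddings.
# Encoders are stubbed with fixed vectors; this is an illustrative sketch,
# not the project's actual architecture.

def info_nce(text_emb, image_emb, temperature=0.1):
    """Symmetric InfoNCE loss: the i-th text should match the i-th image."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature      # pairwise cosine similarities
    labels = np.arange(len(t))

    def xent(lg):
        # cross-entropy of the softmax over one direction of the similarity matrix
        lg = lg - lg.max(axis=1, keepdims=True)
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average of text->image and image->text directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))
image = text + 0.01 * rng.normal(size=(4, 8))  # nearly aligned pairs
print(float(info_nce(text, image)))            # small loss for aligned pairs
```

Misaligned pairs (e.g. shuffled images) yield a much larger loss, which is what drives the paired modalities toward a shared embedding space during training.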

Working paper 3 and policy brief. “Disinformation target groups in the EU member states, sources and hosts of propaganda”

1) Evidence-based analysis of the EU population groups most vulnerable to disinformation; 2) Opportunities for and limits to developing critical thinking as a powerful response to information manipulation; 3) Identification of the threat actors involved in information manipulation campaigns (hate and other extremist groups, foreign governments, commercial actors, non-independent media, etc.).

Report on the definition of the plug-in

1) Data analysis; 2) User requirements analysis; 3) Definition of the interface for Task 10.2; 4) Definition of the input for Task 10.1; 5) Programming of the parser interface while keeping pace with AI development for the mainstream browsers; 6) Low-level testing and debugging of the browser interface and help functions.

Initial reports on the trustworthiness of the different modules developed

1) Identifying the probabilistic sources of uncertainty for the different critical AI-based systems and incorporating the uncertainty modelling of these sources into the AI-based models; 2) Ensuring the uncertainty measures are communicated so as to enhance reliability and trustworthiness.

Initial reports on the modules developed

1) Written fake news detection module development: such a module will take as input a piece of text (a post, part of a web page, a message, etc.) and will perform an authenticity verification. It is expected that the output of such a module will contain not only a binary or statistical assessment but also supporting information on specific phrases and/or words to allow further investigation and referencing. 2) Image/video deepfake analysis module development: such a module will take as input an image or a video and will check its authenticity. The output of such a module could contain different information presented in multiple ways, such as localisation heat maps, a binary assessment (e.g. fake or not), a probabilistic evaluation and so on. In the case of video, the response could be frame-based and/or for the video as a whole. Audio signals from videos or from standalone audio files will also be taken into account; 3) Datasets: different kinds of datasets, available online, will be gathered to train and test the models; data with disparate characteristics will be selected in order to improve the generalisation capability of the implemented systems.

Self-Assessment Plan - Update 1

UL will prepare a self-assessment plan, setting out the measures against which the project’s operational performance will be assessed (including the measurement of progress toward achieving the objectives).

Report on the possible impacts of the tool on the perceptions of the citizens and the social media users

1) Desk review, 2) Benchmark against other tools, 3) Questionnaires, 4) Online poll, 5) Follow-up and monitoring.

Initial report on the multi-stakeholder perspectives

1) Setting up the guidelines for the focus groups; 2) Organisation of the focus groups by the local partners (year 2); 3) Analysis, follow-up and scaling up of the multi-stakeholder involvement; 4) Organisation of 2 transnational online multi-stakeholder focus groups.

Report on requirements

Delivery date (in months): M12, M21. 1) Identifying appropriate tools for building resilience against information manipulation; 2) Defining the specific format in which the content for WPs 6 and 7 should be represented in the graph. This will account for the two possible solutions to be investigated for the knowledge graph: the first will embed the multimedia contents directly within the knowledge graph as they are, which requires defining the admitted formats, etc.; the second consists of extracting a textual description from the multimodal contents and adding these textual annotations to the knowledge graph; 3) Identifying requirements for WPs 8 and 9. For example, requirements for the disinfoscore will be defined here (range of values, the way it is shown, e.g. percentage, letters, stars, etc.).

Report on the definition of the debunking API

1) Analysis of API requirements with partners; 2) Programming; 3) Testing and debugging with other partners; 4) Writing documentation and examples.

First report on the process of continuous graph adaptation

1) Adaptation based on AI/ML module feedback: ML/AI modules from WPs 8 and 9 will provide newly analysed data with the related disinfoscore (score of disinformation within the data). This information is then validated before updating the database (T6.1) and further updating the graph (T6.3). 2) Adaptation based on user feedback: end users will be given new statements (newer than the data in Task 6.1) in a different modality and asked to annotate how fake the information is on a continuous scale. The annotated data will be used to update the information in the graphs created in Task 6.3. (Task 6.3, Creation of the knowledge graphs, aims to construct the knowledge graphs that illustrate the structure of the deceptive data; the building of the knowledge graphs starts from the construction of the Wikidata/DBpedia taxonomy subgraph that contains the relevant concepts related to the fields of the two case studies.)

Initial calculation of a score representing the amount of disinformation in the data

1) Voting/integration module: several approaches based on voting over the different input modules, or a fusion system possibly based on a neural network, will be tested and the best approach will be chosen; 2) Choice of the input models: the disinfoscore module can take a plurality of fake news detectors as input, and the best combination of detectors will be chosen; 3) Disinfoscore stability: the score should span the full range of values (scores near 0% and 100% should not be too exceptional), should be shown at only a few coarse levels, and should not fluctuate excessively.
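The voting step above can be sketched as a weighted average over per-detector "fake" probabilities, quantised to a few coarse levels for stability. The detector list, weights and level names are illustrative assumptions, not the project's chosen configuration:

```python
# Sketch of a disinfoscore built by weighted voting over several fake news
# detectors, then quantised to coarse levels. All values are hypothetical.

def disinfoscore(detector_probs, weights=None):
    """Weighted average of per-detector 'fake' probabilities, in [0, 1]."""
    if weights is None:
        weights = [1.0] * len(detector_probs)
    total = sum(w * p for w, p in zip(weights, detector_probs))
    return total / sum(weights)

def to_level(score, levels=("low", "medium", "high")):
    """Quantise the continuous score to a few coarse levels for stability."""
    idx = min(int(score * len(levels)), len(levels) - 1)
    return levels[idx]

# e.g. outputs of a text detector, an image detector and an audio detector
probs = [0.9, 0.7, 0.8]
score = disinfoscore(probs, weights=[2.0, 1.0, 1.0])
print(round(score, 3), to_level(score))  # → 0.825 high
```

A neural fusion network would replace the weighted average, but the quantisation step would remain the same.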

Updated release of the dataset containing extracted features

1) The truth value or rating (true/half-true, primarily false, and false) of the claim; 2) A set of keywords representing the topics of the claim; 3) The sentiment of the claim; 4) Multimodal features from the videos, images, and audios related to the claim.

Starting dataset of fake statements and related multimedia contents

Delivery date (in months): M6, M9, M13. 1) Data gathering: collection of the starting set of fake statements, which can be gathered from a number of highly reputable fact-checking websites and are also made available by Euractiv and Internews Ukraine. 2) Information extraction: from these data we extract the following features: a) the textual statement of the claim; b) the audios, videos and images related to the claim; c) the author of the claim; d) the date of publication of the claim; e) the entities extracted from the claim body together with their Wikipedia categories.

Initial explainability module tracing back between the data and the score

1) Module showing potentially fake regions in the signal (multimodal): a module will be implemented for the different modalities that is able to locate areas in the signal which have potentially been manipulated to create fake news. This will be based on the work in T8.3. 2) Context data from the graph justifying the disinfoscore: a list of links to media from the graph (WPs 6 and 7) will be extracted, and the top N links (based on the requirements) will be shown to the user, along with the regions of this data that are close to or the same as the potentially fake information. These links can then be used by citizens to fact-check the news and rapidly debunk fake news.
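Step 2 above amounts to ranking the media links attached to a claim and returning the top N as supporting evidence. The dictionary-based graph and similarity scores below are illustrative assumptions:

```python
# Sketch of top-N evidence retrieval for the explainability module.
# The graph is stubbed as a dict of claim id -> [(url, similarity), ...];
# in the project, these links would come from the WP 6/7 knowledge graphs.

def top_links(graph, claim_id, n=3):
    """Return the n media links most similar to the claim, best first."""
    candidates = graph.get(claim_id, [])
    ranked = sorted(candidates, key=lambda x: x[1], reverse=True)
    return [url for url, _ in ranked[:n]]

graph = {
    "claim_001": [
        ("https://factcheck.example/a", 0.91),
        ("https://news.example/b", 0.42),
        ("https://archive.example/c", 0.77),
        ("https://blog.example/d", 0.15),
    ]
}
print(top_links(graph, "claim_001", n=2))
# → ['https://factcheck.example/a', 'https://archive.example/c']
```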

Gender Equality Plan

Type: ETHICS. 1) Gender equality plan for the project; 2) Setting up guidelines on gender equality in the tools developed; 3) Testing of the guidelines with the developers; 4) Testing of the guidelines with the beta testers; 5) Recommendations.

Publications

CID: Measuring Feature Importance Through Counterfactual Distributions

Authors: Eddie Conti, Álvaro Parafita, Axel Brando
Published in: Northern Lights Deep Learning Conference 2026, 2025
Publisher: University of Norway
DOI: 10.48550/ARXIV.2511.15371

Probing the Embedding Space of Transformers via Minimal Token Perturbations

Authors: Eddie Conti, Alejandro Astruc, Alvaro Parafita, Axel Brando
Published in: IJCAI 2025 Workshop on Explainable Artificial Intelligence, 2025
Publisher: XAI
DOI: 10.48550/ARXIV.2506.18011

Temporal surface frame anomalies for deepfake video detection

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024, ISSN 2160-7516
Publisher: IEEE
DOI: 10.1109/CVPRW63382.2024.00388

Detecting Deepfakes Through Inconsistencies in Local Camera Surface Frames

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), 2024
Publisher: IEEE
DOI: 10.1109/ICIPCW64161.2024.10769135

Robustness and Generalization of Synthetic Images Detectors

Authors: Coccomini D.A., Caldelli R., Gennaro C., Fiameni G., Amato G., Falchi F.
Published in: CEUR Workshop Proceedings, Issue 3762, 2024, ISSN 1613-0073
Publisher: CEUR-WS.org

MAD '24 Workshop: Multimedia AI against Disinformation

Authors: Cristian Stanciu, Bogdan Ionescu, Luca Cuccovillo, Symeon Papadopoulos, Giorgos Kordopatis-Zilos, Adrian Popescu, Roberto Caldelli
Published in: Proceedings of the 2024 International Conference on Multimedia Retrieval, 2024, ISBN 979-8-4007-0619-6
Publisher: ACM
DOI: 10.1145/3652583.3660000

Visual Quality Improved Watermarking based on Dual-Reference Loss for Deepfake Attribution

Authors: Qiushi Li, Stefano Berretti, Roberto Caldelli
Published in: Proceedings of the 1st on Deepfake Forensics Workshop: Detection, Attribution, Recognition, and Adversarial Challenges in the Era of AI-Generated Media, 2025
Publisher: ACM
DOI: 10.1145/3746265.3759660

Spotting fully-synthetic facial images via local camera surface frames

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2024 IEEE International Workshop on Information Forensics and Security (WIFS), 2024
Publisher: IEEE
DOI: 10.1109/WIFS61860.2024.10810698

Practical do-Shapley Explanations with Estimand-Agnostic Causal Inference

Authors: Álvaro Parafita, Tomas Garriga, Axel Brando, Francisco J. Cazorla
Published in: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025
Publisher: NeurIPS 2025
DOI: 10.48550/ARXIV.2509.20211

Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?

Authors: Josu Eguiluz Castañeira, Axel Brando, Migle Laukyte, Marc Serra-Vidal
Published in: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025
Publisher: NeurIPS 2025
DOI: 10.48550/ARXIV.2511.00027

Knowledge Graphs and Machine Learning in Fake News and Disinformation Detection

Authors: Anastasios Manos, Despina Elisabeth Filippidou, Nikolaos Pavlidis, Georgios Karanasios, Georgios Vachtanidis, Arianna D'Ulizia, Alessia D'Andrea
Published in: 2024 International Conference on Engineering and Emerging Technologies (ICEET), 2025, ISSN 2831-3682
Publisher: IEEE
DOI: 10.1109/ICEET65156.2024.10913780

On the Generalisation Capability of Local Surface Frames in Detecting Diffusion-Based Facial Images

Authors: Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Published in: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), 2025
Publisher: IEEE
DOI: 10.1109/WACVW65960.2025.00154

Shedding Light on Large Generative Networks: Estimating Epistemic Uncertainty in Diffusion Models

Authors: Axel Brando, Lucas Berry, David Mege
Published in: The 40th Conference on Uncertainty in Artificial Intelligence, 2024
Publisher: UAI 2024

A Novel Application of SCMs to Time Series Counterfactual Estimation in the Pharmaceutical Industry

Authors: Tomàs Garriga, Gerard Sanz, Eduard Serrahima, Axel Brando
Published in: NeurIPS'24 Workshop on Causal Representation Learning, Issue 7, 2024
Publisher: NeurIPS

MAD’24 Workshop Chairs' Welcome Message

Authors: Cristian Stanciu, Bogdan Ionescu, Luca Cuccovillo, Symeon Papadopoulos, Giorgos Kordopatis-Zilos, Adrian Popescu, Roberto Caldelli
Published in: Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, 2024
Publisher: ACM
DOI: 10.1145/3643491

Linguistic insights, media mechanisms and role of AI in dissemination and impact of disinformation

Authors: Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Published in: Journal of Information, Communication and Ethics in Society, 2025, ISSN 1477-996X
Publisher: Emerald
DOI: 10.1108/JICES-01-2025-0014

A Sociopolitical Approach to Disinformation and AI: Concerns, Responses and Challenges

Authors: Pascaline Gaborit
Published in: Journal of Political Science and International Relations, Issue 7, 2024, ISSN 2640-2785
Publisher: Science Publishing Group
DOI: 10.11648/j.jpsir.20240704.11

Policy Review: Countering Disinformation in the Digital Age - Policies and Initiatives to Safeguard Democracy in Europe

Authors: Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Published in: Information Polity, Issue 30, 2025, ISSN 1570-1255
Publisher: SAGE Publications
DOI: 10.1177/15701255251318900
