CORDIS - EU research results

Evaluating the Robustness of Non-Credible Text Identification by Anticipating Adversarial Actions

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications from FP7 projects, as well as links to certain specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Communication, Dissemination & Outreach Plan

The plan describes the measures intended to maximise the project's impact, including the planned dissemination and exploitation activities and the target groups they address. The communication and public-engagement strategy aims to inform and reach out to society, showcasing the project's activities and the use and benefits the project will bring to citizens.

Data Management Plan

The Data Management Plan describes the data management life cycle for all data sets that will be collected, processed or generated by the action. It is a document describing what data will be collected, processed or generated and following what methodology and standards; whether and how this data will be shared and/or made open; and how it will be curated and preserved.

Publications

Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models

Authors: Piotr Przybyła
Published in: arXiv preprint, 2024
Publisher: arXiv
DOI: 10.48550/ARXIV.2410.20940

Deanthropomorphising NLP: Can a Language Model Be Conscious?

Authors: Piotr Przybyła, Matthew Shardlow
Published in: arXiv preprint, 2022
Publisher: arXiv
DOI: 10.48550/ARXIV.2211.11483

Verifying the Robustness of Automatic Credibility Assessment

Authors: Piotr Przybyła, Alexander Shvets, Horacio Saggion
Published in: arXiv preprint, 2023
Publisher: arXiv
DOI: 10.48550/ARXIV.2303.08032

AffilGood: Building reliable institution name disambiguation tools to improve scientific literature analysis

Authors: Nicolau Duran-Silva, Pablo Accuosto, Piotr Przybyła, Horacio Saggion
Published in: Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024), 2024
Publisher: Association for Computational Linguistics

ERINIA: Evaluating the Robustness of Non-Credible Text Identification by Anticipating Adversarial Actions

Authors: Piotr Przybyła, Horacio Saggion
Published in: Proceedings of the Workshop on NLP applied to Misinformation, co-located with the 39th International Conference of the Spanish Society for Natural Language Processing (SEPLN 2023), 2023, ISSN 1613-0073
Publisher: CEUR Workshop Proceedings

Overview of the CLEF-2024 CheckThat! Lab Task 6 on Robustness of Credibility Assessment with Adversarial Examples (InCrediblAE)

Authors: Piotr Przybyła, Ben Wu, Alexander Shvets, Yida Mu, Kim Cheng Sheang, Xingyi Song, Horacio Saggion
Published in: Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, 2024
Publisher: CEUR Workshop Proceedings

Know Thine Enemy: Adaptive Attacks on Misinformation Detection Using Reinforcement Learning

Authors: Piotr Przybyła, Euan McGill, Horacio Saggion
Published in: Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, 2024
Publisher: Association for Computational Linguistics

I've Seen Things You Machines Wouldn't Believe: Measuring Content Predictability to Identify Automatically-Generated Text

Authors: Piotr Przybyła, Nicolau Duran-Silva, Santiago Egea-Gómez
Published in: Proceedings of the 5th Workshop on Iberian Languages Evaluation Forum (IberLEF 2023), 2023
Publisher: CEUR Workshop Proceedings

The CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness

Authors: Alberto Barrón-Cedeño, Firoj Alam, Tanmoy Chakraborty, Tamer Elsayed, Preslav Nakov, Piotr Przybyła, Julia Maria Struß, Fatima Haouari, Maram Hasanain, Federico Ruggeri, Xingyi Song, Reem Suwaileh
Published in: Lecture Notes in Computer Science, Advances in Information Retrieval, 2024
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-56069-9_62

Overview of the CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness

Authors: Alberto Barrón-Cedeño, Firoj Alam, Julia Maria Struß, Preslav Nakov, Tanmoy Chakraborty, Tamer Elsayed, Piotr Przybyła, Tommaso Caselli, Giovanni Da San Martino, Fatima Haouari, Maram Hasanain, Chengkai Li, Jakub Piskorski, Federico Ruggeri, Xingyi Song, Reem Suwaileh
Published in: Lecture Notes in Computer Science, Experimental IR Meets Multilinguality, Multimodality, and Interaction, 2024
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-71908-0_2
