CORDIS - EU research results

Interactive Natural Language Technology for Explainable Artificial Intelligence

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications of FP7 projects, as well as links to specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Press release 3 to media

Press release 3 to media at the NL4XAI Student & Industry Days.

Press release 4 to media

Press release 4 to media at the Final Conference.

NL4XAI website and social media profiles available

NL4XAI website and social media profiles available.

Press release 2 to media

Press release 2 to media at the PhD autumn school.

Press release 1 to media

Press release 1 to media at the Initial training meeting.

Technical report on the state of the art in argumentation technology for XAI

Technical report on the state of the art of argumentation technology for XAI; manuscript co-authored by ESR7 and ESR8.

Technical report on state-of-the-art end-to-end NLG systems

Technical report on state-of-the-art end-to-end NLG systems; manuscript co-authored by ESR5 and ESR6.

Technical report on state-of-the-art XAI models

Technical report on state-of-the-art XAI models; review manuscript co-authored by ESRs 1-4.

Technical report on the state of the art in interactive interfaces for XAI

Technical report on the state of the art in interactive interfaces for XAI; manuscript co-authored by ESRs 9-11.

Guidelines for explainable NLG evaluation

Guidelines for explainable NLG evaluation.

Comparison of content-based recommender systems versus recommender systems supported by XAI

Comparison of content-based recommender systems versus recommender systems supported by XAI.

Publications

The Natural Language Pipeline, Neural Text Generation and Explainability

Authors: J. Faille, A. Gatt and C. Gardent
Published in: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020, Page(s) 16-21
Publisher: Association for Computational Linguistics

Detection and Analysis of Moral Values in Argumentation

Authors: He Zhang, Alina Landowska, Katarzyna Budzynska
Published in: Pre-Proceedings of the workshop affiliated with the 26th European Conference on Artificial Intelligence: Value Engineering in AI, 2023
Publisher: ECAI
DOI: 10.13140/rg.2.2.13098.59849

Towards Generating Effective Explanations of Logical Formulas: Challenges and Strategies

Authors: A. Mayn (ESR4-UU), K. van Deemter
Published in: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020
Publisher: Association for Computational Linguistics

Disentangling Web Search on Debated Topics: A User-Centered Exploration

Authors: Alisa Rieger, Suleiman Kulane, Ujwal Gadiraju, Maria Soledad Pera
Published in: Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, Issue 4, 2024, Page(s) 24-35
Publisher: ACM
DOI: 10.1145/3627043.3659559

Explaining Search Result Stances to Opinionated People

Authors: Zhangyi Wu, Tim Draws, Federico Cau, Francesco Barile, Alisa Rieger, Nava Tintarev
Published in: Communications in Computer and Information Science, Issue 2, 2023, ISBN 978-3-031-44067-0
Publisher: Springer
DOI: 10.1007/978-3-031-44067-0_29

From Potential to Practice: Intellectual Humility During Search on Debated Topics

Authors: Alisa Rieger, Frank Bredius, Mariët Theune, Maria Soledad Pera
Published in: Proceedings of the 2024 ACM SIGIR Conference on Human Information Interaction and Retrieval, 2024
Publisher: ACM
DOI: 10.1145/3627508.3638306

Interactive Interventions to Mitigate Cognitive Bias

Authors: Alisa Rieger
Published in: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 2022
Publisher: ACM
DOI: 10.1145/3503252.3534362

Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI

Authors: Ettore Mariotti, Adarsa Sivaprasad, Jose Maria Alonso Moral
Published in: 2023
Publisher: Springer
DOI: 10.1007/978-3-031-44064-9_10

Towards Harnessing Natural Language Generation to Explain Black-box Models

Authors: Ettore Mariotti, Jose M. Alonso, Albert Gatt
Published in: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020
Publisher: Association for Computational Linguistics

Habemus a Right to an Explanation: so What? – A Framework on Transparency-Explainability Functionality and Tensions in the EU AI Act

Authors: Luca Nannini
Published in: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024
Publisher: AAAI/ACM
DOI: 10.5281/zenodo.14041249

Explaining Bayesian Networks in Natural Language: State of the Art and Challenges

Authors: Conor Hennessy, Alberto Bugarín, Ehud Reiter
Published in: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020
Publisher: Association for Computational Linguistics

This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias

Authors: Alisa Rieger, Tim Draws, Mariët Theune, Nava Tintarev
Published in: Proceedings of the 32nd ACM Conference on Hypertext and Social Media, 2021, Page(s) 189-199, ISBN 978-1-4503-8551-0
Publisher: ACM
DOI: 10.1145/3465336.3475101

Fairness in Agreement With European Values

Authors: Alejandra Bringas Colmenarejo, Luca Nannini, Alisa Rieger, Kristen M. Scott, Xuan Zhao, Gourab K Patro, Gjergji Kasneci, Katharina Kinder-Kurlanda
Published in: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022
Publisher: ACM
DOI: 10.1145/3514094.3534158

Explaining data using causal Bayesian networks

Authors: Jaime Sevilla
Published in: Proceedings of the 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020, Page(s) 34-38
Publisher: Association for Computational Linguistics

Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions

Authors: Michele Cafagna, Kees van Deemter, Albert Gatt
Published in: Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS), Issue 10, 2022
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2022.umios-1.6

Is Shortest Always Best? The Role of Brevity in Logic-to-Text Generation

Authors: Eduardo Calò, Jordi Levy, Albert Gatt, Kees Van Deemter
Published in: Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), 2023
Publisher: ACL
DOI: 10.18653/v1/2023.starsem-1.17

Towards Healthy Engagement with Online Debates

Authors: Alisa Rieger, Qurat-Ul-Ain Shaheen, Carles Sierra, Mariet Theune, Nava Tintarev
Published in: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 2022
Publisher: ACM
DOI: 10.1145/3511047.3537692

Prometheus: Harnessing Fuzzy Logic and Natural Language for Human-centric Explainable Artificial Intelligence

Authors: Ettore Mariotti, Jose M. Alonso-Moral, Albert Gatt
Published in: XIX Conference of the Spanish Association for Artificial Intelligence (CAEPIA)-ESTYLF-CEDI 2021, 2021
Publisher: XIX Conference of the Spanish Association for Artificial Intelligence (CAEPIA)

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK

Authors: Luca Nannini, Agathe Balayn, Adam Leon Smith
Published in: 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, Page(s) 1198-1212
Publisher: ACM
DOI: 10.1145/3593013.3594074

Toward Natural Language Mitigation Strategies for Cognitive Biases in Recommender Systems

Authors: Alisa Rieger, Mariet Theune, Nava Tintarev
Published in: NL4XAI 2020, 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020
Publisher: Association for Computational Linguistics (ACL)

Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation in the medical domain.

Authors: Jaime Sevilla, Nikolay Babakov, Ehud Reiter, Alberto Bugarín
Published in: EXPLIMED - First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED), 2024
Publisher: ECAI 2024
DOI: 10.5281/zenodo.14040523

Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models

Authors: Adarsa Sivaprasad, Ehud Reiter
Published in: Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024), 2024, Page(s) 127-132
Publisher: Association for Computational Linguistics
DOI: 10.48550/arxiv.2401.17511

A Framework for Analyzing Fairness, Accountability, Transparency and Ethics: A Use-case in Banking Services

Authors: Ettore Mariotti, Jose M. Alonso, Roberto Confalonieri
Published in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, Page(s) 1-6, ISBN 978-1-6654-4407-1
Publisher: IEEE
DOI: 10.1109/fuzz45933.2021.9494481

Entity-Based Semantic Adequacy for Data-to-Text Generation

Authors: Juliette Faille, Albert Gatt and Claire Gardent
Published in: Findings of the Association for Computational Linguistics: EMNLP 2021, 2021, Page(s) 1530-1540
Publisher: Association for Computational Linguistics

Enhancing and Evaluating the Grammatical Framework Approach to Logic-to-Text Generation

Authors: Eduardo Calò, Elze van der Werf, Albert Gatt, Kees van Deemter
Published in: Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), 2022
Publisher: ACL
DOI: 10.18653/v1/2022.gem-1.13

General Boolean Formula Minimization with QBF Solvers

Authors: Eduardo Calò, Jordi Levy
Published in: CCIA 2023, Issue 12, 2023
Publisher: CCIA
DOI: 10.48550/arxiv.2303.06643

HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales

Authors: Michele Cafagna, Kees van Deemter, Albert Gatt
Published in: Proceedings of the 16th International Natural Language Generation Conference, Issue 12, 2023, Page(s) 293-312
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2023.inlg-main.21

Trust in a Human-Computer Collaborative Task With or Without Lexical Alignment

Authors: Sumit Srivastava, Mariët Theune, Alejandro Catala, Chris Reed
Published in: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 2024, Page(s) 189-194, ISBN 979-8-4007-0466-6
Publisher: ACM
DOI: 10.1145/3631700.3664868

Exploiting Peer Trust and Semantic Similarities in the Assignment Assessment Process

Authors: Jairo Alejandro Lefebre Lobaina, Carles Sierra, Athina Georgara
Published in: 2024
Publisher: Springer
DOI: 10.5281/zenodo.14066237

A Confusion Matrix for Evaluating Feature Attribution Methods

Authors: Anna Arias-Duart, Ettore Mariotti, Dario Garcia-Gasulla, Jose Maria Alonso-Moral
Published in: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023
Publisher: IEEE
DOI: 10.1109/cvprw59228.2023.00380

Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms

Authors: Luca Nannini, Michele Joshua Maggini, Davide Bassi, Eleonora Bonel
Published in: Proceedings of Machine Learning Research, 2024
Publisher: ICML 2024
DOI: 10.5281/zenodo.14041316

The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents

Authors: Sumit Srivastava, Mariët Theune, Alejandro Catala
Published in: Proceedings of the 28th International Conference on Intelligent User Interfaces, 2023, Page(s) 423-435, ISBN 979-8-4007-0106-1
Publisher: ACM
DOI: 10.1145/3581641.3584086

Exploring Lexical Alignment in a Price Bargain Chatbot

Authors: Zhenqi Zhao, Mariët Theune, Sumit Srivastava, Daniel Braun
Published in: ACM Conversational User Interfaces 2024, 2024, Article No. 40, Page(s) 1-7, ISBN 979-8-4007-0511-3
Publisher: ACM
DOI: 10.1145/3640794.3665576

Searching for the Whole Truth: Harnessing the Power of Intellectual Humility to Boost Better Search on Debated Topics

Authors: Alisa Rieger, Frank Bredius, Nava Tintarev, Maria Soledad Pera
Published in: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Issue 11, 2023, ISBN 978-1-4503-9422-2
Publisher: Association for Computing Machinery
DOI: 10.1145/3544549.3585693

„Mann“ is to “Donna” as「国王」is to « Reine »: Adapting the Analogy Task for Multilingual and Contextual Embeddings

Authors: Timothee Mickus, Eduardo Calò, Léo Jacqmin, Denis Paperno, Mathieu Constant
Published in: Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), 2023
Publisher: ACL
DOI: 10.18653/v1/2023.starsem-1.25

Argumentation Theoretical Frameworks for Explainable Artificial Intelligence

Authors: M. H. Demollin, Q. Shaheen, K. Budzynska, C. Sierra
Published in: Proc. of the 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2020), 2020
Publisher: ACL Anthology

Exploring the balance between interpretability and performance with carefully designed constrainable Neural Additive Models

Authors: Ettore Mariotti, José María Alonso Moral, Albert Gatt
Published in: Information Fusion, Volume 99, Article 101882, 2023, ISSN 1566-2535
Publisher: Elsevier BV
DOI: 10.1016/j.inffus.2023.101882

Nudges to Mitigate Confirmation Bias during Web Search on Debated Topics: Support vs. Manipulation

Authors: Alisa Rieger, Tim Draws, Mariët Theune, Nava Tintarev
Published in: ACM Transactions on the Web, Volume 18, 2024, Page(s) 1-27, ISSN 1559-1131
Publisher: Association for Computing Machinery, Inc.
DOI: 10.1145/3635034

Mapping the landscape of ethical considerations in explainable AI research

Authors: Luca Nannini, Marta Marchiori Manerba, Isacco Beretta
Published in: Ethics and Information Technology, Volume 26, Article 44, 2024, ISSN 1572-8439
Publisher: Springer
DOI: 10.1007/s10676-024-09773-7

An explanation-oriented inquiry dialogue game for expert collaborative recommendations

Authors: Qurat-ul-ain Shaheen, Katarzyna Budzynska, Carles Sierra
Published in: Argument & Computation, 2024, Page(s) 1-36, ISSN 1946-2166
Publisher: Taylor & Francis
DOI: 10.3233/aac-230010

Beyond phase-in: assessing impacts on disinformation of the EU Digital Services Act

Authors: Luca Nannini, Eleonora Bonel, Davide Bassi, Michele Joshua Maggini
Published in: AI and Ethics, 2024, ISSN 2730-5953
Publisher: Springer Nature
DOI: 10.1007/s43681-024-00467-w

Interpreting vision and language generative models with semantic visual priors

Authors: Michele Cafagna, Lina M. Rojas-Barahona, Kees van Deemter, Albert Gatt
Published in: Frontiers in Artificial Intelligence, Issue 6, 2023, ISSN 2624-8212
Publisher: Frontiers in Artificial Intelligence
DOI: 10.3389/frai.2023.1220476

Recommender systems under European AI regulations

Authors: Tommaso Di Noia, Nava Tintarev, Panagiota Fatourou, Markus Schedl
Published in: Communications of the ACM, 2022, ISSN 0001-0782
Publisher: Association for Computing Machinery, Inc.
DOI: 10.1145/3512728

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

Authors: Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem
Published in: The Twelfth International Conference on Learning Representations (ICLR 2024), Issue 15, 2024, ISSN 2835-8856
Publisher: OpenReview
DOI: 10.48550/arxiv.2311.07022

TextFocus: Assessing the Faithfulness of Feature Attribution Methods Explanations in Natural Language Processing

Authors: Ettore Mariotti, Anna Arias-Duart, Michele Cafagna, Albert Gatt, Dario Garcia-Gasulla, Jose Maria Alonso-Moral
Published in: IEEE Access, 2024, ISSN 2169-3536
Publisher: Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/access.2024.3408062

Quantitative and Qualitative Analysis of Moral Foundations in Argumentation

Authors: Alina Landowska, Katarzyna Budzynska, He Zhang
Published in: Argumentation, Volume 38, 2024, Page(s) 405-434, ISSN 0920-427X
Publisher: D. Reidel Pub. Co.
DOI: 10.1007/s10503-024-09636-x

Measuring and implementing lexical alignment: A systematic literature review

Authors: Sumit Srivastava, Suzanna Wentzel, Alejandro Catala, Mariët Theune
Published in: Computer Speech & Language, Volume 90, 2025, ISSN 0885-2308
Publisher: Academic Press
DOI: 10.1016/j.csl.2024.101731

Striving for Responsible Opinion Formation in Web Search on Debated Topics

Authors: Alisa Rieger
Published in: 2024, ISBN 978-94-6366-941-2
Publisher: TU Delft
DOI: 10.4233/uuid:703a1aad-d585-459a-b0b3-ac55d9e98fcd

Explainability in Process Mining: A Framework for Improved Decision-Making

Authors: Luca Nannini
Published in: 2024
Publisher: Universidade de Santiago de Compostela (USC)
DOI: 10.5281/zenodo.14162735

Visually Grounded Language Generation: Data, Models and Explanations beyond Descriptive Captions

Authors: Michele Cafagna
Published in: Issue 12, 2024
Publisher: University of Malta
DOI: 10.5281/zenodo.14052376

A holistic perspective on designing and evaluating explainable AI models: from white-box additive models to post-hoc explanations for black-box models

Authors: Ettore Mariotti
Published in: 2024
Publisher: Universidade de Santiago de Compostela (USC)

Data Based Natural Language Generation: Evaluation and Explanability

Authors: Juliette Faille
Published in: 2023
Publisher: Université de Lorraine
DOI: 10.5281/zenodo.14231559

Ethos and Pathos in Online Group Discussions: Corpora for Polarisation Issues in Social Media

Authors: E. Gajewska, K. Budzynska, B. Konat, M. Koszowy, K. Kiljan, M. Uberna, H. Zhang
Published in: arXiv preprint arXiv:2404.04889, 2024
Publisher: arXiv
DOI: 10.48550/arxiv.2404.04889

Linguistically Analysing Polarisation on Social Media

Authors: Ewelina Gajewska, Katarzyna Budzynska, Barbara Konat, Marcin Koszowy (Eds.)
Published in: The New Ethos Reports, Vol. 1, 2023, Page(s) 1-32
Publisher: Warsaw University of Technology
DOI: 10.17388/wut.2023.0001.ains

Scalability of Bayesian Network Structure Elicitation with Large Language Models: a Novel Methodology and Comparative Analysis

Authors: Nikolay Babakov, Ehud Reiter, Alberto Bugarin
Published in: 2024
Publisher: Preprint - ACL Rolling Review
DOI: 10.5281/zenodo.14046261

Does Explainable AI Have Moral Value?

Authors: Joshua L. M. Brand, Luca Nannini
Published in: Issue 12, 2023
Publisher: arXiv
DOI: 10.48550/arxiv.2311.14687

Responsible Opinion Formation on Debated Topics in Web Search

Authors: Alisa Rieger, Tim Draws, Nicolas Mattis, David Maxwell, David Elsweiler, Ujwal Gadiraju, Dana McKay, Alessandro Bozzon, Maria Soledad Pera
Published in: Lecture Notes in Computer Science, Advances in Information Retrieval, 2024, Page(s) 437-465
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-56066-8_32

Examining Lexical Alignment in Human-Agent Conversations with GPT-3.5 and GPT-4 Models

Authors: Boxuan Wang, Mariët Theune, Sumit Srivastava
Published in: Lecture Notes in Computer Science, Chatbot Research and Design, 2024, Page(s) 94-114, ISBN 978-3-031-54975-5
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-54975-5_6

Evaluation of Human-Understandability of Global Model Explanations Using Decision Tree

Authors: Adarsa Sivaprasad, Ehud Reiter, Nava Tintarev, Nir Oren
Published in: Communications in Computer and Information Science, Artificial Intelligence. ECAI 2023 International Workshops, 2024, Page(s) 43-65
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-50396-2_3

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

Authors: L. Parcalabescu, M. Cafagna, L. Muradjan, A. Frank, I. Calixto, A. Gatt
Published in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Issue 21, 2022, Page(s) 8253-8280
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2022.acl-long.567
