
Democratize Trustworthy and Efficient Large Language Model Technology for Europe

CORDIS provides links to the public deliverables and publications of HORIZON projects.

Links to deliverables and publications from FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Language-specific adapters for multilingual LLMs

Pre-trained adapters for Germanic languages, to be used with existing LLMs and the TrustLLM models.
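
As a rough illustration, a pre-trained language adapter of this kind can be attached to a frozen base model in a few lines. The sketch below uses the Hugging Face transformers and peft libraries; the model and adapter identifiers are hypothetical placeholders, not actual TrustLLM releases.

```python
# Minimal sketch: attach a pre-trained language adapter to an existing LLM.
# The repository names below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("example-org/base-llm")
tokenizer = AutoTokenizer.from_pretrained("example-org/base-llm")

# Load the language-specific adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "example-org/danish-adapter")

inputs = tokenizer("Hvordan har du det?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```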

Data formatting pipeline

Pipeline for formatting data in preparation for LLM training.
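
A minimal sketch of such a formatting step, assuming raw text files are normalised into the JSON-lines layout commonly used for pre-training corpora; the field names ("text", "source") are illustrative, not the project's actual schema.

```python
# Minimal sketch: normalise raw text files into a JSON-lines training corpus.
# The "text"/"source" schema is an illustrative assumption.
import json
import unicodedata
from pathlib import Path

def format_corpus(raw_dir: str, out_file: str) -> None:
    with open(out_file, "w", encoding="utf-8") as out:
        for path in sorted(Path(raw_dir).glob("*.txt")):
            text = path.read_text(encoding="utf-8", errors="replace")
            text = unicodedata.normalize("NFC", text).strip()
            if text:  # skip empty documents
                record = {"text": text, "source": path.name}
                out.write(json.dumps(record, ensure_ascii=False) + "\n")

format_corpus("raw_documents", "corpus.jsonl")
```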

Initial training code

First version of the parallel training code for training LLMs on European HPC systems.
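
For orientation only: the sketch below shows the basic shape of data-parallel training with PyTorch, launched with torchrun on a multi-GPU node. The tiny linear model and random batch are placeholders; the project's actual training code is not reproduced here.

```python
# Minimal sketch of data-parallel training with PyTorch (launch with torchrun).
# The tiny model and random batch are placeholders, not the project's code.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
device = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(device)

model = DDP(torch.nn.Linear(1024, 1024).to(device))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device=device)
loss = model(x).pow(2).mean()
loss.backward()  # DDP all-reduces gradients across ranks during backward
opt.step()

dist.destroy_process_group()
```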

Multi-dimensional evaluation metric for text generation

An evaluation process, usable in an online environment, that tests generated texts for reliability, accuracy, fluency, and other qualities.
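
As a schematic illustration only: a multi-dimensional metric of this kind returns one score per dimension rather than a single number. The dimension names below follow the description above; the unweighted-mean aggregation is an assumption, not the project's metric.

```python
# Minimal sketch of a multi-dimensional score record for a generated text.
# The unweighted mean is an illustrative assumption, not the project's metric.
from dataclasses import dataclass

@dataclass
class GenerationScore:
    reliability: float  # consistency of the output, in [0, 1]
    accuracy: float     # factual agreement with references, in [0, 1]
    fluency: float      # grammaticality and readability, in [0, 1]

    def overall(self) -> float:
        return (self.reliability + self.accuracy + self.fluency) / 3

score = GenerationScore(reliability=0.9, accuracy=0.8, fluency=0.95)
print(f"overall: {score.overall():.2f}")  # overall: 0.88
```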

Methods for factual correctness based on retriever modelling

Report and software framework for LLMs with factual correctness (version 1), based on retriever modelling.
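
The general idea behind retriever-based modelling can be sketched in a few lines: retrieve the passages most relevant to a query and prepend them to the prompt so the model answers from evidence. The word-overlap scorer below is a deliberately simple stand-in for a real retriever, not the project's framework.

```python
# Minimal sketch of retriever-based grounding for factual correctness.
# The word-overlap scorer is a toy stand-in for a real retriever model.
def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
    q = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, passages: list[str]) -> str:
    evidence = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return (f"Answer using only the evidence below.\n"
            f"Evidence:\n{evidence}\nQuestion: {query}")

print(grounded_prompt("Where is CORDIS hosted?",
                      ["CORDIS is the EU research results portal.",
                       "Unrelated passage about weather."]))
```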

Quality filtering and deduplication pipeline

Pipeline for quality filtering and deduplication of data in preparation for LLM training.
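
A minimal sketch of the two steps named here, under simple assumptions: a length-based quality filter and exact deduplication via hashing of whitespace-normalised text. Production pipelines typically also use near-duplicate detection (e.g. MinHash), which is omitted here.

```python
# Minimal sketch: length-based quality filter plus exact deduplication.
# The threshold and JSON-lines schema are illustrative assumptions.
import hashlib
import json

def filter_and_dedup(in_file: str, out_file: str, min_chars: int = 200) -> None:
    seen: set[str] = set()
    with open(in_file, encoding="utf-8") as src, \
         open(out_file, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            text = record["text"]
            if len(text) < min_chars:  # quality filter: drop very short docs
                continue
            # Exact dedup: hash the whitespace-normalised text, skip repeats.
            digest = hashlib.sha256(" ".join(text.split()).encode()).hexdigest()
            if digest in seen:
                continue
            seen.add(digest)
            dst.write(json.dumps(record, ensure_ascii=False) + "\n")

filter_and_dedup("corpus.jsonl", "corpus.filtered.jsonl")
```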

Alignment data

Multilingual datasets for instruction fine-tuning.

Bias Dataset

An evaluation dataset quantifying the models’ potential biases toward minority groups.

Benchmarking Platform

Open-source software package allowing for automatic benchmarking using the evaluation datasets developed in the project.
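
Conceptually, such a benchmarking run boils down to iterating over an evaluation dataset and scoring a model's predictions. The sketch below assumes a JSON-lines dataset with "input" and "target" fields and a caller-supplied predict callable; both are illustrative, not the platform's actual interface.

```python
# Minimal sketch of an automatic benchmarking loop over an evaluation dataset.
# The dataset schema and predict() callable are illustrative assumptions.
import json
from typing import Callable

def run_benchmark(dataset_file: str, predict: Callable[[str], str]) -> float:
    correct = total = 0
    with open(dataset_file, encoding="utf-8") as f:
        for line in f:
            example = json.loads(line)
            if predict(example["input"]).strip() == example["target"].strip():
                correct += 1
            total += 1
    return correct / total if total else 0.0

# Example with a trivial baseline "model":
accuracy = run_benchmark("bias_eval.jsonl", predict=lambda prompt: "yes")
print(f"accuracy: {accuracy:.3f}")
```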

Germanic Language Modelling Evaluation Dataset

An evaluation dataset quantifying the models’ general Germanic linguistic capabilities.

Communication and dissemination toolkit

Printed and digital material for communication and dissemination, for example flyers, posters, social media posts, and videos.

Project Handbook

Outline of planned management procedures, tools, project roles and responsibilities.

IPR Management Plan

Detailed IPR management plan.

Strategic plan for communication and dissemination

Initial plan for strategic communication and dissemination, to be updated yearly (internally).

Design Five Use Cases

A report detailing the design for each of the use cases.

Data Management Plan, V2

Detailed data management plan, including the plans for open-source and open-access publishing, V2.

Data Management Plan

Detailed data management plan, including the plans for open-source and open-access publishing.

Publications

How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs?

Authors: Ehsan Doostmohammadi, Oskar Holmström, Marco Kuhlmann
Published in: Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.FINDINGS-EMNLP.367

FoQA: A Faroese Question-Answering Dataset

Authors: Annika Simonsen, Dan Saattrup Nielsen, Hafsteinn Einarsson
Published in: Proceedings of the Third Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2025), 2025

Tokenizer Choice For LLM Training: Negligible or Crucial?

Authors: Mehdi Ali, Michael Fromm, Klaudia Thellmann, Richard Rutmann, Max Lübbering, Johannes Leveling, Katrin Klug, Jan Ebert, Niclas Doll, Jasper Buschhoff, Charvi Jain, Alexander Weber, Lena Jurkschat, Hammam Abdelwahab, Chelsea John, Pedro Ortiz Suarez, Malte
Published in: Findings of the Association for Computational Linguistics: NAACL 2024, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.FINDINGS-NAACL.247

Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions?

Authors: Alexander Arno Weber, Klaudia Thellmann, Jan Ebert, Nicolas Flores-Herr, Jens Lehmann, Michael Fromm, Mehdi Ali
Published in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.EMNLP-MAIN.1159

From text to knowledge graph: Comparing relation extraction methods in a practical context

Authors: Bakker, R. M., & Di Scala, D. L.
Published in: First International Workshop on Generative Neuro-Symbolic AI, co-located with ESWC, Vol. 4, p. 7, 2024
Publisher: CEUR-WS

Memory and Bandwidth are All You Need for Fully Sharded Data Parallel

Authors: Jiangtao Wang, Jan Ebert, Oleg Filatov, Stefan Kesselheim
Published in: ICML'24 Workshop on Advancing Neural Network Training (WANT), 2025
Publisher: arXiv
DOI: 10.48550/ARXIV.2504.03655

How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters

Authors: Romina Oji, Jenny Kunz
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

Do Multilingual Large Language Models Mitigate Stereotype Bias?

Authors: Shangrui Nie, Michael Fromm, Charles Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Mowmita, Nicolas Flores-Herr, Mehdi Ali, Lucie Flek
Published in: 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.C3NLP-1.6

Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT

Authors: Jenny Kunz
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

The Impact of Language Adapters in Cross-Lingual Transfer for NLU

Authors: Jenny Kunz, Oskar Holmström
Published in: Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024), 2024
Publisher: Association for Computational Linguistics

Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks

Authors: Dan Saattrup Nielsen, Kenneth Enevoldsen, Peter Schneider-Kamp
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

Ontology Learning from Text: an Analysis on LLM Performance

Authors: Bakker, R. M., Di Scala, D. L., & de Boer, M. H. T.
Published in: Proceedings of the 3rd NLP4KGC International Workshop on Natural Language Processing for Knowledge Graph Creation, co-located with Semantics, pp. 17-19, 2024
Publisher: CEUR-WS
