
Democratize Trustworthy and Efficient Large Language Model Technology for Europe

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications from FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Language-specific adapters for multilingual LLMs

Pre-trained adapters for Germanic languages, to be used with existing LLMs and the TrustLLM models.
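
For orientation, here is a minimal sketch of how a pre-trained adapter of this kind could be attached to a base model with Hugging Face's peft library; the model and adapter identifiers are hypothetical placeholders, not the project's actual artifact names.

    # Illustrative sketch only, not the deliverable itself.
    # "base-model-id" and "adapter-id" are hypothetical placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained("base-model-id")
    base = AutoModelForCausalLM.from_pretrained("base-model-id")

    # Load the language-specific adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, "adapter-id")

    prompt = "Hej, hvordan har du det?"  # Danish example input
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))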

Data formatting pipeline

Pipeline for formatting data in preparation for LLM training.
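
As a rough illustration of what such a pipeline does (a sketch under an assumed file layout and record schema, not the deliverable's actual code):

    # Sketch: normalise raw text files into JSONL, one record per document.
    # The {"text", "source"} schema is an assumption for illustration.
    import json
    from pathlib import Path

    def format_corpus(src_dir: str, out_file: str) -> None:
        with open(out_file, "w", encoding="utf-8") as out:
            for path in sorted(Path(src_dir).glob("*.txt")):
                text = path.read_text(encoding="utf-8").strip()
                if text:  # skip empty documents
                    record = {"text": text, "source": path.name}
                    out.write(json.dumps(record, ensure_ascii=False) + "\n")

    format_corpus("raw_docs", "corpus.jsonl")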

Initial training code

First version of the parallel training code for training LLMs on European HPC systems.
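
To give a flavour of the parallelism involved, here is a generic PyTorch data-parallel sketch as launched with torchrun; this is an illustration of the technique, not the project's training code.

    # Minimal distributed data-parallel setup, launched with
    # `torchrun --nproc_per_node=N train.py`. Generic illustration only.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group(backend="nccl")  # NCCL backend for GPU nodes
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for an LLM
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 1024, device=f"cuda:{local_rank}")
    loss = model(x).pow(2).mean()  # dummy loss for illustration
    loss.backward()                # gradients are all-reduced across ranks
    optimizer.step()

    dist.destroy_process_group()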

Multi-dimensional evaluation metric for text generation

An evaluation process, usable in an online environment, that tests generated texts for reliability, accuracy, fluency, etc.
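
Conceptually, such a metric aggregates per-dimension scores into a single quality signal; the dimensions and weights below are illustrative assumptions, not the deliverable's definition.

    # Sketch of a multi-dimensional text-quality score using a weighted
    # average; dimensions and weights are assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class GenerationScores:
        reliability: float  # each dimension scored in [0, 1]
        accuracy: float
        fluency: float

    def aggregate(scores: GenerationScores, weights=(0.4, 0.4, 0.2)) -> float:
        dims = (scores.reliability, scores.accuracy, scores.fluency)
        return sum(w * s for w, s in zip(weights, dims))

    print(aggregate(GenerationScores(reliability=0.9, accuracy=0.8, fluency=0.95)))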

Methods for factual correctness based on retriever modelling

Report and software framework (Version 1) for LLM factual correctness based on retriever-based modelling.
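
As a simplified stand-in for the idea (not the deliverable's method): generated statements are checked against passages returned by a retriever and flagged when unsupported. Crude token overlap substitutes here for the actual retriever-based modelling.

    # Sketch: a statement counts as supported if some retrieved passage
    # shares enough content words with it. Token overlap is a crude
    # placeholder for the deliverable's retriever-based modelling.
    def supported(statement: str, passages: list[str], threshold: float = 0.5) -> bool:
        words = {w.strip(".,!?").lower() for w in statement.split()}
        words = {w for w in words if len(w) > 3}
        if not words:
            return True
        for passage in passages:
            passage_words = {w.strip(".,!?").lower() for w in passage.split()}
            if len(words & passage_words) / len(words) >= threshold:
                return True
        return False

    passages = ["The Eiffel Tower is located in Paris, France."]
    print(supported("The Eiffel Tower stands in Paris.", passages))  # True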

Quality filtering and deduplication pipeline

Pipeline for quality filtering and deduplication of data in preparation for LLM training.
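
To illustrate the general shape (the thresholds and exact-hash deduplication below are assumptions; a production pipeline would likely also use fuzzy methods such as MinHash):

    # Sketch: heuristic quality filter plus exact deduplication over a
    # JSONL corpus. Thresholds are illustrative assumptions.
    import hashlib
    import json

    def keep(text: str) -> bool:
        if len(text) < 200:  # drop very short documents
            return False
        alpha = sum(c.isalpha() for c in text) / len(text)
        return alpha > 0.6   # drop mostly non-linguistic content

    def filter_and_dedup(in_file: str, out_file: str) -> None:
        seen: set[str] = set()
        with open(in_file, encoding="utf-8") as src, \
             open(out_file, "w", encoding="utf-8") as out:
            for line in src:
                text = json.loads(line)["text"]
                digest = hashlib.md5(text.encode("utf-8")).hexdigest()
                if keep(text) and digest not in seen:
                    seen.add(digest)
                    out.write(line)

    filter_and_dedup("corpus.jsonl", "corpus.filtered.jsonl")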

Alignment data

Multilingual datasets for instruction fine-tuning.
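
For illustration, one record in such a dataset commonly has a shape like the following; the field names follow a widespread chat-data convention and are not necessarily this deliverable's schema.

    # Illustrative shape of a multilingual instruction-tuning example
    # (Swedish here); the field names are an assumed common convention.
    example = {
        "language": "sv",
        "messages": [
            {"role": "user", "content": "Sammanfatta texten i en mening."},
            {"role": "assistant", "content": "Texten handlar om ..."},
        ],
    }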

Bias Dataset

An evaluation dataset quantifying the models’ potential biases toward minority groups.

Benchmarking Platform

Open-source software package allowing for automatic benchmarking using the evaluation datasets developed in the project.
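
Schematically (a generic sketch of automatic benchmarking, not the package's actual API), such a platform runs a model over each evaluation dataset and reports a score per dataset.

    # Sketch of an automatic benchmarking loop; `model_fn` and the
    # dataset format are hypothetical placeholders, not the real API.
    from typing import Callable

    def run_benchmarks(model_fn: Callable[[str], str],
                       datasets: dict[str, list[tuple[str, str]]]) -> dict[str, float]:
        results = {}
        for name, examples in datasets.items():
            correct = sum(model_fn(prompt) == gold for prompt, gold in examples)
            results[name] = correct / len(examples)
        return results

    datasets = {"toy-task": [("2+2=", "4"), ("capital of France?", "Paris")]}
    print(run_benchmarks(lambda prompt: "4", datasets))  # {'toy-task': 0.5}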

Germanic Language Modelling Evaluation Dataset

An evaluation dataset quantifying the models’ general Germanic linguistic capabilities.

Communication and dissemination toolkit

Printed and digital material for communication and dissemination, for example flyers, posters, social media posts, and videos.

Project Handbook

Outline of planned management procedures, tools, project roles and responsibilities.

IPR Management Plan

Detailed IPR management plan.

Strategic plan for communication and dissemination

Initial plan for strategic communication and dissemination, to be updated yearly (internally).

Design Five Use Cases

A report detailing the design for each of the five use cases.

Data Management Plan, V2

Detailed data management plan, including the plans for open-source and open-access publishing, V2.

Data Management Plan

Detailed data management plan, including the plans for open-source and open-access publishing.

Publications

How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs?

Author(s): Ehsan Doostmohammadi, Oskar Holmström, Marco Kuhlmann
Published in: Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.FINDINGS-EMNLP.367

FoQA: A Faroese Question-Answering Dataset

Author(s): Annika Simonsen, Dan Saattrup Nielsen, Hafsteinn Einarsson
Published in: Proceedings of the Third Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2025), 2025

Tokenizer Choice For LLM Training: Negligible or Crucial?

Author(s): Mehdi Ali, Michael Fromm, Klaudia Thellmann, Richard Rutmann, Max Lübbering, Johannes Leveling, Katrin Klug, Jan Ebert, Niclas Doll, Jasper Buschhoff, Charvi Jain, Alexander Weber, Lena Jurkschat, Hammam Abdelwahab, Chelsea John, Pedro Ortiz Suarez, Malte
Published in: Findings of the Association for Computational Linguistics: NAACL 2024, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.FINDINGS-NAACL.247

Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions?

Author(s): Alexander Arno Weber, Klaudia Thellmann, Jan Ebert, Nicolas Flores-Herr, Jens Lehmann, Michael Fromm, Mehdi Ali
Published in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.EMNLP-MAIN.1159

From text to knowledge graph: Comparing relation extraction methods in a practical context

Author(s): Bakker, R. M., & Di Scala, D. L.
Published in: First International Workshop on Generative Neuro-Symbolic AI, co-located with ESWC, vol. 4, p. 7, 2024
Publisher: CEUR-WS

Memory and Bandwidth are All You Need for Fully Sharded Data Parallel

Author(s): Jiangtao Wang, Jan Ebert, Oleg Filatov, Stefan Kesselheim
Published in: ICML'24 Workshop on Advancing Neural Network Training (WANT), 2025
Publisher: arXiv
DOI: 10.48550/ARXIV.2504.03655

How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters

Author(s): Romina Oji, Jenny Kunz
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

Do Multilingual Large Language Models Mitigate Stereotype Bias?

Author(s): Shangrui Nie, Michael Fromm, Charles Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Mowmita, Nicolas Flores-Herr, Mehdi Ali, Lucie Flek
Published in: Proceedings of the Workshop on Cross-Cultural Considerations in NLP (C3NLP), 2024
Publisher: Association for Computational Linguistics
DOI: 10.18653/V1/2024.C3NLP-1.6

Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT

Author(s): Jenny Kunz
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

The Impact of Language Adapters in Cross-Lingual Transfer for NLU

Author(s): Jenny Kunz, Oskar Holmström
Published in: Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024), 2024
Publisher: Association for Computational Linguistics

Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks

Author(s): Dan Saattrup Nielsen, Kenneth Enevoldsen, Peter Schneider-Kamp
Published in: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), 2025
Publisher: University of Tartu Library

Ontology Learning from Text: an Analysis on LLM Performance

Author(s): Bakker, R. M., Di Scala, D. L., & de Boer, M. H. T.
Published in: Proceedings of the 3rd NLP4KGC International Workshop on Natural Language Processing for Knowledge Graph Creation, co-located with SEMANTiCS, pp. 17-19, 2024
Publisher: CEUR-WS
