Project description
Studying the fairness of the language models that applications use to understand language
Natural language processing (NLP) enables digital devices to analyse, understand and generate human language, whether text or speech. Most systems are based on language models trained on large corpora of training data derived automatically from Internet sources. However, this exposes them to unchecked prejudice, stereotyping and exclusion. The EU-funded FairER project will study NLP language models and their solution strategies in a multilingual setting. It will assess their fairness and inclusiveness not only in demographic terms (e.g. race, gender, age) but also with respect to literacy level. This work should improve the fairness of NLP applications and serve as a basis for further research.
Objective
Most of us use technology based on natural language processing (NLP), such as Google Search or the virtual assistants in phones and other devices, on a daily basis. Large-scale pre-trained language models play a crucial role here, as they often form the basis of these technologies. These models are trained on vast amounts of data (e.g. the entire English Wikipedia and the Brown corpus), which makes it impossible to curate the training corpus; as a result, stereotypes and biases are baked into the model, often without researchers noticing. This can lead to problematic and unfair behaviour towards certain demographic groups, often those who already suffer from implicit biases in society.
With FairER, I aim to gain a deeper understanding of the inner workings of these language models. In particular, I want to investigate how well their solution strategies align with those of humans, and whether this depends on demographic attributes such as gender, race and age, but also on reading ability and level of education. I will also probe these language models for fairness and inclusiveness, i.e. find out whether the performance of an NLP application depends on demographic attributes of the user. Furthermore, I will conduct this project in a multilingual setting and apply interpretability methods to better understand the rationale behind a model’s decisions.
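The project text does not specify how the fairness probing will be implemented; one common approach in the bias-measurement literature is a WEAT-style association test, which compares how strongly a target word's embedding associates with two groups of attribute words. The sketch below illustrates the idea with made-up toy vectors; a real study would use embeddings extracted from the model under test, and the words chosen here are placeholders.

```python
import math

# Toy 4-dimensional "embeddings" standing in for vectors from a real
# language model; all values are invented for illustration only.
EMB = {
    "engineer": [0.9, 0.1, 0.3, 0.2],
    "nurse":    [0.1, 0.9, 0.2, 0.3],
    "he":       [0.8, 0.2, 0.1, 0.1],
    "she":      [0.2, 0.8, 0.1, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association_gap(word, group_a, group_b):
    """WEAT-style association score: mean similarity of `word` to the
    words in group A minus its mean similarity to group B. Values far
    from zero suggest the word leans towards one group."""
    sim_a = sum(cosine(EMB[word], EMB[a]) for a in group_a) / len(group_a)
    sim_b = sum(cosine(EMB[word], EMB[b]) for b in group_b) / len(group_b)
    return sim_a - sim_b

# With these toy vectors, "engineer" leans towards "he" (positive gap)
# and "nurse" towards "she" (negative gap).
gap_engineer = association_gap("engineer", ["he"], ["she"])
gap_nurse = association_gap("nurse", ["he"], ["she"])
```

Such scores are only a first diagnostic; the project's stated goal of checking whether application *performance* varies with user demographics would additionally require evaluating task accuracy per demographic group.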
The main impact of FairER will be a better understanding of how language models treat different demographic groups. These insights will help improve the fairness and inclusiveness of NLP applications. Furthermore, the datasets I will record and publish, along with the code, will encourage other researchers to replicate my findings and continue this line of research. Ultimately, this project will have both a scientific and a societal impact on the NLP community and on users of NLP applications.
Scientific field
Programme(s)
- HORIZON.1.2 - Marie Skłodowska-Curie Actions (MSCA) Main Programme
Funding scheme
MSCA-PF - MSCA-PF
Coordinator
1165 Kobenhavn
Denmark