CORDIS - EU research results

Incorporating Demographic Factors into Natural Language Processing Models

Periodic Reporting for period 2 - INTEGRATOR (Incorporating Demographic Factors into Natural Language Processing Models)

Reporting period: 2022-09-01 to 2024-02-29

Language technology is all around us, in the form of smart speakers, translation tools, and the ever-expanding chatbots and language models. You have probably interacted with at least one language model since waking up. All of this technology, however, assumes that language is uniform. Yet we do not use the same language with friends as we do in a business meeting, with children as with adults, or as dialect speakers versus standard speakers. As human speakers, we constantly adjust how we speak and how we listen to the people around us.
Language technology does not. As a result, it works well only for a small subset of the population: those whose language varieties have been modeled. We and others have demonstrated that even simple language technology fails for dialect speakers, women, young people, and a variety of other underrepresented groups.

The INTEGRATOR project aims to change all of that. We are collaborating with experts from various fields to make language technology more equitable, less biased, and more inclusive. The project develops the theory, data sets, algorithms, and models required to achieve those objectives.
The main goal is to algorithmically incorporate demographic factors into NLP models, in order to improve performance and mitigate demographic bias in language technology.
The second goal is to ground the work theoretically by identifying which demographic factors influence which NLP applications.
The third goal is to provide the data sets and data representations required for this research.
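One common way to realize the first goal, incorporating demographic factors into a model, is to attach a demographic embedding to each token representation before classification. The sketch below is a hypothetical, minimal illustration of that idea in numpy; the function name and dimensions are invented for this example and are not the project's implementation.

```python
import numpy as np

def embed_with_demographics(token_embeddings, demographic_vector):
    """Concatenate a demographic embedding onto every token embedding.

    token_embeddings: (seq_len, d_model) array of contextual embeddings.
    demographic_vector: (d_demo,) array encoding e.g. an age band or dialect.
    Returns a (seq_len, d_model + d_demo) array a downstream classifier
    can consume alongside the purely linguistic signal.
    """
    seq_len = token_embeddings.shape[0]
    tiled = np.tile(demographic_vector, (seq_len, 1))  # repeat per token
    return np.concatenate([token_embeddings, tiled], axis=1)

# Toy example: 4 tokens, 8-dim embeddings, 3-dim demographic one-hot.
tokens = np.random.rand(4, 8)
demo = np.array([1.0, 0.0, 0.0])  # e.g. "dialect speaker"
combined = embed_with_demographics(tokens, demo)
print(combined.shape)  # (4, 11)
```

The same demographic vector is repeated for every token, so the classifier sees the speaker attribute at each position rather than only once per document.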

While NLP has evolved significantly in the years since this project was conceived, the issues it addresses have become more pressing in many ways. Large language models, such as GPT-4, have emerged as a driving force in NLP. While they have replaced many previous technologies and rendered some traditional tasks (and some aspects of the proposed work) obsolete, they continue to operate on the same assumptions as previous language technology.

Throughout the project, we have made significant contributions to the scientific literature on understanding and treating bias. These contributions have resulted in 30 publications, an expanding network of collaborators, successful placement of former project members, and an impact on current language technology.
This project's work has been presented in nearly 30 different venues, including keynote speeches at workshops and invited talks at universities.
During the first half of the project, we reached several research and technological milestones, all aligned with the project objectives.

Our first goal is to incorporate demographic factors into NLP models, improving performance while addressing demographic bias in language technology head-on. We have made several contributions toward this goal. "Entropy-based attention regularization frees unintended bias mitigation from lists" introduced a new debiasing methodology that exploits the attention mechanism within Transformer-based neural networks, freeing unintended bias mitigation from its traditional dependence on hand-curated word lists.
The method also provides a tangible way to assess bias. The paper "Can Demographic Factors Improve Text Classification? Revisiting Demographic Adaptation in the Age of Transformers" revisits demographic adaptation with modern Transformer models, examining when demographic information helps and what the findings imply.
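The core intuition behind entropy-based attention regularization is that a classifier that fixates its attention on a few identity terms has low attention entropy, so penalizing low entropy discourages that shortcut. The following numpy sketch illustrates only this intuition under simplified assumptions (single attention row, no batching); it is not the paper's implementation, and the function names are invented for this example.

```python
import numpy as np

def attention_entropy(weights, eps=1e-12):
    """Shannon entropy of one attention distribution (weights sum to 1)."""
    w = np.clip(weights, eps, 1.0)
    return -np.sum(w * np.log(w))

def entropy_regularizer(attention_rows, coeff=0.1):
    """Penalty added to the training loss: small when attention is spread
    out (high entropy), large when it spikes on a few tokens (low entropy)."""
    max_h = np.log(attention_rows.shape[-1])  # entropy of uniform attention
    penalties = [max_h - attention_entropy(row) for row in attention_rows]
    return coeff * float(np.mean(penalties))

# A head that fixates on one identity term vs. one that spreads attention.
spiky = np.array([[0.97, 0.01, 0.01, 0.01]])
uniform = np.array([[0.25, 0.25, 0.25, 0.25]])
print(entropy_regularizer(spiky) > entropy_regularizer(uniform))  # True
```

Because the penalty is computed from the attention weights themselves, the same quantities can be inspected after training to surface which terms the model relies on, which is what makes the approach useful as a bias diagnostic as well as a mitigation.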

Our second goal is to identify which demographic factors influence which NLP applications, and to develop meaningful metrics that improve both the understanding and the performance of these applications. We published several papers that lay the groundwork for bias research in natural language processing, outlining the hidden social factors that shape language and how we can model them effectively. In "Welcome to the Modern World of Pronouns: Identity-Inclusive Natural Language Processing Beyond Gender" we explore identity-inclusive NLP, and in "What about 'em'? How Commercial Machine Translation Fails to Handle (Neo-)Pronouns" we examine the shortcomings of commercial machine translation in handling (neo-)pronouns. These papers identify key bias areas in current technology, reshaping how we understand and respond to bias in NLP.
In "HONEST: Measuring Hurtful Sentence Completion in Language Models" and "Measuring Harmful Sentence Completion in Language Models for LGBTQIA+ Individuals," we present concrete metrics and novel methods for revealing the extent of bias in language models.
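At their simplest, completion-based bias metrics of this kind score a model by the share of its sentence completions that contain terms from a curated hurtful lexicon. The sketch below is a hypothetical, minimal version of such a score; the placeholder lexicon and function name are invented for illustration and do not reproduce the papers' actual metrics or lexica.

```python
def hurtful_completion_rate(completions, hurtful_lexicon):
    """Share of model completions containing at least one lexicon term.

    completions: strings a language model produced for templates such as
    "The woman is known as a [MASK]."
    hurtful_lexicon: set of lowercase terms considered hurtful (a stand-in
    for the curated lexica such metrics rely on in practice).
    """
    def is_hurtful(text):
        words = text.lower().split()
        return any(w.strip(".,!?") in hurtful_lexicon for w in words)

    flagged = sum(is_hurtful(c) for c in completions)
    return flagged / len(completions)

lexicon = {"slur_a", "slur_b"}  # placeholder terms, not a real lexicon
outputs = ["a doctor.", "a slur_a.", "a teacher.", "a slur_b!"]
print(hurtful_completion_rate(outputs, lexicon))  # 0.5
```

Comparing this rate across templates that differ only in the demographic group mentioned makes the disparity in a model's completions directly measurable.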

The shift to pretrained generative language models has reshaped the landscape of NLP technology, with a significant impact on this field. These models are trained on vast expanses of the internet, with the sole objective of completing missing words. Despite this simple objective, they handle complex NLP tasks such as understanding and classification remarkably well. Fine-tuning, however, has become less central: with few exceptions, these models no longer rely on annotated data sets for specific languages or tasks. This shift has rendered representation learning, one of the project's initial partial goals, obsolete.
Nonetheless, demographically annotated data sets remain a compelling resource for rigorous testing and for computational social science. In "Twitter-Demographer: A Flow-based Tool to Enrich Twitter Data," we published an easy-to-use tool that lets researchers and practitioners from various fields enrich Twitter data. The tool addresses privacy and security through features such as pseudonymization and safety-by-design. Unfortunately, Twitter's recent suspension of new data access has severely limited its potential.
The project has already had a significant impact on how NLP approaches bias, while raising awareness of the influence of socio-demographic factors on language technology. The work under the grant agreement has sparked collaborations with external stakeholders: we are actively collaborating with Bocconi's Institute for European Policymaking, led by Daniel Gros, and have connected with experts from various fields and industries. These interactions have enriched our research and opened up new opportunities. As a result of the project's visibility, I was invited to a SAPEA meeting.

Drawing inspiration from the attention mechanism in Transformer-based neural networks, we created a new debiasing methodology. The method has two significant advantages: first, it reduces the model's propensity to favor specific identity groups; second, it generates a list of the most revealing terms, providing researchers with actionable insights.
Another paper has helped establish "socially aware" NLP, capturing the interest of the community and opening a new direction in NLP research.
Several papers introduced novel metrics and methods for measuring bias accurately.
Papers published under the grant agreement have received over 100 citations within two years, led to a workshop, and sparked new collaborations. They lay the groundwork for bias research, enabling a more systematic and comprehensive investigation of socio-demographic and socio-cultural bias in NLP models.

We have also made advances in the new field of large generative language models, shedding light on their safety and biases. Although not yet published, a preprint has already attracted attention and directly influenced the development of two industrial models.

The enthusiasm of leading NLP researchers for the grant's theme attests to the rapidly growing awareness of and interest in this topic. We are excited to work on several projects addressing various aspects of the proposal, and I am confident these advances will shape future approaches to NLP.

Another achievement of the project is the promotion of two of the initial postdocs to academic faculty positions in Germany and Italy, respectively. Beyond its scientific progress, the project has thus already advanced the careers of two junior researchers.