Periodic Reporting for period 2 - BIAS (Mitigating Diversity Biases of AI in the Labor Market)
Reporting period: 2023-11-01 to 2025-04-30
BIAS will extensively engage end-users during the design and development process, using robust co-creation methodologies. This will ensure that unfair biases can be properly characterized within specific employment contexts, and that the resulting technology is useful in practice.
BIAS will also carry out extensive fieldwork, both advancing our knowledge of how technology is used in the workplace and laying the groundwork for future AI research that is more responsive to real-world concerns.
Many of these aspects make the BIAS project especially innovative. There is very little extant research on using Case-Based Reasoning in HRM contexts, although the potential benefits are substantial. Additionally, NLP and CBR are not often researched together, especially in the domain of bias identification and mitigation. Designing a system that brings together both technologies is novel.
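To make the Case-Based Reasoning idea concrete, the sketch below shows the "retrieve" step of a CBR cycle applied to a hypothetical HRM case base: past, human-reviewed hiring decisions are stored as cases, and the most similar past cases are retrieved for a new query. All field names and the similarity measure are illustrative assumptions, not the project's design.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A past hiring decision stored in the case base (illustrative fields)."""
    features: dict   # e.g. {"years_experience": 5, "skill_match": 0.8}
    outcome: str     # e.g. "shortlisted" or "rejected"
    bias_notes: str  # annotations from a human bias review

def similarity(query: dict, case: Case) -> float:
    """Simple inverse-distance similarity over shared numeric features."""
    shared = set(query) & set(case.features)
    if not shared:
        return 0.0
    dist = sum(abs(query[k] - case.features[k]) for k in shared)
    return 1.0 / (1.0 + dist)

def retrieve(query: dict, case_base: list[Case], k: int = 3) -> list[Case]:
    """The 'retrieve' step of CBR: return the k most similar past cases."""
    return sorted(case_base, key=lambda c: similarity(query, c), reverse=True)[:k]
```

In a bias-mitigation setting, the `bias_notes` attached to retrieved cases could surface earlier human judgments about similar decisions; a full CBR system would add the reuse, revise, and retain steps.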
Most research on bias identification and mitigation in language models has been confined to English. However, BIAS is focusing on many European languages (especially Dutch, Estonian, German, Icelandic, Italian, Norwegian, and Turkish).
BIAS partners conducted two rounds of national co-creation workshops. These workshops facilitated discussions on fairness and diversity bias in the labor market, offering a holistic perspective on the technology's implications and identifying essential requirements for the effective and trustworthy design of the Debiaser.
The foundations of the Debiaser were laid: initial data gathering and data transfer were prepared, and programming expertise was attuned to the complex data material. One strand of the technical investigations concerns how to detect and reduce bias in machine learning applications that rely on textual data such as cover letters; to address this systematically, a framework was developed based on real-world cover letters. The other strand of the NLP work examines how societal stereotypes are reflected in word embeddings and language models. The BIAS Detection Framework leverages knowledge about bias in humans in the different partner countries, derived from the BIAS co-creation activities, to provide new methods for measuring bias. Finally, different aspects of downstream applications were investigated: on one side, value-sensitive design was applied, allowing values to be considered at an early stage of the AI development process; on the other side, given recent advances in black-box models usable only via prompting, first results on chatbot interactions as a downstream application were presented at a conference. Materials from the co-creation activities and other interdisciplinary discussions were extracted and curated to form the foundation of the bias detection modules, and initial business cases for implementing the Debiaser in HRM contexts were developed.
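Stereotype measurement in word embeddings, as mentioned above, is commonly formalized as an association test such as WEAT (the Word Embedding Association Test of Caliskan et al.). The sketch below shows a minimal WEAT-style effect size computed on tiny toy vectors; the project's actual detection framework, multilingual models, and co-creation-derived word lists are not reproduced here.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B) -> float:
    """Mean similarity of word vector w to attribute set A minus set B."""
    return float(np.mean([cosine(w, a) for a in A]) -
                 np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B) -> float:
    """WEAT-style effect size: how differently target sets X and Y
    (e.g. career vs. family words) associate with attribute sets
    A and B (e.g. male vs. female terms)."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)
```

A positive effect size indicates that the X targets lean toward attribute set A relative to the Y targets; applying such tests per language requires culturally adapted word lists, which is where the co-creation material comes in.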
Through interviews and ethnographic fieldwork, the BIAS project contributes to knowledge on the perspectives of employers, employees, developers, job seekers, and workers' representatives on AI and bias in the labor market, while also expanding ethnographic research on bias, AI, and the workplace. Interviews and longer-term ethnographic observations around specific case studies were conducted in five countries: Iceland, Italy, the Netherlands, Norway, and Türkiye. Following fieldwork, an analytical framework for data coding was developed through a series of coding workshops, resulting in a project-specific codebook that guides the coding of the data.
Capacity-building programs covering the concept of gender and intersectionality; ethics; AI and gender bias; AI and other diversity biases (race, class, and age); tools to prevent, identify, and avoid bias in AI-based recommender and personalization systems; and algorithmic decision-making were developed following a learning-needs assessment with key stakeholders. Three versions have been developed: for AI developers, for Human Resource Management practitioners, and for trade unions and civil society organizations. Both in-person and online training formats were designed and implemented. One round of trainings has been given in all consortium partner countries, with a second round, following revisions to the curricula, to be given in the fall of 2025.