
Fairness and Intersectional Non-Discrimination in Human Recommendation

Periodic Reporting for period 1 - FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation)

Reporting period: 2022-11-01 to 2024-04-30

FINDHR is an interdisciplinary project that seeks to prevent, detect, and mitigate discrimination in AI. Our research is grounded in the technical, legal, and ethical problems of algorithmic hiring and the human resources domain, but it also shows how to manage discrimination risks in a broad class of applications involving human recommendation.

Through a context-sensitive, interdisciplinary approach, we develop new technologies to measure discrimination risks, create fairness-aware rankings and interventions, and provide multi-stakeholder actionable interpretability.
The project aims to produce new technical guidance for impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide to fairness-aware AI software development. We will also design and deliver specialized skills training for developers and auditors of AI systems.
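
To make the fairness-aware ranking work concrete, here is a minimal, self-contained sketch of the kind of measurement and intervention involved: it computes per-group exposure in a ranked shortlist (under a log-discounted position-bias model) and greedily re-ranks with a minimum-representation constraint. This is an illustrative toy, not the project's actual method; the Candidate type, the protected and min_share parameters, and the quota rule are all assumptions made for this example.

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    score: float  # relevance score from the upstream ranker
    group: str    # protected-attribute value, e.g. "A" / "B" (hypothetical)

def avg_group_exposure(ranking):
    """Mean position-based exposure per group, with a log2 rank discount
    (top positions receive disproportionately more attention)."""
    tot, cnt = {}, {}
    for pos, c in enumerate(ranking, start=1):
        w = 1.0 / math.log2(pos + 1)
        tot[c.group] = tot.get(c.group, 0.0) + w
        cnt[c.group] = cnt.get(c.group, 0) + 1
    return {g: tot[g] / cnt[g] for g in tot}

def rerank_with_quota(candidates, protected="B", min_share=0.4):
    """Greedy intervention: take the best-scoring candidate at each position,
    unless the protected group would fall below min_share of the prefix
    and a protected candidate is still available."""
    pool = sorted(candidates, key=lambda c: c.score, reverse=True)
    out, n_prot = [], 0
    while pool:
        required = math.floor(min_share * (len(out) + 1))
        pick = None
        if n_prot < required:  # quota binds: force a protected pick if possible
            pick = next((c for c in pool if c.group == protected), None)
        if pick is None:
            pick = pool[0]     # otherwise take the highest remaining score
        pool.remove(pick)
        out.append(pick)
        if pick.group == protected:
            n_prot += 1
    return out

shortlist = [Candidate(0.9, "A"), Candidate(0.8, "A"), Candidate(0.7, "A"),
             Candidate(0.6, "B"), Candidate(0.5, "B")]
print(avg_group_exposure(sorted(shortlist, key=lambda c: c.score, reverse=True)))
print(avg_group_exposure(rerank_with_quota(shortlist)))
```

In this toy run the quota moves one "B" candidate from rank 4 to rank 3, raising that group's average exposure at a small cost in score ordering. The project's actual interventions (intersectional, counterfactual, and learning-based, as the publication list below indicates) are far more sophisticated, but they target the same quantity.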

We ground our project in EU regulation and policy. Because tackling discrimination risks in AI requires processing sensitive data, we perform a targeted legal analysis of the tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. We engage underrepresented groups through multiple mechanisms, including consultation with experts and participatory action research. Throughout our research, technology, law, and ethics are interwoven.

The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AlgorithmWatch, ETICAS), and two industry partners that are leaders in their respective markets (Adevinta, Randstad), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (UU), and two NGOs dedicated to fighting discrimination against women (WIDE+) and vulnerable populations (PRAKSIS).

All outputs will be released as open access publications, open source software, open datasets, and open courseware.

Impact Assessment and Auditing Framework: A comprehensive method to assess and identify bias in AI recruitment systems across the pre-processing, in-processing, and post-processing phases (a minimal audit-metric sketch follows this list).

Equality Monitoring Protocol: Describes requirements for monitoring AI recruitment systems to ensure compliance with EU Non-Discrimination Law and GDPR.

Software Development Guide: Aimed at product managers and developers, detailing criteria for product design and development, and integrating bias detection techniques.

Expert Reports: Analyses of discrimination in algorithmic recruiting, with reports available at www.findhr.eu.

Data Donation Campaign: Received more than 1000 complete submissions, mostly in Spanish.

Anti-Discrimination Course: Includes a masterclass and a 30-hour course on ethics, legal perspectives, and fair rankings. More info at www.findhr.eu and www.findhr.unipi.it.
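
As referenced in the Impact Assessment and Auditing Framework item above, one standard post-processing audit check is the adverse impact ratio on selection outcomes, with the "four-fifths rule" (a ratio below 0.8) as the conventional red-flag threshold. The snippet below is a minimal illustrative sketch of that check, not code from the framework itself; the outcomes input format is an assumption made for this example.

```python
def adverse_impact_ratio(outcomes):
    """Selection-rate ratio of each group against the most-favoured group.
    `outcomes` maps group -> (n_selected, n_applicants); a ratio below 0.8
    is the classic 'four-fifths rule' red flag used in hiring audits."""
    rates = {g: sel / n for g, (sel, n) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes: group "B" is shortlisted at half the rate of "A".
print(adverse_impact_ratio({"A": (50, 100), "B": (25, 100)}))
# -> {'A': 1.0, 'B': 0.5}; 0.5 < 0.8, so the system is flagged for review
```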
Selected publications:
Pugnana, A., & Ruggieri, S. (2023). A Model-Agnostic Heuristics for Selective Classification. AAAI 2023, 9461-9469. https://doi.org/10.1609/aaai.v37i8.26133
Alvarez, J. M., Mastropietro, A., & Ruggieri, S. (2023). The Initial Screening Order Problem. https://arxiv.org/abs/2307.15398. Submitted to EAAMO 2024.
Simson, J., Fabris, A., & Kern, C. (2024). Lazy data practices harm fairness research. arXiv:2404.17293 [cs.LG]. https://arxiv.org/abs/2404.17293
Rus, C., Yates, A., & de Rijke, M. (2024). A Study of Pre-processing Fairness Intervention Methods for Ranking People. In European Conference on Information Retrieval (pp. 336-350).
Rus, C., de Rijke, M., & Yates, A. (2023). Counterfactual Representations for Intersectional Fair Ranking in Recruitment.
Rus, C., Poerwawinata, G., Yates, A., & de Rijke, M. (2024). AnnoRank: A Comprehensive Web-Based Framework for Collecting Annotations and Assessing Rankings. [to be submitted in June, 2024].
Zuiderveen Borgesius, F. J., Hacker, P., Baranowska, N., & Fabris, A. (2024). Non-discrimination law in Europe: A primer for non-lawyers. https://arxiv.org/abs/2404.08519
Fabris, A., Baranowska, N., Dennis, M. J., Graus, D., Hacker, P., Saldivar, J., Zuiderveen Borgesius, F., & Biega, A. J. (2024). Fairness and bias in algorithmic hiring: A multidisciplinary survey. arXiv. https://arxiv.org/abs/2309.13807
Randstad. (2024). The labor market and AI. Randstad Position Paper.