Periodic Reporting for period 1 - FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation)
Reporting period: 2022-11-01 to 2024-04-30
Through a context-sensitive, interdisciplinary approach, we develop new technologies to measure discrimination risks, create fairness-aware rankings and interventions, and provide multi-stakeholder actionable interpretability.
The project aims to produce new technical guidance for impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. We will also design and deliver specialized skills training for developers and auditors of AI systems.
We ground our project in EU regulation and policy. Because tackling discrimination risks in AI requires processing sensitive data, we perform a targeted legal analysis of the tensions between European data protection regulation (including the GDPR) and anti-discrimination regulation. Underrepresented groups are engaged through multiple mechanisms, including consultation with experts and participatory action research. In our research, technology, law, and ethics are interwoven.
The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AlgorithmWatch, ETICAS), and two industry partners that are leaders in their respective markets (Adevinta, RAND), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (UU), and two NGOs dedicated to fighting discrimination against women (WIDE+) and vulnerable populations (PRAKSIS).
All outputs will be released as open access publications, open source software, open datasets, and open courseware.
Equality Monitoring Protocol: Describes requirements for monitoring AI recruitment systems to ensure compliance with EU Non-Discrimination Law and GDPR.
Software Development Guide: Aimed at product managers and developers, detailing criteria for product design and development, and integrating bias detection techniques.
Expert Reports: Analyze discrimination in algorithmic recruiting; the reports are available at www.findhr.eu.
Data Donation Campaign: Received more than 1000 complete submissions, mostly in Spanish.
Anti-Discrimination Course: Includes a masterclass and a 30-hour course on ethics, legal perspectives, and fair rankings. More info at www.findhr.eu and www.findhr.unipi.it.
Alvarez, J. M., Mastropietro, A., & Ruggieri, S. The Initial Screening Order Problem. Submitted to EAAMO 2024. Retrieved from https://arxiv.org/abs/2307.15398
Zuiderveen Borgesius, F., Baranowska, N., Hacker, P., & Fabris, A. (2024). Non-discrimination law in Europe: A primer for non-lawyers. Retrieved from https://arxiv.org/abs/2404.08519 [to be submitted in June 2024].
Simson, J., Fabris, A., & Kern, C. (2024). Lazy data practices harm fairness research. arXiv:2404.17293 [cs.LG]. Retrieved from https://arxiv.org/abs/2404.17293
Rus, C., Yates, A., & de Rijke, M. (2024). A Study of Pre-processing Fairness Intervention Methods for Ranking People. In European Conference on Information Retrieval (pp. 336-350).
Rus, C., de Rijke, M., & Yates, A. (2023). Counterfactual Representations for Intersectional Fair Ranking in Recruitment.
Rus, C., Poerwawinata, G., Yates, A., & de Rijke, M. (2024). AnnoRank: A comprehensive web-based framework for collecting annotations and assessing rankings. [To be submitted in June 2024.]
Fabris, A., Baranowska, N., Dennis, M. J., Graus, D., Hacker, P., Saldivar, J., Zuiderveen Borgesius, F., & Biega, A. J. (2024). Fairness and bias in algorithmic hiring: A multidisciplinary survey. arXiv. Retrieved from https://arxiv.org/abs/2309.13807
Randstad. (2024). The labor market and AI. Randstad Position Paper.