
Fairness and Intersectional Non-Discrimination in Human Recommendation

Project description

Managing discrimination in algorithmic hiring

Algorithmic hiring is the use of artificial intelligence (AI) based tools to find and select job candidates. Like other AI applications, it can perpetuate discrimination. Taking technological, legal, and ethical aspects into account, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related domains involving human recommendation. FINDHR intends to develop new ways of determining discrimination risk, producing less biased results, and meaningfully integrating human expertise. It also aims to create procedures for software development, monitoring, and training. Upon completion, the project's publications, software, courseware, and datasets will be made freely available to the public under free and open licences.

Objective

FINDHR is an interdisciplinary project that seeks to prevent, detect, and mitigate discrimination in AI. Our research will be contextualized within the technical, legal, and ethical problems of algorithmic hiring and the domain of human resources, but will also show how to manage discrimination risks in a broad class of applications involving human recommendation.

Through a context-sensitive, interdisciplinary approach, we will develop new technologies to measure discrimination risks, to create fairness-aware rankings and interventions, and to provide multi-stakeholder actionable interpretability. We will produce new technical guidance to perform impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. We will also design and deliver specialized skills training for developers and auditors of AI systems.
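
As a purely hypothetical illustration (not a FINDHR result or method), the sketch below shows one common form a fairness-aware ranking intervention can take: greedily re-ranking relevance-sorted candidates so that a protected group keeps at least a minimum share in every prefix of the list. The names Candidate, rerank_with_min_share, and the min_share parameter are assumptions made for this example only.

# Hypothetical sketch of a fairness-aware re-ranking intervention.
# Assumption: candidates arrive with relevance scores from an upstream model.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float     # relevance score from an upstream model
    protected: bool  # membership in the group whose representation is tracked

def rerank_with_min_share(candidates, min_share=0.4):
    """Greedy re-ranking: at every prefix length k, keep the protected-group
    share at or above min_share whenever protected candidates remain."""
    protected = sorted([c for c in candidates if c.protected], key=lambda c: -c.score)
    others = sorted([c for c in candidates if not c.protected], key=lambda c: -c.score)
    ranking, n_protected = [], 0
    while protected or others:
        k = len(ranking) + 1
        need_protected = protected and (n_protected / k) < min_share
        if need_protected or not others:
            ranking.append(protected.pop(0))
            n_protected += 1
        elif not protected or others[0].score >= protected[0].score:
            ranking.append(others.pop(0))
        else:
            ranking.append(protected.pop(0))
            n_protected += 1
    return ranking

if __name__ == "__main__":
    pool = [Candidate("A", 0.92, False), Candidate("B", 0.90, False),
            Candidate("C", 0.88, True),  Candidate("D", 0.70, False),
            Candidate("E", 0.65, True)]
    for rank, c in enumerate(rerank_with_min_share(pool), start=1):
        print(rank, c.name, c.score, c.protected)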

We ground our project in EU regulation and policy. As tackling discrimination risks in AI requires processing sensitive data, we will perform a targeted legal analysis of tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. We will engage with underrepresented groups through multiple mechanisms including consultation with experts and participatory action research.

In our research, technology, law, and ethics are interwoven. The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AW, ETICAS), and two industry partners that are leaders in their respective markets (ADE, RAND), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (EUR), as well as worker representatives (ETUC) and two NGOs dedicated to fighting discrimination against women (WIDE+) and vulnerable populations (PRAK).

All outputs will be released as open access publications, open source software, open datasets, and open courseware.

Coordinator

UNIVERSIDAD POMPEU FABRA
Net EU contribution
€ 709 838,00
Address
PLACA DE LA MERCE, 10-12
08002 Barcelona
Spain


Region
Este > Cataluña > Barcelona
Activity type
Higher or Secondary Education Establishments
Total cost
€ 709 838,00

Participants (12)

Partners (1)