
Safeguarding Equality in the European Algorithmic Society: Tackling Discrimination in Algorithmic Profiling through EU Equality Law

Periodic Reporting for period 1 - PROFILE (Safeguarding Equality in the European Algorithmic Society: Tackling Discrimination in Algorithmic Profiling through EU Equality Law)

Reporting period: 2020-09-05 to 2022-09-04

As the number and reach of artificial intelligence (AI) applications rapidly grow, empirical evidence and global awareness of the systemic risks of discrimination linked to these technologies have increased in recent years, often signalled by the keyword ‘algorithmic bias’. Big data and machine learning techniques feed into algorithmic profiling and predictive analytics systems that support a growing number of decisions in key areas of life such as the labour market, education, healthcare, housing and credit. Algorithmic systems thereby condition life opportunities, affect the distribution of valuable goods and opportunities, and create inclusion and exclusion. Through the prism of these technologies, individuals and social groups become algorithmic subjects, classified on the basis of assembled data fragments along shifting and cryptic lines of differentiation. Their preferences, habits and behaviours, but also their social membership, status, gender or cultural affiliation, inform algorithmic clustering that freezes identities and exacerbates differences. Despite the apparent fluidity of algorithmic clusters and the ability of machine learning systems to continuously adapt to changing environments, structural inequalities remain a remarkably stable fixture of algorithmic reconfigurations. Decisions mediated by machines enforce a form of algorithmic determinism that validates the discriminatory status quo and exacerbates systemic discrimination.

PROFILE has explored the role of law, and in particular EU equality law, in confronting these issues. Its overarching objective was to identify the key legal challenges linked to addressing algorithmic discrimination and to assess how the legal framework for equality protection in the European Union can adequately address machine-mediated inequalities. PROFILE has addressed the problem through three central inquiries:
1) How does algorithmic discrimination differ from human discrimination?
2) How do these new forms of technology-driven discrimination disrupt and challenge the anti-discrimination legal framework in place in the EU?
3) What regulatory solutions can be proposed within and outside EU anti-discrimination law in order to efficiently remedy algorithmic discrimination?
1) Research findings

PROFILE explored the key legal challenges linked to addressing algorithmic discrimination and assessed how the legal framework for equality protection in the EU can adequately address machine-mediated inequalities.
• WP1 mapped the specific forms of discrimination arising from algorithmic profiling and the deployment of other algorithmic systems. It theorised the notion of 'algorithmic discrimination' by establishing a taxonomy of algorithmic harms. It confirmed that algorithmic systems generate and amplify complex forms of inequality.
• WP2 assessed how some forms of technology-driven discrimination escape the anti-discrimination legal framework in place. In particular, it showed how some forms of algorithmically induced behavioural discrimination, proxy discrimination and intersectional discrimination do not find easy redress in EU equality law. In addition, WP2 showed how the normative equilibria anchored in EU anti-discrimination rules are disrupted by the ‘algorithmization’ of numerous areas of life. Problems arise, for instance, in relation to hierarchies of protection in EU equality law, uncertainties about the definition of protected grounds, the cardinal dichotomy between direct and indirect discrimination, liability issues in the human-machine relationship, and proof and comparability in the ‘black box’.
• WP3 explored how concepts of EU equality law could be revisited to address algorithmic discrimination. In particular, it proposed that the notion of 'instruction to discriminate' be legally engineered to create a duty to reasonably prevent algorithmic bias from leading to discrimination and to clarify liability issues for algorithmic discrimination in machine-learning systems.

2) Dissemination

During the MSCA fellowship, I delivered more than 60 talks, trainings, seminars and guest lectures across Europe to disseminate PROFILE's findings to various audiences, including academics, law and policy professionals (lawyers, judges, CSOs, policy-makers…), undergraduate and postgraduate students, and the general public. Some of these dissemination activities were conducted online when COVID-19 restrictions in place at the time did not allow me to travel; others were conducted in person when travel was possible.

3) Organisation of events

I organised two scientific events as part of PROFILE. The first, entitled ‘EU law through the lens of data science and computational approaches’, was held at iCourts, University of Copenhagen, Denmark on 8 April 2022 and gathered 19 scholars and practitioners from across Europe working in different areas of EU law, law and technology, and data science. The objective was to discuss data science and computational analysis as a methodology for analysing EU law and to critically reflect on new research directions in the field. The second event, entitled ‘The Law is not a decision tree’, was a research retreat held on 12-13 December 2022 at the University of Edinburgh, United Kingdom. Its aim was to investigate the intersection between notions of bias and fairness as approached in computer science and notions of discrimination and equality as approached in law. The event gathered 5 researchers and allowed us to work together on a paper that translates these notions across disciplines.

The ambition of PROFILE was both to advance legal research on AI and human rights and to concretely inform societal and policy responses to the challenge of AI regulation. Understanding the risks of algorithmic discrimination and assessing existing safeguards was both urgent and important in light of AI’s rapid development, in order to avoid the high social, moral and economic costs of discrimination and inequality. While discussions about algorithmic bias flourished in the context of US law at the time PROFILE started, research on the discriminatory consequences of algorithmic profiling was needed within European human rights law.

PROFILE fulfilled these aims on different planes. In terms of scientific impact, PROFILE clarified how EU anti-discrimination law could address algorithmic discrimination and highlighted its shortcomings in doing so. In various scientific publications, it showed how algorithmic technologies produce distinct harms that cast doubt on the adequacy of existing anti-discrimination law remedies and force us to rethink existing forms of legal protection and redefine legal concepts. It proposed different legal avenues for better redressing algorithmic discrimination in Europe.

In terms of wider societal impact, PROFILE has informed the work of the European Union and the Council of Europe on AI regulation through several presentations attended by policy-makers, which made concrete recommendations for better addressing algorithmic discrimination at the legal and policy level. The trainings delivered to law practitioners and policy professionals will hopefully also contribute to strengthening capabilities to address algorithmic discrimination in Europe, for instance by lawyers and judges in courts, by equality bodies, by NGOs on the ground and by policy-makers.
Logo advertising my comedy show at the ‘Cabaret of Dangerous Ideas’, Edinburgh Fringe Festival 2021
Ringvorlesung Technikrecht ‘When computers say no’, Friedrich-Alexander University, Germany, 2022
My comedy show ‘All algorithms are bastards’ disseminating PROFILE at the Fringe Festival 2021