Project description
Giving a voice to those affected by AI
AI promises to revolutionise our world, but it also causes harm. From racial and gender bias to wrongful arrests and surveillance, AI systems often reinforce existing inequalities. These technologies can discriminate against marginalised groups, leading to unfair treatment. Many of the people most affected have little say in how AI is developed or used. In this context, the ERC-funded PARTIALJUSTICE project aims to change this by putting the voices of these communities at the centre of AI development. It focuses on participatory algorithmic justice, a new approach that involves marginalised groups in creating solutions to AI’s harmful effects. Through fieldwork and workshops, the project helps identify problems and design interventions.
Objective
AI comes with great promises, but more critically, it comes with demonstrated harms: from racial and gender bias to discrimination, wrongful arrests, defamation, surveillance, and the extractive labor and practices required for its creation. I contend that the most concerning aspect of AI is its exacerbation of existing structural inequalities and its creation of new ones. Two prevailing responses have emerged to the social and ethical problems of AI: ethicists and social scientists have focused largely on principle-based approaches, while developers have focused on technical fixes for addressing problems in AI models. This project speaks to an urgent gap between these approaches: it centers the experiences of marginalized communities and develops interventions into AI concerns that are designed by and for those communities.
This project develops the novel approach of participatory algorithmic justice. Participatory algorithmic justice defines a concept and standards of practice for collaborative research to better understand who and what AI harms, but also how these harms should be redressed. The project investigates how economic, cultural, and political harms from AI are experienced by structurally marginalized groups through multi-sited, intersectional ethnographic fieldwork. This fieldwork informs participatory design workshops to develop specific interventions into problems identified by research collaborators. This research is brought to life through an innovative combination of graphic storytelling and a public-facing mapping platform. Participatory algorithmic justice brings the voices, priorities, and concerns of those affected by AI to the forefront of debates over what kinds of AI people want to live with. Tackling a critical problem of global technology justice, this project is a crucial intervention for communities, researchers, and developers to redress AI harms.
Fields of science (EuroSciVoc)
CORDIS classifies projects with EuroSciVoc, a multilingual taxonomy of fields of science, through a semi-automatic process based on NLP techniques.
Programme(s)
- HORIZON.1.1 - European Research Council (ERC) Main Programme
Funding Scheme
HORIZON-ERC - HORIZON ERC Grants
Host institution
80333 Muenchen
Germany