CORDIS - EU research results

Bayesian Models and Algorithms for Fairness and Transparency

Project description

A new Bayesian method for processing data fairly, lawfully and transparently

The General Data Protection Regulation (GDPR) states that data must be processed lawfully, fairly and in a transparent manner. The EU-funded BayesianGDPR project therefore aims to integrate the legal non-discriminatory principles of the GDPR into machine learning systems in a transparent way. To this end, the researchers will use a Bayesian approach to model all sources of uncertainty, taking into account feedback from humans and the future consequences of the systems' outputs. BayesianGDPR will provide organisations working with machine learning technologies with concrete tools to comply with the non-discriminatory principles of the GDPR and similar laws. The project's results will influence research in computational law and its integration into mainstream legal practice, and will foster greater public confidence in machine learning systems.

Objective

The EU's GDPR prescribes that "Personal Data shall be processed lawfully, fairly, and in a transparent manner." The vision of the BayesianGDPR project is to integrate the legal non-discriminatory principles of the GDPR into automated machine learning systems in a transparent manner, using a novel Bayesian approach and taking into account feedback from humans and the future consequences of the systems' outputs. We aim to achieve this ambitious vision by 1) developing a machine learning framework that addresses fairness in classification problems and beyond, under uncertainty about data, models, and predictions about future data (algorithmic fairness under uncertainty); 2) extending the framework to a setting where data points arrive over time and models must be updated dynamically in response to general feedback (feedback-driven setting); and 3) ensuring that a human can understand how non-discrimination is defined and achieved, by using, among other techniques, uncertainty estimates to build interpretable models and/or explicitly explaining the changes made to the models to enforce non-discriminatory principles (transparency in fairness).

The BayesianGDPR project is "doubly timely": not only because of the criticality of fairness and transparency in machine learning at this point in time, but also because recent breakthroughs in scalability have finally made it feasible to explore Bayesian approaches, which are uniquely capable of addressing one of the most central aspects of the problem, namely uncertainty. In the short term, BayesianGDPR will ensure that organisations relying on machine learning technologies are provided with concrete tools to comply with the non-discriminatory principles of the GDPR and similar laws. In the medium term, it will impact research in computational law and its integration into mainstream legal practice. In the long term, it will also ensure the continued confidence of the general public in the deployment of machine learning systems.
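The first aim, algorithmic fairness under uncertainty, can be illustrated with a toy example: placing a Bayesian posterior over each group's positive-decision rate and deriving a posterior over the demographic-parity gap, rather than a single point estimate. This is a minimal sketch under assumed Beta-Bernoulli modelling; the function name, priors, and example counts are illustrative and are not the project's actual method.

```python
import numpy as np

def demographic_parity_posterior(pos_a, n_a, pos_b, n_b,
                                 samples=100_000, seed=0):
    """Posterior over the demographic-parity gap |p_A - p_B|.

    Each group's positive-decision rate gets an independent
    Beta(1, 1) prior; with Bernoulli outcomes the posterior is
    Beta(1 + successes, 1 + failures). We sample both posteriors
    and summarise the induced distribution of the absolute gap.
    (Illustrative sketch only, not the BayesianGDPR framework.)
    """
    rng = np.random.default_rng(seed)
    p_a = rng.beta(1 + pos_a, 1 + n_a - pos_a, samples)
    p_b = rng.beta(1 + pos_b, 1 + n_b - pos_b, samples)
    gap = np.abs(p_a - p_b)
    return gap.mean(), np.quantile(gap, [0.025, 0.975])

# Hypothetical audit data: group A received 70/100 positive
# decisions, group B received 50/100.
mean_gap, ci = demographic_parity_posterior(70, 100, 50, 100)
```

The credible interval makes the uncertainty explicit: with only 100 decisions per group, the gap estimate is wide, which a point-estimate fairness metric would hide.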

Funding scheme

ERC-STG - Starting Grant

Host institution

THE UNIVERSITY OF SUSSEX
Net EU contribution
€ 1 329 947,00
Address
SUSSEX HOUSE FALMER
BN1 9RH Brighton
United Kingdom

Region
South East (England) Surrey, East and West Sussex Brighton and Hove
Activity type
Higher or Secondary Education Establishments
Total cost
€ 1 329 947,00

Beneficiaries (2)