Project description
Incorporating fairness and bias mitigation in artificial intelligence
Artificial intelligence (AI) holds great promise for solving business and societal problems, but it also risks inadvertently discriminating against minority and marginalised groups. The EU-funded MAMMOth project tackles this bias by focusing on the mitigation of multi-discrimination in tabular, network and multimodal data. Working with experts in computer science and AI, the project team will build tools for fairness-aware AI that ensure accountability with respect to protected attributes such as gender, race and age. It will also engage with communities of vulnerable or underrepresented groups in AI research to ensure that the needs and concerns of these users genuinely drive the work plan. The ultimate goal is to build pilots for finance/loan applications, identity verification and academic evaluation.
Objective
Artificial Intelligence (AI) is increasingly employed by businesses, governments, and other organizations to make decisions with far-reaching impacts on individuals and society. This offers major opportunities for automation across sectors and in daily life, but it also brings risks of discrimination against minority and marginalised population groups on the basis of so-called protected attributes, such as gender, race, and age. Despite the large body of research to date, the proposed methods work in limited settings, under very constrained assumptions, and do not reflect the complexity and requirements of real-world applications.
To this end, the MAMMOth project focuses on multi-discrimination mitigation for tabular, network and multimodal data. Through its computer science and AI experts, MAMMOth aims to address the associated scientific challenges by developing an innovative fairness-aware, AI data-driven foundation that provides the necessary tools and techniques for the discovery and mitigation of (multi-)discrimination and ensures the accountability of AI systems with respect to multiple protected attributes, for both traditional tabular data and more complex network and visual data.
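As a rough illustration of what discovering multi-discrimination over several protected attributes can involve, the following minimal Python sketch measures positive-decision rates across intersectional groups in a toy tabular dataset and flags groups that deviate from the overall rate. The column names, data, and parity tolerance are assumptions made for the example only; they do not describe MAMMOth's actual methods or toolkit.

import pandas as pd

# Toy decisions from a hypothetical loan-approval model (illustrative data only).
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_band": ["<30", ">=30", "<30", ">=30", "<30", "<30", ">=30", ">=30"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 1],
})

# Positive-decision rate for every intersection of the two protected attributes.
rates = df.groupby(["gender", "age_band"])["approved"].mean()

# Multi-discrimination check: compare each intersectional group against the
# overall rate and flag groups whose gap exceeds an (assumed) tolerance of 0.2.
overall = df["approved"].mean()
gaps = (rates - overall).abs()
flagged = gaps[gaps > 0.2]

print("Approval rate per intersectional group:")
print(rates)
print("Groups exceeding the parity tolerance:")
print(flagged)

In practice, tools of this kind would also cover mitigation (e.g. reweighting or constraint-based training) and extend beyond tabular data to network and visual data, as described above.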
The project will actively engage with numerous communities of vulnerable and/or underrepresented groups in AI research right from the start, adopting a co-creation approach, to make sure that actual user needs and pains are at the centre of the research agenda and guide the project's activities. A social science-driven approach supported by social science and ethics experts will guide the project's research, and a science communication approach will broaden the outreach of its outcomes.
The project aims to demonstrate the developed solutions through pilots in three relevant sectors of interest: a) finance/loan applications, b) identity verification systems, and c) academic evaluation.
Funding scheme
HORIZON-RIA - HORIZON Research and Innovation Actions
Coordinator
57001 Thermi Thessaloniki
Greece