
Multi-Attribute, Multimodal Bias Mitigation in AI Systems

Periodic Reporting for period 1 - MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems)

Reporting period: 2022-11-01 to 2024-04-30

MAMMOth is a 36-month (November 2022 – October 2025) Horizon Europe Research and Innovation Action, funded by the European Union under Grant Agreement ID 101070285. MAMMOth aims to develop an innovative fairness-aware, data-driven AI framework that provides tools and techniques for discovering and mitigating multi-discrimination and for promoting the accountability of AI systems with respect to multiple protected attributes, covering traditional tabular data as well as more complex network and visual data.

The MAMMOth consortium, which is coordinated by the Centre for Research and Technology Hellas (CERTH), includes computer scientists, AI experts, social scientists, public communication experts, ethics and data protection experts, as well as parties who represent communities of vulnerable and/or underrepresented groups in AI research.
MAMMOth will make available both standalone open-source methods and an integrated open-source “bias toolkit” that will combine new methods with third-party fairness libraries and components.

In addition to developing research methods, algorithms and tools, the project has already engaged with communities of vulnerable and/or underrepresented groups in AI research (e.g. the LGBTIQ community, minority groups, migrants, people with disabilities), implementing a co-creation strategy to ensure that genuine needs and pain points are at the center of the research agenda. Furthermore, the project's multi-disciplinary approach, supported by social science and ethics experts, ensures that MAMMOth is grounded in sound social science and humanities principles and moves beyond a simplistic data-driven view of AI bias. It will therefore contribute to uncovering the possible underlying sources of bias and discrimination.

The MAMMOth tools are designed for three sectors of interest:
1. Algorithm-based decision making in finance: The goal is to identify attributes contributing to AI bias in credit scoring and debt repayment, and to develop and test an algorithmic decision-making system that reduces bias in financial services.
2. Decision making in face verification systems: The goal is to address inequalities in minorities' access to online services that rely on remote face verification, e.g. in the context of digital identity authentication and Know Your Customer (KYC) procedures.
3. Bias in academic collaborations and citations: The goal is to investigate how intersectional biases in search engines like Google Scholar affect the visibility of scholars and measure their impact on the academic network.
The project has made significant progress across its scientific objectives, with a particular emphasis on the following:
1. Redefining Bias: The project has advanced the understanding of bias by considering multiple protected characteristics, transcending the limitations of single-attribute fairness-aware learning. This multi-dimensional approach has been integrated into an operational framework that incorporates legal and societal perspectives to define and assess bias more comprehensively.
2. Standardised AI Solutions: A significant accomplishment of the project is the creation of the FairBench library, which provides standard APIs for measuring group fairness and bias in AI systems. This versatile toolkit offers an array of fairness building blocks that can be combined to systematically explore bias and fairness, aiding the development of more equitable AI systems (a generic sketch of multi-attribute fairness measurement follows this list).
3. Technology for Bias Evaluation and Mitigation: The project has developed innovative methods (e.g. FairBranch, FLAC) to evaluate and mitigate bias, including quantifying bias under fuzzy logic. This line of research models belief values of discrimination and the possibility of encountering protected group members, offering a more nuanced and comprehensive assessment of AI fairness (see the fuzzy-membership sketch after this list).
4. Reliability, Traceability, and Explainability: In line with the goal of ensuring reliable, traceable, and explainable AI solutions, the project has leveraged Explainable Artificial Intelligence (XAI) methodologies. By examining AI models handling facial images, the project has explored the presence of bias and offered insights into model behaviour through visualisations and global focus analysis.
5. Availability, deployment and awareness raising of unbiased and bias-preventing AI solutions: By involving experts and stakeholders in the design of the MAMMOth bias toolkit, the co-creation process helped address the inherent limits of purely technical research on AI fairness and raised awareness of MAMMOth topics among both the research community and affected communities.
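
As a rough illustration of what multi-attribute (intersectional) group-fairness measurement involves, consider the minimal Python sketch below. It is not the FairBench API; the function and toy data are hypothetical, standing in for the kind of building block such a library standardises: positive-outcome rates are computed per intersection of protected attributes rather than per single attribute.

```python
import pandas as pd

def intersectional_parity(df, prediction_col, protected_cols):
    """Positive-prediction rate for every intersectional subgroup
    defined by the protected attributes, plus the largest gap."""
    rates = df.groupby(protected_cols)[prediction_col].mean()
    return rates, rates.max() - rates.min()

# Hypothetical toy data: two protected attributes examined jointly.
data = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "m"],
    "age_band": ["young", "old", "young", "old", "old", "young", "young", "old"],
    "approved": [1, 0, 1, 1, 0, 1, 1, 1],
})

rates, gap = intersectional_parity(data, "approved", ["gender", "age_band"])
print(rates)            # approval rate per (gender, age_band) subgroup
print("max gap:", gap)  # one-number summary of intersectional disparity
```

A single-attribute analysis of the same data could report near-equal rates for gender and for age separately while one intersection (e.g. older women) fares much worse; that blind spot is exactly what multi-attribute fairness targets.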
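Separately, the fuzzy-logic idea in item 3 can be pictured as replacing crisp group labels with membership degrees. The sketch below is an assumption-heavy illustration of that general notion, not the project's FairBranch or FLAC methods: each individual's contribution to a group's positive rate is weighted by a degree in [0, 1].

```python
import numpy as np

def fuzzy_positive_rate(predictions, membership):
    """Positive-prediction rate of a group with fuzzy membership:
    each individual counts in proportion to its membership degree."""
    predictions = np.asarray(predictions, dtype=float)
    membership = np.asarray(membership, dtype=float)
    return (membership * predictions).sum() / membership.sum()

# Hypothetical degrees of belonging to a protected group, e.g. when the
# attribute is inferred with uncertainty rather than observed directly.
membership = np.array([0.9, 0.1, 0.7, 0.2, 0.8])
predictions = np.array([1, 1, 0, 1, 0])

protected = fuzzy_positive_rate(predictions, membership)
rest = fuzzy_positive_rate(predictions, 1.0 - membership)
print("fuzzy disparity:", rest - protected)
```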
Upon completion, the main project results will include:
• A thorough study of AI bias with emphasis on multimodal and multi-attribute fairness.
• An open-source MAMMOth bias toolkit, which will offer a user-friendly way to assess and mitigate bias in complex datasets, with newly implemented algorithms and an easy-to-use library design that connects to popular bias libraries such as AIF360 (a small AIF360 example follows this list).
• Establishment of best practices for incorporating bias-aware AI in credit scoring and face verification applications.
• Identification of the sources of biases in academic collaborations and citations (e.g. Google Scholar) and development of mitigation strategies for ranking algorithms.
• Training material about MAMMOth topics.
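
To make the AIF360 connection mentioned above concrete, the following self-contained example shows the style of measurement that library exposes; the toy credit data and column names are invented, and this is not MAMMOth toolkit code.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Invented toy data: binary protected attribute and binary credit outcome.
df = pd.DataFrame({
    "sex":       [1, 0, 1, 0, 1, 0],
    "credit_ok": [1, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["credit_ok"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```

A toolkit that interoperates with such libraries lets new multi-attribute methods be compared against established single-attribute metrics without re-implementing them.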
MAMMOth in a Nutshell