CORDIS - EU research results

ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS

CORDIS provides links to public documents and publications of HORIZON framework programme projects.

Links to documents and publications from Seventh Framework Programme projects, as well as links to some specific categories of results such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Data management plan

This deliverable will contain the actions for managing and protecting the data collected during the project and the agreements for joint data use involving partners that did not participate in data collection.

Requirements

This deliverable reports the requirements for the methodology, awareness & diagnosis, and repair & mitigation sub-components.

Methodology for creating synthetic datasets

This deliverable provides an overview of data generation methods and functional data synthesizer tool(s) for the provided reference data sets.
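To make the idea of a data synthesizer concrete, here is a minimal sketch in Python. It fits a single multivariate Gaussian to a numeric reference dataset and samples new rows from it; the function names and toy dataset are illustrative assumptions, not the AEQUITAS tooling, and the deliverable's actual generation methods are not limited to this approach.

```python
import numpy as np

def fit_gaussian_synthesizer(reference: np.ndarray):
    """Fit a multivariate Gaussian to numeric reference data.

    Illustrative only: real synthesizers (copula- or GAN-based) also
    handle categorical columns and non-linear dependencies.
    """
    mu = reference.mean(axis=0)
    cov = np.cov(reference, rowvar=False)
    return mu, cov

def sample_synthetic(mu, cov, n_rows: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows preserving the means and linear correlations."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mu, cov, size=n_rows)

# Toy reference dataset: 500 rows, 3 correlated numeric features.
rng = np.random.default_rng(42)
reference = rng.normal(size=(500, 3)) @ np.array([[1.0, 0.5, 0.0],
                                                  [0.0, 1.0, 0.3],
                                                  [0.0, 0.0, 1.0]])
mu, cov = fit_gaussian_synthesizer(reference)
synthetic = sample_synthetic(mu, cov, n_rows=1_000)
print(synthetic.shape)  # (1000, 3); covariance close to the reference's
```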

Fair-by-design software engineering methodologies and architecture. Preliminary compendium

This deliverable provides the first version of fair-by-design software engineering methodologies to design and develop fair AI systems that adhere to EGTAI.

First dissemination, communication and exploitation plan

A detailed communication and dissemination plan will be defined in the first months of the project with the objective of building a strong and recognizable identity. This plan will be updated throughout the project based on the evaluation of its impacts. It will include detailed planning of all communication actions, including key messages, target audiences and key performance indicators. Moreover, an exploitation strategy will be defined to find the right path to continued operation of AEQUITAS activities and to ensure a long-term impact after the end of the project. Exploitable assets developed by the research partners will be assessed for sustainable exploitation in terms of social impact (e.g. user acceptance), policy impact (e.g. recommendations to adapt legislation) and business impact (e.g. open-source licensing).

Second dissemination, communication and exploitation plan

Second iteration of D8.1

Project Handbook

The Project Handbook brings together a wide range of general operational information, including contact details, roles and responsibilities of the partners according to the governance structure, operational and reporting processes, templates, and procedures for the preparation of deliverables.

Architecture design of AEQUITAS

This deliverable will describe the architecture design and the technologies to be used in AEQUITAS.

First report on dissemination and communication activities

A detailed list of the dissemination and communication activities of project partners for the first half of the project.

Social, legal and policy landscapes of AI-fairness, 1st version

This deliverable provides a preliminary overview of the necessary social, legal and policy elements for AEQUITAS, consisting of: (i) a preliminary insight into the main manifestations of AI unfairness in society; (ii) the level of awareness and understanding of, and narratives around, AI-fairness in society; (iii) a preliminary methodology to identify the relevant stakeholders to involve in the design process of AI; (iv) a preliminary overview of existing and anticipated rules and regulations dealing with AI-fairness; (v) a preliminary overview of relevant policy developments around AI-fairness; and (vi) a preliminary AI-fairness methodology to follow in the design of AI systems, from a social, legal and policy perspective. Because the social, legal and policy landscapes of AI-fairness are constantly evolving, updated versions of deliverable 6.1 will be provided.

Fair-by-design sociological, legal methodologies, preliminary compendium

This deliverable provides a very preliminary version of social and legal methodologies to follow in the design of AI systems. It will be exploited in the early stage of the project to collect requirements in WP2.

Diagnostic tools for bias, 1st version

This deliverable provides the first version of state-of-the-art techniques to detect and measure undesirable biases contained in AI systems.
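As a concrete, deliberately simplified illustration of what detecting and measuring bias can look like, the sketch below computes two standard group-fairness statistics on binary predictions: the demographic parity difference and the disparate impact ratio. The variable names and toy data are hypothetical; the deliverable's diagnostic tools are not limited to these metrics.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred, group):
    """Ratio of the smaller positive rate to the larger one.

    The common '80% rule' flags values below 0.8 as potential
    disparate impact.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return min(rates) / max(rates)

# Toy predictions for two demographic groups (0 and 1).
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.6
print(disparate_impact_ratio(y_pred, group))         # 0.25
```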

Educational and awareness raising tools on social and legal elements of AI fairness

This deliverable provides 3 internal knowledge sessions to inform the project partners on the social and legal elements of AI-fairness at crucial moments of the project (M03 to feed into WP2, M06 to feed into WP3, 4 and 5, and M18 to feed into WP7). It also provides open knowledge and awareness raising resources such as explainers, infographics, whitepapers, and expert sessions on the social and legal elements of AI fairness aimed at external stakeholders.

Data, algorithms, and interpretation bias mitigation methods, 1st version

This deliverable provides the first version of state-of-the-art techniques to repair and mitigate undesirable biases contained in data and algorithms, as well as in socio-technical factors.
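As a generic illustration of one well-known repair technique (not necessarily a method this deliverable implements), the sketch below applies reweighing (Kamiran & Calders): each (group, label) cell receives an instance weight so that, after weighting, the protected attribute and the label are statistically independent.

```python
import numpy as np

def reweighing_weights(y, group):
    """Kamiran & Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), so that after weighting the
    protected attribute is independent of the label."""
    y, group = np.asarray(y), np.asarray(group)
    weights = np.ones(len(y))
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            observed = cell.mean()
            if observed > 0:
                expected = (group == g).mean() * (y == c).mean()
                weights[cell] = expected / observed
    return weights

# Toy labels skewed against group 1; the weights rebalance the cells.
y     = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
w = reweighing_weights(y, group)
print(np.round(w, 3))
# Pass `w` as sample_weight to any estimator that supports it,
# e.g. sklearn's LogisticRegression().fit(X, y, sample_weight=w).
```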

Publications

Ensuring Fairness Stability for Disentangling Social Inequality in Access to Education: the FAiRDAS General Method

Authors: Eleonora Misino; Roberta Calegari; Michele Lombardi; Michela Milano
Published in: 2024
Publisher: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24)
DOI: 10.24963/ijcai.2024/820

A Cognitive Approach to Model Intelligent Collaboration in Human-Robot Interaction

Authors: Cantucci F.; Falcone R.
Published in: 2023
Publisher: Proceedings of the 24th Workshop "From Objects to Agents"

Unlocking Insights and Trust: The Value of Explainable Clustering Algorithms for Cognitive Agents

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: WOA 2023 – 24th Workshop “From Objects to Agents”

AI-fairness: The FairBridge Approach to Practically Bridge the Gap Between Socio-legal and Technical Perspectives

Authors: Andrea Borghesi, Giovanni Ciatto, Mattia Matteini, Roberta Calegari, Laura Sartori, Maria Rebrean, Catelijne Muller
Published in: Proceedings of the 57th Hawaii International Conference on System Sciences, 2025
Publisher: Hawaii International Conference on System Sciences
DOI: 10.24251/HICSS.2025.777

State Feedback Enhanced Graph Differential Equations for Multivariate Time Series Forecasting

Authors: Jiaxu Cui; Qipeng Wang; Yiming Zhao; Bingyi Sun; Pengfei Wang; Bo Yang
Published in: International Joint Conference on Artificial Intelligence Organization, 2024
Publisher: Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
DOI: 10.24963/ijcai.2025/820

FAiRDAS: Fairness-Aware Ranking as Dynamic Abstract System

Authors: Eleonora Misino, Roberta Calegari, Michele Lombardi, Michela Milano
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

Long-Term Fairness Strategies in Ranking with Continuous Sensitive Attributes

Authors: Giuliani L.; Misino E.; Calegari R.; Lombardi M.
Published in: 2024
Publisher: Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)

Unveiling Opaque Predictors via Explainable Clustering: The CReEPy Algorithm

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)

Curriculum–Based Reinforcement Learning for Pedestrian Simulation: Towards an Explainable Training Process

Authors: Vizzari G.; Briola D.; Cecconello T.
Published in: 2023
Publisher: Proceedings of the 24th Workshop "From Objects to Agents"

Assessing and Enforcing Fairness in the AI Lifecycle

Authors: Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O'Sullivan
Published in: 2023
Publisher: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23)
DOI: 10.24963/ijcai.2023/735

Enforcing Fairness via Constraint Injection with FaUCI

Authors: Matteo Magnini; Giovanni Ciatto; Roberta Calegari; Andrea Omicini
Published in: 2024
Publisher: Proceedings of the 2nd Workshop on Fairness and Bias in AI (AEQUITAS 2024), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.

AI-fairness and equality of opportunity: a case study on educational achievement

Authors: Marrero A. S.; Marrero G. A.; Bethencourt C.; James L.; Calegari R.
Published in: 2024
Publisher: Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)

Unmasking the Shadows: Leveraging Symbolic Knowledge Extraction to Discover Biases and Unfairness in Opaque Predictive Models

Authors: Sabbatini F.; Calegari R.
Published in: 2024
Publisher: Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)

A geometric framework for fairness

Authors: Alessandro Maggio, Luca Giuliani, Roberta Calegari, Michele Lombardi, Michela Milano
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

Impact based fairness framework for socio-technical decision making

Authors: Brännström, Mattias; Jiang, Lili; Aler Tubella, Andrea; Dignum, Virginia
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

N-Mates Evaluation: a New Method to Improve the Performance of Genetic Algorithms in Heterogeneous Multi-Agent Systems

Authors: Paolo Pagliuca; Alessandra Vitanza
Published in: 2023
Publisher: Proceedings of the 24th Workshop "From Objects to Agents"

Symbolic Knowledge Comparison: Metrics and Methodologies for Multi-Agent Systems

Authors: Sabbatini F.; Sirocchi C.; Calegari R.
Published in: 2024
Publisher: Proceedings of the 25th Workshop "From Objects to Agents"

ExACT Explainable Clustering: Unravelling the Intricacies of Cluster Formation

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: International Conference on Principles of Knowledge Representation and Reasoning (KR2023)

Achieving Complete Coverage with Hypercube-Based Symbolic Knowledge-Extraction Techniques

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
DOI: 10.1007/978-3-031-50396-2_10

Generalized Disparate Impact for Configurable Fairness Solutions in ML

Authors: Giuliani L.; Misino E.; Lombardi M.
Published in: 2023
Publisher: Proceedings of the 40th International Conference on Machine Learning, PMLR
DOI: 10.48550/arXiv.2305.18504

Perspectives and Challenges of Telemedicine and Artificial Intelligence in Pediatric Dermatology

Authors: Daniele Zama; Andrea Borghesi; Alice Ranieri; Elisa Manieri; Luca Pierantoni; Laura Andreozzi; Arianna Dondi; Iria Neri; Marcello Lanari; Roberta Calegari
Published in: Children, 2024, ISSN 2227-9067
Publisher: Children
DOI: 10.3390/children11111401

Untying black boxes with clustering-based symbolic knowledge extraction

Authors: Sabbatini F.; Calegari R.
Published in: Intelligenza Artificiale, 2024, ISSN 1724-8035
Publisher: IOS Press
DOI: 10.3233/IA-240026

ICE: An Evaluation Metric to Assess Symbolic Knowledge Quality

Authors: Federico Sabbatini, Roberta Calegari
Published in: Lecture Notes in Computer Science, AIxIA 2024 – Advances in Artificial Intelligence, 2024
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-80607-0_19

Hierarchical Knowledge Extraction from Opaque Machine Learning Predictors

Authors: Federico Sabbatini, Roberta Calegari
Published in: Lecture Notes in Computer Science, AIxIA 2024 – Advances in Artificial Intelligence, 2024
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-80607-0_20

Generation of Clinical Skin Images with Pathology with Scarce Data

Authors: Andrea Borghesi, Roberta Calegari
Published in: Studies in Computational Intelligence, AI for Health Equity and Fairness, 2024
Publisher: Springer Nature Switzerland
DOI: 10.1007/978-3-031-63592-2_5

AI Fairness Compliance: Operationalizing the Integration of Social and Legal Perspectives into AI Fairness Metrics

Authors: Roberta Calegari
Published in: Frontiers in Artificial Intelligence and Applications, ECAI 2025, 2025
Publisher: IOS Press
DOI: 10.3233/FAIA250913

Proceedings of the 2nd Workshop on Fairness and Bias in AI (AEQUITAS 2024), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)

Authors: Roberta Calegari; Virginia Dignum; Barry O'Sullivan
Published in: 2024
Publisher: Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
