CORDIS provides links to public deliverables and publications of HORIZON projects.
Links to deliverables and publications from FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.
Deliverables
This deliverable will describe the actions for managing and protecting the data collected during the project, and the agreements for joint data use involving partners that have not participated in data collection.
This deliverable reports the requirements for the methodology, awareness & diagnosis, and repair & mitigation sub-components.
Methodology for creating synthetic datasets: This deliverable provides an overview of data generation methods and functional data synthesizer tool(s) for the provided reference data sets.
Fair-by-design software engineering methodologies and architecture. Preliminary compendium: This deliverable provides the first version of fair-by-design software engineering methodologies to design and develop fair AI systems that adhere to EGTAI.
First dissemination, communication and exploitation plan: A detailed communication and dissemination plan will be defined in the first months of the project with the objective of building a strong and recognizable identity. This plan will be updated throughout the project based on the evaluation of its impacts. It will include a detailed planning of all communication actions, including key messages, target audiences and key performance indicators. Moreover, an exploitation strategy will be defined to find the right path to continued operation of AEQUITAS activities and to ensure a long-term impact after the end of the project. Exploitable assets developed by the research partners will be assessed for sustainable exploitation in terms of social impact (e.g. user acceptance), policy impact (e.g. recommendations to adapt legislation) and business impact (e.g. open-source licensing).
Fair-by-design methodologies: This deliverable provides the final version, associated with D5.1, of fair-by-design methodologies - both social and software engineering - to design and develop fair AI systems that adhere to EGTAI. A first prototype of the fairness-by-design engine will be released as well, enabling validation on synthetic data (Task 7.2) to start.
Use cases fairness report: This deliverable provides the fairness reports of the use cases.
Second dissemination, communication and exploitation plan: Second iteration of D8.1.
Social and legal fair-by-design methodologies 2nd version: This deliverable provides (in M03) a very preliminary version of possible social and legal methodologies to address fairness in the design of AI systems, at the data, classifier, and prediction levels. It will be exploited and optimized in the early stage of the project to collect requirements and KPIs in WP2, as well as during the development stages of the Awareness & Diagnosis Engine, Reparation & Mitigation Engine and Fairness by Design Engine (WP3, 4 and 5) and the evaluation and validation process (WP7). The final version of this deliverable (in M24) will provide social and legal guidelines, methods, and techniques enabling testing, experimentation, and evaluation of fairness in the design or evaluation of AI systems, and guiding the development of fair AI systems that adhere to EGTAI as well as to upcoming AI regulation.
Project Handbook: The Project Handbook brings together a wide range of general operational information, including contact details, roles and responsibilities of the partners according to the governance structure, operational and reporting processes, templates, and procedures for the preparation of deliverables.
Second report on dissemination and communication activities: An update of D8.4.
Social, legal and policy landscapes of AI-fairness 2nd version: This deliverable provides a preliminary overview of the necessary social, legal and policy elements for AEQUITAS, consisting of: (i) a preliminary insight into the main manifestations of AI unfairness in society; (ii) the level of awareness and understanding, and narratives, of AI-fairness in society; (iii) a preliminary methodology to identify the relevant stakeholders to involve in the design process of AI; (iv) a preliminary overview of existing and anticipated rules and regulations dealing with AI-fairness; (v) a preliminary overview of relevant policy developments around AI-fairness; and (vi) a preliminary AI-fairness methodology to follow in the design of AI systems, from a social, legal and policy perspective. Because the social, legal and policy landscapes of AI-fairness are constantly evolving, updated versions of Deliverable 6.1 will be provided.
Architecture design of AEQUITAS: This deliverable will describe the architecture design and technologies to be used in AEQUITAS.
First report on dissemination and communication activities: A detailed list of dissemination and communication activities of project partners for the first half of the project.
Requirements - 2nd version: This deliverable reports the final requirements for the methodology, awareness & diagnosis, and repair & mitigation sub-components. This version aims to confirm the requirements jointly with the use cases and pilots.
Social and legal fair-by-design methodologies: This deliverable provides (in M03) a very preliminary version of possible social and legal methodologies to address fairness in the design of AI systems, at the data, classifier, and prediction levels. It will be exploited and optimized in the early stage of the project to collect requirements and KPIs in WP2, as well as during the development stages of the Awareness & Diagnosis Engine, Reparation & Mitigation Engine and Fairness by Design Engine (WP3, 4 and 5) and the evaluation and validation process (WP7). The final version of this deliverable (in M24) will provide social and legal guidelines, methods, and techniques enabling testing, experimentation, and evaluation of fairness in the design or evaluation of AI systems, and guiding the development of fair AI systems that adhere to EGTAI as well as to upcoming AI regulation.
Social, legal and policy landscapes of AI-fairness 1st version: This deliverable provides a preliminary overview of the necessary social, legal and policy elements for AEQUITAS, consisting of: (i) a preliminary insight into the main manifestations of AI unfairness in society; (ii) the level of awareness and understanding, and narratives, of AI-fairness in society; (iii) a preliminary methodology to identify the relevant stakeholders to involve in the design process of AI; (iv) a preliminary overview of existing and anticipated rules and regulations dealing with AI-fairness; (v) a preliminary overview of relevant policy developments around AI-fairness; and (vi) a preliminary AI-fairness methodology to follow in the design of AI systems, from a social, legal and policy perspective. Because the social, legal and policy landscapes of AI-fairness are constantly evolving, updated versions of Deliverable 6.1 will be provided.
Fair-by-design sociological, legal methodologies, preliminary compendium: This deliverable provides a very preliminary version of social and legal methodologies to follow in the design of AI systems. It will be exploited in the early stage of the project to collect requirements in WP2.
This deliverable unifies the methodologies presented in D5.2 into a single fair-by-design engine as a service sub-component.
Reparation and mitigation engine: This deliverable unifies the bias reparation and mitigation tools into a single reparation and mitigation engine as a service sub-component.
AEQUITAS on-premises tool: This is the software release of the final AEQUITAS framework.
Diagnostic tools for bias - 1st version: This deliverable provides the first version of state-of-the-art techniques to detect and measure undesirable biases contained in AI systems.
Awareness and diagnosis engine: This deliverable unifies the bias awareness, detection and measurement tools into a single diagnosis engine as a service sub-component.
Educational and awareness raising tools on social and legal elements of AI fairness 2nd version: This deliverable provides 3 internal knowledge sessions to inform the project partners on the social and legal elements of AI-fairness at crucial moments of the project (M03 to feed into WP2, M06 to feed into WP3, 4 and 5, and M18 to feed into WP7). It also provides open knowledge and awareness raising resources such as explainers, infographics, whitepapers, and expert sessions on the social and legal elements of AI fairness aimed at external stakeholders.
Educational and awareness raising tools on social and legal elements of AI fairness: This deliverable provides 3 internal knowledge sessions to inform the project partners on the social and legal elements of AI-fairness at crucial moments of the project (M03 to feed into WP2, M06 to feed into WP3, 4 and 5, and M18 to feed into WP7). It also provides open knowledge and awareness raising resources such as explainers, infographics, whitepapers, and expert sessions on the social and legal elements of AI fairness aimed at external stakeholders.
Data, algorithms, and interpretation bias mitigation methods and mitigation engine prototype: This deliverable provides the final version of state-of-the-art techniques to repair and mitigate undesirable biases contained in data and algorithms, as well as in socio-technical factors. Novel techniques will be provided as well. A first prototype of the reparation and mitigation engine will be released as well, enabling validation on synthetic data (Task 7.2) to start.
Data, algorithms, and interpretation bias mitigation methods 1st version: This deliverable provides the first version of state-of-the-art techniques to repair and mitigate undesirable biases contained in data and algorithms, as well as in socio-technical factors.
Data synthesizer: This deliverable provides a bias-controlled version of the data synthesizer, which can be used to create various synthetic datasets reflecting various levels of bias and different polarizations.
Integrated AI-on-Demand Platform Service: This is the software release of the final integrated AI-on-Demand Platform.
Diagnostic tools for bias - 2nd version and awareness and diagnosis engine prototype: This deliverable provides the final version of state-of-the-art techniques to detect and measure undesirable biases contained in AI systems. Novel techniques will be provided as well. Moreover, it provides guidelines for minimizing the socio-technical factors that contribute to undesirable bias, as well as assessment techniques to identify them when they occur. A first prototype of the awareness and diagnosis engine will be released as well, enabling validation on synthetic data (Task 7.2) to start.
AEQUITAS on-premises tool - 1st prototype: This is the software release of the final AEQUITAS framework validated on synthetic datasets.
Publications
Author(s):
Eleonora Misino; Roberta Calegari; Michele Lombardi; Michela Milano
Published in:
2024
Publisher:
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24)
DOI:
10.24963/IJCAI.2024/820
Author(s):
Cantucci F.; Falcone R.
Published in:
2023
Publisher:
Proceedings of the 24th Workshop "From Objects to Agents"
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
2023
Publisher:
WOA 2023 – 24th Workshop “From Objects to Agents”
Author(s):
Andrea Borghesi, Giovanni Ciatto, Mattia Matteini, Roberta Calegari, Laura Sartori, Maria Rebrean, Catelijne Muller
Published in:
Proceedings of the 57th Hawaii International Conference on System Sciences, 2025
Publisher:
Hawaii International Conference on System Sciences
DOI:
10.24251/HICSS.2025.777
Author(s):
Jiaxu Cui; Qipeng Wang; Yiming Zhao; Bingyi Sun; Pengfei Wang; Bo Yang
Published in:
International Joint Conference on Artificial Intelligence Organization, 2024
Publisher:
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
DOI:
10.24963/IJCAI.2025/820
Author(s):
Eleonora Misino, Roberta Calegari, Michele Lombardi, Michela Milano
Published in:
2023
Publisher:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
Publisher:
Proceedings of the 2nd Workshop on AI bias: Measurements, Mitigation, Explanation Strategies
Author(s):
Giuliani L.; Misino E.; Calegari R.; Lombardi M.
Published in:
2024
Publisher:
Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
2023
Publisher:
Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)
Author(s):
Vizzari G.; Briola D.; Cecconello T.
Published in:
2023
Publisher:
Proceedings of the 24th Workshop "From Objects to Agents"
Author(s):
Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O'Sullivan
Published in:
2023
Publisher:
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23)
DOI:
10.24963/ijcai.2023/735
Author(s):
Matteo Magnini; Giovanni Ciatto; Roberta Calegari; Andrea Omicini
Published in:
2024
Publisher:
Proceedings of the 2nd Workshop on Fairness and Bias in AI (AEQUITAS 2024), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.
Author(s):
Marrero A. S.; Marrero G. A.; Bethencourt C.; James L.; Calegari R.
Published in:
2024
Publisher:
Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)
Author(s):
Sabbatini F.; Calegari R.
Published in:
2024
Publisher:
Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
Author(s):
Alessandro Maggio, Luca Giuliani, Roberta Calegari, Michele Lombardi, Michela Milano
Published in:
2023
Publisher:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
Author(s):
Brännström, Mattias; Jiang, Lili; Aler Tubella, Andrea; Dignum, Virginia
Published in:
2023
Publisher:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
Author(s):
Paolo Pagliuca; Alessandra Vitanza
Published in:
2023
Publisher:
Proceedings of the 24th Workshop "From Objects to Agents"
Author(s):
Sabbatini F.; Sirocchi C.; Calegari R.
Published in:
2024
Publisher:
Proceedings of the 25th Workshop "From Objects to Agents"
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
2023
Publisher:
International Conference on Principles of Knowledge Representation and Reasoning (KR2023)
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
2023
Publisher:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
DOI:
10.1007/978-3-031-50396-2_10
Author(s):
Giuliani L.; Misino E.; Lombardi M.
Published in:
2023
Publisher:
Proceedings of the 40th International Conference on Machine Learning, PMLR
DOI:
10.48550/ARXIV.2305.18504
Author(s):
Daniele Zama; Andrea Borghesi; Alice Ranieri; Elisa Manieri; Luca Pierantoni; Laura Andreozzi; Arianna Dondi; Iria Neri; Marcello Lanari; Roberta Calegari
Published in:
Children, 2024, ISSN 2227-9067
Publisher:
Children
DOI:
10.3390/CHILDREN11111401
Author(s):
Sabbatini F.; Calegari R.
Published in:
Intelligenza Artificiale, 2024, ISSN 1724-8035
Publisher:
IOS Press
DOI:
10.3233/IA-240026
Author(s):
Giovanelli, Joseph; Magnini, Matteo; James, Liam; Ciatto, Giovanni; Marrero, Angel S.; Borghesi, Andrea
Publisher:
Zenodo
DOI:
10.5281/ZENODO.11171863
Author(s):
IFM Research Team
Publisher:
Zenodo
DOI:
10.5281/ZENODO.16252144
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
Lecture Notes in Computer Science, AIxIA 2024 – Advances in Artificial Intelligence, 2024
Publisher:
Springer Nature Switzerland
DOI:
10.1007/978-3-031-80607-0_19
Author(s):
Federico Sabbatini, Roberta Calegari
Published in:
Lecture Notes in Computer Science, AIxIA 2024 – Advances in Artificial Intelligence, 2024
Publisher:
Springer Nature Switzerland
DOI:
10.1007/978-3-031-80607-0_20
Author(s):
Andrea Borghesi, Roberta Calegari
Published in:
Studies in Computational Intelligence, AI for Health Equity and Fairness, 2024
Publisher:
Springer Nature Switzerland
DOI:
10.1007/978-3-031-63592-2_5
Author(s):
Roberta Calegari
Published in:
Frontiers in Artificial Intelligence and Applications, ECAI 2025, 2025
Publisher:
IOS Press
DOI:
10.3233/FAIA250913
Author(s):
Roberta Calegari; Virginia Dignum; Barry O'Sullivan
Published in:
2024
Publisher:
Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)