CORDIS - EU research results

Human-Compatible Artificial Intelligence with Guarantees

CORDIS provides links to public results and publications of HORIZON projects.

Links to results and publications of FP7 projects, as well as links to some types of specific results such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Explainability Methods that Identify and Explain Differences in Fairness between Sensitive Subgroups

This report will describe methods for producing separate explanations for each affected sensitive subgroup. We will also present the trade-offs, using techniques developed in WP4 (Task 4.1) and what-if analyses.
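As an illustration of the kind of per-subgroup difference such explanations must account for, the sketch below computes decision rates separately for each sensitive subgroup; the group labels and decisions are invented for the example and are not project data:

```python
from collections import defaultdict

def selection_rates_by_group(records):
    """Fraction of positive decisions per sensitive subgroup.

    `records` is a list of (group, decision) pairs, with decision 0 or 1.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decisions for two subgroups "A" and "B".
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates_by_group(records)
# The gap is what a per-subgroup explanation would need to account for.
gap = rates["A"] - rates["B"]
```

A per-subgroup explainer would then explain each group's rate in terms of that group's own features, rather than averaging over the whole population.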

An Initial Report from the Living Labs: Stakeholder Analysis, Requirements Analysis, and First Results of Choice Experiments

This report will describe the Living Lab (LL) hosted by each use case. Each LL will convene a cross-section of key actors to form a cross-sectoral working group, which will serve as a focal point for interactions with the stakeholders of the respective use case. The LLs will comprise stakeholders such as public authorities, local business representatives, NGOs, and policy makers, and will seek to embed their perspectives into the research process, increasing the potential for practical application of the project results. Stakeholder engagement within the LLs will be central to economic methods such as cost-benefit analysis, choice experiments, and revealed-preference analysis, which will be used to develop effective financial instruments to incentivise behaviour change and to inform policy development.

Report on the Open-Source Software Developed

Open-source software developed within WP2-WP6 will be maintained in a single family of libraries under the Apache license, with principled design and management. This report will present the family of libraries and summarize the process of its development.

Second Periodical Management Report

The periodical management report has two parts. (1) The Technical Report's Part B is a project status report. It presents a narrative of the progress of the work, from the project objectives and work packages down to descriptions of progress on individual tasks. It also covers detected risks and the associated mitigation measures, and explains any deviations from the original plan. (2) The Financial Report covers the project costs incurred during the period.

Initial Report on Dissemination and Exploitation

This report will present market analyses and a refinement of the business potential of the innovation generated by the project's work. Interim research results will be combined with insights from the business analysis to define and classify the innovative features of the products and services enabled by the project's beyond-the-state-of-the-art contributions. A competitive analysis will contribute information about alternative offerings available in the market and provide insight into possible synergies with existing offerings. This will underlie the exploitation and dissemination plans of the consortium.

Report on User-induced Feedback Loops

Fairness in industrial applications is a critical area that involves humans in the loop at various stages. This report will describe methods to introduce guard rails, or safety measures, when allowing users to influence an AI system. Several challenges arise in online-learning scenarios where the reward signals come from users. Preference elicitation over the possible trade-offs in user feedback will also be described.
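As a minimal illustration of one such guard rail, the sketch below bounds each user-supplied reward before it enters an epsilon-greedy bandit update, so that no single user can dominate the online learner. The clipping bounds and the bandit itself are illustrative stand-ins, not the project's actual methods:

```python
import random

def clip_reward(r, lo=-1.0, hi=1.0):
    """Guard rail: bound a user-supplied reward to [lo, hi] so one
    adversarial or erratic user cannot dominate the online update."""
    return max(lo, min(hi, r))

class EpsilonGreedy:
    """Toy epsilon-greedy bandit whose rewards come from users."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, user_reward):
        r = clip_reward(user_reward)  # apply the guard rail first
        self.counts[arm] += 1
        # Incremental mean of the clipped rewards seen on this arm.
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]
```

Even an extreme reward such as `100.0` moves the arm's estimate by at most the clip bound, which is the point of the safety measure.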

Methods to Measure and Assess Model Safety in the Context of Fairness

This report will describe methods and metrics for model transparency that directly include safety assessments, with a particular focus on fairness dimensions. The ultimate measure of explainability is, indeed, whether humans find it useful, but less attention has been paid to the equally important goals of safety and the prevention of unexpected harms. The report will elaborate on these.
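One standard metric that such a safety assessment might include is the equal-opportunity gap, i.e. the difference in true-positive rates between subgroups. The sketch below is a generic illustration with invented labelled predictions, not the project's specific metrics:

```python
def true_positive_rate(pairs):
    """TPR over (y_true, y_pred) pairs; None if there are no positives."""
    pos = [(y, p) for y, p in pairs if y == 1]
    if not pos:
        return None
    return sum(p for _, p in pos) / len(pos)

def equal_opportunity_gap(group_a, group_b):
    """Absolute TPR gap between two subgroups."""
    return abs(true_positive_rate(group_a) - true_positive_rate(group_b))

# Hypothetical (y_true, y_pred) pairs for two subgroups.
a = [(1, 1), (1, 1), (1, 0), (0, 0)]   # TPR = 2/3
b = [(1, 1), (1, 0), (1, 0), (0, 1)]   # TPR = 1/3
gap = equal_opportunity_gap(a, b)
# A safety assessment might flag the model when gap exceeds a threshold.
```

A transparency report could surface this gap alongside each explanation, so that harm to a subgroup is visible rather than averaged away.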

Methods for Explainability that Incorporate Fairness Measures

This report describes novel explainability methods that provide explanations of the fairness of the outcomes of AI algorithms, tying the produced explanations to the respective fairness definitions. In particular, we will focus on how to incorporate additional factors into the optimization objective of explainability algorithms such as counterfactuals or SHAP, in order to account for changes in fairness-definition configurations. The report will also describe their initial implementations.
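A minimal sketch of folding an additional factor into a counterfactual objective: the toy threshold classifier, the grid search, and the penalty on changing one feature (treated as a proxy for a sensitive attribute) are all assumptions made for illustration, not the project's algorithms:

```python
from itertools import product

def model(x):
    # Toy stand-in classifier: approve when income - debt >= 2.
    income, debt = x
    return 1 if income - debt >= 2 else 0

def counterfactual(x0, lam=5.0):
    """Search a small grid for the closest input that flips the decision.

    The objective is L1 distance plus `lam` times the change in the
    second feature -- the 'additional factor' folded into the
    counterfactual optimization objective.
    """
    best, best_cost = None, float("inf")
    for income, debt in product(range(0, 11), range(0, 11)):
        x = (income, debt)
        if model(x) == model(x0):
            continue  # not a counterfactual: decision unchanged
        dist = abs(income - x0[0]) + abs(debt - x0[1])
        cost = dist + lam * abs(debt - x0[1])
        if cost < best_cost:
            best, best_cost = x, cost
    return best

cf = counterfactual((1, 3))  # originally rejected by the toy model
```

Raising `lam` steers the search toward counterfactuals that leave the penalized feature untouched, which is how a fairness-definition configuration can reshape the explanations produced.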

First Periodical Management Report

The periodical management report has two parts. (1) The Technical Report's Part B is a project status report. It presents a narrative of the progress of the work, from the project objectives and work packages down to descriptions of progress on individual tasks. It also covers detected risks and the associated mitigation measures, and explains any deviations from the original plan. (2) The Financial Report covers the project costs incurred during the period.

Best Practices of Fair AI Provisions in Terms of Service / End User Agreements

This report will suggest and develop model contract clauses in relation to FairAI. Notably, it will show how to incorporate the a-priori and post-hoc techniques developed in the project into such contracts. It will also assess existing AI-based service providers' contracts on the basis of their treatment of Fairness, Explainability, and Trust in AI.

A Study on Fair AI Policies and Regimes

In order to ground the research and development in the relevant regulation of the EU and selected third countries (especially in the WOR use case, where many clients are multinationals with large operations in the US), this report will present a comprehensive overview of AI-fairness policy (to be updated as a live document), providing guidance and suggestions to the consortium. It will report on workshops with regulators dealing with AI fairness. It will explain the regulatory tensions (e.g. between property and fairness-protection regimes) and suggest potential strategies for their resolution. It will develop lists of best regulatory practices. It will suggest and develop model contract clauses in relation to FairAI and assess existing AI-based service providers' contracts on their basis.

Pareto Front Sampling and Visualization

The trade-off between different measures of fairness is captured by the so-called Pareto front, a representation of the most one can achieve on a particular measure without sacrificing another. With many measures of production cost, machine-learning inference quality, and fairness, the Pareto front is a high-dimensional object. We consider both low-dimensional projections of the Pareto front and visual representations of the relative position of a proposed solution within those projections.
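For intuition, a minimal sketch of extracting a Pareto front from sampled candidates, here phrased with every objective to be minimized (the candidate values are invented for the example):

```python
def pareto_front(points):
    """Return the non-dominated points (all objectives minimized).

    A point q dominates p if q is <= p in every coordinate and differs
    from p; the Pareto front is the set of points nothing dominates.
    """
    front = []
    for p in points:
        dominated = any(
            all(o <= c for o, c in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, unfairness) pairs for candidate models.
candidates = [(1.0, 0.9), (2.0, 0.5), (3.0, 0.2), (2.5, 0.6), (1.5, 0.95)]
front = pareto_front(candidates)
```

With more than two objectives, plots of two coordinates of the surviving points are exactly the low-dimensional projections the deliverable describes; this quadratic-time filter is only a sketch, and denser samples would call for a faster non-dominated sort.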

An Open-Source Toolkit for Human-guided Automation of Fairness in AI

Using the tools developed in the previous work packages, we intend to integrate the stochastic optimization procedures, stochastic control methods, and reinforcement-learning-based black-box algorithms into a package built to high standards of scientific software engineering. We expect to enable interfacing the Backend with the AI Fairness 360 Toolkit. Using the Fair AI Backend, we intend to develop an interface with high-quality GUI and visualization tools to enable end users to seamlessly input data, perform risk-free simulations, visualize the Pareto front, and use reinforcement learning to suggest a prior for optimal decisions in AI solution design and implementation.

An Open-Source Toolbox for Explainability

We will develop an open-source toolbox for the methods developed in T6.1 and T6.2, either within AI Explainability 360 or independently. To this end, we will deploy multiple proxy models, adapting their sampling process to represent the characteristics of different subgroups, with the aim of identifying and explaining discrepancies in the decisions and explanations between subgroups.

Web Page and Related Infrastructure

The project website will be the base for the dissemination activities projected through it, including the project details, project progress, promotional videos, and participation in scientific conferences and business venues. All publications derived from the project will be linked from the project website, in their arXiv and archival forms, to maximize the project's visibility. Moreover, the promotion plans include the setup and maintenance of LinkedIn groups dedicated to the project, and a printed fact sheet.

Data Management Plan Update

An updated version of the data management plan will be made available in time for the mid-term review.

Data Management Plan

The first version of the DMP will be submitted within the first 6 months of the project.

Final Data Management Plan Update

An updated version of the data management plan will be made available in time for the final review, as per the Research Data Management Plans During The Project Life Cycle guidelines.

Publications

Group-blind optimal transport to group parity and its constrained variants

Authors: Zhou, Quan; Marecek, Jakub
Published in: 2023
DOI: 10.48550/arxiv.2310.11407

A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

Authors: Qiu, Songqiang; Kungurtsev, Vyacheslav
Published in: 2023
DOI: 10.48550/arxiv.2302.07947

A novel framework for handling sparse data in traffic forecast

Authors: Zygouras, Nikolaos; Gunopulos, Dimitrios
Published in: Proceedings of the 30th International Conference on Advances in Geographic Information Systems, 2022
DOI: 10.48550/arxiv.2301.05292

Explaining Knock-on Effects of Bias Mitigation

Authors: Nizhnichenkov, Svetoslav; Nair, Rahul; Daly, Elizabeth; Mac Namee, Brian
Published in: 2023
DOI: 10.48550/arxiv.2312.00765

Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions

Authors: Aspman, Johannes; Kungurtsev, Vyacheslav; Seraji, Reza Roohi
Published in: 2023
DOI: 10.48550/arxiv.2302.00709

Piecewise Polynomial Regression of Tame Functions via Integer Programming

Authors: Bareilles, Gilles; Aspman, Johannes; Nemecek, Jiri; Marecek, Jakub
Published in: ICLR 2025 Workshops, 2025
DOI: 10.48550/ARXIV.2311.13544

Efficient Fairness-Performance Pareto Front Computation

Authors: Kozdoba, Mark; Perets, Binyamin; Mannor, Shie
Published in: NeurIPS 2025, 2025
DOI: 10.48550/ARXIV.2409.17643

Prediction-driven resource provisioning for serverless container runtimes

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
Publisher: IEEE
DOI: 10.48550/ARXIV.2410.19215

Generalizing while preserving monotonicity in comparison-based preference learning models

Authors: Julien Fageot, Peva Blanchard, Gilles Bareilles, Lê-Nguyên Hoang
Published in: NeurIPS 2025, 2025
Publisher: NeurIPS
DOI: 10.48550/ARXIV.2506.08616

Fairness in Ranking: Robustness through Randomization without the Protected Attribute

Authors: Kliachkin, Andrii; Psaroudaki, Eleni; Marecek, Jakub; Fotakis, Dimitris
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2403.19419

Closed-Loop View of the Regulation of AI: Equal Impact across Repeated Interactions

Authors: Zhou, Quan; Ghosh, Ramen; Shorten, Robert; Marecek, Jakub
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
Publisher: IEEE
DOI: 10.48550/ARXIV.2209.01410

GLANCE: Global Actions in a Nutshell for Counterfactual Explainability

Authors: Kavouras, Loukas; Psaroudaki, Eleni; Tsopelas, Konstantinos; Rontogiannis, Dimitrios; Theologitis, Nikolaos; Sacharidis, Dimitris; Giannopoulos, Giorgos; Tomaras, Dimitrios; Markou, Kleopatra; Gunopulos, Dimitrios; Fotakis, Dimitris; Emiris, Ioannis
Published in: 2024
DOI: 10.48550/ARXIV.2405.18921

Interpretable Differencing of Machine Learning Models

Authors: Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly
Published in: UAI 2023, 2023
Publisher: Association for Uncertainty in Artificial Intelligence
DOI: 10.48550/ARXIV.2306.06473

Hybrid Methods in Polynomial Optimisation

Authors: Johannes Aspman, Gilles Bareilles, Vyacheslav Kungurtsev, Jakub Marecek, Martin Takáč
Published in: Foundations of Computational Mathematics, 2023
Publisher: Foundations of Computational Mathematics

Time-Varying Multi-Objective Optimization: Tradeoff Regret Bounds

Authors: Shafiei, Allahkaram; Kungurtsev, Vyacheslav; Marecek, Jakub
Published in: LION 2025, 2025
DOI: 10.48550/ARXIV.2211.09774

Energy Efficient Scheduling for Serverless Systems

Authors: Tsenos, Michail; Peri, Aristotelis; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
Publisher: IEEE
DOI: 10.48550/ARXIV.2410.06695

Cookie Consent Has Disparate Impact on Estimation Accuracy

Authors: Erik Miehling, Rahul Nair, Elizabeth Daly, Karthikeyan Natesan Ramamurthy, Robert Redmond
Published in: NeurIPS 2023, 2023, ISSN 1049-5258
Publisher: neurips.cc

Optimal Transport for Fairness: Archival Data Repair using Small Research Data Sets

Authors: Langbridge, Abigail; Quinn, Anthony; Shorten, Robert
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2403.13864

TIMBER: On supporting data pipelines in Mobile Cloud Environments

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana; Gunopulos, Dimitrios
Published in: 2024 25th IEEE International Conference on Mobile Data Management (MDM), 2024
DOI: 10.48550/ARXIV.2410.18106

Practical Privacy Preservation in a Mobile Cloud Environment

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana
Published in: 23rd IEEE International Conference on Mobile Data Management (MDM), 2022
DOI: 10.48550/arxiv.2302.04463

A Framework for Feasible Counterfactual Exploration incorporating Causality, Sparsity and Density

Authors: Markou, Kleopatra; Tomaras, Dimitrios; Kalogeraki, Vana; Gunopulos, Dimitrios
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2404.13476

Generating Likely Counterfactuals Using Sum-Product Networks

Authors: Nemecek, Jiri; Pevny, Tomas; Marecek, Jakub
Published in: ICLR 2025, 2024
DOI: 10.48550/ARXIV.2401.14086

Optimization or Architecture: How to Hack Kalman Filtering

Authors: Greenberg, Ido; Yannay, Netanel; Mannor, Shie
Published in: NeurIPS 2023, 2023
DOI: 10.48550/ARXIV.2310.00675

Orchestrating the Execution of Serverless Functions in Hybrid Clouds

Authors: Peri, Aristotelis; Tsenos, Michail; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
DOI: 10.48550/ARXIV.2410.06721

Co-creating a globally interpretable model with human input

Authors: Rahul Nair
Published in: ICML 2023 Workshop on Artificial Intelligence & Human-Computer Interaction, 2023, ISSN 2640-3498
Publisher: MLResearchPress

Auditing for Spatial Fairness

Authors: Dimitris Sacharidis, Giorgos Giannopoulos, George Papastefanatos, Kostas Stefanidis
Published in: 26th International Conference on Extending Database Technology (EDBT), 2023
Publisher: EDBT

Taming Binarized Neural Networks and Mixed-Integer Programs

Authors: Aspman, Johannes; Korpas, Georgios; Marecek, Jakub
Published in: AAAI 2024, 2024
DOI: 10.48550/arxiv.2310.04469

Fairness Aware Counterfactuals for Subgroups

Authors: Kavouras, Loukas; Tsopelas, Konstantinos; Giannopoulos, Giorgos; Sacharidis, Dimitris; Psaroudaki, Eleni; Theologitis, Nikolaos; Rontogiannis, Dimitrios; Fotakis, Dimitris; Emiris, Ioannis Z.
Published in: NeurIPS 2023, 2023
DOI: 10.48550/ARXIV.2306.14978

Predictability and fairness in load aggregation and operations of virtual power plants

Authors: Jakub Mareček, Michal Roubalik, Ramen Ghosh, Robert N. Shorten, Fabian R. Wirth
Published in: Automatica, 2023, ISSN 0005-1098
Publisher: Pergamon Press Ltd.
DOI: 10.1016/j.automatica.2022.110743

Journal of Artificial Intelligence Research

Authors: Quan Zhou, Jakub Mareček, Robert Shorten
Published in: The Journal of Artificial Intelligence Research (JAIR), 2023, ISSN 1076-9757
Publisher: Morgan Kaufmann Publishers, Inc.
DOI: 10.1613/jair.1.14050

International Journal of Control

Authors: Wynita M. Griggs; Ramen Ghosh; Jakub Mareček; Robert N. Shorten
Published in: International Journal of Control, 2025, ISSN 0020-7179
Publisher: Taylor and Francis
DOI: 10.1080/00207179.2025.2469281

Automatica

Authors: Vyacheslav Kungurtsev, Jakub Marecek, Ramen Ghosh, Robert Shorten
Published in: Automatica, 2023, ISSN 0005-1098
Publisher: Pergamon Press Ltd.
DOI: 10.1016/j.automatica.2023.110946

International Journal of Control

Authors: Ferraro, P; Yu, JY; Ghosh, R; Alam, SE; Marecek, J; Wirth, F; Shorten, R
Published in: International Journal of Control, 2023, ISSN 0020-7179
Publisher: Taylor & Francis
DOI: 10.48550/ARXIV.2209.13273

Journal of Chemical Information and Modeling

Authors: Zamanos, Andreas; Ioannakis, George; Emiris, Ioannis
Published in: Journal of Chemical Information and Modeling, 2024, ISSN 1549-9596
Publisher: American Chemical Society
DOI: 10.1021/ACS.JCIM.3C01559

Mathematical Methods of Operations Research

Authors: Allahkaram Shafiei, Vyacheslav Kungurtsev, Jakub Marecek
Published in: Mathematical Methods of Operations Research, Volume 99, pages 77–114 (2024), ISSN 1432-2994
Publisher: Springer Verlag
DOI: 10.1007/S00186-024-00852-5

PLoS ONE

Authors: Quan Zhou, Jakub Mareček, Robert Shorten
Published in: PLoS ONE, 2023, ISSN 1932-6203
Publisher: Public Library of Science
DOI: 10.1371/journal.pone.0281443
