CORDIS - EU research results

Human-Compatible Artificial Intelligence with Guarantees

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications of FP7 projects, as well as links to some specific types of results such as datasets and software, are dynamically retrieved from OpenAIRE.

Final results

Explainability Methods that Identify and Explain Differences in Fairness between Sensitive Subgroups

This report will describe a method for producing separate explanations for each affected sensitive subgroup. We will also present the trade-offs, using techniques developed in WP4 (Task 4.1) and what-if analyses.
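
The idea of per-subgroup explanations can be illustrated with a minimal numpy sketch, in which a linear surrogate model is fit to a black-box predictor separately for each subgroup; the function name `subgroup_surrogates` and the toy model are illustrative assumptions, not the project's actual method:

```python
import numpy as np

def subgroup_surrogates(X, groups, predict):
    """Fit a linear surrogate of the black-box `predict` separately for
    each subgroup; differing coefficient vectors indicate features whose
    influence on the model differs across subgroups."""
    y = predict(X)
    out = {}
    for g in np.unique(groups):
        Xg, yg = X[groups == g], y[groups == g]
        A = np.c_[Xg, np.ones(len(Xg))]           # append intercept column
        coef, *_ = np.linalg.lstsq(A, yg, rcond=None)
        out[g] = coef[:-1]                        # drop the intercept
    return out

# Toy black box whose reliance on feature 0 differs by subgroup:
# the group indicator is carried as the third feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
grp = rng.integers(0, 2, size=400)
Xf = np.c_[X, grp]
predict = lambda Z: Z[:, 0] * (1.0 + Z[:, 2]) + 0.5 * Z[:, 1]
coefs = subgroup_surrogates(Xf, grp, predict)
# coefs[0][0] is near 1.0 while coefs[1][0] is near 2.0: feature 0
# influences the model about twice as strongly for subgroup 1.
```

Comparing the per-subgroup coefficient vectors then surfaces exactly the kind of fairness-relevant differences the report addresses.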

An Initial Report from the Living Labs: Stakeholder Analysis, Requirements Analysis, and First Results of Choice Experiments

This report will describe the Living Lab (LL) hosted by each use case. These LLs will convene a cross-section of key actors, to form a cross-sectoral working group which will serve as a focal point for interactions with stakeholders of each of the respective use cases. The LLs will comprise stakeholders such as public authorities, local business representatives, NGOs, policy makers, etc., and seek to embed their perspective into the research process, increasing the potential for practical application of project results. Stakeholder engagement within the LLs will be central to economic methods such as cost benefit analysis, choice experiments, and revealed preference analysis, in order to develop effective financial instruments to incentivise behaviour change, and inform policy development.

Report on the Open-Source Software Developed

Open-source software developed within WP2-WP6 will be maintained in a single family of libraries under the Apache license, with principled design and management. This report will present the family of libraries and summarize the process of its development.

Second Periodical Management Report

The periodical management report will have two parts. (1) The Technical Report's Part B is a project status report. It presents a narrative of the progression of the work done, from project objectives and work packages to descriptions of the progress of individual tasks. It also covers detected risks and the associated mitigation measures, and explains any deviations from the original plan. (2) The Financial Report covers the project costs incurred during the period.

Initial Report on Dissemination and Exploitation

This report will present market analyses and the refinement of business potential for the innovation generated by the project's work. Interim research results will be combined with insight from the business analysis in defining and classifying the innovative features of products and services enabled by the beyond-the-state-of-the-art contributions of the project. Competitive analysis will contribute information about alternative offerings available in the market and provide insight into possible synergies with existing offerings. This will underlie the exploitation and dissemination plans of the consortium.

Report on User-induced Feedback Loops

Fairness in industrial applications is a critical area that will involve humans in the loop at various stages. This report will describe methods to introduce guard rails or safety measures when allowing users to influence an AI system. Several challenges exist in online learning scenarios when reward signals come from users. Preference elicitation over possible trade-offs in user feedback will also be described.

Methods to Measure and Assess Model Safety in the Context of Fairness

This report will describe methods and metrics for model transparency that directly include safety assessments, with particular focus on fairness dimensions. The ultimate measure of explainability is, indeed, whether humans find it useful, but less attention has been paid to the equally important goals of safety and preventing unexpected harms. The report will elaborate upon those.

Methods for Explainability that Incorporate Fairness Measures

This report describes novel explainability methods that provide explanations of the fairness of the outcomes of AI algorithms, tying the produced explanations to the respective fairness definitions. In particular, we will focus on how to incorporate additional factors into the optimization objective of explainability algorithms such as counterfactuals or SHAP, in order to account for changes in fairness definition configurations. The report will also describe their initial implementations.
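
The notion of adding a fairness term to a counterfactual objective can be sketched in a few lines of numpy; the random-search strategy, the function name `fair_counterfactual`, and the toy penalty below are illustrative assumptions, not the project's actual algorithm:

```python
import numpy as np

def fair_counterfactual(x, predict, target, fairness_pen, lam=1.0,
                        n_samples=5000, scale=0.5, seed=0):
    """Random-search sketch of a counterfactual explanation with a
    fairness term: among perturbations of x that flip the model's
    output to `target`, return the one minimising
        ||x' - x|| + lam * fairness_pen(x').
    """
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        cand = x + rng.normal(scale=scale, size=x.shape)
        if predict(cand) != target:
            continue                      # not a valid counterfactual
        cost = np.linalg.norm(cand - x) + lam * fairness_pen(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy model: "approve" iff the feature sum exceeds 1.
predict = lambda z: int(z.sum() > 1.0)
x = np.array([0.2, 0.3])                  # currently rejected
pen = lambda z: abs(z[0] - z[1])          # hypothetical fairness-related term
cf = fair_counterfactual(x, predict, 1, pen)
```

Varying `lam` reweights the fairness term relative to the usual proximity objective, which is the kind of configurable trade-off the report describes.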

First Periodical Management Report

The periodical management report will have two parts. (1) The Technical Report's Part B is a project status report. It presents a narrative of the progression of the work done, from project objectives and work packages to descriptions of the progress of individual tasks. It also covers detected risks and the associated mitigation measures, and explains any deviations from the original plan. (2) The Financial Report covers the project costs incurred during the period.

Best Practices of Fair AI provisions in Terms of Service / End User Agreements

This report will suggest and develop model contract clauses in relation to FairAI. Notably, it will show how to incorporate the a-priori and post-hoc techniques developed in the project into the contracts. It will also assess existing AI-based service providers' contracts on the basis of their treatment of Fairness, Explainability, and Trust in AI.

A Study on Fair AI Policies and Regimes

In order to ground the research and development in the relevant regulation of the EU and selected third countries (especially in the WOR use case, where many clients are multinationals with large operations in the US), this report will present a comprehensive overview of AI-fairness policy (to be updated as a live document), providing guidance and suggestions to the consortium. It will report on workshops with regulators dealing with AI fairness. It will explain regulatory tensions (e.g. between property and fairness protection regimes) and suggest potential strategies for their resolution. It will develop lists of best regulatory practices. It will also suggest and develop model contract clauses in relation to FairAI and assess existing AI-based service providers' contracts on that basis.

Pareto Front Sampling and Visualization

The trade-off between different measures of fairness is captured by the so-called Pareto front, a visual representation of the most one can achieve of a particular measure without sacrificing another. With many measures of production costs, machine-learning inference quality, and fairness, the Pareto front is a high-dimensional object. We consider both low-dimensional projections of the Pareto front and visual representations of the relative position of a proposed solution on those projections.
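
As a minimal sketch of the underlying notion (assuming all objectives are to be minimised; the function name and the toy objective values are illustrative, not the deliverable's actual implementation):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (all objectives
    minimised): a row stays on the front if no other row is at least
    as good in every objective and strictly better in at least one."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            keep[i] = False
    return points[keep]

# Toy trade-off between model error and an unfairness measure.
candidates = np.array([
    [0.10, 0.30],
    [0.20, 0.10],
    [0.25, 0.25],   # dominated by [0.20, 0.10], so excluded
    [0.15, 0.20],
])
front = pareto_front(candidates)   # the three non-dominated rows
```

With more than two or three objectives the front computed this way is high-dimensional, which is why the deliverable focuses on low-dimensional projections for visualization.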

An Open-Source Toolkit for Human-guided Automation of Fairness in AI

Using the tools developed in the previous work packages, we intend to integrate the stochastic optimization procedures, stochastic control methods, and reinforcement-learning-based black-box algorithms into a package built to high standards of scientific software engineering. We expect to enable interfacing the Backend with the AI Fairness 360 toolkit. Using the Fair AI Backend, we intend to develop an interface with high-quality GUI and visualization tools to enable end users to seamlessly input data, perform risk-free simulations, visualize the Pareto front, and use reinforcement learning to suggest a prior for optimal decisions of AI solution design and implementation.

An Open-Source Toolbox for Explainability

We will develop an open-source toolbox for the methods developed in T6.1 and T6.2, either within AI Explainability 360 or independently. To this end, we will deploy multiple proxy models, adapting their sampling process to represent the characteristics of different subgroups, with the aim of identifying and explaining discrepancies in the decisions and explanations between subgroups.

Web Page and Related Infrastructure

The project website will be the base for the dissemination activities. These will include the project details, project progress, promotional videos, and participation in scientific conferences and business venues. All publications derived from the project will be linked from the project website in their arXiv and archival forms to maximize the project's visibility. Moreover, the project promotion plans include the setup and maintenance of LinkedIn groups dedicated to project promotion, and a printed fact sheet.

Data Management Plan Update

An updated version of the data management plan will be made available in time for the mid-term review.

Data Management Plan

The first version of the DMP will be submitted within the first 6 months of the project.

Final Data Management Plan Update

An updated version of the data management plan will be made available in time for the final review, as per the Research Data Management Plans During The Project Life Cycle guidelines.

Publications

Group-blind optimal transport to group parity and its constrained variants

Authors: Zhou, Quan; Marecek, Jakub
Published in: 2023
DOI: 10.48550/arxiv.2310.11407

A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

Authors: Qiu, Songqiang; Kungurtsev, Vyacheslav
Published in: 2023
DOI: 10.48550/arxiv.2302.07947

A novel framework for handling sparse data in traffic forecast

Authors: Zygouras, Nikolaos; Gunopulos, Dimitrios
Published in: Proceedings of the 30th International Conference on Advances in Geographic Information Systems, 2022
DOI: 10.48550/arxiv.2301.05292

Explaining Knock-on Effects of Bias Mitigation

Authors: Nizhnichenkov, Svetoslav; Nair, Rahul; Daly, Elizabeth; Mac Namee, Brian
Published in: 2023
DOI: 10.48550/arxiv.2312.00765

Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions

Authors: Aspman, Johannes; Kungurtsev, Vyacheslav; Seraji, Reza Roohi
Published in: 2023
DOI: 10.48550/arxiv.2302.00709

Piecewise Polynomial Regression of Tame Functions via Integer Programming

Authors: Bareilles, Gilles; Aspman, Johannes; Nemecek, Jiri; Marecek, Jakub
Published in: ICLR 2025 Workshops, 2025
DOI: 10.48550/ARXIV.2311.13544

Efficient Fairness-Performance Pareto Front Computation

Authors: Kozdoba, Mark; Perets, Binyamin; Mannor, Shie
Published in: NeurIPS 2025, 2025
DOI: 10.48550/ARXIV.2409.17643

Prediction-driven resource provisioning for serverless container runtimes

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
Publisher: IEEE
DOI: 10.48550/ARXIV.2410.19215

Generalizing while preserving monotonicity in comparison-based preference learning models

Authors: Julien Fageot, Peva Blanchard, Gilles Bareilles, Lê-Nguyên Hoang
Published in: NeurIPS 2025, 2025
Publisher: NeurIPS
DOI: 10.48550/ARXIV.2506.08616

Fairness in Ranking: Robustness through Randomization without the Protected Attribute

Authors: Kliachkin, Andrii; Psaroudaki, Eleni; Marecek, Jakub; Fotakis, Dimitris
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2403.19419

Closed-Loop View of the Regulation of AI: Equal Impact across Repeated Interactions

Authors: Zhou, Quan; Ghosh, Ramen; Shorten, Robert; Marecek, Jakub
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
Publisher: IEEE
DOI: 10.48550/ARXIV.2209.01410

GLANCE: Global Actions in a Nutshell for Counterfactual Explainability

Authors: Kavouras, Loukas; Psaroudaki, Eleni; Tsopelas, Konstantinos; Rontogiannis, Dimitrios; Theologitis, Nikolaos; Sacharidis, Dimitris; Giannopoulos, Giorgos; Tomaras, Dimitrios; Markou, Kleopatra; Gunopulos, Dimitrios; Fotakis, Dimitris; Emiris, Ioannis
Published in: 2024
DOI: 10.48550/ARXIV.2405.18921

Interpretable Differencing of Machine Learning Models

Authors: Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly
Published in: UAI 2023, 2023
Publisher: Association for Uncertainty in Artificial Intelligence

Hybrid Methods in Polynomial Optimisation

Authors: Johannes Aspman, Gilles Bareilles, Vyacheslav Kungurtsev, Jakub Marecek, Martin Takáč
Published in: Foundations of Computational Mathematics, 2023
Publisher: Foundations of Computational Mathematics

Time-Varying Multi-Objective Optimization: Tradeoff Regret Bounds

Authors: Shafiei, Allahkaram; Kungurtsev, Vyacheslav; Marecek, Jakub
Published in: LION 2025, 2025
DOI: 10.48550/ARXIV.2211.09774

Energy Efficient Scheduling for Serverless Systems

Authors: Tsenos, Michail; Peri, Aristotelis; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
Publisher: IEEE
DOI: 10.48550/ARXIV.2410.06695

Cookie Consent Has Disparate Impact on Estimation Accuracy

Authors: Erik Miehling, Rahul Nair, Elizabeth Daly, Karthikeyan Natesan Ramamurthy, Robert Redmond
Published in: NeurIPS 2023, 2023, ISSN 1049-5258
Publisher: neurips.cc

Optimal Transport for Fairness: Archival Data Repair using Small Research Data Sets

Authors: Langbridge, Abigail; Quinn, Anthony; Shorten, Robert
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2403.13864

TIMBER: On supporting data pipelines in Mobile Cloud Environments

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana; Gunopulos, Dimitrios
Published in: 2024 25th IEEE International Conference on Mobile Data Management (MDM), 2024
DOI: 10.48550/ARXIV.2410.18106

Practical Privacy Preservation in a Mobile Cloud Environment

Authors: Tomaras, Dimitrios; Tsenos, Michail; Kalogeraki, Vana
Published in: 23rd IEEE International Conference on Mobile Data Management (MDM), 2022
DOI: 10.48550/arxiv.2302.04463

A Framework for Feasible Counterfactual Exploration incorporating Causality, Sparsity and Density

Authors: Markou, Kleopatra; Tomaras, Dimitrios; Kalogeraki, Vana; Gunopulos, Dimitrios
Published in: 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 2024
DOI: 10.48550/ARXIV.2404.13476

Generating Likely Counterfactuals Using Sum-Product Networks

Authors: Nemecek, Jiri; Pevny, Tomas; Marecek, Jakub
Published in: ICLR 2025, 2024
DOI: 10.48550/ARXIV.2401.14086

Optimization or Architecture: How to Hack Kalman Filtering

Authors: Greenberg, Ido; Yannay, Netanel; Mannor, Shie
Published in: NeurIPS 2023, 2023
DOI: 10.48550/ARXIV.2310.00675

Orchestrating the Execution of Serverless Functions in Hybrid Clouds

Authors: Peri, Aristotelis; Tsenos, Michail; Kalogeraki, Vana
Published in: 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2023
DOI: 10.48550/ARXIV.2410.06721

Co-creating a globally interpretable model with human input

Authors: Rahul Nair
Published in: ICML 2023 Workshop Artificial Intelligence & Human-Computer Interaction, 2023, ISSN 2640-3498
Publisher: MLResearchPress

Interpretable Differencing of Machine Learning Models

Authors: Haldar, Swagatam; Saha, Diptikalyan; Wei, Dennis; Nair, Rahul; Daly, Elizabeth M.
Published in: Proceedings of Machine Learning Research, 2023
DOI: 10.48550/ARXIV.2306.06473

Auditing for Spatial Fairness

Authors: Dimitris Sacharidis, Giorgos Giannopoulos, George Papastefanatos, Kostas Stefanidis
Published in: 26th International Conference on Extending Database Technology (EDBT), 2023
Publisher: EDBT

Taming Binarized Neural Networks and Mixed-Integer Programs

Authors: Aspman, Johannes; Korpas, Georgios; Marecek, Jakub
Published in: AAAI 2024, 2024
DOI: 10.48550/arxiv.2310.04469

Fairness Aware Counterfactuals for Subgroups

Authors: Kavouras, Loukas; Tsopelas, Konstantinos; Giannopoulos, Giorgos; Sacharidis, Dimitris; Psaroudaki, Eleni; Theologitis, Nikolaos; Rontogiannis, Dimitrios; Fotakis, Dimitris; Emiris, Ioannis Z.
Published in: NeurIPS 2023, 2023
DOI: 10.48550/ARXIV.2306.14978

Predictability and fairness in load aggregation and operations of virtual power plants

Authors: Jakub Mareček, Michal Roubalik, Ramen Ghosh, Robert N. Shorten, Fabian R. Wirth
Published in: Automatica, 2023, ISSN 0005-1098
Publisher: Pergamon Press Ltd.
DOI: 10.1016/j.automatica.2022.110743

Journal of Artificial Intelligence Research

Authors: Quan Zhou, Jakub Mareček, Robert Shorten
Published in: The Journal of Artificial Intelligence Research (JAIR), 2023, ISSN 1076-9757
Publisher: Morgan Kaufmann Publishers, Inc.
DOI: 10.1613/jair.1.14050

International Journal of Control

Authors: Wynita M. Griggs; Ramen Ghosh; Jakub Mareček; Robert N. Shorten
Published in: International Journal of Control, 2025, ISSN 0020-7179
Publisher: Taylor and Francis
DOI: 10.1080/00207179.2025.2469281

Automatica

Authors: Vyacheslav Kungurtsev, Jakub Marecek, Ramen Ghosh, Robert Shorten
Published in: Automatica, 2023, ISSN 0005-1098
Publisher: Pergamon Press Ltd.
DOI: 10.1016/j.automatica.2023.110946

International Journal of Control

Authors: Ferraro, P; Yu, JY; Ghosh, R; Alam, SE; Marecek, J; Wirth, F; Shorten, R
Published in: International Journal of Control, 2023, ISSN 0020-7179
Publisher: Taylor & Francis
DOI: 10.48550/ARXIV.2209.13273

Journal of Chemical Information and Modeling

Authors: Zamanos, Andreas; Ioannakis, George; Emiris, Ioannis
Published in: Journal of Chemical Information and Modeling, 2024, ISSN 1549-9596
Publisher: American Chemical Society
DOI: 10.1021/ACS.JCIM.3C01559

Mathematical Methods of Operations Research

Authors: Allahkaram Shafiei, Vyacheslav Kungurtsev, Jakub Marecek
Published in: Mathematical Methods of Operations Research, Volume 99, pages 77–114 (2024), ISSN 1432-2994
Publisher: Springer Verlag
DOI: 10.1007/S00186-024-00852-5

PLoS ONE

Authors: Quan Zhou, Jakub Mareček, Robert Shorten
Published in: PLoS One, 2023, ISSN 1932-6203
Publisher: Public Library of Science
DOI: 10.1371/journal.pone.0281443
