Periodic Reporting for period 1 - AutoFair (Human-Compatible Artificial Intelligence with Guarantees)
Reporting period: 2022-10-01 to 2024-03-31
-- Comprehensive and flexible certification of fairness. At one end, we develop novel systems with a priori guarantees, imposing selected bias measures as hard constraints in the training process (see the sketch after this list). At the other end, we provide post hoc, comprehensible yet thorough presentations of all the tradeoffs involved in existing systems.
-- User-in-the-loop: continuous, iterative engagement among AI systems, their developers, and their users. We seek both to inform users thoroughly about the possible algorithmic choices and their expected effects, and to learn their preferences over different fairness measures, subsequently guiding decision making so as to bring together the benefits of automation in a human-compatible manner.
-- Toolkits for the automatic identification of various types of bias and for their joint compensation, automatically optimizing several potentially conflicting objectives (fairness/accuracy/runtime/resources), visualising the tradeoffs, and communicating them to industrial users, government agencies, NGOs, or members of the public, where appropriate.
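As a minimal illustration of the first point, a bias measure can enter the training objective directly. The sketch below penalizes the demographic-parity gap of a logistic regression during gradient descent; the synthetic data, the penalty weight lam, and the soft-penalty formulation (a stand-in for a hard constraint) are illustrative assumptions, not the project's actual tooling.

```python
# Minimal sketch: logistic regression trained with a demographic-parity
# penalty added to the log-loss, so a bias measure shrinks during training.
# All data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)                  # protected attribute (0/1)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam, lr = 5.0, 0.1                              # penalty weight, step size
for _ in range(500):
    p = sigmoid(X @ w)
    # Demographic-parity gap: difference in mean predicted score per group.
    gap = p[a == 1].mean() - p[a == 0].mean()
    grad_ll = X.T @ (p - y) / n                 # gradient of mean log-loss
    s = p * (1 - p)                             # sigmoid derivative terms
    grad_gap = (X[a == 1].T @ s[a == 1]) / (a == 1).sum() \
             - (X[a == 0].T @ s[a == 0]) / (a == 0).sum()
    # Loss is log-loss + lam * gap**2, hence the chain-rule factor below.
    w -= lr * (grad_ll + lam * 2 * gap * grad_gap)

p = sigmoid(X @ w)
print("demographic-parity gap:", p[a == 1].mean() - p[a == 0].mean())
```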
USE CASES
-- workable.com is the world’s leading hiring platform, where companies find, evaluate, and hire better candidates, faster. Individual and group fairness among the candidates is clearly crucial for retaining the companies’ custom.
-- dateio.eu is a fintech company processing credit-card data in 20 markets, including running a card-linked marketing platform that delivers targeted cash-back offers to banks’ clients.
WP3:
Giorgos Giannopoulos et al.: Fairness in AI: challenges in bridging the gap between algorithms and law. FAIR 2024.
Dimitris Sacharidis, Giorgos Giannopoulos, George Papastefanatos, Kostas Stefanidis: Auditing for Spatial Fairness. EDBT 2023.
Quan Zhou, Jakub Marecek, Robert N. Shorten: Subgroup fairness in two-sided markets. PLoS One, Volume 18(2), 2023, e0281443.
Sarah Boufelja Y., Anthony Quinn, Martin Corless, Robert Shorten: Fully Probabilistic Design for Optimal Transport. Communications in Optimization Theory.
WP4:
Inge Vejsbjerg, Elizabeth M. Daly, Rahul Nair, Svetoslav Nizhnichenkov: Interactive Human-Centric Bias Mitigation. Demonstration at AAAI 2024.
Andrii Kliachkin, Eleni Psaroudaki, Jakub Marecek, Dimitris Fotakis: Fairness in Ranking: Robustness through Randomization without the Protected Attribute. FAIR 2024.
Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly: Interpretable Differencing of Machine Learning Models. UAI 2023.
Rahul Nair: Co-creating a globally interpretable model with human input. ICML 2023 (workshop paper).
Abigail Langbridge, Anthony Quinn, Robert Shorten: Optimal Transport for Fairness: Archival Data Repair using Small Research Data Sets. FAIR 2024.
Vyacheslav Kungurtsev, Jakub Marecek, Ramen Ghosh, Robert N. Shorten: On the Ergodic Control of Ensembles in the Presence of Non-linear Filters. Automatica, Volume 152, June 2023, 110946.
Jakub Marecek, Michal Roubalik, Ramen Ghosh, Robert N. Shorten, Fabian R. Wirth: Predictability and fairness in load aggregation and operations of virtual power plants. Automatica, Volume 147, January 2023, 110743.
WP5:
Quan Zhou, Jakub Marecek, Robert N. Shorten: Fairness in Forecasting of Observations of Linear Dynamical Systems. Journal of Artificial Intelligence Research, Vol. 76 (2023).
Pietro Ferraro, Jia Yuan Yu, Ramen Ghosh, Syed Eqbal Alam, Jakub Marecek, Fabian Wirth, Robert Shorten: On Unique Ergodicity of Coupled AIMD Flows. International Journal of Control, to appear.
Johannes Aspman, Gilles Bareilles, Vyacheslav Kungurtsev, Jakub Marecek: Hybrid Methods in Polynomial Optimisation. FOCM 2023 (poster).
Francisco Facchinei, Vyacheslav Kungurtsev: Stochastic Approximation for Expectation Objective and Expectation Inequality-Constrained Nonconvex Optimization.
Fabio V. Difonzo, Vyacheslav Kungurtsev, Jakub Marecek: Stochastic Langevin Differential Inclusions with Applications to Machine Learning.
Quan Zhou, Ramen Ghosh, Robert Shorten, Jakub Marecek: Closed-Loop View of the Regulation of AI: Equal Impact across Repeated Interactions.
Johannes Aspman, Vyacheslav Kungurtsev, Reza Roohi Seraji: Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions.
Songqiang Qiu, Vyacheslav Kungurtsev: A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems.
WP6:
Loukas Kavouras et al.: Fairness Aware Counterfactuals for Subgroups. NeurIPS 2023.
Johannes Aspman, Georgios Korpas, Jakub Marecek: Taming Binarized Neural Networks and Mixed-Integer Programs. AAAI 2024.
Jiri Nemecek, Tomas Pevny, Jakub Marecek: Improving the Validity of Decision Trees as Explanations. Submitted, with a presentation at the Uncertainty meets Explainability Workshop @ ECML-PKDD 2023.
Giorgos Giannopoulos et al.: FALE: Fairness-Aware ALE Plots for Auditing Bias in Subgroups. Submitted, with a presentation at the Uncertainty meets Explainability Workshop @ ECML-PKDD 2023.
We develop methods both for training machine learning components (including neural networks) with fairness enforcement and for ensuring that AI pipelines composed of elements with fairness guarantees themselves exhibit fairness guarantees; a toy illustration follows. This is a crucial advance in the modularity and interoperability of fairness.
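The toy sketch below (not the project's method) shows why composition matters: two stages with individually mild disparate impact can compound into a markedly worse end-to-end disparate impact, so guarantees must hold for the composition. The features, thresholds, and two-stage hiring funnel are illustrative assumptions.

```python
# Toy sketch: auditing a two-stage decision pipeline end to end.
# Each stage alone has a mild disparate impact; their composition is worse.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a = rng.integers(0, 2, size=n)              # protected attribute (0/1)
x1 = rng.normal(size=n) + 0.2 * a           # feature used by stage 1
x2 = rng.normal(size=n) + 0.2 * a           # feature used by stage 2

s1 = x1 > 0                                 # stage 1: coarse screening
s2 = x2 > 0                                 # stage 2: final decision
pipeline = s1 & s2                          # composed pipeline decision

def disparate_impact(decision):
    # Ratio of favourable-outcome rates: unprivileged over privileged group.
    return decision[a == 0].mean() / decision[a == 1].mean()

print("stage-1 DI:   ", disparate_impact(s1))
print("stage-2 DI:   ", disparate_impact(s2))
print("end-to-end DI:", disparate_impact(pipeline))
```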
We develop methods for deriving fair counterparts of many elements of AI pipelines via randomization. This is crucial for the transition to fair AI pipelines, as it enables fairness-inducing governance of existing AI pipelines; a sketch of the idea follows. See WP4.
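As a hedged sketch of the randomization idea, consider post-processing a biased deterministic classifier so that group acceptance rates match in expectation. The scores, threshold, and target rate below are illustrative assumptions; the project's methods (e.g., randomized fair ranking in WP4) are considerably more general.

```python
# Sketch: a randomized fair counterpart of a deterministic classifier.
# Accepted candidates in the over-accepted group are kept with a
# group-dependent probability, equalizing acceptance rates in expectation.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
a = rng.integers(0, 2, size=n)              # protected attribute (0/1)
score = rng.normal(size=n) + 0.4 * a        # biased scores
accept = score > 0.5                        # original, unfair decision

r0, r1 = accept[a == 0].mean(), accept[a == 1].mean()
target = min(r0, r1)                        # common target acceptance rate
decision = accept.copy()
for g, r in ((0, r0), (1, r1)):
    mask = (a == g) & accept                # accepted members of group g
    # Keep each acceptance with probability target / r.
    decision[mask] = rng.random(mask.sum()) < (target / r)

print("rates before:", r0, r1)
print("rates after: ", decision[a == 0].mean(), decision[a == 1].mean())
```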
We develop tools and processes for the design, testing and validation, deployment and uptake, auditing, and certification (where relevant) of related software-engineering methodologies. We are contributing to AI Fairness 360 and AI Explainability 360, two projects of the LF AI & Data Foundation under the Linux Foundation.
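For instance, AI Fairness 360 already exposes standard bias measures; a minimal usage sketch follows, with the toy hiring data frame as an illustrative assumption.

```python
# Minimal sketch of computing bias measures with AI Fairness 360
# (https://github.com/Trusted-AI/AIF360); the toy data are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender": [0, 0, 0, 1, 1, 1, 1, 0],
    "score":  [0.2, 0.7, 0.4, 0.8, 0.6, 0.9, 0.3, 0.5],
    "hired":  [0, 1, 0, 1, 1, 1, 0, 0],
})
ds = BinaryLabelDataset(df=df, label_names=["hired"],
                        protected_attribute_names=["gender"])
metric = BinaryLabelDatasetMetric(ds,
                                  unprivileged_groups=[{"gender": 0}],
                                  privileged_groups=[{"gender": 1}])
# Disparate impact: ratio of favourable-outcome rates between the groups.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```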
We develop web interfaces that highlight fairness tradeoffs in AI and allow for what-if analyses, with visuals suitable for businesses, end users, and policy makers.
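A minimal sketch of the kind of tradeoff curve such an interface can display: sweeping a decision threshold and plotting accuracy against the demographic-parity gap. The synthetic scores and the threshold sweep are illustrative assumptions, not the project's interface.

```python
# Sketch of a what-if tradeoff view: accuracy vs. demographic-parity gap
# across decision thresholds, on synthetic scores.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n = 4000
a = rng.integers(0, 2, size=n)              # protected attribute (0/1)
y = rng.integers(0, 2, size=n)              # ground-truth labels
score = 0.6 * y + 0.2 * a + rng.normal(scale=0.3, size=n)

accs, gaps = [], []
for t in np.linspace(score.min(), score.max(), 50):
    pred = score > t
    accs.append((pred == y).mean())
    gaps.append(abs(pred[a == 1].mean() - pred[a == 0].mean()))

plt.scatter(gaps, accs)
plt.xlabel("demographic-parity gap")
plt.ylabel("accuracy")
plt.title("What-if analysis: decision threshold sweep")
plt.savefig("tradeoff.png")
```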