
Fair predictions in health

Project description

Ethical considerations of ML in healthcare

Machine learning (ML) is increasingly used in clinical care to improve diagnosis, the choice of therapy and the effectiveness of the health system. However, ML models learn from historically gathered data, which can underrepresent or misrepresent parts of the population that experience social, racial or gender discrimination, undermining the fairness of predictive models and raising ethical concerns. The EU-funded FPH project will map the ethical theories relevant to the distribution of resources in healthcare and link them to fair ML. It will examine how standard moral concepts can be understood in probabilistic terms, assess whether existing claims about fair models in AI are robust with respect to different philosophical understandings of probability, causality and counterfactuals, and demonstrate the relevance of these philosophical ideas.

Objective

In clinical care, machine learning is increasingly used to enhance diagnosis, the choice of therapy and the effectiveness of the health system. Because machine-learning models learn from historically gathered data, populations that have suffered past human and structural biases (e.g. unequal access to education or resources), called protected groups, are susceptible to harm from inaccurate predictions or resource allocations, reinforcing health inequalities. For example, racial and gender differences exist in the way clinical data are produced, and these differences can be transferred into models as biases. Several techniques of algorithmic fairness have been proposed in the machine-learning literature to improve the fairness of predictive models. The debate in statistics and machine learning has, however, failed to provide a principled approach for choosing concepts of bias, prejudice, discrimination and fairness in predictive models with a clear link to the ethical theory discussed within philosophy.
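To make these fairness notions concrete, the following minimal Python sketch (not part of the project; the cohort, scores and decision threshold are hypothetical) measures two widely used group fairness criteria, demographic parity and equalized odds, for a classifier whose scores inherit a bias against a protected group:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: a protected-group flag, true clinical outcomes, and
# model scores that are systematically lower for the protected group,
# mimicking bias inherited from historical data.
n = 10_000
group = rng.integers(0, 2, size=n)        # 1 = protected group
outcome = rng.binomial(1, 0.3, size=n)    # true clinical need
score = 0.5 * outcome + 0.3 * rng.random(n) - 0.1 * group
pred = (score > 0.25).astype(int)         # resource-allocation decision

# Demographic parity: compare P(pred = 1) across groups.
dp_gap = pred[group == 1].mean() - pred[group == 0].mean()

# Equalized odds (true-positive-rate component): compare
# P(pred = 1 | outcome = 1) across groups.
def tpr(g):
    return pred[(group == g) & (outcome == 1)].mean()

eo_gap = tpr(1) - tpr(0)

print(f"demographic parity gap: {dp_gap:+.3f}")
print(f"true-positive-rate gap: {eo_gap:+.3f}")

A nonzero gap on either criterion signals that, in this toy cohort, the protected group is systematically less likely to receive the resource, even at equal clinical need.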
The specific scientific objectives of this research project are:
O1: ethical theory: map the ethical theories that are relevant for the allocation of resources in health care and draw connections with the literature on fair machine learning
O2: probabilistic ethics: understand how standard moral concepts such as responsibility, merit, need, talent, equality and benefit can be understood in probabilistic terms
O3: epistemology of causality: understand whether current claims made by counterfactual and causal models of fairness in AI are robust with respect to different philosophical understandings of probability, causality and counterfactuals (see the sketch after this list)
O4: application: show the relevance of these philosophical ideas by applying them to a limited number of paradigmatic cases of the application of predictive algorithms in health care.
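As an illustration of O3, the sketch below, assuming a toy structural causal model that is purely hypothetical and not one of the project's case studies, shows the core idea behind counterfactual fairness: a decision is counterfactually fair for an individual if intervening on the protected attribute, while holding the exogenous background factors fixed, leaves the prediction unchanged.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000

u = rng.normal(size=n)              # exogenous background factors
a = rng.integers(0, 2, size=n)      # protected attribute

def generate(a, u):
    # Toy structural equation: the observed feature depends on both the
    # protected attribute and the exogenous factors.
    return 2.0 * u + 0.8 * a

def predict(x):
    return (x > 1.0).astype(int)

# Factual predictions vs. counterfactual predictions with the protected
# attribute flipped and the exogenous factors held fixed.
factual = predict(generate(a, u))
counterfactual = predict(generate(1 - a, u))

# Individuals whose decision changes under the intervention are treated
# in a counterfactually unfair way by this predictor.
print("counterfactually unfair cases:", int((factual != counterfactual).sum()))

Whether such counterfactuals are well defined at all depends on the underlying philosophical account of probability and causality, which is precisely what O3 interrogates.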

Coordinator

POLITECNICO DI MILANO
Net EU contribution: € 183 473,28
Address: PIAZZA LEONARDO DA VINCI 32, 20133 Milano, Italy

Region: Nord-Ovest, Lombardia, Milano
Activity type: Higher or Secondary Education Establishments
Total cost: € 183 473,28