Periodic Reporting for period 1 - SIMTWIN (Health Simulations: Ethical and Societal Challenges of Digital Twins)
Reporting period: 2023-06-01 to 2025-11-30
SIMTWIN’s central objective is to provide a comprehensive and in-depth analysis of the normative challenges posed by health simulations in order to develop an integrated theory of this technology. By proposing an empirically grounded and normatively robust framework for the ethical and societal assessment of health simulations, SIMTWIN will enable the design of practical modes of controllability for the use of digital twins (DTs) in health.
a) In line with WP1, we have prioritized public engagement and knowledge transfer through open and accessible formats. As part of our efforts to organize regular meetings and workshops, maintain digital outreach, and host public scenario events, a public lecture series titled “Lunch Series” has been established. This ongoing series brings together international researchers working at the intersection of health, AI, and ethics, providing a welcoming and intellectually open space for them to share their work. To date, thirteen speakers have participated in the series, which is offered in both online and in-person formats and is open to the public. Another initiative aimed at public dialogue and ethical reflection is the “Diskutier Ma(h)l” series. These events invite professors and researchers to present politically and ethically relevant topics in an informal setting that encourages public interaction over a shared meal. The format has steadily gained traction, attracting a growing number of participants from outside the academic sphere and supporting our goal of fostering inclusive and accessible ethical discourse around emerging technologies.
b) As part of WP2, we have contributed to the academic dialogue around Digital Twins, particularly through talks and presentations. In alignment with Task 2.3, which calls for two talks or conferences per year to present the results of SIMTWIN, we have given a number of talks exploring the intersection of Digital Twins, Digital Brain Twins, and disability.
We have also presented multiple posters at international conferences, focusing on themes that connect Digital Twins with linguistic and brain simulations. Moreover, we have received invitations to upcoming conferences, where we will speak on the ethical and conceptual implications of Digital Twins, with particular emphasis on human dignity and brain simulation.
c) In line with WP3, we conducted a study on stakeholder attitudes toward AI-based contactless sensor technologies in healthcare. The findings, based on five stakeholder groups, offer insights into ethical and social concerns; they have been submitted for publication and are currently accessible as a preprint.
d) Under WP4: Analysis of Implied Normative Concepts, our work has centered on identifying and analyzing the normative assumptions embedded in Digital Twin technologies. This work has led to a number of peer-reviewed publications focusing on topics such as the ethical dimensions of decommissioning Digital Twins, the role of personalized patient preference predictors in healthcare decision-making, and conceptual models of disability grounded in ecological-enactive theory.
e) In line with WP6, and in keeping with the tasks of governance conceptualization and the analysis of governance frameworks, we have produced multiple publications on solidarity gaps, digital sovereignty, and human-centered AI.
1. Decommissioning Digital Doppelgängers
“Bytes the Dust: Normative Notions in Decommissioning Digital Doppelgängers” in The American Journal of Bioethics offers one of the first ethical analyses of what it means to decommission digital doppelgängers, that is, digital twins that replicate psychological aspects of individuals. The article introduces a nuanced framework distinguishing between digital twins as proxies and as extensions of personhood, and explores the normative implications for preservation, repurposing, and destruction. This work has helped set the agenda for future debates on digital afterlife, posthumous privacy, and the lifecycle management of AI-based personal representations.
2. Ecological-Enactive Model of Disability
“Beyond Pathology: Bringing the Ecological-Enactive Model of Disability to Neuroethics and Mental Health Conditions” in AJOB Neuroscience advanced the application of the ecological-enactive model to neuroethics, challenging both traditional medical and neurodiversity models. This commentary reframes disability as a dynamic interaction between individuals and their environments, emphasizing adaptation and affordances rather than static pathology. The work has been recognized for deepening ethical discussions around mental health, embodiment, and inclusion, and for providing a more nuanced basis for clinical and policy approaches to disability.
3. Solidarity Gaps in Health
“How predictive medicine leads to solidarity gaps in health” in npj Digital Medicine critically examines the societal and ethical consequences of the shift towards AI-driven (P4) medicine. The piece identifies two key “solidarity gaps” that arise when predictive analytics decouple symptoms from access to care, and when responsibility for health is shifted onto individuals. By highlighting the need for solidarity-based governance in the regulation of digital twins and predictive health technologies, this work has influenced ongoing policy discussions at the European level and contributed to the broader debate on digital sovereignty and justice in healthcare.
4. Personalized Patient Preference Predictor
“A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable” in The American Journal of Bioethics contributed to the conceptualization and ethical analysis of using AI to predict the treatment preferences of incapacitated patients. The article introduces the notion of a “personalized patient preference predictor” (P4), leveraging advances in machine learning to infer individual preferences from person-specific data. This work addresses autonomy-based objections to earlier models and sets out a research agenda for integrating AI-based predictors into shared decision-making, with significant implications for patient rights and clinical practice.