
New discrete choice theory for understanding moral decision making behaviour

Periodic Reporting for period 4 - BEHAVE (New discrete choice theory for understanding moral decision making behaviour)

Reporting period: 2022-02-01 to 2022-12-31

Discrete choice theory provides a mathematically rigorous framework to analyse and predict choice behaviour. While many of the theory’s key developments originate from the domain of transportation (mobility, travel behaviour), it is now widely used throughout the social sciences.
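For readers unfamiliar with the framework, its workhorse is the random utility model (a standard textbook formulation, not specific to this project): decision-maker n assigns a utility U_{ni} = V_{ni} + \varepsilon_{ni} to each alternative i and chooses the alternative with the highest utility, which under i.i.d. extreme-value errors yields the multinomial logit choice probability

P_n(i) = \frac{\exp(V_{ni})}{\sum_{j \in C_n} \exp(V_{nj})}

where C_n is the choice set and V_{ni} is the observable part of utility.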

The theory has a blind spot for moral choice behaviour. It was designed to analyse situations where people make choices that are optimal given their consumer preferences, rather than situations where people attempt to make choices that are right given their moral preferences. This neglect of the morality of choice is striking, given that many of the most important choices people make have a moral dimension.

This research program extends discrete choice theory to the domain of moral decision making by developing and empirically testing new models of moral choice behaviour for human decision-makers and artificial intelligence.

It will produce a suite of new mathematical representations of choice behaviour (i.e. choice models), which are designed to capture the decision rules and decision weights that determine how individuals behave in moral choice situations. In these models, particular emphasis is given to heterogeneity in moral decision rules and to the role of social influences. Models will be estimated and validated using data obtained through a series of interviews, surveys and choice experiments.
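As an illustrative sketch of what estimating such a model can look like (the data, attribute names and two-attribute utility below are assumptions for illustration, not the project's actual specification), the following snippet fits a binary logit in which a hypothetical moral attribute carries its own decision weight alongside a conventional cost attribute:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic choice data: each row is one binary moral choice situation.
# Columns: difference (alternative 1 minus alternative 0) in cost and in harm avoided.
rng = np.random.default_rng(0)
n_obs = 1000
d_cost = rng.normal(size=n_obs)          # cost difference between alternatives
d_harm = rng.normal(size=n_obs)          # hypothetical "moral" attribute difference
true_beta = np.array([-1.0, 2.0])        # decision weights used to simulate choices
utility_diff = d_cost * true_beta[0] + d_harm * true_beta[1]
choice = (rng.uniform(size=n_obs) < 1 / (1 + np.exp(-utility_diff))).astype(float)

X = np.column_stack([d_cost, d_harm])

def neg_log_likelihood(beta):
    """Negative log-likelihood of a binary logit choice model."""
    v = X @ beta
    p1 = 1.0 / (1.0 + np.exp(-v))        # probability of choosing alternative 1
    p = np.where(choice == 1, p1, 1 - p1)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print("estimated decision weights (cost, moral attribute):", result.x)
```

With enough observations the estimated weights recover the values used to simulate the data; in the empirical work the weights are instead estimated from interview, survey and choice-experiment data.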

Empirical analyses will take place in the context of moral choice situations concerning i) co-operative road use and ii) unsafe driving practices.

Estimation results will be used as input for agent-based models ('artificial societies'), to identify how social interaction processes lead to the emergence, persistence or dissolution of moral (traffic) equilibria at larger spatio-temporal scales. Furthermore, the developed and empirically validated moral choice models will be used to design so-called artificial moral agents (also called moral AI or moral machines), whose behaviour is guided by decision systems based on human morality.

Together, these proposed research efforts promise to generate a major breakthrough in discrete choice theory. In addition, the program will result in important methodological contributions to the empirical study of moral decision making behaviour in general; in new insights into the moral aspects of behaviour in the domains of transportation (road safety and co-operative driving) and public health (pandemic-related moral dilemmas); and in the development of moral decision systems for artificial intelligence.
Module 1 is the core of the program. It aims to extend discrete choice theory to the domain of moral decision making by developing formal micro-econometric models of moral choice behaviour. These models are rooted in moral philosophy and moral psychology, and aim to describe how human decision-makers translate moral principles into concrete moral actions. The module has a substantial empirical component, allowing us to estimate and validate our models. The empirical testing ground is obtained from case studies in Transportation (co-operative driving and road safety), Health (pandemic-related public health dilemmas) and other domains.

Module 2 focuses on society as opposed to decision-making by individuals; it uses the estimated and validated discrete choice models developed in module 1 as input for agent-based models (where the agents represent humans), to identify how social interaction processes lead to the emergence, persistence or dissolution of moral equilibria at larger spatio-temporal scales.
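A minimal sketch of this kind of simulation (the binary co-operate/defect choice, the conformity term and all parameter values are illustrative assumptions): agents repeatedly choose whether to co-operate, each agent's logit choice probability depends on a heterogeneous moral weight and on the current share of co-operators, and the population share is tracked to see whether a co-operative norm emerges and persists:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 500, 200
moral_weight = rng.normal(1.0, 0.5, n_agents)   # heterogeneous moral decision weights (assumed)
social_weight = 2.0                              # strength of social influence (assumed)
cooperate = rng.uniform(size=n_agents) < 0.2     # initial behaviour: few co-operators

for t in range(n_steps):
    share = cooperate.mean()                     # current share of co-operators
    # Utility of co-operating relative to defecting: moral motive plus conformity motive.
    v = moral_weight + social_weight * (share - 0.5)
    p_coop = 1 / (1 + np.exp(-v))                # logit choice rule per agent
    cooperate = rng.uniform(size=n_agents) < p_coop

print(f"share of co-operators after {n_steps} steps: {cooperate.mean():.2f}")
```

In the actual module, the agents' choice rules are the empirically estimated moral choice models from module 1 rather than this stylised logit.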

Module 3 studies how moral decision rules (including the ones developed in module 1) can be implemented in robots or artificial agents. We aim to contribute to the emerging field of machine ethics by equipping robots with human-inspired moral decision systems.
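As a deliberately simplified, hypothetical sketch of such a decision system (the action set, attributes and weights below are invented for illustration): a driving agent first filters out actions that violate a hard deontological rule and then picks the remaining action with the lowest weighted moral cost:

```python
# Illustrative moral decision system for an artificial driving agent (assumptions throughout).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # expected harm to others caused by the action
    delay: float           # cost to the agent itself (e.g. seconds lost)
    violates_rule: bool    # breaks a hard constraint such as "never mount the pavement"

HARM_WEIGHT, DELAY_WEIGHT = 10.0, 1.0   # hypothetical decision weights

def choose(actions: list[Action]) -> Action:
    """Pick the permissible action with the lowest weighted moral cost."""
    permissible = [a for a in actions if not a.violates_rule]  # deontological filter
    return min(permissible, key=lambda a: HARM_WEIGHT * a.expected_harm + DELAY_WEIGHT * a.delay)

options = [
    Action("brake hard", expected_harm=0.0, delay=5.0, violates_rule=False),
    Action("swerve onto pavement", expected_harm=0.8, delay=1.0, violates_rule=True),
    Action("keep speed", expected_harm=0.6, delay=0.0, violates_rule=False),
]
print(choose(options).name)   # prints "brake hard"
```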

Module 4 studies interactions between moral artificial agents, in so-called multi-agent systems. This module aims to contribute to recent developments in Artificial Intelligence, and focuses on how systems of increasingly autonomous moral agents interact. Rules developed in module 1 will be used as input for these agents.

In sum: the BEHAVE research program aims to extend discrete choice theory to the domain of moral decision making, and to employ the developed ‘moral discrete choice models’ in social simulation and AI (machine ethics) contexts. Real-life applications are found in the domains of Transportation and Health. By doing so, we attempt to push the envelope of a variety of research fields, including discrete choice modelling, moral psychology, transportation, health decision making, social simulation and machine ethics. To achieve this ambitious set of goals, we have brought together a top-notch team of researchers with backgrounds ranging from sociology and criminology to artificial intelligence, transportation, public health, and applied mathematics.