
Reasoning About Strategic Interaction and Emotions

Final Report Summary - STRATEMOTIONS (Reasoning About Strategic Interaction and Emotions)

In this project I used and enhanced the methodology of epistemic game theory (the theoretical analysis of how interacting agents reason about each other) to study the interplay of emotions and strategic reasoning in dynamic environments. Although emotions may sometimes hinder cognition, I focused on their role in motivating behavior, assuming that agents maximize their subjectively expected psychological utility. Since emotions are triggered by beliefs, my study focused on belief-dependent motivation in games, and in particular in dynamic games. My contributions are both methodological and applied, concerning the theoretical and experimental analysis of specific emotions, most notably uncertainty aversion (fear of the unknown), guilt aversion (the desire not to let others down), and anger caused by frustration (goal blockage).
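To fix ideas, a stylized belief-dependent (psychological) utility for a guilt-averse agent can be written as follows; this is a minimal illustrative sketch in the spirit of simple-guilt models, not the exact specification used in the project:

\[
u_i(a,\beta) \;=\; m_i(a) \;-\; \theta_i \,\max\{0,\; \mathbb{E}_{\beta}[m_j] - m_j(a)\},
\]

where \(m_i\) and \(m_j\) are material payoffs, \(\beta\) is player \(i\)'s belief about what co-player \(j\) expects to earn, and \(\theta_i \ge 0\) is a guilt-sensitivity parameter: the agent incurs a psychological cost when the realized outcome falls short of the co-player's expectation, so utility depends on beliefs and not only on material consequences.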
As for the analysis of specific emotions, in my study of uncertainty (or ambiguity) aversion I proved that this powerful motivation tends to hinder experimentation (via actions that let the outcome depend more on the unknown state) and thus the possibility of learning the relevant features of the environment, including the behavior of others. Uncertainty-averse agents value knowledge not only for instrumental reasons. Despite this, more uncertainty aversion makes it easier for agents to get stuck in “certainty traps,” in which their behavior, or policy, is objectively suboptimal but subjectively justified by confirmed beliefs. This makes long-run behavior less predictable. Yet, I also identified properties of decision problems and of information feedback that either yield optimal long-run behavior, or at least make the set of possible long-run outcomes (the so-called self-confirming equilibria) independent of the degree of uncertainty aversion.
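As a rough sketch (the notation is illustrative and abstracts from the formal definitions used in the project), a self-confirming equilibrium couples a subjectively optimal policy with a belief that the observed feedback cannot contradict:

\[
\sigma_i \in \arg\max_{\sigma}\; \mathbb{E}_{\mu_i}\!\left[u_i(\sigma,\omega)\right],
\qquad
\mu_i\!\left(\{\omega : F_i(\sigma_i,\omega) = F_i(\sigma_i,\omega^{*})\}\right) = 1,
\]

where \(\omega^{*}\) is the true state (including co-players' behavior) and \(F_i\) is the information-feedback map. The policy is a best reply to the agent's belief, and the belief assigns probability one to states that generate the same feedback as the truth along the path actually played; objectively better policies may remain untried, which is the sense in which an agent is caught in a “certainty trap.”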
An important role of belief-dependent motivation is that it may give credibility to promises and threats that would not otherwise be carried out by material-payoff-maximizing agents. I studied guilt aversion and anger caused by frustration both theoretically and experimentally. I proved that guilt aversion may explain why agents avoid materially beneficial deception. The model explains well-known experimental findings on how deception depends on payoffs, including the payoffs of others. I also showed theoretically and experimentally that guilt aversion makes agents more prone to honor costly promises and that, when it is recognizable, it makes such promises more credible. I obtained comparative results explaining experimental data by explicitly allowing for the incompleteness of information about co-players’ personal traits, which is typical of experimental settings, and by assuming two or three steps of strategic reasoning instead of relying on orthodox equilibrium analysis, which is ill suited to the study of subjects’ behavior in experiments.
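As a hedged illustration (the scenario and notation are hypothetical, not taken from the experiments), consider a trustee who has promised to share after the trustor invests. Under the guilt-averse utility sketched above, the promise is honored whenever the material gain from reneging, \(g\), is outweighed by the anticipated guilt:

\[
g \;\le\; \theta_i \left(\mathbb{E}_{\beta}[m_j] - m_j^{\text{renege}}\right),
\]

so the higher the trustee's guilt sensitivity \(\theta_i\), and the more the trustee believes the trustor expects to receive, the more credible the promise; if the trustor can recognize a sufficiently high \(\theta_i\), investing becomes worthwhile.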
My analysis also emphasized the role of time in ways that differ from traditional economic theory. While the latter focuses on patience (the willingness to defer consumption in order to get more later), I showed that time matters even if such traditional considerations play no role. For example, anger subsides when (possibly punitive) responses are delayed, and the expectation-dependent reference points that shape disappointment or guilt aversion are sensitive to recent beliefs. Whether successive moves take place in quick succession or involve waiting is therefore important independently of agents’ patience.
My methodological contributions go beyond the analysis of emotions in games, as they allow a deeper understanding of the solution concepts used in the applications of game theory, e.g. in policy making, the design of better institutions (mechanism design), or the study of self-enforcing agreements (e.g. between sovereign countries, or colluding firms). Indeed, my analysis can help applied economic theorists and experimental economists to ascertain which solution concept (with its ensuing behavioral predictions) is appropriate for the application at hand.