Bayesian markets for unverifiable truths

Periodic Reporting for period 4 - BayesianMarkets (Bayesian markets for unverifiable truths)

Reporting period: 2020-07-01 to 2020-12-31

Economic analysis and policy making must often rely on people reporting subjective assessments (self-assessed health, life satisfaction, happiness). Similarly, environmental policies are based on expert opinions about climate change. In policy and research, surveys are conducted to collect expert opinions, or to ask people about their behavior, their expectations, or past experiences. But how can we ensure people answer seriously? How can we make sure that experts report their true estimates?
If the judgment or opinion we try to collect is related to an observable event, solutions exist. For instance, prediction markets (markets on which agents buy and sell bets that pay a fixed amount of money if a defined event occurs) offer a way to elicit the beliefs of agents and to aggregate them into an average belief (the market price). However, prediction markets require that either the event on which the bet is defined or its complement is eventually observed. In horse race betting markets, we know, at the end, which horse wins the race and who should be paid. Prediction markets therefore cannot be applied to subjective judgments, or to beliefs about unverifiable events.
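To make the settlement step concrete, here is a minimal sketch of how such binary contracts pay out once the event is observed. The function name, the `positions` layout, and the example numbers are our own illustration, not part of the project:

```python
def settle_prediction_market(positions, event_occurred):
    """Settle binary prediction-market contracts that pay 1 if the
    event occurs and 0 otherwise (illustrative sketch).

    positions: dict mapping trader -> (contracts_held, price_paid_each)
    Returns each trader's profit: contracts * (payout - price).
    """
    payout = 1.0 if event_occurred else 0.0
    return {trader: contracts * (payout - price)
            for trader, (contracts, price) in positions.items()}

# The event occurs, so each contract pays 1:
print(settle_prediction_market(
    {"alice": (10, 0.5), "bob": (4, 0.25)}, event_occurred=True))
# {'alice': 5.0, 'bob': 3.0}
```

The whole mechanism hinges on the last step: without observing `event_occurred`, no settlement is possible, which is exactly why these markets fail for unverifiable questions.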
The project aims to develop methods that reward truth-telling even for completely unverifiable truths. By doing so, we can improve the quality of the data on which economic analysis or policy making is based.
The work performed in the project was organized in four parts:
A. The first part is theoretical. We developed a new form of market, in which people bet on what others think. Their bets reveal what they themselves think. The first paper introducing Bayesian markets was published in a prestigious general-audience journal and has been presented around the world and across disciplines. We developed several variants of the betting mechanism, which have been used in the other parts of the project and by other researchers in surveys.
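The published mechanism is more subtle, but its core idea — that betting on what others think reveals what you yourself think — can be sketched with a simplified payoff rule. The fixed price and all names below are our own simplification, not the exact mechanism from the paper:

```python
def bayesian_market_payoffs(answers, price):
    """Simplified Bayesian-market payoff rule (illustrative sketch,
    not the exact published mechanism).

    Each trader answers yes (True) or no (False) to a question.
    'Yes' traders buy, and 'no' traders sell, one share of an asset
    whose value is the share of 'yes' answers among the OTHER traders,
    so answering is effectively betting on what others think.
    """
    payoffs = []
    for i, says_yes in enumerate(answers):
        others = answers[:i] + answers[i + 1:]
        asset_value = sum(others) / len(others)  # share of 'yes' among others
        payoffs.append(asset_value - price if says_yes
                       else price - asset_value)
    return payoffs

# Two 'yes' traders and one 'no' trader, at a price of 0.5:
print(bayesian_market_payoffs([True, True, False], price=0.5))
# [0.0, 0.0, -0.5]
```

In the example, the lone 'no' trader sells an asset that turns out to be worth 1.0 (everyone else said yes) and loses money: a trader's payoff depends only on the answers of others, never on any external verification of the question itself.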
B. We implemented the new markets in the lab and online. In this part, we focused on simplifying the methods, to make them easy and intuitive. We introduced new, simple bets, which we call "top-flop bets". These bets can be used to reveal what people like, for instance in marketing. Imagine you watch a movie before everyone else (or you can test a product before it is officially launched). The movie producer would typically ask you what you thought of the movie, whether you liked it. But would you always tell the truth? You may want to please the movie producer and say it was nice. What we propose is to make you bet on the performance of the movie, e.g. whether it will get a better review than another, randomly chosen movie. We showed how to organize this betting to reveal the most important information: whether you really liked the movie.
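A bet of this kind might settle as follows. This is a minimal sketch; the 0/1 scoring convention and the function signature are assumptions of ours, not the study's exact implementation:

```python
def settle_top_flop_bet(bet_top, product_score, benchmark_score):
    """Settle a top-flop bet (illustrative sketch).

    The respondent bets 'top' (True) if they think the rated product
    will beat a randomly drawn benchmark product, 'flop' (False)
    otherwise. Pays 1 for a correct bet, 0 otherwise.
    """
    product_is_top = product_score > benchmark_score
    return 1 if bet_top == product_is_top else 0

# A viewer who liked the movie bets 'top'; the random rival scored lower:
print(settle_top_flop_bet(bet_top=True, product_score=8.1,
                          benchmark_score=6.4))
# 1
```

Because the benchmark is drawn at random, the bet that maximizes your chance of winning is the one that matches your honest opinion of the product.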
C. The methods we are working on also help us identify experts. In some cases, their track record is not available, so we do not know how trustworthy they are (and therefore whether they are real experts), and hence whose opinions we should follow. Consider a question such as “Is Proposition X true?” On a Bayesian market (developed in part A), people bet on what others believe the answer to that question is. We demonstrated how studying the earnings of the various agents on the market can tell us whether the proposition is actually true or not. We call it a “follow the money” algorithm. We showed theoretically and empirically that following the money on Bayesian markets outperforms following the majority opinion.
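In spirit, "following the money" means weighting answers by market earnings instead of counting heads. The toy rule below (clipping losses at zero and comparing totals) is our own simplification, not the project's exact algorithm:

```python
def follow_the_money(answers, earnings):
    """Pick the answer backed by the most successful traders
    (toy 'follow the money' rule; losses are clipped at zero so
    only winners' money counts).

    answers:  list of True/False answers to "Is Proposition X true?"
    earnings: each trader's profit on the Bayesian market.
    Returns the money-weighted answer.
    """
    yes_money = sum(max(e, 0.0) for a, e in zip(answers, earnings) if a)
    no_money = sum(max(e, 0.0) for a, e in zip(answers, earnings) if not a)
    return yes_money > no_money

# Majority says yes, but the single 'no' trader earned the most:
print(follow_the_money([True, True, False], [0.2, -0.1, 0.5]))
# False
```

The example shows how this rule can disagree with the majority vote: the two 'yes' traders outnumber the 'no' trader, but the money backs 'no'.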
D. In part D, we focused on understanding when people tell the truth, whether we can predict when they don't, and whether we can use methods from the other parts to do so. For instance, we first tried to predict whether statements from a public personality (the 45th US president) were factually correct or not, using linguistic deception detection methods. We could, with roughly 74% accuracy. We developed an algorithm to do so, which we used in another study as a benchmark, to see whether we could identify experts in deception detection among a group of volunteers and whether they would perform as well as our algorithm. Short answer: no, humans perform worse in such tasks than a relatively simple algorithm.
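Linguistic deception detection methods of this general kind score simple textual cues. The toy feature extractor below is purely illustrative: the cue-word lists are hypothetical choices of ours and do not reproduce the study's actual features:

```python
def linguistic_cues(statement):
    """Extract toy deception cues from a statement (illustrative only;
    the real study's features are not reproduced here).

    Returns the rate of first-person pronouns and of negation words,
    two cue families often discussed in the deception literature.
    """
    words = statement.lower().split()
    n = max(len(words), 1)
    first_person = sum(w in {"i", "me", "my", "we", "our"}
                       for w in words) / n
    negations = sum(w in {"no", "not", "never"} for w in words) / n
    return {"first_person_rate": first_person, "negation_rate": negations}

print(linguistic_cues("I never said that"))
# {'first_person_rate': 0.25, 'negation_rate': 0.25}
```

A classifier built on features like these would then be trained on statements whose truth is already known, and applied to new statements.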
We introduced a new type of market, called a Bayesian market, to reveal what people think or like by making them bet on what others think or like.
Our approach is simpler than what has been proposed in the literature so far. We developed several variants with a focus on simplicity and transparency, whereas methods from the literature tend to be difficult to understand for respondents.

One may wonder why this is interesting: maybe people mostly tell the truth. There are two reasons why they don't always do so. First, people do not always carefully consider the question they are asked. In a study, we showed that our market methods make people exert more effort to provide an informed answer than if they were simply rewarded for providing 'an' answer. Second, people may also intentionally hide their true opinion. We therefore also studied when people lie and when they don't do what they say. Our research on this topic also contributed to the literature on deception (new ways to detect when people lie).

Finally, we developed new ways to identify whose opinion we should trust. The majority is sometimes mistaken, and for some problems it is not clear which expert to believe. We developed a new algorithm in which the earnings of participants on our Bayesian markets inform us about the quality of their information and knowledge. Our results contributed to the literature on expert identification and the wisdom of crowds.