
Epistemic Utility for Imprecise Probability

Periodic Reporting for period 1 - EPIMP (Epistemic Utility for Imprecise Probability)

Reporting period: 2020-02-01 to 2021-07-31

Scientific inference is principally a matter of using observable data to estimate the parameters of models of interest, e.g. models of the climate system. In traditional Bayesian statistics, uncertainty about model parameters is quantified using a single, precise probability distribution. This approach has proved extremely successful in applications where data is plentiful and model parameters are few. But many models are high-dimensional (thousands of parameters), and relevant data is comparatively sparse. In such contexts, imprecise probabilities are required to adequately capture uncertainty.

The mathematical foundations of imprecise probability theory (IP) have been in place for 25 years, and IP has proved successful in practice. But IP methods lack rigorous, accuracy-centered philosophical justifications. Traditional Bayesian methods can be justified using epistemic scoring rules, which measure the accuracy of the estimates that they produce, but there has been little work extending these justifications to the IP framework. The key aim of the project is therefore to develop scoring rules for IP distributions (IP scoring rules), and to use them to justify and extend IP methods. There are four main objectives:

- (1) characterise reasonable IP scoring rules;
- (2) derive scoring-rule-based justifications for existing IP methods;
- (3) use IP scoring rules to discover novel methods for selecting and updating IP distributions;
- (4) use IP scoring rules to engineer new deference and aggregation principles for IP distributions.

Objectives 1 and 2 will deliver firm foundations for existing IP methods. Objectives 3 and 4 will extend the range of IP methods available for both individual and group inquiry. The results of this project will not only make IP a central focus in contemporary epistemology, and shape ongoing philosophical debates about IP’s role in inference and decision-making, but also furnish new tools aimed at influencing how IP methods are used in practice.
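To make the contrast concrete, here is a minimal Python sketch of the two representations. All numbers, and the three-member credal set, are invented for illustration: a precise Bayesian assigns one distribution over hypotheses, while an imprecise Bayesian keeps a set of distributions, which yields lower and upper probabilities for each event.

```python
import numpy as np

# Three coarse hypotheses about a model parameter (purely illustrative).
outcomes = ["low", "medium", "high"]

# Precise Bayesian: a single distribution over the hypotheses.
precise = np.array([0.2, 0.5, 0.3])

# Imprecise Bayesian: a credal set, i.e. several distributions all judged
# admissible when the data are too sparse to single one out.
credal_set = np.array([
    [0.1, 0.5, 0.4],
    [0.2, 0.6, 0.2],
    [0.3, 0.4, 0.3],
])

def lower_upper(credal_set, event_indices):
    """Lower/upper probability of an event: min/max over the credal set."""
    p = credal_set[:, event_indices].sum(axis=1)
    return p.min(), p.max()

# Probability that the parameter is "high": a point value vs an interval.
lo, hi = lower_upper(credal_set, [2])
print(f"precise: {precise[2]:.2f}, imprecise: [{lo:.2f}, {hi:.2f}]")
```

The interval [0.20, 0.40] records that the evidence does not discriminate among the member distributions, which is exactly the kind of severe uncertainty the summary describes.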

Scientific modelling informs public policy, so it is critically important to adequately capture our uncertainty regarding our models. Overstate that uncertainty and policy-makers will be left in the dark about how to act; understate it and policy-makers will be left shifting their course of action wildly as more data comes in. Imprecise probabilities provide more adequate tools for capturing severe uncertainty. But we need more than just a grab bag of imprecise probabilistic tools. From a public policy perspective, we need to be able to specify which types of errors policy-makers care most about---e.g. are false positives worse than false negatives? By how much?---and then use those preferences to select the right imprecise probabilistic tools for the job. We need to manage our uncertainty in a way that is most likely to avoid the worst types of errors. This is precisely what IP scoring rules do for us.
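As a toy illustration of how error weights shape estimates (the quadratic score and the 4:1 weighting below are invented for this sketch, not the project's actual IP scoring rules), consider a Brier-style score that penalises false negatives more heavily than false positives. The estimate that minimises expected score then shifts towards the costlier error:

```python
import numpy as np

def weighted_brier(p, outcome, w_fn=4.0, w_fp=1.0):
    """Quadratic score with asymmetric weights: w_fn penalises low credence
    when the event occurs, w_fp penalises high credence when it does not."""
    return w_fn * (1 - p) ** 2 if outcome else w_fp * p ** 2

def best_estimate(q, w_fn=4.0, w_fp=1.0):
    """Credence minimising expected weighted score when the event has
    chance q. A grid search keeps the example self-contained."""
    grid = np.linspace(0.0, 1.0, 10001)
    expected = q * w_fn * (1 - grid) ** 2 + (1 - q) * w_fp * grid ** 2
    return grid[expected.argmin()]

# Symmetric weights: the optimal estimate matches the chance (0.5).
# Weighting false negatives 4x pushes it up to 4/(4+1) = 0.8.
print(best_estimate(0.5, 1.0, 1.0))
print(best_estimate(0.5, 4.0, 1.0))
```

The closed-form optimum is q·w_fn / (q·w_fn + (1−q)·w_fp), so changing the weights changes which estimate counts as best: a simple instance of preferences over errors selecting the tool for the job.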
The majority of the first reporting period was spent developing a method for constructing reasonable IP scoring rules (objective 1). This is the most mathematically challenging of the project objectives, and it is also crucial to objectives 2-4.
Currently, imprecise Bayesian methods lack rigorous, accuracy-centered philosophical justifications. Traditional Bayesian methods can be justified using what are variously known as epistemic scoring rules, epistemic utility functions or inaccuracy measures. Scoring rules measure the accuracy of the estimates that traditional methods produce, which is roughly a matter of how close those estimates are to the actual values of the quantities of interest. Drawing on the work of de Finetti (1974) and Savage (1971), contemporary Bayesians like Joyce (1998, 2009), Schervish et al. (2009) and Pettigrew (2016) use scoring rules, together with resources from decision theory, to show that traditional Bayesian methods provide decision-theoretically optimal strategies for securing accurate estimates. This approach has provided compelling justifications for a wide range of traditional Bayesian methods and principles: Probabilism, which specifies global coherence constraints on estimates (Joyce 1998, 2009; Predd et al. 2009; Pettigrew 2016); Conditionalization, which specifies how to update one’s estimates in light of new data (Greaves and Wallace 2006); the Principle of Indifference, which specifies appropriate estimates to employ when one lacks relevant information (Pettigrew 2014); and more.
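The flavour of these accuracy-dominance arguments for Probabilism can be seen in a small numerical sketch (the particular credences are invented). A credence function that violates Probabilism, e.g. one assigning 0.6 to both a proposition and its negation, is beaten on the Brier score by a coherent credence function in every possible world:

```python
def brier(credences, world):
    """Brier inaccuracy: squared distance from the world's truth values."""
    return sum((c - t) ** 2 for c, t in zip(credences, world))

# Credences in (X, not-X). The first pair sums to 1.2, violating Probabilism;
# the second is the nearest coherent (probabilistic) credence function.
incoherent = (0.6, 0.6)
coherent = (0.5, 0.5)

worlds = [(1, 0), (0, 1)]  # X true, X false
for w in worlds:
    print(round(brier(incoherent, w), 4), round(brier(coherent, w), 4))
# Whichever way X turns out, the coherent credences are strictly more
# accurate: inaccuracy of about 0.52 versus 0.50 in both worlds.
```

Since the incoherent credences are accuracy-dominated no matter what the world is like, adopting them is decision-theoretically irrational; results of this kind (Joyce 1998; Predd et al. 2009) are the precise-probability template the project extends to IP distributions.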

Unfortunately, there has been very little work to date extending these justifications to the IP framework. The project team has now provided the first characterisation of reasonable IP scoring rules, together with a method for constructing them, thereby achieving objective 1. This represents a hugely significant advance in the state of the art. With this milestone in hand, we are on track to reach the remaining three objectives:

- To use IP scoring rules to derive epistemic justifications for existing IP methods.
- To extend the range of IP tools available for individual inquirers by engineering new methods for selecting and updating IP distributions.
- To facilitate group inquiry by discovering new deference and aggregation principles for IP distributions.

Achieving these objectives will advance the field in two significant ways:

- It will provide the first sustained investigation into the epistemic foundations of imprecise probability theory. This will make IP a central focus in contemporary epistemology and shape ongoing philosophical debates about IP’s role in inference and decision-making.
- It will develop novel IP methods for both individual and group inquiry. This has the potential to influence how IP methods are used in a range of fields, for example, economics, climate science and bioinformatics.
EPIMP Team