
Uncertainty and Precaution: Environmental Decisions Involving Catastrophic Threats

Periodic Reporting for period 3 - UnPrEDICT (Uncertainty and Precaution: Environmental Decisions Involving Catastrophic Threats)

Reporting period: 2018-10-01 to 2020-03-31

The goal of this project is to supplement or replace the precautionary principle with decision guidance that better handles both normative and empirical uncertainty in contexts of speculative but potentially catastrophic consequences. It has been claimed that emerging technologies such as biotechnology or machine intelligence could have catastrophic impacts on human civilisation or the biosphere, indicating a need for precaution until scientific uncertainty has been resolved. Yet it is unclear how to apply the precautionary principle to cases where the deeper investigation of scientific uncertainties that it calls for can itself be a source of catastrophic risk (as in, for example, gain-of-function research). Furthermore, the precautionary principle fails to account for moral uncertainty, even though many decisions depend more sensitively on ethical parameters (e.g. our obligations to future generations, discount rates, the intrinsic value of nature) than on remaining scientific uncertainties.

Building on recent advances in decision theory, computational modelling, and domain-specific risk assessment techniques, and using the tools of analytic and moral philosophy, this project will: (1) develop methods that account for normative uncertainty by combining voting theory with ongoing work on moral uncertainty, identifying the parameters that make the largest practical difference; (2) combine these with methods for dealing with sources of empirical ignorance (such as information hazards, anthropic shadows, and model uncertainties) into a theoretically well-motivated framework that encompasses both normative and empirical uncertainty; and (3) derive mid-level principles and procedures from this theoretical framework by working through three case studies (information hazards, dual-use biotechnology, and automation and machine intelligence), offering better practical guidance on speculative but potentially catastrophic risks than the precautionary principle does.
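To illustrate the flavour of objective (1), one simple way to treat moral theories as "voters" is a credence-weighted Borda count over decision options. The sketch below is purely illustrative (the theories, options, and credences are hypothetical, and this is not the project's actual aggregation method):

```python
def borda_under_moral_uncertainty(rankings, credences):
    """Aggregate moral theories' rankings of options with a
    credence-weighted Borda count, treating each theory as a voter
    whose vote is weighted by the credence assigned to it.

    rankings:  {theory: [options, best to worst]}
    credences: {theory: probability mass placed on that theory}
    Returns the options sorted by weighted Borda score, highest first.
    """
    scores = {}
    for theory, ranking in rankings.items():
        n = len(ranking)
        for position, option in enumerate(ranking):
            # Borda scoring: the best of n options earns n-1 points,
            # the worst earns 0, scaled by the theory's credence.
            scores[option] = scores.get(option, 0.0) + credences[theory] * (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical example: two theories disagree about three policy options.
rankings = {
    "utilitarian": ["fund_research", "moratorium", "status_quo"],
    "deontological": ["moratorium", "status_quo", "fund_research"],
}
credences = {"utilitarian": 0.6, "deontological": 0.4}
ordered = borda_under_moral_uncertainty(rankings, credences)
# "moratorium" wins here despite being no theory's unanimous favourite,
# because it is acceptable to both theories.
```

A voting-theoretic aggregation of this kind sidesteps the problem of comparing value scales across theories, at the cost of discarding information about how much each theory prefers one option over another.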

The project has made progress on all three work packages during this reporting period, focussing primarily on the macrostrategic implications of different forms of uncertainty for governance principles.

In work package 1 (moral uncertainty), Prof Bostrom’s “Policy Desiderata for Superintelligent AI: A Vector Field Approach” outlines a novel approach to macrostrategic coordination despite moral disagreement and uncertainty, while Dr Sandberg’s “Modeling the Social Dynamics of Moral Enhancement” explores the meta-stability of various normative beliefs in an environment in which human enhancement enables agents to choose their future moral beliefs on the basis of their current moral beliefs and the observed behaviour of other agents. Work on normative uncertainty with respect to digital minds has yielded promising results but is not yet ready for publication.

In work package 2 (empirical uncertainty), the primary focus during this reporting period was on anthropic bias as a source of empirical uncertainty. Dr Sandberg et al.’s “Nuclear war near misses and anthropic shadows” (forthcoming) explores whether observer selection bias leads us to underestimate the risk of nuclear war. Similarly, Dr Sandberg et al.’s “Predicting the Destruction of Earth’s Biosphere” (forthcoming) investigates whether our estimates of non-anthropogenic extinction risks might be biased by anthropic reasoning. In addition to this work on anthropic shadow, both Dr Sandberg and Dr Cotton-Barratt characterised risks stemming from information hazards in biotechnological research.
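The anthropic-shadow effect can be conveyed with a minimal Monte Carlo simulation. The sketch below (illustrative only, with made-up parameters; it is not drawn from the papers themselves) shows how conditioning on observer survival biases the recorded frequency of dangerous events downward:

```python
import random

def crisis_counts(p_crisis, p_escalate, years, trials, seed=0):
    """Monte Carlo illustration of the anthropic shadow.

    Each year a crisis occurs with probability p_crisis; each crisis
    escalates to an extinction-level war with probability p_escalate.
    Observers exist only in histories with no war, so the crisis
    counts they can record are systematically too low.

    Returns (mean crises across all histories,
             mean crises across surviving histories only)."""
    rng = random.Random(seed)
    all_counts, surviving_counts = [], []
    for _ in range(trials):
        crises, war = 0, False
        for _ in range(years):
            if rng.random() < p_crisis:
                crises += 1
                if rng.random() < p_escalate:
                    war = True
        all_counts.append(crises)
        if not war:
            surviving_counts.append(crises)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(all_counts), mean(surviving_counts)

# Hypothetical rates: a 10% annual crisis probability and a 30% chance
# that any given crisis escalates, observed over 75 years.
unconditional, observed = crisis_counts(0.1, 0.3, 75, 20_000)
# `observed` (what surviving observers can record) falls below
# `unconditional`, so a frequency estimate based on the historical
# record understates the true crisis rate.
```

Histories with many crises are disproportionately likely to have ended in a war that leaves no observers, which is precisely the selection effect the work package investigates for nuclear near misses and biosphere destruction.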

In work package 3 (case studies / applications), Prof Bostrom’s “The Vulnerable World Hypothesis” provided a new framework for approaching a wide range of plausible but uncertain risks, highlighting the importance of global coordination and governance. This paper, together with the other work on information hazards, provides a well-defined basis for developing the dedicated information hazards case study as well as a starting point for the biosecurity case study. In addition, Dr Sandberg is preparing a framework for dealing with information hazards in organisations investigating existential risk, which will have direct policy recommendations for the relevant research community. The third case study, on automation and machine intelligence, is already well underway, with an extensive report by Dr Drexler exploring prospects for advanced AI technologies from the perspective of structured systems development. This report yields a different interpretation of how AI capabilities could develop further. It was supplemented by a workshop on the value alignment of advanced AI.

The project made substantial progress during this reporting period in developing tools to engage with different forms of uncertainty. It explored principles of governance and coordination in the face of disagreement on moral values. It also highlighted several areas in which anthropic shadow could lead to biased estimates of the probabilities of potentially catastrophic events, and explored potential information hazards related to dual-use biotechnology. Lastly, the project yielded a new framework for thinking about increasingly capable AI, published as an FHI Technical Report and available on the project’s website.

Going forward, the project will combine research already undertaken with further insights to deliver a number of publications, an agent-based metamodel, and a new theoretical and policy framework for decision guidance addressing both normative and empirical uncertainty.

Future publications will cover all work packages and touch upon a wide array of topics. They will include, among others, a mapping of future risk potential in biotechnological research (under review in Health Security); work on normative uncertainty with respect to digital minds; a framework for handling information hazards in academic organisations doing existential risk research; strategies for preventing or responding to potential existential risks (under review in Global Policy); an enquiry into human cooperation in the context of implementing and utilising AI capabilities; research into mitigating sensitivity-of-error risks in AI development by exploring the viability of windfall clauses; and an investigation of the scalability of offence/defence balances under AI development paths (forthcoming in the Journal of Strategic Studies). In total, we expect to publish about six additional peer-reviewed papers and three FHI Research Reports across 2019 and 2020 as part of this project.

The project will also yield input to a book that will not only investigate different types of existential risk but also explore ethical considerations around those risks and methodological tools for approaching their research (“The Precipice” by Toby Ord, publication scheduled for early 2020).

We will further develop an agent-based metamodel of decisions about emerging technology. This framework will allow us to test the effects of various decision criteria (e.g. variants of the precautionary and proactionary principles, heuristics, etc.) and emergent effects (e.g. model and criteria diversity among the agents, value-of-information price dynamics) on social utility. Subject to the quality of the results, we aim to publish the agent-based model in a peer-reviewed venue by mid-2020.
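The kind of comparison such a metamodel enables can be sketched in a few lines. The toy model below (all parameters, payoffs, and rule definitions are hypothetical stand-ins, not the project's actual model) pits a simple proactionary rule against a precautionary variant on the same stream of risky technologies:

```python
import random

def run_society(rule, trials=10_000, seed=0):
    """Compare decision rules for adopting risky technologies.

    Each trial draws a technology with an uncertain benefit and a
    small probability of a large harm, then applies `rule` to decide
    whether to deploy it. Returns mean realised utility per trial.
    Illustrative parameters only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        benefit = rng.uniform(0.0, 2.0)         # benefit if deployed safely
        p_catastrophe = rng.uniform(0.0, 0.02)  # estimated catastrophe risk
        harm = 100.0                            # loss if catastrophe occurs
        if rule(benefit, p_catastrophe, harm):
            # Realised outcome: rare large loss, otherwise the benefit.
            total += -harm if rng.random() < p_catastrophe else benefit
    return total / trials

# A "proactionary" rule: deploy whenever expected value is positive.
proactionary = lambda b, p, h: b - p * h > 0
# A "precautionary" rule: additionally require the catastrophe
# probability to fall below a fixed threshold.
precautionary = lambda b, p, h: b - p * h > 0 and p < 0.005

ev_pro = run_society(proactionary)
ev_pre = run_society(precautionary)
```

The full metamodel differs in allowing heterogeneous agents, diverse models and criteria, and information-price dynamics; the point of the sketch is only that decision rules become directly comparable once realised social utility is the common yardstick.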

The policy framework, which constitutes the final output of the project, will endeavour to replace the precautionary principle in approaching normative and empirical uncertainty. To supplement this framework, we will hold several events to test the ideas developed and invite stakeholder feedback.

We will conduct a workshop in early 2020 on strategic risks in AI research. The workshop will explore the security risks that come with military uses of AI, including the dangers of a “race to the bottom” on AI safety.

We will also hold a second workshop in mid-2020 on new and emerging biosecurity risks. The workshop will explore the concerns associated with dual-use biotechnology, which may pose a biosecurity threat by enabling biology to be misused (for example, through the engineering or de novo creation of deadly pathogens), and discuss possible technical, governance, and regulatory solutions.

Towards the completion of the project, we will organise a conference on information hazards, bringing together senior subject-matter experts from the relevant fields to discuss our proposed mechanisms for evaluating technological uncertainties. This conference will likely take place in mid-2020.