What is "p(doom)"?

The term “p(doom)” is short for “probability of [AI] doom”: roughly, how likely an AI-caused existential catastrophe is to happen.[1] In AI risk discourse, people often say they “have a p(doom) of X%,” meaning they believe an AI-caused existential catastrophe is X% likely to occur.

In this context, “probability” means subjective probability. If someone gives a p(doom) of 20%, they don’t mean that doom has happened 20% of the time in some reference class, or that they have a formal model justifying that exact number. Rather, they’re expressing their own rough intuitive degree of belief. Compare watching a sports game and thinking team A is 20% likely to win: you might then be willing to bet on team A at 4 to 1 odds against, even if these teams have never played each other before and you have no past results to count.
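
To spell out the odds conversion (this is just the standard relationship between a probability and fair betting odds, not anything specific to AI risk): a probability p corresponds to fair odds of (1 − p)/p to 1 against the event. With p = 0.2, that’s 0.8/0.2 = 4, i.e., 4 to 1 against team A.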

Estimates of p(doom) vary greatly, and people with a high p(doom) sometimes get called “doomers.” Because different people might have different "doom" scenarios in mind when using the term, comparing two people's p(doom)s may not always make sense.

Although the word “doom” suggests inevitability, “p(doom)” is not meant to express the probability that a catastrophe is inevitable, but simply the probability that a catastrophe happens.

Why use p(doom)?

Some people share their beliefs in the form of probabilities to avoid the ambiguities of ordinary language. “I’m optimistic”[2] may mean “p(doom) is below 1%” to one person and “p(doom) is below 20%” to another. Non-quantitative terms would hide this large disagreement.

Criticisms of the term

Michael Nielsen claims that people in the AI industry use the concept fatalistically, taking the probability of doom as set in stone regardless of their own choices.

Isaac King notes people mean different things by “p(doom)”:

  • Some include dystopian futures; others count only human extinction.
  • Some count only disasters that happen by, say, 2040.
  • Some use “doom” to mean x-risks from misaligned AI; others mean all x-risks arising from powerful AI, including through misuse.

p(doom) includes the probability of AI causing doom, but not the probability of AI preventing doom, such as by averting a nuclear war that would otherwise have ruined human civilization. If we’re deciding whether to build AI, we should be thinking about AI’s net effect on the probability of doom, and how that effect depends on our strategies. But “net p(doom)” isn’t the full story either: if both the risks and the benefits are high, good safety strategies have a lot of room to improve the net outcome, but if both are low, they don’t.
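
One way to make this “net effect” concrete (an illustrative formalization, not notation from the article itself) is as a difference of conditional probabilities:

  net effect on doom = p(doom | we build advanced AI) − p(doom | we don’t build it)

where “doom” here counts non-AI catastrophes too. This difference can be negative, meaning AI lowers overall risk, even when p(doom | we build advanced AI) is itself substantial.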


  1. Other definitions of doom exist — see the “Criticisms of the term” section.

  2. For instance, when asked about her p(doom), Lina Khan, chair of the FTC, says that she’s an optimist. The interviewer asks if this means her p(doom) is 0%, and she answers, “no, no, more like 15%”!


