What is an "AI doomer"?

“AI doomer” is a label for someone who is concerned about AI-caused extinction (or comparable AI catastrophes). It is often used pejoratively by people who take a dim view of such concerns.

It’s worth distinguishing a few kinds of views that might cause someone to be called an “AI doomer”:

  1. “AI-caused extinction is plausible enough to pay attention to and to make tradeoffs to prevent.” This view is held by many AI researchers, engineers, policy analysts, safety researchers, and heads of AI labs, as illustrated by the CAIS statement on AI risk.
  2. “AI-caused extinction is highly probable.” People with this view generally think that although we will probably fail to avoid extinction, it’s worth trying. This includes Eliezer Yudkowsky and others at MIRI.
  3. “AI-caused extinction is inevitable and therefore not worth attempting to prevent.” This sense of “doomer” is often used in other contexts, like climate change, but it’s uncommon for people to be fatalistic about AI risk in this way.

This isn’t a complete typology of “AI doomer,” for which no principled definition exists. For instance, people have used the term to refer to those who believe “AI will cause mass unemployment,” “AI is bad,” or even that “AI progress will hit a wall.”

In a broader context, “doomer” can also refer to anyone who is pessimistic about technological progress (like a Luddite), or about the future in general. This creates further ambiguity, since many “AI doomers” are optimistic about technological progress in general but carve out an exception for AI.

Fixing the Ambiguity

While no alternative to the term has gained traction, Rob Bensinger suggests some better-defined options:

  • “AGI risk fractionalists”: p(doom) < 2%
  • “AGI-wary”: p(doom) around 2-20%
  • “AGI-alarmed”: p(doom) around 20-80%
  • “AGI-grim”: p(doom) > 80%

Bensinger also suggests terms for people’s preferred policies about whether and when to build AGI. Whether we should build AI is only loosely related to how likely AI is to cause extinction, but the word “doomer” often conflates these questions.
