What are the differences between a singularity, an intelligence explosion, and a hard takeoff?

These are all terms for future processes that could rapidly result in superintelligence. They have been used by different people, at different times, with overlapping and sometimes changing meanings.

  • A (technological) singularity refers to a hypothetical future time when, because of AI or other technologies, progress becomes extremely fast, resulting in a radically changed world. The term has been used inconsistently and is no longer much used by the AI alignment community. Different versions have stressed the accelerating rate of technological advancement (whether exponential, double-exponential, or even hyperbolic; see the toy model after this list), self-improvement feedback loops (an "intelligence explosion"; see below), and the unknowability of strongly superhuman intelligence.

  • An "intelligence explosion" is what I.J. Good called a scenario in which AI becomes smart enough to create even smarter AI, which creates even smarter AI, and so on, recursively self-improving all the way to superintelligence.

  • A hard takeoff is a scenario where the transition to superintelligence happens quickly and suddenly rather than gradually. The opposite is a soft takeoff. Related distinctions include fast versus slow takeoff and discontinuous versus continuous takeoff. A takeoff can be continuous and yet eventually become very fast, as in Paul Christiano's prediction of a "slow takeoff" that nonetheless ends in hyperbolic growth.
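
To make the growth-rate distinctions above concrete, here is a minimal toy sketch (our illustration, not a model endorsed by any of the authors cited). It contrasts exponential growth, where capability improves in proportion to itself, with hyperbolic growth, a crude stand-in for recursive self-improvement in which improvements speed up the improvement process itself, so that capability diverges in finite time. The starting capability i0 and feedback strength k are arbitrary made-up parameters.

    from math import exp

    def exponential(i0: float, k: float, t: float) -> float:
        """Capability under dI/dt = k*I: grows fast, but stays finite at every t."""
        return i0 * exp(k * t)

    def hyperbolic(i0: float, k: float, t: float) -> float:
        """Capability under dI/dt = k*I**2, i.e. improvements accelerate
        improvement itself. The closed form I0/(1 - k*I0*t) diverges at the
        finite time t = 1/(k*I0)."""
        blowup = 1.0 / (k * i0)
        if t >= blowup:
            raise ValueError(f"growth diverges at t = {blowup:.2f}")
        return i0 / (1.0 - k * i0 * t)

    if __name__ == "__main__":
        i0, k = 1.0, 0.5  # arbitrary illustrative parameters
        for t in [0.0, 1.0, 1.5, 1.9, 1.99]:
            print(f"t={t:5.2f}  exponential={exponential(i0, k, t):8.2f}  "
                  f"hyperbolic={hyperbolic(i0, k, t):10.2f}")

In this toy model the exponential curve is steep but finite at every time, while the hyperbolic curve hits a vertical asymptote at t = 2. This is one way to read Christiano's point: a takeoff can be continuous, and "slow" at first, yet still end in explosive growth.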

Some related concepts are also in use:

  • "Sharp left turn" is Nate Soares' term for an event where an AI system quickly learns to generalize its capabilities to many domains, much like how human intelligence turned out to be applicable to fields like physics and software engineering despite not being “trained for” them by natural selection. He argues the key problem is that the system's alignment would fail to generalize along with its capabilities. A "sharp left turn" need not be caused by recursive self-improvement.

  • "FOOM" is more or less a synonym of "hard takeoff", perhaps based on the sound you might imagine a substance making if it instantly expanded to a huge volume. The term is associated mostly with the Yudkowsky vs. Hanson FOOM debate: both eventually expect very fast change, but Yudkowsky argues for sudden and discontinuous change driven by local recursive self-improvement, while Hanson argues for a more gradual and spread-out process. Hard takeoff does not require recursive self-improvement, and Yudkowsky now thinks regular improvement of AI by humans may cause sufficiently big capability leaps to preempt recursive self-improvement. On the other hand, recursive self-improvement could be gradual (or at least start out that way): Paul Christiano thinks an intelligence explosion is "very likely" despite predicting a slow takeoff.


