What are the differences between AGI, transformative AI, and superintelligence?
These terms are all related attempts to define AI capability milestones1 — roughly, "the point at which artificial intelligence becomes truly intelligent." There’s a lot of variation in how different people use them — because we’re pretty confused about what AI will look like, it’s hard to find definitions that are natural.2 But the most standard meanings are something like:
- AGI stands for "artificial general intelligence" and refers to AI programs that aren't just skilled at a narrow task (like playing board games or driving cars)3 but can apply their intelligence to roughly as wide a range of domains as humans can. Some call systems like Gato AGI because they can solve many tasks with the same model. However, the term is usually reserved for systems whose general competence is at least human-level, meaning AGI is a potential future development.4 One proposed test for whether a system is an AGI is whether it can be taught to perform an arbitrary human job — at least one that can be done remotely and doesn’t have being human as a direct requirement.
- Transformative AI, loosely speaking, is any AI powerful enough to transform society.5 Holden Karnofsky defines it as AI that causes at least as big an impact as the Agricultural or Industrial Revolutions, which increased the rate of economic growth many times over. Ajeya Cotra's "Forecasting Transformative AI with biological anchors" describes a "virtual professional," i.e., a program that can do most remote jobs, as an example of a system that would have such an impact.
- Superintelligence is defined by Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." This is a much higher bar than AGI or Transformative AI, but it may not take much longer to reach, e.g., because of recursive self-improvement.
Other terms that are sometimes used:
- Advanced AI is any AI that's much more powerful than current AI. The term is sometimes used as a loose placeholder for the other concepts here.
- Human-level AI is sometimes defined as any AI that can solve all the cognitive problems a human can solve, and sometimes just means, more vaguely, an AI roughly as intelligent as an average human. Current AI has a very different profile of strengths and weaknesses than humans, and this is likely to remain true of future AI: before AI reaches human level at every task, it will be vastly superhuman at some important tasks while remaining weaker at others. For example, a human-level AI could be superhuman at programming while struggling to write a good novel.
- Strong AI was defined by John Searle as the philosophical thesis that computer programs can have "a mind in exactly the same sense human beings have minds," but the term is sometimes used outside this context as more or less interchangeable with "AGI" or "human-level AI."
- Seed AI is any AI with enough AI programming ability to set off a recursive self-improvement process, maybe taking it all the way to superintelligence. An AI might not have to start off as an AGI to have sudden and dangerous impacts in this way.
- Turing Test-passing AI is any AI smart enough to fool human judges into thinking it's human. The level of capability required depends on how intense the scrutiny is: current language models trained to imitate human text can already seem human to a casual observer, despite not having general human-level intelligence. On the other hand, imitating an intelligence can be harder than outperforming it (in the same way that it’s harder to walk exactly like a turtle than to walk faster than a turtle), so it's also possible for smarter-than-human AI to fail the Turing Test.
- APS-AI is a term introduced by Joe Carlsmith in his report on existential risk from power-seeking AI. APS stands for Advanced, Planning, and Strategically aware. "Advanced" means it's more powerful than humans at important tasks; "Planning" means it's an agent that pursues goals by using its world models; "Strategically aware" means it has good models of its strategic situation with respect to humans in the real world. Carlsmith argues that these properties together create the risk of AI takeover.
- PASTA is an acronym for "Process for Automating Scientific and Technological Advancement," introduced by Holden Karnofsky in a series of blog posts. His thesis is that any AI powerful enough to automate human R&D is sufficient for sudden transformative impacts, even if it doesn't qualify as AGI.
- TEDAI is an acronym for “Top-human-Expert-Dominating AI” that was introduced by Ryan Greenblatt and refers to “AIs which strictly dominate top human experts in virtually all cognitive tasks (i.e., doable via remote work) while being at least 2x faster and within a factor of 5 on cost.”
- Uncontrollable AI means an AI that can circumvent or counter any measures humans take to correct its decisions or restrict its influence. An uncontrollable AI doesn’t have to be an AGI or superintelligence. It could, for example, just have powerful hacking skills that make it practically impossible to shut it down or remove it from the internet. An AI could also become uncontrollable by becoming very skilled at manipulating humans.
- The t-AGI framework, proposed by Richard Ngo, measures the difficulty of a task by how long it would take a human to do it, and says an AI is a t-AGI if it can do (at all, in any amount of time) most tasks of difficulty t. For instance, an AI that can recognize objects in an image, answer trivia questions, etc., is a "1-second AGI," because it can do most tasks that would take a human one second, while an AI that can do things like develop new apps and review scientific papers is a "1-month AGI."
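The t-AGI framework above can be read as a simple classification rule. Here is a rough illustrative sketch; the benchmark tasks, time budgets, and "strict majority" threshold are hypothetical choices for the example, not part of Ngo's proposal:

```python
def t_agi_level(passed_tasks, all_tasks):
    """Return the largest time budget t (seconds of human effort) such
    that the AI completes most tasks that take a human time t or less."""
    level = 0
    for budget in sorted({t for t, _ in all_tasks}):
        # Tasks a human could do within this time budget
        eligible = [name for t, name in all_tasks if t <= budget]
        passed = sum(1 for name in eligible if name in passed_tasks)
        if passed > len(eligible) / 2:  # "most" read as a strict majority
            level = budget
    return level

# Hypothetical benchmark: (seconds a human would need, task name)
tasks = [
    (1, "recognize an object"), (1, "answer a trivia question"),
    (3600, "write a short essay"), (3600, "debug a small program"),
    (2_592_000, "develop a new app"),
]
print(t_agi_level({"recognize an object", "answer a trivia question"}, tasks))
# prints 1: the AI handles most 1-second tasks but few longer ones
```

On this toy benchmark, an AI that only passes the quick perception and trivia tasks counts as a 1-second AGI, while one that also passed the month-long tasks would count as a 1-month AGI.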
1. These definitions have also changed over time. ↩︎
2. For a dialogue illustrating the difficulty of coming up with good definitions in mathematics, see “Proofs and Refutations” by Imre Lakatos. ↩︎
3. AI that excels at specific tasks is sometimes called “Narrow AI.” ↩︎
4. The term AGI suffers from ambiguity, to the point where some people avoid using it. Still, it remains the most common term used to talk about the cluster of concepts discussed on this page. ↩︎
5. The term is unrelated to the transformer architecture. ↩︎