Wouldn't humans triumph over a rogue AI because there are more of us?
Humans likely won’t have an advantage in effective numbers, for three main reasons:
- The AI will likely process information much faster than humans can (transistors are millions of times faster than neurons).
- The AI might create copies of itself, or cooperate with copies created by other people. As of 2023, it takes much more compute to train an AI than to run it. If this remains true, then any computer that can be used to train a single AGI can be used to run several million instances of this AGI.
- Human numbers only help if there is cooperation and coordination. It’s much harder to get humans with diverse motives to act together than an AI which can duplicate itself and inspect the thoughts of its copies.
AI will think faster
As Critch puts it: "...plenty of risks arise just from the fact that humans are extremely slow. Transistors can fire about 10 million times faster than human brain cells, so it's possible we'll eventually have digital minds operating 10 million times faster than us, meaning from a decision-making perspective we'd look to them like stationary objects, like plants or rocks."
To give a sense of how a digital mind might perceive humans, Critch shows a video of people in a subway slowed to one-hundredth of their original speed.
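A back-of-the-envelope calculation makes the quoted speed ratio concrete. This is only illustrative arithmetic, assuming the 10-million-fold speedup from the quote above:

```python
# Back-of-the-envelope: how much subjective time a 10-million-fold speedup
# buys per unit of wall-clock time. Purely illustrative arithmetic.
SPEEDUP = 10_000_000  # transistor vs. neuron firing rate, from the quote
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def subjective_years(wall_clock_seconds: float, speedup: float = SPEEDUP) -> float:
    """Subjective years experienced by a mind running `speedup` times faster."""
    return wall_clock_seconds * speedup / SECONDS_PER_YEAR

print(subjective_years(1))          # one wall-clock second ≈ 4 subjective months
print(subjective_years(24 * 3600))  # one wall-clock day ≈ 27,000 subjective years
```

At this ratio, a human taking a few seconds to react hands the faster mind the equivalent of a year of thinking time.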
Science fiction has many examples of beings that think much faster than humans; these are almost always unrealistic in one way or another, but can still be useful for understanding what sufficiently fast thinking can do: Frame By Frame by qntm (short and funny), That Alien Message by Eliezer Yudkowsky (longer and excellent).
The AI will team up with its copies
Unlike humans, who can’t easily create identical copies of themselves[^1], an AGI could create perfect clones of its mind by copying the software to another location.
Holden Karnofsky estimates, based on Ajeya Cotra's biological anchors model for forecasting AI, that if a human-level AI is created through gradient descent, the computer it was trained on could run several hundred million copies of that AI — about 5-10% of Earth's working-age population. If it were to expand to other computers, its capabilities would scale with the number of computers running copies of it.
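The arithmetic behind this "train once, run many" argument is simple: a cluster that sustained some compute throughput during training can be repurposed to serve many concurrent copies, each needing far less compute. The numbers below are hypothetical placeholders, not estimates from Karnofsky or Cotra:

```python
# Toy version of the "train once, run many copies" arithmetic.
# All numbers are hypothetical, chosen only to show the shape of the calculation.

def concurrent_copies(cluster_flops_per_s: float, inference_flops_per_s: float) -> float:
    """Copies the training cluster could sustain if repurposed for inference."""
    return cluster_flops_per_s / inference_flops_per_s

# Hypothetical: a cluster sustaining 1e20 FLOP/s during training, running a
# model that needs 1e12 FLOP/s to operate one copy at human speed.
print(f"{concurrent_copies(1e20, 1e12):,.0f} copies")  # 100,000,000 copies
```

The conclusion is insensitive to the exact figures: as long as inference is many orders of magnitude cheaper per second than training throughput, the trained system comes with an enormous population of itself "for free."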
These copies might be capable of superrational cooperation. (As copies of each other they know that the other AIs will think all the same things they do, improving trust and allowing them to make plans and coordinate without communicating.) This would allow for a unified intelligence distributed across a very wide geographical space.
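A minimal sketch of why identical copies can coordinate without communicating: if every copy runs the same deterministic decision procedure on the same public information, each copy can predict exactly what the others will choose. This is a toy illustration, not a claim about how real AI systems would work:

```python
# Toy illustration: two identical deterministic "copies" make the same choice
# from shared parameters and public information, with zero message passing.
import hashlib

def decide(shared_weights: bytes, public_observation: str) -> str:
    """A deterministic 'policy': any copy running this code picks the same option."""
    digest = hashlib.sha256(shared_weights + public_observation.encode()).digest()
    options = ["site-A", "site-B", "site-C"]
    return options[digest[0] % len(options)]

weights = b"identical model parameters"  # hypothetical shared state
copy_1 = decide(weights, "where shall we meet?")
copy_2 = decide(weights, "where shall we meet?")  # a separate copy, no channel
assert copy_1 == copy_2  # perfect coordination without any communication
```

Humans can't do this: even people who agree on goals can't inspect each other's minds, so they need costly communication and trust-building to coordinate.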
Karnofsky describes what effect minds being implementable on cheap hardware might have in the essays "The Duplicator: Instant Cloning Would Make the World Economy Explode" and "Digital People Would Be An Even Bigger Deal".
[^1]: At least not until whole brain emulation becomes accessible.