Isn't capitalism the real unaligned superintelligence?
Science fiction author Ted Chiang and others have compared capitalism to a paperclip-maximizing superintelligence. The analogy goes: corporations can be thought of as superhumanly powerful agents with the goal of maximizing profit, sometimes at the expense of other things that humans value. If dangerous and amoral "superintelligences" like this already exist, why should we pay special attention to the prospect of unaligned superintelligent AI?
Capitalism is, in some ways, a good analogy for misaligned AI: both are non-human optimization processes that can produce bad outcomes nobody intended. However, comparisons suggesting the two are about equally dangerous (or that, because capitalism is dangerous, AI must not be a real danger) seriously underestimate AI's potential to be both 1) extremely powerful and 2) totally amoral, beyond any precedent in capitalism or other social systems running on collective human intelligence:
- Future AI systems could be superhumanly intelligent in ways corporations are not. Corporations can do some huge tasks that decompose into human-sized chunks. (And capitalism can do some huge tasks that decompose into corporation-sized chunks.) Future AI systems could not only do these things, but also reason much faster and in qualitatively more effective ways, and could be scaled up simply by adding more computing hardware.
- AI systems are not made of humans, who have mixed motivations, a limited ability to coordinate, and the potential to experience moral scruples or leak information. This makes AI systems better able to single-mindedly pursue horribly misaligned goals, without having to worry as much about things like loyalty, morale, or public opinion.
These factors could give an AI a large enough strategic advantage over human governments to overthrow them altogether, enabling it to cause harm on a greater scale than has ever been possible for corporations.