Why would a misaligned superintelligence kill everyone?

While AI is unlikely to be malevolent toward humanity, we might still die as a result of the AI's instrumental reasoning, with our deaths being either 1) an intentional goal or 2) a side effect of pursuing some other goal.

So the end result could be human extinction.

On the other hand, keeping humans around would take only a small fraction of a superintelligence's resources. Some have argued that an AI might be willing to pay that cost, either because it's only mostly misaligned and cares about us a little bit, or for various decision-theoretic reasons. Others disagree. And even if an AI did keep us around, many of us might still die, the survivors might not like their situation, and we'd lose out on most of the universe.



AISafety.info

We’re a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.

© AISafety.info, 2022

AISafety.info is an Ashgro Inc project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) Public Charity incorporated in Delaware.