Can't humans and AI just live in symbiosis?

Humans and a superintelligence can probably never form a symbiosis, because a superintelligence could find more efficient replacements for anything a human could do. For example, if an AI needs someone to operate its power stations, it can engineer its own robots or find better sources of energy. Once AIs reach a point where they can optimize their own environments better than any human can, it is difficult to think of a reason they would need humans to achieve their goals. However, AIs that are near or only slightly above human level (or superintelligent in only some narrow domains) might be able to form a symbiosis. This outcome seems unlikely on its own, since it requires improbable conditions (listed below) to hold, but these factors could possibly be steered toward higher likelihoods. The following are speculative criteria that might be needed for a symbiosis. Listing them does not make these events likely to actually happen, but if there were a world in which a symbiosis occurred, one or more of these conditions might have to be met.

  1. AI(s) are intelligence-constrained. Some form of constraint prevents the progress of artificial intelligence beyond a certain point. They do not have the ability to improve themselves or create more intelligent successors, so they rely on humans for domains they are not well suited for.

  2. AI(s) are constrained in physical interaction. Some form of constraint prevents AIs from having much physical interaction with the real world, so they depend on humans for physical tasks while they handle most of the cognitive labor (an inverse Industrial Revolution). This would likely have to be paired with some form of cognitive constraint, or else any sufficiently intelligent system would probably find a way around it.

  3. AI(s) do not want a superintelligence. AIs reach a point where they are intelligent enough to create more powerful AIs, but not intelligent enough to solve alignment. They prevent the further development of AI because a superintelligence might threaten their own goals, but they work with humans to handle the cognitive tasks they themselves are not suited for.

  4. AI(s) cannot solve power efficiency (potential s-risk). One or many AIs are beyond human intelligence, but lack the ability to develop forms of computation more energy-efficient than the human brain. This could result in a peaceful, cooperative situation where AIs and humans work “in a loop” to perform cognitive tasks. But it also carries a potential s-risk, as the AIs could hack human brains directly and use them for their own computation; it is hard to say what that would be like for the humans experiencing it. Either way, this would likely be only a short-lived symbiosis, since the AI(s) would eventually develop more efficient methods of computation.

  5. Very long timelines. If the road to superintelligence spans centuries, humans and AIs might “co-develop”[1] in strange ways we can’t currently foresee, where the line between them gets blurred over time.


  1. It’s difficult to say what exactly this would look like, in the same way it would have been difficult for Tesla to predict the building of Teslas, or for George Washington to predict the rise of social media. People can normally only make predictions about things within their current domains of understanding; the idea of a nuclear bomb, for instance, wasn’t conceived until we had a better understanding of physics. The criterion is some process that enables both AIs and humans to improve their intelligence beyond its current level. Augmentation, whole brain emulation, the creation of new intelligent lifeforms, computers built on a human-neural substrate, and human-computer interfaces are all possibilities. ↩︎