Why don't we just not build AGI if it's so dangerous?

In March 2023, the Future of Life Institute put out an open letter calling for a pause of "at least six months" on "giant AI experiments". Less than a week later, AI researcher Eliezer Yudkowsky, writing in Time magazine, argued that AI labs should "shut it all down."[1] Not building AGI is certainly a live idea on the table.

But this isn’t an easy solution, because avoiding the dangers of unaligned AGI requires that no one ever builds unaligned AGI. That means coordinating among everyone who could build such a system, which is difficult because some of those actors might not take the necessary precautions.

Most safety-conscious people agree that it would be unwise to purposefully create an artificial general intelligence now, before we have found a way to be certain it will act purely in our interests. One worrying possibility is that our existing, narrow AI systems require only minor tweaks, or simply more computing power, to achieve general intelligence. Furthermore, the pace of research in the field suggests that there's a lot of low-hanging fruit left to pick, and this research keeps producing better, more capable AI in a landscape of strong competitive pressure to build the most capable systems possible. Finally, each individual actor might worry that if they stop researching AGI, they’ll be overtaken by others who are more reckless. Some work to solve these kinds of coordination problems is being done in the field of AI governance.


  1. Some people have been even less diplomatic. ↩︎