Why don't we just not build AGI if it's so dangerous?
In March 2023, the Future of Life Institute put out an open letter calling for a pause of "at least six months" on "giant AI experiments". Less than a week later, AI researcher Eliezer Yudkowsky, writing in Time magazine, argued that AI labs should "shut it all down."[1] Not building AGI is certainly an idea on the table.
But this isn't a simple proposal to implement, because avoiding dangers from unaligned AGI requires that no one ever builds unaligned AGI. There are strong competitive pressures to produce more capable AI, and individual companies or labs might worry that if they stop working toward AGI, they'll be overtaken by others who are more willing to push forward. Additionally, researchers who make their living from AI work might be (understandably) reluctant to simply stop.
The field of AI governance includes work on solving these kinds of coordination problems.
1. Some people have been even less diplomatic.