Would a slowdown in AI capabilities development decrease existential risk?

People concerned about AI safety are deeply divided on whether it’s a good idea to slow AI capabilities development. Many AI alignment and strategy researchers think slowing down or pausing would reduce the risk that we create an unaligned superintelligence. For this reason and others, many signed an open letter in March 2023 advocating a pause on training models more powerful than GPT-4 (the state of the art at the time). Some, including Eliezer Yudkowsky, have argued for even stricter and longer-lasting moratoriums.

Arguments in favor of slowing down

Slowing down would give us more time to study current AI systems and to think. Current neural networks are mostly black boxes: although humans design the training process that turns huge amounts of data into a network’s inner workings, we have very little idea what those inner workings end up being. Interpretability research is lagging far behind capabilities research. We don’t currently understand how we’d align a superintelligent AI, but future efforts to solve the alignment problem may bear fruit, and every extra year adds to our chances. Nate Soares has emphasized that basic theoretical research may be hard to parallelize. We can try to throw a lot of researchers at the problem once we get close to AGI and have early systems to experiment with, but that may not be enough: fundamentally new ideas may require individual researchers to work out their thinking over years or decades.

Delaying AGI would give us more time to put current or near-term technologies to work on the problem. For instance, human intelligence enhancement and whole brain emulation could assist with alignment research. Extra time would also let us build better tools for collective thinking, truth-finding, and coordination. Finally, more time might allow us to discover new approaches that nobody has thought of yet.

Arguments against slowing down

There are also arguments that a slowdown might decrease safety. Many worry that if some actors slow down, that will just improve the relative position of those who don’t. A moratorium among American AI companies, for example, would not slow Chinese AI efforts as much, making it more likely that the first AGI would be developed in China; some worry that it would then be subject to lower safety standards, or be given values they consider dangerous. Those who favor slowing down respond that, as long as we don’t know how to give a superintelligent AI any set of values at all, building one is likely to result in human extinction regardless of who gets there first. They also argue that Chinese AI development is relatively strictly regulated and well behind the frontier, so any advanced AI built in China would most likely be adapted from American research anyway. And historically, people have sometimes falsely believed they were in arms races.

Another argument that slowing down is less safe is that it could make the curve of AI capability improvements less smooth. If we get capability improvements earlier, we’ll have more time to experiment with aligning them. If we delay them, we might instead build up a “hardware overhang”: a stock of computing power far greater than what current systems put to use, which could be exploited rapidly once development resumes. Restrictions on model scaling would probably buy us more time overall, but might shift capabilities research toward algorithmic improvement. If those restrictions were later lifted, the resulting burst of capabilities could create a “hard takeoff” that would be hard for human actors to adjust to. Paul Christiano believes that, while slowing down now is probably good, the more important issue is whether we’ll be able to slow down later, at more dangerous capability levels.

Advanced AI doesn’t just create existential risk; it could also help reduce other existential risks, and some argue that this means slowing down makes us less safe overall. Asteroids are often mentioned in this context, but civilization-ending impacts only come along once in many millions of years. Engineered pathogens, nuclear war, and economic collapse or permanent stagnation are stronger examples of non-AI existential risk: while civilization would probably recover from them, there’s a chance it wouldn’t, and even if it did, there would be immense suffering, and our values might be changed for the worse. Still, these risks aren’t very probable in any given year; a reduction in existential risk from AI could easily outweigh them if it amounts to single- or double-digit percentage points per year.

Finally, it’s worth noting that slowing down AI means delaying the major non-existential-scale benefits (and harms) that the technology would bring. AI-driven advances in areas like medicine could be of great humanitarian importance, but slowdown proponents argue they’re worth temporarily forgoing to reduce the risk of human extinction. Most slowdown proponents do want AGI to be developed eventually, once we have a better handle on how to do so safely.


