Can't we limit damage from AI systems in the same ways we limit damage from companies?

Suppose you ask a household robot to stick a knife in your dishwasher, but it malfunctions and sticks the knife in you instead. That's bad, but you (or your surviving family members) can sue the manufacturer and buy a different brand next time. Over time, manufacturers are incentivized to make robots that don't suffer from such failure modes.

One proponent of this view is the economist Robin Hanson, who has argued that we'll use mechanisms like competition and liability to limit the harms of AI systems, just as we use them to limit damage from companies.

This argument relies on AI systems following the existing legal order rather than bypassing, hijacking, or destroying it altogether. You can't switch to a competing product if the product has just dismantled the military and the courts, or “neutralized” all humans.

Hanson isn't concerned about an “AI coup” because he expects AI progress to be smooth, consisting of many minor, widely distributed advances. But if progress instead involves major advances arriving sporadically, agents at the frontier could gain a decisive advantage over those lagging behind. Much of the disagreement here comes down to how well we can extrapolate the dynamics of AI progress from historical technological transitions in fields like agriculture and industry.

Even if AI development progresses smoothly, there may be a tipping point at which AI systems become able to coordinate among themselves to take power from humans. Paul Christiano has described such a scenario in “What failure looks like”; the ensuing discussion has focused on whether this scenario is realistic in light of our past experience with agency failures.

For these reasons, AI systems may become far more powerful than corporations, and may do so very quickly. If they do, the methods we use to limit damage from corporations may no longer apply.
