What are the win conditions for ending AI risk?

To avoid AI risk, we must avoid the creation of an unaligned Artificial General Intelligence (AGI). This requires that all of the following conditions be met:

  • If an AGI is going to be built, we must figure out how to align it beforehand.

  • We must ensure that the team that builds the first AGI pays the alignment tax. For this to happen:

    • This tax should be as low as possible.

    • Race dynamics should be avoided. Windfall clauses could be used to lower the incentive to win the race and to build social acceptability.

  • To avoid unrest, some thought should be dedicated to who controls the AGI before it is built.

  • Building an aligned AGI does not stop another actor from building an unaligned one later. A pivotal act is one way to ensure this does not happen; global coordination could be another.

Another option is to coordinate to ensure that no AGI gets built at all, although it might be hard to ensure that such a moratorium lasts.