What is aisafety.info about?

This site is primarily about existential risk from future misaligned advanced AI. That means we focus on dangers that are:

  • on the level of human extinction, rather than smaller-scale harms

  • caused by highly advanced AI systems that can outsmart humans

  • caused by systems that are likely to be created in the future, rather than systems that exist today (e.g., ChatGPT)

  • caused by AI failing to do what its human designers and users intended, rather than by AI following instructions to do harmful things

Our goal is to inform people about the risk of human extinction due to AI, rather than to advocate for any particular policy to address it. The site's content is intended to reflect the views of AI safety researchers in general. When there is substantial disagreement within the field (which is often the case), we attempt to mention all major positions.