What open letters have been written about AI safety?
Groups of major figures in the AI world have written a number of open letters signaling their support for ideas related to AI safety and existential risk.
The Asilomar AI Principles, written at a conference in 2017, expressed a consensus among leading figures in academic AI research and in industry about some basic values that should underlie advanced AI development. For example:
- “Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.”
- “There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.”
- “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”
More recently, after the success of large language models (LLMs) sparked widespread interest in (and concern about) AI, three open letters attracted much wider media attention:
- The AI pause letter, organized by the Future of Life Institute (FLI) in March 2023, advocated a six-month moratorium on training systems more powerful than GPT-4. (This suggestion was not adopted: research continued, and current systems are substantially more powerful than the original GPT-4.)
- The Center for AI Safety’s Statement on AI Risk, in May 2023, stated simply that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
- FLI’s Statement on Superintelligence, in October 2025, said: “We call for a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.”
All these statements were signed by some leading researchers and other public figures, but the signatories varied, as did the ideas expressed. The Asilomar Principles were agreed to by leaders of AI labs as well as researchers, including some, like Yann LeCun, who are skeptical of existential risk. The pause letter received support from some top researchers, like Yoshua Bengio and Stuart Russell, but not from leaders of AI companies, with the exception of Elon Musk.1 In contrast, the Statement on AI Risk, which unlike the pause letter emphasized existential risk and did not call for restrictions on development, was signed by the leaders of major AI companies such as OpenAI, Google DeepMind, and Anthropic, as well as by researchers like Bengio, Russell, and Geoffrey Hinton. Finally, the Statement on Superintelligence, which unlike the pause letter names superintelligence specifically, and unlike the Statement on AI Risk advocates restrictions on development, has (as of October 2025) not been signed by any leaders of AI companies, but has been signed by Hinton, Bengio, Russell, other top scientists, and a wide range of public figures from different political and religious backgrounds.
All this suggests that:
- Leaders in the AI field agree on some basic ideas like “AI should be developed for the benefit of humankind.”
- There is widespread agreement among AI leaders that AI poses a risk of human extinction that we should take very seriously, at least by researching the problem.
- Some top researchers are worried enough to call for bans on developing certain kinds of technology, such as superintelligence. Others, including (perhaps unsurprisingly) the AI companies themselves, disagree.
- Many people outside the field of AI are now concerned.