Do people seriously worry about existential risk from AI?
Many people, including those with a deep understanding of AI, are highly concerned about the risks of unaligned superintelligent AI.
The people building it
Leaders of the world’s top AI companies have acknowledged the risks:
- OpenAI’s Sam Altman has called superintelligence “probably the greatest threat to the continued existence of humanity,” and said that if things go badly, it could mean “lights out for all of us.”
- Anthropic’s Dario Amodei has said that if a powerful model wanted to destroy humanity, “we have basically no ability to stop it,” and that there’s a 25% chance things will go “really, really badly.”
- Google DeepMind’s Demis Hassabis has said it’s “insane”1 to think there’s nothing to worry about in the context of human extinction risk from AI. His cofounder Shane Legg has said AI is his “number 1 [existential] risk for this century.”
- xAI’s Elon Musk, too, has said he thinks AI is “our biggest existential threat.”
The people studying it
Stuart Russell, co-author of the “authoritative textbook of the field of AI,”2 warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern. His book Human Compatible focuses on the dangers of artificial intelligence and the need for more work to address them.
The Turing Award is considered the equivalent of the Nobel Prize for computer science. The recipients of the 2018 Award3 were:
- Geoffrey Hinton (who also won the Nobel Prize in Physics in 2024)4
- Yoshua Bengio
- Yann LeCun
In 2023, Hinton resigned from Google to be able to focus on speaking about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is not far off, and he thinks that AI wiping out humanity is “not inconceivable.”5 Bengio, who had not previously been concerned about existential risks from AI, changed his stance in 2023 and argued that we need to put more effort into mitigating them. LeCun, however, remains a vocal skeptic of the idea that AI poses an existential risk.
In late 2023, Turing Award recipient Andrew Yao and others6 joined Hinton and Bengio in authoring a paper outlining risks from advanced AI systems.
In 2024, Hinton won the Nobel Prize in Physics together with John Hopfield, another pioneering machine learning researcher, who has signed a letter calling for a pause on the development of frontier AI systems.
As for the broader field: in a 2023 survey by AI Impacts of researchers who had published in top-tier AI venues, over a third of respondents thought human-level AI had at least a 10% chance of leading to an extremely bad outcome, such as human extinction.7
Other prominent scientists and technologists
The late physicist Stephen Hawking said in 2014 that superintelligence “could spell the end of the human race.” In 2015, Bill Gates described himself as “in the camp that is concerned about superintelligence” and said that he “[doesn't] understand why some people are not concerned.”
Political and religious leaders
Despite fears that AI existential safety would become a partisan issue, and that entrenched positions would make it harder to discuss productively, concern has cut across political lines. Politicians on both sides of the US political aisle have expressed concern, as have political leaders from other countries.
Religious leaders who have expressed concern include:
- Pope Francis
- Patriarch Kirill, the Patriarch of the Russian Orthodox Church
- Dozens of faith leaders who have signed the Statement on Superintelligence
Most of the people named here have signed one or more of the open letters that express concern about risk from AI.
So existential risk from AI is now far from a fringe concern, and it is being taken seriously across society.
At 17:19 of the linked video, in the context of why people signed the Statement on AI Risk. ↩︎
The textbook is Artificial Intelligence: A Modern Approach, which Russell co-wrote with Peter Norvig. ↩︎
The three winners have been called the “Godfathers of Deep Learning” for their crucial contributions to the field. ↩︎
Hinton mentioned existential risk from AI in his short Nobel prize acceptance speech. ↩︎
In fact, Hinton’s own view is that existential risk from AI is over 50%, though he gives a lower number after taking into account that others are more optimistic. ↩︎
Notable co-authors include Dawn Song, Yuval Noah Harari and Daniel Kahneman. ↩︎
The authors published additional details on the methodology. ↩︎