Do people seriously worry about existential risk from AI?
Many people with a deep understanding of AI are highly concerned about the risks of unaligned superintelligent AI.
In 2023, leaders from the world's top AI labs, along with some of the most prominent academic AI researchers, signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signatories included the founders of major AGI companies: Sam Altman,[^1] Dario Amodei,[^2] Shane Legg,[^3] and Demis Hassabis.[^4]
Stuart Russell, AI expert and co-author of the "authoritative textbook of the field of AI",[^5] warns of "species-ending problems" and wants his field to pivot toward making superintelligence-related risks a central concern. His book *Human Compatible* focuses on the dangers of artificial intelligence and the need for more work to address them.
The Turing Award is considered the equivalent of the Nobel Prize for AI. The recipients of the 2018 award[^6] were:
- Geoffrey Hinton
- Yoshua Bengio
- Yann LeCun
In 2023, Hinton resigned from Google so that he could focus on speaking about the dangers of advancing AI capabilities. He worries that smarter-than-human intelligence is not far off, and he thinks that AI wiping out humanity is "not inconceivable."[^7] Bengio, who was not previously concerned about existential risks from AI, changed his stance in 2023 and argued that we need to put more effort into mitigating them. LeCun, however, remains a vocal skeptic of AI posing an existential risk.
In late 2023, Turing Award recipient Andrew Yao and others[^8] joined Hinton and Bengio in authoring a paper outlining risks from advanced AI systems.
In 2024, Hinton won the Nobel Prize in Physics together with John Hopfield, another pioneering machine learning researcher, who has signed a letter calling for a pause on the development of frontier AI systems.
Many other science and technology leaders have worried about superintelligence for years. The late astrophysicist Stephen Hawking said in 2014 that superintelligence "could spell the end of the human race." In 2019, Bill Gates described himself as "in the camp that is concerned about superintelligence" and said that he "[doesn't] understand why some people are not concerned." Russell, Hinton, Bengio, and Gates have all signed the Statement on AI Risk.
[^1]: Altman had previously claimed that if things go poorly, it could be "lights out for all of us".
[^2]: Amodei has spoken publicly about the existential risks from AI.
[^3]: Legg has stated that he believes superintelligent AI will be "something approaching absolute power" and "the number one risk for this century".
[^4]: Hassabis has also talked about the risks.
[^5]: The textbook is *Artificial Intelligence: A Modern Approach*, co-authored with Peter Norvig.
[^6]: The three winners have been called the "Godfathers of Deep Learning" for their crucial contributions to the field.
[^7]: In fact, Hinton's own view is that the probability of existential catastrophe from AI is over 50%, though he gives a lower number after taking into account that others are more optimistic.
[^8]: Notable co-authors include Dawn Song, Yuval Noah Harari, and Daniel Kahneman.