12: Experts are highly concerned

The case we sketch out here is non-technical. So you may wonder: have experts examined the technical details and found these concerns to be legitimate? If the answer were “no”, you might dismiss the whole topic as science fiction rather than think about it further. But it turns out the answer is “yes”.

The people building AI themselves say it’s dangerous:

  • An open letter, published in May 2023, said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”1 It was signed by, among others, the founders of all the main companies building cutting-edge (“frontier”) AI.

  • Sam Altman, CEO of OpenAI, wrote in 2015 that “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity”2 and said in 2023 that “The bad case … is, like, lights out for all of us”.3 (However, Altman has said less about the potential dangers recently, and it’s unclear how worried he is now.)

  • Dario Amodei, CEO of Anthropic, has said that he thinks the “chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 to 25%”.4

  • Google DeepMind co-founders Demis Hassabis and Shane Legg have both emphasized that existential risks from misaligned AI are serious.

  • Ilya Sutskever, before leaving OpenAI to found a company called “Safe Superintelligence”, co-led OpenAI’s “Superalignment” team with Jan Leike. In the blog post that announced it, they wrote that “the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction”.5

  • Elon Musk, founder and CEO of xAI, when asked “How real is the prospect of killer robots annihilating humanity?”, answered that it’s “20% likely, maybe 10%”, on a time frame of “5 to 10 years”.6

You might argue that they’re just saying these things to hype up their technology. That’s an extraordinary claim! Companies rarely find it useful to say their own products can cause huge harm. The more obvious explanation is that they believe it.

Still, it makes sense to listen to sources with less of a conflict of interest. Leading academic AI researchers, who are not selling anything, agree the risks are major:

  • Geoffrey Hinton has won a Nobel Prize in Physics for his AI work as well as a Turing Award. He has said, “I actually think the [existential] risk is more than 50%”, but adjusts that down to “10 to 20%” because others disagree.7 He quit Google in 2023 to be able to speak freely about existential risk from AI.

  • Yoshua Bengio, another Turing Award winner who is sometimes called a “Godfather of AI” along with Hinton and Yann LeCun, has given a “20% probability that it turns catastrophic”.8 (Hinton, Bengio, and Sutskever are the three most cited researchers in AI.)

  • Stuart Russell, co-author of the most widely used AI textbook, has warned about these risks for many years. His book Human Compatible discusses the alignment problem. He has said: “The stakes couldn’t be higher: if we don’t control our own civilisation, we have no say in whether we continue to exist.”9

Many others disagree: leading AI scientists such as LeCun and Andrew Ng have dismissed existential risk from AI.1011 But worrying about it is now a very mainstream position.

Researchers at academic, nonprofit, and corporate groups who specialize in AI alignment and existential risk from AI are also extremely worried. In one survey, their median estimate of the probability of existential disaster from misaligned AI was 30%.12

This strong concern in the research community has not led to coherent measures to mitigate these risks. The next article will talk about why that is, and what the strategic landscape looks like.


  1. https://www.safe.ai/work/statement-on-ai-risk ↩︎

  2. https://blog.samaltman.com/machine-intelligence-part-1 ↩︎

  3. https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1 ↩︎

  4. https://www.indy100.com/science-tech/ai-extinction-chance-humans ↩︎

  5. https://openai.com/index/introducing-superalignment/ ↩︎

  6. https://finance.yahoo.com/news/elon-musk-tells-ted-cruz-152045158.html ↩︎

  7. https://xrisknews.com/geoffrey-hintons-pdoom-is-over-50/ ↩︎

  8. https://www.abc.net.au/news/2023-07-15/whats-your-pdoom-ai-researchers-worry-catastrophe/102591340 ↩︎

  9. https://www.the-independent.com/tech/ai-chatgpt-danger-warning-stuart-russell-b2338210.html ↩︎

  10. https://techcrunch.com/2024/10/12/metas-yann-lecun-says-worries-about-a-i-s-existential-threat-are-complete-b-s/ ↩︎

  11. https://siliconangle.com/2023/10/31/google-brain-founder-andrew-ng-says-threat-ai-causing-human-extinction-overblown/ ↩︎

  12. https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results ↩︎


