Survey: 36% of Researchers Fear ‘Nuclear Level’ AI Catastrophe

Photo: Al Jazeera

The Facts

  • In a survey conducted by Stanford University, 36% of researchers said they believe Artificial Intelligence (AI) could lead to a “nuclear-level catastrophe,” underscoring concerns in the sector about the risks posed by rapidly advancing technology.

  • Stanford’s 2023 Artificial Intelligence Index Report, which was conducted by researchers from three different universities, asked participants to agree or disagree with the statement, "It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."


The Spin

Narrative A

Although just 41% of researchers believe AI should be regulated, the rest of this study's results make clear that something has to be done. If one-third of researchers are warning that AI could lead to a major catastrophe, and nearly three-quarters believe AI could soon bring revolutionary societal change, then it's time to pause and figure out how to avoid the dangerous outcomes AI could produce.

Narrative B

AI is the future, and pausing or trying to set back its development won't solve any problems. AI offers a revolutionary means to address some of the world's biggest challenges, including inequity and even climate change, and it must be kept on its current track. Rather than trying to rein it in, the tricky areas of the technology simply need to be identified so work can be done to improve them while AI continues to develop at its current pace.

Nerd narrative

There is a 50% chance the first weakly general AI system will be devised, tested, and publicly announced by March 2026, according to the Metaculus prediction community.
