Survey: 36% of Researchers Fear ‘Nuclear Level’ AI Catastrophe

    Last updated Apr 14, 2023
    Image credit: Al Jazeera


    • In a survey conducted by Stanford University, 36% of researchers said they believe Artificial Intelligence (AI) could lead to a “nuclear-level catastrophe,” underscoring concerns in the sector about the risks posed by rapidly advancing technology.
    • The survey, cited in Stanford’s 2023 Artificial Intelligence Index Report and conducted by researchers from three universities, asked participants to agree or disagree with the statement, “It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”
    • In addition, the report found that 73% of researchers in natural language processing — the branch of AI concerned with enabling computers to process human language — believe the technology might soon spark “revolutionary societal change.” Although an overwhelming majority of researchers believe AI’s future net impact will be positive, concerns remain that its capabilities will develop faster than humans can manage them.
    • According to the nonprofit AIAAIC ["AI, Algorithmic and Automation Incidents and Controversies"] database, controversial incidents involving AI have increased 26-fold since 2012, including 2022 deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender, and US prisons using call-monitoring technology on inmates.
    • AI is developing quickly: according to 57% of researchers surveyed, the field is advancing from generative AI toward Artificial General Intelligence (AGI). AGI refers to an AI system that can match or even outperform the human brain's capabilities; there is little consensus on if and when AGI might be achieved.
    • Last month, SpaceX and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among more than one thousand signatories of an open letter from the Future of Life Institute calling for a six-month pause on training AI systems beyond the level of OpenAI’s chatbot GPT-4. The letter said, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”


    Narrative A

    Though only 41% of researchers believe AI should be regulated, the rest of this study’s results make clear that something has to be done. If one-third of researchers warn that AI could lead to a major catastrophe, and nearly three-quarters believe it could soon bring revolutionary societal change, then it’s time to take a pause and figure out how to avoid the dangerous outcomes AI could produce.

    Narrative B

    AI is the future, and pausing or setting back its development won't solve any problems. AI offers a revolutionary means to address some of the world's biggest challenges, including inequity and even climate change, and it must be kept on its current track. Rather than trying to rein it in, the tricky areas of the technology simply need to be identified so that work can be done to improve them while AI continues to develop at its current pace.

    Nerd narrative

    There is a 50% chance the first weakly general AI system will be devised, tested, and publicly announced by March 2026, according to the Metaculus prediction community.
