WEF: AI Misinformation, Disinformation Greatest Short-Term Threat

    Photo: Harold Cunningham/Getty Images News via Getty Images

    The Facts

    • In its 2024 Global Risks Report, which surveyed over 1,400 experts and leaders, the World Economic Forum (WEF) says the spread of artificial intelligence (AI)-related misinformation and disinformation is the biggest threat over the next two years, particularly given upcoming elections in 75 countries, including the US, South Africa, Mexico, and India.

    • The report argues that "easy-to-use" AI models have "already enabled" disinformation, including threats ranging from "sophisticated voice cloning to counterfeit websites." While other global threats, such as climate change, ranked higher in the ten-year outlook, AI-related "falsified information" ranked above climate change and societal polarization in the short term.


    The Spin

    Narrative A

    The world is heading down a very dark short- and long-term path. We all know that the climate catastrophes awaiting us years from now are something to watch out for, but in the meantime — if we want to have the tools to tackle such long-term threats — the international community must first deter the threats posed by AI and its ability to blur fact and fiction. Nefarious actors from governments, private organizations, and criminal groups will all seek to sway public opinion in the upcoming election years, which calls for immediate, concrete defense mechanisms.

    Narrative B

    While the risks of AI shouldn't be ignored, this charged report — which echoes the fearmongering of many leading experts — overblows the technology's role in the spread of false information, which was an issue long before AI came to the forefront. Rather than pushing AI doomerism, the focus should be on tackling the core issue that fuels mis- and disinformation: the erosion of public trust in foundational institutions that, once synonymous with impartiality, no longer are.

    Nerd Narrative

    There is a 68% chance that a major cyberattack, virus, worm, or similar threat that utilizes large language models (LLMs) in some significant way will occur before Jan. 1, 2025, according to the Metaculus community prediction.
