Report: OpenAI's GPT-4 Generates More Misinformation Than Predecessor

    Photo: Axios

    The Facts

    • A new report released by NewsGuard says OpenAI's newest generative artificial intelligence (AI) tool, GPT-4, when prompted, is more likely to spread misinformation than its predecessor, GPT-3.5.

    • Though OpenAI said the updated technology was 40% more likely to produce factual responses than GPT-3.5 in internal testing, NewsGuard claims it generated prominent false narratives more frequently and persuasively.


    The Spin

    Left narrative

    While appearing to promote peer-reviewed research, ChatGPT has proven susceptible to manipulation and has made false claims on issues such as gun safety for children and testosterone levels. If the algorithm is already prone to manipulation from the web, one can only imagine the danger posed by conspiracy theorists who could potentially game the system to promote their worldview in the guise of objective fact.

    Right narrative

    When the mainstream media claims ChatGPT is promoting "misinformation," it purposefully leaves out which way the chatbot leans politically. While its so-called disclosure statement says it's politically neutral, GPT quietly embeds liberal ideology into its algorithm, so people think left-wing talking points are the truth while right-wing beliefs are "dangerous fake news."

    Nerd narrative

    There’s a 97.4% chance that OpenAI’s ChatGPT will be available for free public use by May 2024, according to the Metaculus prediction community.


