FTC Investigating ChatGPT's OpenAI for Possible Consumer Harm

    Last updated Jul 13, 2023


    • The US Federal Trade Commission (FTC) has opened an investigation into OpenAI, probing whether the maker of ChatGPT has harmed consumers by putting reputations and data at risk.
    • In a letter sent to OpenAI, first reported by the Washington Post and verified by other major outlets, the FTC stated that this probe will focus on whether the company has "engaged in unfair or deceptive" practices related to data security or that resulted in harm to consumers.
    • The civil subpoena made public on Thursday asks OpenAI to detail steps it has taken to address or mitigate risks that its large language model (LLM) products could generate false, misleading, or disparaging statements about real individuals.
    • It further requests that the company list the third parties that have access to its models and explain both how it obtains the information used to train its LLMs and how it retains and uses consumer information.
    • These demands pose the most serious regulatory threat to date to OpenAI's business in the US, as the FTC can levy fines or even place a business under a consent decree [i.e. an agreement or settlement that resolves a dispute between two parties without an admission of guilt] if the company is found to have violated consumer protection laws.
    • OpenAI has already come under regulatory pressure abroad, with ChatGPT banned in Italy from March to April over claims that the company was unlawfully collecting personal data from users and had failed to prevent minors from accessing illicit material.


    Narrative A

    Though large language models are widely known for their imperfections and tendency to hallucinate, tech companies have decided that the appeal of such products outweighs the potential downsides of inaccuracy and misinformation. Because this choice can harm users — bots such as ChatGPT often produce plausible but incorrect information — governments must step in and regulate these systems.

    Narrative B

    OpenAI has already acknowledged that generative artificial intelligence can produce untrue content, transparently and responsibly warning users against blindly trusting ChatGPT and urging them to verify the sources the large language model provides. Meanwhile, its researchers are working to improve the technology's mathematical problem-solving and exploring the impact of process supervision.

    Nerd narrative

    There's a 65% chance that a member of the United States Congress will introduce legislation limiting the use of LLMs before January 1, 2024, according to the Metaculus prediction community.
