Google Warns Its Staff About Chatbots

    Last updated Jun 15, 2023
    Image credit: Wikimedia Commons


    • As it begins marketing its own chatbot, Bard, around the world, Google parent company Alphabet Inc. is warning its employees about how to safely use artificial intelligence (AI) chatbots, including Bard, according to a Reuters report that cites four anonymous sources.
    • The company reportedly urged its software engineers to avoid directly using computer code that chatbots generate, since AI models can reproduce the data they receive during training, risking a potential leak. Such a leak recently occurred when a Samsung engineer uploaded internal code to ChatGPT.
    • Such concerns have become prevalent throughout the corporate world, with many global companies implementing guardrails on AI chatbots. A survey of 12K US professionals found that 43% were using ChatGPT or other AI tools, often without telling their boss.
    • Apple last month barred its employees from using both ChatGPT and GitHub's Copilot, while Amazon banned staffers from sharing any code or confidential information with ChatGPT. Banks, including JPMorgan Chase, Bank of America, and Citigroup, have issued similar restrictions.
    • As Google looks to advance its global rollout of Bard, the chatbot is currently not available in the EU, as the Irish Data Protection Commission, Google's lead privacy regulator in the bloc, recently ruled that the company hadn't adequately detailed measures that would protect citizens' privacy.
    • Some speculate that Google's employee policy, which the company also attributes to Bard's tendency to make undesired code suggestions, is likely an attempt to protect its reputation as it competes against the Microsoft-backed ChatGPT for potentially billions of dollars in investment and advertising revenue.


    Pro-establishment narrative

    Not only has Google been transparent about the risks of this emerging technology, but it has also been at the forefront of implementing safeguards against risks such as data theft, data poisoning, and malicious chatbot prompt injections by bad actors. The company certainly aims to profit from AI, but it's also spending loads of cash to ensure public safety and privacy.

    Establishment-critical narrative

    Tech executives leading the AI discussion believe themselves to be prophets with the power to describe the end times while also offering the guide to salvation. While they issue rhetoric on the potential "existential threat" AI poses, they continue to push for the universalization of the technology, claiming it will bring us to a technocratic Garden of Eden. At this point in time, we shouldn't trust their chatbots or their self-proclaimed wisdom.

    Nerd narrative

    There's a 50% chance that AI will be given legal rights or be protected from abuse anywhere in the US before 2035, according to the Metaculus prediction community.
