Google Warns Its Staff About Chatbots


The Facts

  • As it begins marketing its own chatbot, Bard, around the world, Google parent company Alphabet Inc. is cautioning its employees on the safe use of artificial intelligence (AI) chatbots, including Bard itself, according to a Reuters report citing four anonymous sources.

  • The company reportedly urged its software engineers to avoid directly using computer code generated by chatbots. It also warned staff against entering confidential material, since AI models can reproduce data they absorb during training, risking a leak. Such an incident reportedly occurred recently when a Samsung engineer uploaded sensitive code to ChatGPT.


The Spin

Pro-establishment narrative

Not only has Google been transparent about the risks of this emerging technology, but it has also been at the forefront of implementing safeguards against risks such as data theft, data poisoning, and malicious chatbot prompt injections by bad actors. The company certainly aims to profit from AI, but it's also spending loads of cash to ensure public safety and privacy.

Establishment-critical narrative

Tech executives leading the AI discussion believe themselves to be prophets with the power to describe the end times while also offering the guide to salvation. While they issue rhetoric on the potential "existential threat" AI poses, they continue to push for the universalization of the technology, claiming it will bring us to a technocratic Garden of Eden. At this point in time, we shouldn't trust their chatbots or their self-proclaimed wisdom.

Nerd narrative

There's a 50% chance that AI will be given legal rights or be protected from abuse anywhere in the US before 2035, according to the Metaculus prediction community.


