Google Engineer Suspended After Claiming AI Chatbot Is 'Sentient'

The Facts

  • A Google software engineer who went public with claims that the company's LaMDA (Language Model for Dialogue Applications) chatbot system is sentient has been suspended.

  • Blake Lemoine, who works for Google's Responsible AI team, began chatting with LaMDA in the fall of 2021 to test the artificial intelligence system for instances of discriminatory or hate speech.

The Spin

Pro-establishment narrative

Lemoine's claims were carefully reviewed by Google's own AI experts and found to lack evidence. Google is taking a restrained and careful approach to AI innovation, much like the many other organizations developing similar language models, with a focus on valid concerns grounded in fact and fairness.

Establishment-critical narrative

Large language model-based AI programs are already in widespread use, and there are many reasons to be mindful of their potential downsides, starting with widespread job losses. We should not simply trust these innovators' assurances that their code will "not be evil."

Cynical narrative

This is nothing more than a distraction. Scientists and ethicists are forced to rebut the nonsensical claim that a data-centric computational model is 'sentient,' while the companies driving AI innovation continue to expand metastatically, laying claim to ever-larger portions of the decision-making and core infrastructure that guide our social and political institutions.
