Lemoine's claims were carefully reviewed by Google's own AI experts and found to lack evidence. Google is taking a restrained and careful approach to AI innovation - as are many other organizations developing similar language models - focused on valid concerns grounded in fact and fairness.
Large language model-based AI programs are already in widespread use, and there are many reasons to be mindful of their potential downsides, starting with widespread loss of employment. We should not trust these innovators' assurances that their code will "not be evil."
This is nothing more than a distraction. Scientists and ethicists are forced to rebut the nonsensical claim that a data-centric computational model is "sentient" while the companies driving AI innovation continue to expand metastatically, laying claim to ever-larger portions of the decision-making and core infrastructure that guide our social and political institutions.