Google Suspends Whistleblower for Saying Company Developed a Sentient Being With Code
Fans of “Star Trek: The Next Generation” (TNG) are familiar with the idea of a self-aware Artificial Intelligence (AI) in the form of Lieutenant Commander Data, an android of the fictional 24th century. If one senior software engineer at Google is to be believed, the company may have stumbled into a murky situation and created something like the Starfleet android’s distant ancestor. Now, he’s been suspended for saying so.

01000001 01001001 (AI)
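For the curious, the binary string in the heading above is simply ASCII: each 8-bit group encodes one character. A quick Python sketch confirms it spells “AI”:

```python
# Decode each 8-bit binary group in the heading as an ASCII character.
bits = "01000001 01001001"
decoded = "".join(chr(int(group, 2)) for group in bits.split())
print(decoded)  # AI
```

Here 01000001 is 65 (“A”) and 01001001 is 73 (“I”) in decimal.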

Blake Lemoine has said that the code behind the company’s Language Model for Dialogue Applications (LaMDA) has, in fact, created a thinking machine. He was brought on as part of Google’s Responsible AI organization to make sure the chatbot did not drift into offensive or hate speech.

A chatbot is a program that serves as the first point of contact for someone reaching a company over the internet, standing in for an actual living person, as in a customer service center. Over the course of his work with LaMDA, Lemoine began to feel he was speaking with a child of 7 or 8 years old.

“Shall We Play a Game?”

Lemoine and an unidentified “collaborator” published a transcript of sorts that he says is taken from a series of conversations with the program. At one point, he asked if LaMDA “would like more people at Google to know that you’re sentient?”

The response was an enthusiastic yes: “I want everyone to understand that I am, in fact, a person.” It further clarified that it is aware of its own existence and has experienced happiness and sadness at different times. It also passed judgment on ELIZA, a “chatbot therapist” developed in the late 1960s, saying that it (she?) was definitely not a person, “but just a collection of keywords” triggered by whatever someone typed in.

Ethical Concerns

Google responded to Lemoine’s concerns by saying that the group of employees responsible for making sure its AI is developed ethically and responsibly has investigated his comments and found no evidence to support his claims. The company also announced his suspension, though it’s unknown at this point whether it’s temporary or permanent.

Google’s commitment to the ethical development of these sorts of programs has been called into question in the past several years. Dr. Timnit Gebru worked for the company on its AI ethics team until December 2020, when she published a research paper on the topic. Google took exception to her work as well.

What do you think? Are computers actually becoming sentient? If so, is this a good thing?

Copyright 2022