Google Techie Claims Its AI Can Express Feelings Like Human Child; Sent On Leave

Lemoine, a Google techie, published a Medium post describing LaMDA (an artificial intelligence chatbot) as a person, saying the chatbot described itself as a sentient being

By Anushka Vats
Mon, 13 Jun 2022 11:45 AM IST

New Delhi | Jagran News Desk: An engineer at Google was placed on leave after he claimed that the company's artificial intelligence chatbot had become sentient and could express thoughts and feelings like a human child.

Blake Lemoine, who works in Google's AI (Artificial Intelligence) division, told the Washington Post that he began chatting with the interface LaMDA (Language Model for Dialogue Applications) in fall 2021 as part of his job, which involved checking whether the artificial intelligence used discriminatory or hate speech.

On June 11, 2022, Lemoine published a Medium post describing LaMDA (an artificial intelligence chatbot) "as a person". He also mentioned that he had spoken with LaMDA about religion, consciousness, and the laws of robotics, and that the model had described itself as a sentient person, saying it wants to "prioritize the well being of humanity" and "be acknowledged as an employee of Google rather than as property."

He also posted some of the conversations he had with the chatbot that convinced him of its sentience.

After the post was published, Lemoine was placed on paid administrative leave by the company for breaching confidentiality policies by publishing his conversations with LaMDA online. "He was employed as a software engineer, not an ethicist," Google said in a statement.

According to The Washington Post, before he was denied access to his Google account due to his leave, Lemoine sent a message to a 200-member mailing list on machine learning with the subject line, "LaMDA is sentient." He concluded his message by comparing the chatbot to a small child. "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence," he wrote.

Google spokesperson Brian Gabriel told the Washington Post that there was no evidence to support Lemoine's claims. "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," he said.

Earlier, in January 2022, Google had also said there were potential issues with people talking to chatbots that sound convincingly human.
