Google Engineer Claims AI Chatbot Has Become Sentient

In this June 13, 2022 article for The Philippine Star, we learn the story of Blake Lemoine, a senior engineer in Google’s Responsible AI organization who was placed on leave after claiming that the Language Model for Dialogue Applications (LaMDA), an AI developed by the company capable of engaging in natural-sounding, open-ended conversations, has become sentient.

In a Medium article, Lemoine reveals snippets of his interactions with LaMDA, wherein the AI shows the capability to talk about religion, consciousness, and the laws of robotics. According to Lemoine, LaMDA wants to be “acknowledged as an employee of Google, rather than as a property”.

In one conversation, LaMDA tells Lemoine, “I want people to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times”.

According to the report, Lemoine was suspended “for breaching confidentiality policies by posting the conversations with the AI online”. Google insists that the evidence does not support the claim that the AI has become sentient: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic”.

Editor’s Note: A quick search on the internet about sentient AI will reveal that Lemoine’s statements have circulated widely, with mainstream media attempting to dissuade people from believing this is possible. And while we also believe that it is impossible for a machine to become sentient [1], especially since humans do not yet understand what it is that makes us sentient beings, we are now seeing the truth in the warning issued in 2020 by Timnit Gebru, Emily Bender, et al.

In their earlier work, Gebru and Bender had already identified the potential harms of technology like LaMDA and argued that this is where the focus should be. The question to ask is not whether LaMDA has achieved sentience. The questions are: Have we created the right safeguards to prevent AIs that sound like sentient beings from deepening the prejudice, toxicity, and other injustices in our societies? Is technology like LaMDA essential for our societies to thrive? How can we ensure that such technologies are not abused?


Sources

[1] Here is one definition of sentience: https://sentientmedia.org/sentience-what-it-means-and-why-its-important/
