In this June 15, 2022 article for The Washington Post, Christine Emba responds to the recent claim that an AI tool developed by Google has become sentient [see Google Engineer Claims AI Chatbot Has Become Sentient to learn more].
Emba argues that the question of sentience is difficult to settle, especially since philosophers and scientists have not yet been able to define what consciousness really is. Even so, Emba says that a sentient AI would raise many questions that humanity must answer, including:
- What responsibilities would we have to an ensouled AI, were one to exist?
- What might a conscious AI do to us?
Emba criticizes our habit of creating new technologies without examining the possible implications they may have for our lives. She says, “It’s unlikely we could have seen all this coming [referring to the negative influences of social media and the smartphone]. But it also seems as though the people building the tools never even tried to look. Many of the ensuing crises have stemmed from a distinct lack of self-scrutiny in our relationship with technology — our skill at creation and rush to adoption having outstripped our consideration of what happens next”.
Editor’s Note: This article brings to mind some articles we have published in previous years. Here are a few of them:
- IT IS TIME TO REVIEW THE LEGAL DEFINITION OF BEING HUMAN
- RACHEL THOMAS: THIS IS WHAT SCARES ME ABOUT AI
- THE REAL PROBLEM WITH AI: ITS APPLICATION
- THE OUTCOME OF AI’S INABILITY TO UNDERSTAND MEANING
- THE DEMISE OF THE ETHICAL GOOGLE
We hope you will revisit these articles as we remind ourselves of the possible dangers of a “sentient” AI, and why a change in worldview is now necessary if humans are to overcome the challenges of future technologies.