Creating a superintelligent machine is a top priority for transhumanists, and they know that AI cannot be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes clear why this prospect does not greatly concern them. This article, on the other hand, strengthens our case for applying the Precautionary Principle to the development of general artificial intelligence.
Read Original Article