The creation of a superintelligent machine is a top priority for transhumanists, who also recognize that AI cannot effectively be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes it clear why this is not a serious concern for them. At the same time, this article strengthens our position calling for the application of the Precautionary Principle to the development of generalized intelligence.