The creation of a superintelligent machine is a top priority for transhumanists, and they know that AI cannot be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes it clear why this is of little concern to them. At the same time, this article strengthens our position that the Precautionary Principle should be applied to the development of generalized intelligence.