Creating a superintelligent machine is a top priority for transhumanists, and they know that AI cannot be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes it clear why this prospect does not greatly concern them. By contrast, this article strengthens our position that the Precautionary Principle must be applied to the development of artificial general intelligence.