The problem with regulating AI

AI cannot be regulated, and transhumanists know this. But they’re building an AGI anyway.

The creation of a superintelligent machine is a top priority for transhumanists, and they know that AI cannot be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes it clear why this is not a major concern for them. At the same time, this article strengthens our position calling for the application of the Precautionary Principle to the development of general intelligence.
