Understanding the AI alignment problem

As DeepMind goes deeper into the creation of an artificial general intelligence (AGI), it must grapple with the alignment problem. Will Iason Gabriel’s new work finally resolve it?

In this article, Iason Gabriel of DeepMind explores three propositions that can help us gain a deeper understanding of the alignment problem.

Gabriel asserts that the technical aspect of the alignment problem (which focuses on how to encode values or principles) and the normative aspect (which focuses on which values or principles to encode) are interrelated. He also argues that the real issue at hand is knowing what the goals of alignment are, and finding principles that treat people fairly and command widespread support.
