The perils and promise of AI conscientiousness

SafeLife aims to address the unintended consequences of AI systems, but Matt Beane argues that it could lead to even more complex unintended consequences of its own.

We know that AI creates unintended consequences. To address this concern, the Partnership on AI has created SafeLife, software that trains a learning model to avoid the unintended consequences of its own actions. But in an op-ed for Wired, author Matt Beane argues that this innovation might not be for the best.

Beane says that an unintended-consequences module built into next-generation AI could itself be the very thing that produces unintended consequences, and it could make AI less reliable. Perhaps the worst impact of a system like SafeLife is that it can give us a false sense of security, so we won't pay attention to the unintended consequences that play out over the long term.
