We know that AI creates unintended consequences. To address this concern, the Partnership on AI has created SafeLife, a software environment whose learning model helps an AI avoid the unintended consequences of its own actions. But author Matt Beane argues in this op-ed for Wired that this innovation might not be for the best.
Beane says that an unintended-consequences module built into next-generation AI could be the very thing that produces unintended consequences, and it could lessen the reliability of AI. Perhaps the worst impact of a system like SafeLife is that it can give us a false sense of security, so we won't pay attention to unintended consequences that play out over the long term.
Read Original Article
![](https://fully-human.org/wp-content/uploads/2019/06/fantasy-2861107_640.jpg)
You may also like
- Over-Reliance on AI Could Raise Risk of Developing Dementia
- There will be no human-centered AI without humane economics
- How AI works is often a mystery — that’s a problem
- AI Experts Write Open Letter To Governments Urging the Creation of an International AI Treaty
- AI Experts Are Concerned With Mark Zuckerberg’s Plan To Build An AGI