Will a world-accepted standard for AI governance arise from the efforts of the Centre for the Fourth Industrial Revolution?
This article may have been written in 2018, but we are nowhere near resolving the explainability problem in even the simplest AI systems. Why, then, do we allow countries and companies to push toward artificial general intelligence when this most basic concern remains unresolved?
The 2018 white paper from the Electronic Frontier Foundation provides recommendations for mitigating AI risk and serves as a guide for militaries scrambling to develop AI weapon systems.
As more and more countries venture into the creation of AI weapons, leaders from the tech world such as Elon Musk of Tesla, and Demis Hassabis, Shane Legg, and Mustafa Suleyman of Google DeepMind signed a policy statement calling for laws against the use and development of “lethal autonomous weapons”.
Without a pre-emptive ban on autonomous weapons, or an international law preventing the development of such technologies, members of academia are forced to take matters into their own hands.