As more countries venture into the creation of AI weapons, leaders from the tech world such as Elon Musk of Tesla, and Demis Hassabis, Shane Legg, and Mustafa Suleyman of Google DeepMind signed a policy statement calling for laws against the development and use of "lethal autonomous weapons".
The statement was crafted by the Future of Life Institute, the same organization that issued the 23 Asilomar AI Principles.
The statement was published at the conclusion of the International Joint Conference on AI, held in Stockholm on July 13-18, 2018. To date, it has been signed by 200 organizations and more than 2,600 individuals.
Why It Matters
- Last year a group of scientists and AI researchers drafted an open letter to the United Nations calling for laws to ban the use of AI weapons.
- South Korean organizations have begun research with the goal of manufacturing AI weapons.
- While top AI researchers are adhering to a prudent approach to the release of AI technologies, there is no assurance that countries are going to do the same.
- The age-old competition for world domination brings to light an entrenched mindset that could lead to humanity’s destruction.
…we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a…