Mitigating AI Risk: How Militaries Should Plan for AI

The 2018 white paper from the Electronic Frontier Foundation offers recommendations for mitigating AI risk and a guide for militaries scrambling to develop AI weapon systems.

In the hope of mitigating AI risk, experts from high-tech companies such as Google, Hanson Robotics, and Tesla signed a pledge against the development of lethal AI weapons in July 2018. This set of principles places important limits on the role signatory companies will play in the command, control, and analysis of weapons systems. But countries continue to assemble their own machine learning initiatives, and several are developing AI weapons.

In light of this, the Electronic Frontier Foundation (EFF), the leading non-profit organization championing civil liberties in the digital world, published a report entitled The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI. The report serves as a warning to militaries scrambling to develop autonomous AI weapons. Through this white paper, the EFF offers recommendations on how to mitigate the risks, as well as strategic questions that countries need to consider as they develop such highly lethal weapons.

Proposals for Mitigating AI Risk

Part 1 of the report discusses the various uses of AI weapons, as well as the dangers and risks that countries must weigh in their development. It focuses on the vulnerabilities of current machine learning systems and on problems in cybersecurity.

Part 2 elaborates on considerations for AI development that call for careful dialogue and study, and offers recommendations for mitigating AI risk. Noteworthy suggestions include supporting and establishing institutions and agreements for managing AI and AI-related risks (a call the Future of Life Institute has been making for a few years now) and engaging in military-to-military dialogue to prevent accidental conflict and conflict escalation.

Part 3 concludes with a list of open questions that militaries can consider as they develop these systems.

Why It Matters

Throughout history, we have seen what can happen when countries try to dominate one another. We know the damage it can cause. Multiply that effect tenfold, and you will get an idea of the damage AI weaponry could do to the world. Many institutions have said this in various ways: AI, particularly AI weapons and artificial general intelligence, is not something we can prototype and release into the world without carefully considering its impact. It is a kind of invention that could end all inventions, if not humanity itself. We must tread this path carefully. EFF's report provides the defense community with important insights, which militaries can build on to help ensure the continuity of future generations.

The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI

https://www.eff.org/wp/cautious-path-strategic-advantage-how-militaries-should-plan-ai