Army Scientists Develop Program for Human Agents to Trust AIs

Is AI already being used by the military? Most definitely! Most AI releases today focus on commercial use, largely because the military is not eager to announce its successes to the public. But on January 11, 2018, army scientists may have resolved the "trust problem" — the difficulty human agents have in trusting the actions and decisions of machine agents.
In a 2016 report published by the U.S. Defense Science Board, human agents identified six reasons why they can't trust AI agents. Five of these are:
  1. Human agents have difficulty determining the AI agent's intentions, performance, future plans, and reasoning process through simple observation;
  2. AI agents can be unpredictable;
  3. AI agents are difficult to direct, especially when operations do not go according to plan;
  4. Because they have no capacity to decide for themselves, AI agents cannot be held accountable for their actions;
  5. Human agents cannot be assured of a mutual understanding of the common goals of an operation.

To address these issues, Dr. Jessie Chen, Senior Research Psychologist at the Army Research Laboratory (ARL), and her team developed the Situation Awareness-based Agent Transparency (SAT) model. Participants in the research reported that they perceived the AI agents as "more trustworthy, intelligent and human-like". At present, Chen and her team are exploring the possibility of a bidirectional model to further improve collaboration between humans and AI.
On one hand, this is good news because the program will make AI-human cooperation more efficient. On the other hand, this development may also enable more effective warfare. We've all heard Vladimir Putin say it: the nation that leads in AI will be the ruler of the world. Could this be the beginning of AI warfare?

Army scientists improve human-agent teaming by making AI agents more transparent

U.S. Army Research Laboratory scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative, supported by the Office of the Secretary of Defense. They did so by enhancing agent transparency, which refers to a robot, unmanned vehicle, or software agent's ability to convey to humans its intent, performance, future plans, and reasoning process.

https://phys.org/news/2018-01-army-scientists-human-agent-teaming-ai.html
