Expert report forecasts possible malicious use of AI

This new report, written by AI developers and researchers, looks at the potential security threats arising from the malicious use of advanced AI systems.

We have all heard of the wonderful new world heralded by artificial intelligence. There is also an opposing view, one that warns AI could threaten the survival of the human species itself, yet there are very few in-depth explorations of this second possibility.

That is why this new report, written by experts from the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Open Philanthropy Project, and several other universities and organizations, is so important. This is not about robots taking over the earth or machines eradicating humans. The scenarios described in the report are all very practical, and the authors argue they are likely to occur.

Its introduction reads:

This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

Some Highlights of the Report

  1. You know those emails pretending to be from your bank or an organization you belong to, asking you to update your account? Those phishing emails could get worse: AI could mine your online behavior to craft a message you can’t ignore.
  2. Hackers and criminals could use AI to make themselves harder to trace. By using software to communicate with victims and collect payments, criminals can target more people from the comfort of their own homes.
  3. Fake news and propaganda will get worse. Have you heard of the new software that can mimic any voice, or the imaging software that recently produced a fake video of former president Obama? Imagine how widely these tools could be applied in the wrong hands.
  4. Self-flying drones that can detect a face and attack may have huge potential for reducing the risks of pursuing criminals, but in the hands of people who intend to use them differently, they could breed fear and terror.

Why It Matters

Artificial intelligence is here. It is found in many commercial applications, many of which are completely unregulated. We all understand that AI is dual-use by nature: its benefits and harms depend on who is using it. Are we going to wait until these tools fall into the wrong hands? What is our country doing to ensure that available AI technologies will not be misused? You may choose to download the report at Cornell University or read it below.
