AI is biased against the poor and people of color. How can AI experts address this?

In this article written by Heather Kelley for CNN Tech, she explores the various ways AI can lead to further discrimination of already marginalized peoples. She also discusses the current steps being taken to address them.

2018 is the year AI will truly become part of our daily lives. As of March this year, 47.3 million U.S. adults were using smart speakers to manage their homes, while 68% of all smartphone users have no idea that they are using AI. These figures are bound to increase when 5G rolls out in the latter part of the year.

While AI use is becoming ever more prevalent, the issues it raises are nowhere near resolution.


Some Important Highlights

  • Every technology developed by humankind has led to unintended consequences that are difficult to control or resolve, and AI is no exception.
  • Already, some AI programs have been found to return biased results. For example, researchers have found that facial recognition software is unable to identify women of color and that smart speakers have trouble picking up non-American accents. Some AI systems have developed sexist views, and others are biased against Black Americans.
  • AI developers are starting to recognize these issues and are proposing a number of solutions to address these biases.
  • New industry standards, a code of conduct, greater diversity among AI developers, and regulation are among the proposed solutions.

Why It Matters

While the proposed solutions to remove AI biases are steps in the right direction, they simply aren’t enough. Fei-Fei Li, director of the Stanford AI Lab, explains why:

In AI development, we say garbage in, garbage out…If data we’re starting with is biased, our decision coming out of it is biased.

Given the widespread discrimination and marginalization in our societies, this explains why the AI we are currently producing is bound to promote (and perhaps worsen) such a culture. Until our own societies become peaceful, equitable, and just, the technologies we produce will continue to have unintended consequences.

How then can we make our societies safe for humans?

We need technologists who understand history, who understand economics, who are in conversations with philosophers…We need to have this conversation because our technologists are no longer just developing apps, they’re developing political and economic systems.

Marina Gorbis

Executive director, Institute for the Future

Read the full article: https://money.cnn.com/2018/07/23/technology/ai-bias-future/index.html
