Study finds Google’s AI hate speech detector to be racially biased

The output of an AI is only as good as the data it was trained on. So what does it mean when the world’s largest data-gathering organization builds an AI that is racially biased?

A recent study has found that Google’s AI algorithm, developed to monitor and flag hate speech on websites and social media platforms, is more likely to flag content posted by African-Americans. Google has not yet released a statement in response to the study, and no concrete steps to correct the bias have been identified.
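For context on how such a disparity is typically measured, the sketch below compares flag rates across two groups of comparable posts. This is a generic illustration, not the study’s actual methodology; `classify_toxic` is a hypothetical stand-in for whatever model is being audited.

```python
from typing import Callable, List

def flag_rate(texts: List[str], classify_toxic: Callable[[str], bool]) -> float:
    """Fraction of texts the classifier flags as toxic."""
    flags = [classify_toxic(t) for t in texts]
    return sum(flags) / len(flags)

# Hypothetical stand-in for the model under audit; a real audit
# would call the deployed toxicity classifier instead.
def classify_toxic(text: str) -> bool:
    return "badword" in text.lower()

# Two corpora of comparable posts, grouped by the authors' dialect
# or demographic group (placeholder data for illustration).
group_a = ["example post one", "example post two"]
group_b = ["example badword post", "another post"]

rate_a = flag_rate(group_a, classify_toxic)
rate_b = flag_rate(group_b, classify_toxic)

# A large gap in flag rates on equally benign content is
# evidence of disparate impact against one group.
print(f"Group A flag rate: {rate_a:.2%}")
print(f"Group B flag rate: {rate_b:.2%}")
```

Audits of this kind underlie the study’s finding: when equally benign content from one group is flagged at a markedly higher rate, the classifier is exhibiting racial bias.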

