As we train machines to think for themselves, we risk transferring our own gender biases to them. There is a strong chance of this happening, as a large share of our data is still biased against women. For example, researchers discovered that some career platforms displayed highly paid jobs to women less frequently than to men [note] Samuel Gibbs, “Women less likely to be shown ads for high-paid jobs on Google, study shows”, published on July 8, 2015, https://www.theguardian.com/technology/2015/jul/08/women-less-likely-ads-high-paid-jobs-google-study [/note].
In a 2017 study led by Vicente Ordóñez, a computer science professor at the University of Virginia, researchers discovered that the image-recognition software they were training was showing signs of sexism.
Key points from Ordóñez’s study
- The software showed a predictable gender bias. For example, when shown a photo of a kitchen, the software associated it with women, while photos showing coaching and shooting activities were associated with men.
- When asked to complete statements such as “Man is to computer programmer as woman is to X”, the AI would reply, “homemaker”.
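Analogy completions like the one above typically come from vector arithmetic over word embeddings: the model looks for the word whose vector is closest to “programmer − man + woman”. The following is a minimal sketch of that arithmetic using tiny made-up vectors (the words and all vector values here are hypothetical, chosen for illustration; real embeddings are learned from large text corpora, which is exactly where the bias enters):

```python
import numpy as np

# Hypothetical toy word vectors, for illustration only.
# In a real system these would be learned from text data.
vecs = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "programmer": np.array([1.0, 1.0, 0.5]),
    "homemaker":  np.array([-1.0, 1.0, 0.5]),
    "teacher":    np.array([0.0, 1.0, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "man is to programmer as woman is to X":
# solve for X close to (programmer - man + woman).
target = vecs["programmer"] - vecs["man"] + vecs["woman"]

# Pick the nearest candidate word, excluding the query words.
best = max(
    (w for w in vecs if w not in {"man", "woman", "programmer"}),
    key=lambda w: cosine(vecs[w], target),
)
print(best)  # with these toy vectors: homemaker
```

With biased embeddings, the nearest word to the target vector reflects the stereotyped association found in the training text, which is why the researchers got answers like “homemaker”.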
While the researchers were able to correct the bias, doing so required someone deliberately looking for it in the first place. As they put it: “It requires a researcher to be looking for bias in the first place, and to specify what he or she wants to correct.”
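The source does not say which correction the researchers applied, but one common approach for word embeddings is to estimate a “gender direction” from definitional pairs and project it out of words that should be gender-neutral. A minimal sketch, reusing hypothetical toy vectors:

```python
import numpy as np

# Hypothetical toy vectors, for illustration only.
man        = np.array([1.0, 0.0, 0.2])
woman      = np.array([-1.0, 0.0, 0.2])
programmer = np.array([1.0, 1.0, 0.5])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Estimate a gender direction from a definitional pair.
g = man - woman
g = g / np.linalg.norm(g)

# Neutralize: remove the gender component from a word
# that should be gender-neutral, like "programmer".
debiased = programmer - (programmer @ g) * g

# Before: "programmer" is closer to "man" than to "woman".
print(cosine(programmer, man) - cosine(programmer, woman))
# After: the gap is (numerically) zero.
print(cosine(debiased, man) - cosine(debiased, woman))
```

The point the quote makes still holds: this only works once a researcher has decided which direction counts as bias and which words should be neutral.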
Why it matters
What happens if an AI developer makes no such effort to find those biases? What other biases might be embedded in the training data used for AI that remain undiscovered? And how might such biases lead AI, especially systems with human-level intelligence, to act towards humans?