The following is an edited transcript of an interview between Kate Crawford, a professor at the University of Southern California and researcher at Microsoft, and Tom Simonite, senior writer at Wired. The transcript was published on April 26, 2021.
In this interview, Crawford offers an alternative view of artificial intelligence (AI). She says, “Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa.”
The belief that mass data extraction can finally answer questions about human nature that are not technical questions has led people to delve into machine learning. Crawford, however, says that this is problematic, as data always has a context and politics associated with it. Using data as is, without considering its context, can lead to biases [These biases are well documented and continue to be unresolved despite efforts from AI developers, see The biases that can be found in AI algorithm].
Crawford goes on to expound on the emerging concerns surrounding AI and why they need to be addressed. She adds that some AI tools are causing harm and must be regulated.
Editor’s Note: This article is important because it raises issues that had not previously been discussed in the mainstream. Crawford’s insistence that research focus on questions of power, rather than ethics, brings to light the huge potential for AI to be misused by people who already control the technology.
Crawford’s point about data having context is important. This recognition also makes it impossible for AI to address “non-technical” questions about the human being, as culture and beliefs can be difficult to program. It also positions AI as a tool to be used by human beings, and nothing more. As with any tool, it is important that our societies implement safety measures. We must avoid a situation where AI gains the capacity to make autonomous decisions and implement them without consulting its human operators [this is the exact plan for artificial superintelligence (ASI) and is no longer sci-fi, see AI: When artificial superintelligence just a couple of years away].