The “AI Misinformation Epidemic” is real, and it is steering the discussion away from the concerns that really matter. In this article, Oscar Schwartz discusses how some journalists sensationalize the results of AI research, and how that approach can actually be detrimental to the industry.
Some Important Quotes
- …inaccurate and speculative stories about AI, like the Facebook story, will create unrealistic expectations for the field, which could ultimately threaten future progress and the responsible application of new technologies.
- People are afraid about the wrong things…There are policymakers earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making.
- Experts can be really quick to dismiss how their research makes people feel, but these utopian hopes and dystopian fears have to be part of the conversations. Hype is ultimately a cultural expression that has its own important place in the discourse.
Why It Matters
Having written on this blog for several months now, I can attest that finding real information about AI is very difficult. In the beginning, I subscribed to Google Alerts, thinking it would be an easy way to get a variety of articles on the topic. But honestly, the results I get are not worth the convenience it offers. I would get feeds from websites publishing paraphrased reports, or a variety of websites republishing the same paraphrased report. It can be frustrating if one is simply looking for quick information.
I have learned since then that if you want the truth about AI, you need to spend some time looking for it. Sometimes it takes me hours, and several articles, before I find the actual source of a report. I don’t imagine many journalists have the time and patience that I have for this.
But the “AI Misinformation Epidemic” relates more to the issue of quality than accessibility. As I mentioned earlier, many of the articles online are paraphrased, and if you believe Schwartz, some of them are inaccurate, if not sensationalized. Those that did contain credible information were written from a highly scientific point of view, baffling (if not alienating) the billions of people around the world who lack the scientific background to interpret the results.
This is the very reason this site exists. We recognize the need to curate articles that are real sources of information about AI. It is our contribution to preventing the further spread of the “AI Misinformation Epidemic”. What we need today are well-thought-out articles that consider the latest information in AI and machine learning, discuss the pros and the cons, and separate the realities from the hysteria surrounding the topic. The goal is not simply to alarm the public about the possibilities of AI, but to understand the issues and, hopefully, find solutions to the concerns arising from its development.
To end this article, I just want to clarify one thing. While it is true that a lot of the information about AI is sensationalized, some of the material tagged as part of the “AI Misinformation Epidemic” is vital to understanding the real extent of the concern. The truth is that we are approaching a time we have never experienced before. Terminator-like robots are not here yet, but countries are now investing in AI weapons that can be just as lethal. A round-up of current technologies suggests that artificial general intelligence (AGI), an intelligence that equals that of humans, is on the horizon. There is also a consensus within the AI community that once AGI arrives, artificial superintelligence (ASI), an intelligence beyond that of humans, will not be far behind.
As we have seen from the experience of the Facebook developers and their bots, there are many missing pieces in AI development. Human biases are entering algorithms, and what if the solution an AI applies to a problem turns out to be detrimental to humankind?
Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.
Nick Bilton
Columnist, New York Times
Unlike the Facebook bot scenario, with AGI and ASI there are no second chances. We cannot simply turn off the bot and hope that’s the end of it. If these machines are to be just like humans, wouldn’t they have thought of a Plan B in case Plan A fails? It is not enough to simply disregard the existential threat of AI. A real conversation about this concern must take place, not only in academic halls and AI research facilities. Citizens and their leaders must take part in this discussion, and hopefully from it will emerge a solution that benefits all of humankind.
Read Original Article