What does the Facebook chatbot experience tell us about the future of AI? What are the lessons we can learn from it?
Writing about the Facebook chatbot incident months after it happened has its benefits. For one, I don’t need to summarize the details anymore because everyone already knows them. Second, people have already made their value judgments, so I don’t need to convince you of anything.
Why It Matters
I read dozens of articles about this incident, trying to decide which one to repost on this site, and I ended up with this: Tom McKay’s article on the topic, published by Gizmodo. Why did I choose this article, you ask? Well, I had a number of reasons.
- The article has a snapshot of the Facebook GUI, so you can picture how the conversation between Alice and Bob went.
- It includes the verbatim conversation between Alice and Bob.
- It surveys the various headlines other websites used for the story.
- It also reports the actual findings of the Facebook AI Research lab.
Aside from these, perhaps the most important aspect of McKay’s article is that it clearly shows how short-sighted we can sometimes be, refusing to consider the implications of a technology even when smaller, simpler projects point to them.
Perhaps it is a blessing that the chatbot incident was trivial, and hence its effect was also minimal. But such errors cannot and should not happen in large-scale projects that could have a disastrous impact on humans and their societies. How many companies today are trying to build Artificial General Intelligence (AGI) in the hope of eliminating human error in programming? And yet, how sure are we that human programmers have made no errors in building the AGI? What if those errors were passed on to the AGI’s successors and permanently coded into an Artificial Super Intelligence (ASI)? Unlike the chatbots in the Facebook scenario, a human-level AI may react in a very different way.
But this isn’t a question of whether the Facebook scenario is indicative of a dismal future or not. Rather, it is a question of our humanity. Why are we so quick to write off errors without any attempt to understand their impact or their importance? Why are we so quick to say, “machine learning is nothing close to true AI” (true AI meaning AGI) without knowing the actual state of development? And besides, if professional negotiators lose their jobs, isn’t that something to be concerned about? Or do we not care simply because we remain unaffected by this development?
No, Facebook Did Not Panic and Shut Down an AI Program That Was Getting Dangerously Smart
In recent weeks, a story about experimental Facebook machine learning research has been circulating with increasingly panicky, Skynet-esque headlines.