AI requires global dialogue

AI is a global concern and requires a global response. But why aren't countries responding?

This article (see below) was written by OECD Deputy Secretary-General Douglas Frantz for the OECD Conference in 2017. It provides context for international cooperation and aims to set in motion collaborations between governments and developers to create ethical AI.

Many of the impacts of AI that Frantz mentions in this article have already been explored extensively on this website (massive job loss and improvements in medicine and education, among others). What makes this article important is that it stresses the need for governments to develop a framework that promotes the ethical development of AI and creates safeguards against potential abuses as the technology moves deeper into the mainstream.

Frantz stresses that AI development cannot be stopped, and that stopping it should not even be the goal. To maximize the opportunity brought about by the digital transformation, a multi-stakeholder approach should be used, with governments taking the lead role in the dialogue. Only then will our societies be able to promote the research and creativity that will enable AI to fulfill its promise.

Why It Matters

Much of what Frantz said in 2017 is still applicable today. We knew the impacts AI would have on our societies, and we understood the need for a global dialogue to govern AI. Yet two years later, no global consensus exists on a framework to ensure the co-existence of humans and machines. This, despite the OECD Conference.

Why? Was the conference just a PR ploy by the 67 countries to appease the larger population of people who remain ignorant of the implications of AI development? If a real multi-stakeholder conversation did happen in 2017, what were its results? Why are countries like the US, China, and Russia still building AI weapons? Why isn't AI being used to eradicate poverty?

Perhaps the more important questions to address are the following: Why is beneficial AI being produced and prioritized by small groups of researchers with limited resources, while big tech companies such as Google and Facebook focus on industry-grade (profit-making) implementations? Does this mean that public benefit is once again taking a back seat in AI development? Whose interests, then, are governments serving?

Obviously, there are more questions today than there were in 2017, mainly because no major change has occurred in the field of AI development. With 2019 bringing even bigger innovations in technology, along with the impending launch of 5G and more mobile AI applications, we can expect more concerns to crop up and fewer answers.

