The AI Now Institute is one of the few women-led AI institutes in the world. It is focused on understanding the social significance of AI and the implications of these technologies. In this 2018 report, the institute stressed the need to address the accountability gap: when AI systems are used for automated decision making and cause harm to individuals, who should be held accountable?
According to the report, the AI accountability gap is growing. The lack of governance structures regulating AI development has fueled growing concern about bias and discrimination as automated decision systems are integrated into government services. Governments have chosen to prioritize efficiency and cost-saving at the expense of civil rights. As a result, the public has been subjected to untested, unregulated AI systems, while the businesses behind them, despite having broken many things that are dear to us, are held barely accountable for their actions [for example, Facebook continues to operate despite having failed to safeguard the privacy of millions of its users. For more on this, read Facebook Just Had A Data Breach, But It’s Ready To Release A New Product].
While a number of AI experts have repeatedly pledged to uphold ethical principles [see The 23 Asilomar Principles and Tech Experts From Google And Tesla Signs Policy Statement Against AI Weapons for examples], many of these commitments have had no measurable effect in software development laboratories. Governments’ failure to develop enforceable mechanisms for holding businesses and AI developers accountable for their creations increases the likelihood that an AI will be developed that can wreak havoc on the world.
Why It Matters
The AI Now Report is important because it brings to light exactly what is wrong with AI development today. AI certainly has its benefits, but as long as business interests are the driving force behind its development, we can expect public interest to take a backseat [see the article Uber Shows Us, Work Could Get Worse With AI for an example of this].
The report clearly shows what is wrong with our social institutions today. Government, which is mandated to ensure just and equitable development, has become a pawn of the economic sector. It has worked against the public interest while still trying to make its actions appear to be for its citizens. The good news is that reports such as the AI Now Institute’s show that people are awakening to the problem of AI and are no longer afraid to speak out against it.
We must, however, realize that the world is made up of more than two continents. There are those on the other side of the world, in Asia and Africa, where the impact of AI technology is not yet felt. They remain a “field” for market research: their data is unprotected, and their governments are eager to jump on the AI bandwagon in the hope of catching up to their wealthier neighbors. In these areas, AI regulation is nowhere near public consciousness yet, as they are smitten with the idea of progress, with AI offering the easy road.
Though these countries are located on the other side of the world, and though we may know no one there, we cannot ignore the fact that we all live on one Earth. Whatever happens there affects us, wherever we are [science shows us that we are all interconnected; we have various articles on this website relating to this].
We cannot simply declare these areas disposable. We have as much responsibility to protect them as we have to protect our families and friends. They, too, must know the consequences of AI, and how, in our own countries, we are slowly realizing its harms.
We must all advocate for the ethical use of AI, and we must do it together, with the neighbor we love to hate, with people of different races, religions, and belief systems. We must raise our voices to be heard. We must raise them so loud that the other side of the world will hear.