Who has the right to define what "high risk AI" means? This question is now at the center of a debate in Europe, as covered in an article by Khari Johnson published by Wired on September 1, 2021.
The issue stems from the European Union (EU) Artificial Intelligence Act, considered "one of the first major policy initiatives worldwide focused on protecting people from harmful AI." When enacted, it will classify AI technologies according to risk, allowing the EU to regulate AI that presents a high risk to humans. It would also ban some forms of AI outright, such as certain uses of real-time facial recognition, and create a common regulatory and legal framework for the 27 member states of the European Union.
The bill already defines high risk AI as anything that "can harm a person's health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial". Despite this, the European Council and European Parliament have called on citizens to weigh in on the contents of the AI Act.
Johnson reports that over 300 organizations have submitted proposals. One of them, the European Evangelical Alliance, urged the European Commission to allow more discussion of what is "'considered safe and morally acceptable' when it comes to augmented humans and computer-brain interfaces".
Editor’s Note: We welcome this development from Europe. It is time for a conversation around AI regulation, especially as so many damaging AI technologies are already being released into our societies today [see ARE WE PASSING ON OUR BIASES TO ROBOTS?, AI IS BIASED AGAINST THE POOR AND PEOPLE OF COLOR. HOW CAN AI EXPERTS ADDRESS THIS?, GENDER BIAS COULD WORSEN WITH AI, THE BIASES THAT CAN BE FOUND IN AI ALGORITHM].
With some governments already considering social credit systems like the one that originated in China, it is time to start putting up safeguards to protect citizens and countries from the potential effects of abusive uses of such AI-based technologies [also see GERMANY IS TOYING WITH CHINA’S SOCIAL CREDIT SYSTEM].