How AI works is often a mystery — that’s a problem

The increasing use of algorithms and AI in criminal justice decision-making raises concerns about transparency and accountability. In this Nature Podcast, Glenn Rodriguez shared his personal experience of being denied parole due to a high-risk score generated by a proprietary algorithm. Meanwhile, Nick Petrić Howe from Nature discussed the challenges and limitations of explainable AI, emphasizing the need for transparency, accountability, and human-centered approaches to building trustworthy AI systems. The podcast also highlighted the limitations of AI systems and discussed the challenges of explainable AI in deep learning models and generative AI.

Editor’s Note: These problems existed in the early iterations of AI, and despite the push to create what is called “ethical AI” and the new AI development techniques that have emerged, AI is still biased [see AI IS BIASED AGAINST THE POOR AND PEOPLE OF COLOR. HOW CAN AI EXPERTS ADDRESS THIS?, GENDER BIAS COULD WORSEN WITH AI, ARE WE PASSING ON OUR BIASES TO ROBOTS?, THE AI BLACKBOX AND HUMAN DELIBERATIVE PROCESSES, STUDY FINDS GOOGLE’S AI HATE SPEECH DETECTOR TO BE RACIALLY BIASED. Also read about these efforts to create explainable AI: RESOLVING THE AI “BLACK BOX”, THE 10 ESSENTIAL INGREDIENTS OF ETHICAL AI].

So the question now is: when will we be able to create an ethical AI, when the training data it receives is highly biased? [Maybe this is the reason why there is a push for censorship: they are attempting to “correct” AI bias by creating a new bias that the would-be overlords deem acceptable. Read HOW CENSORSHIP WORKS, CHINA PLANS TO USE AI FOR INTERNET CENSORSHIP, NEW EU PROPOSAL WILL PUT AN END TO PRIVACY, FACEBOOK WILL SUPPRESS “POLITICAL” CONTENT, and CENSORSHIP HAS NO PLACE IN A FREE SOCIETY].

Read Original Article
