According to authors Effy Vayena, Alessandro Blasimme, and I. Glenn Cohen, the ethical challenges of machine learning (ML) in medicine arise at three stages: data sourcing, product development, and clinical deployment.
For an ML-based health device or algorithm to serve its purpose with minimal unintended consequences, its developer must satisfy three conditions: (1) the data used to train the device or algorithm were sourced in compliance with data protection and privacy requirements; (2) a commitment to fairness was built into the development process from the start; and (3) the efficacy results of the device or algorithm, once deployed, are reported transparently (see also: What Does an Ethical AI Look Like in Health Care?).