The way we train AI is fundamentally flawed

AI training has a serious flaw, and experts say there is no quick fix.

In this November 18, 2020 article for MIT Technology Review, Will Douglas Heaven argues that the way AI is trained today gives no insight into whether it will work in the real world. He writes, “It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings.”

Through this article, Heaven offers an in-depth discussion of the problem of underspecification, which experts describe this way: “even if a training process can produce a good model, it could still spit out a bad one because it won’t know the difference. Neither would we.”
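To make the idea concrete, here is a minimal sketch (not from the article) of how underspecification can show up in practice, assuming scikit-learn and NumPy are available; the synthetic dataset, the MLPClassifier models, and the noise-based “shift” are all illustrative choices, not the researchers’ setup. Models that differ only in their random seed score almost identically on held-out lab data, yet can diverge once the data shifts.

```python
# Illustrative sketch of underspecification: models that differ only in
# their random seed look equally good in the "lab" but can behave
# differently once the data shifts. Dataset and shift are made up here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for the lab distribution.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A crude stand-in for real-world data shift: noise added to the test inputs.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=1.5, size=X_test.shape)

for seed in range(5):
    # The only arbitrary difference between runs is the weight-init seed.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}  lab acc={model.score(X_test, y_test):.3f}  "
          f"shifted acc={model.score(X_shifted, y_test):.3f}")
```

In runs like this, the lab accuracies typically cluster tightly while the shifted accuracies spread out more, and nothing in the training process itself tells you which seed produced the model that will hold up in the real world.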

Editor’s Note: As this article points out, small, arbitrary differences in machine learning models make a huge difference in real-world results. Stress tests for AI have to be developed for each task, which is itself a huge job. Chances are, only the biggest companies will have the capacity to test and retest their AIs.

This article highlights an important point: we cannot put our full trust in AIs because we do not know whether the results they return are correct. Humans still need to cross-check the results.

Read Original Article

The way we train AI is fundamentally flawed

It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an…

https://www.technologyreview.com/2020/11/18/1012234/training-machine-learning-broken-real-world-heath-nlp-computer-vision/

