Matthew Baker believes that when it comes to AI, we need to take a responsible approach. As more countries and companies move toward developing AI that can make independent decisions, we need to ensure that such machines make decisions that benefit humankind. Until we can be certain of that, AI developers must shift their mindset toward “risk prevention, security planning, and simulation testing.”
The risk with AI that replaces or competes with human intelligence is that it can be applied at scale, simultaneously. The scope and reach of AI are both massive and instantaneous, which fundamentally raises the risk. While one driver who makes an error is truly unfortunate, one AI system that makes the same error for millions of people is unacceptable.
For typical products, going to market quickly and seeing what happens is fine. The stakes of AI merit a much more considered approach.