After the report on the collaboration between KAIST and Hanwha Systems, at least 50 leading figures in the AI community have declared a boycott of KAIST. The open letter, published by the Centre on Impact of AI and Robotics, had this to say:
At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons. We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control. We will, for example, not visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST.
Why It Matters
Despite the very real dangers of autonomous weaponry, and the reality that many countries are now joining the new arms race, the United Nations has failed to create a meaningful method of control. The AI arms race could go the same way as that of nuclear weapons, with millions in government funds wasted without any real contribution to the well-being of the people. And as Vladimir Putin has said, whoever masters AI will rule the world. Countries now appear to be actively scrambling for first place.
Without a pre-emptive ban on autonomous weapons, or an international law to prevent the development of such technologies, members of academia are forced to take matters into their own hands.
How effective will their methods be in influencing their colleagues' actions? Will they succeed in stopping the KAIST-Hanwha Systems plan?
[contentcards url="https://www.cse.unsw.edu.au/~tw/ciair//kaist.html" target="_blank"]