Conner Forrest says a South Korean university’s decision to open a lab working on AI-powered weapons has led to a boycott by researchers.
By Conner Forrest*
South Korean university KAIST recently opened a research lab to work on artificial intelligence (AI)–powered weapons, and the AI research community was not happy about it.
On 5 April, more than 50 researchers announced a boycott of the university until it promised to stop working on such weapons.
The boycott was organised by Toby Walsh, a professor at the University of New South Wales in Sydney, and centred on an open letter signed by the participants.
The letter notes that the published goals for the Research Center for the Convergence of National Defense and Artificial Intelligence at KAIST were to “develop AI technologies to be applied to military weapons, joining the global competition to develop autonomous arms”.
The United Nations has been discussing a ban on such weapons for some time.
Even with those discussions happening, the researchers said that KAIST is attempting to “accelerate the arms race to develop such weapons,” according to the letter.
Those who signed the open letter pledged that they would not visit the university, host visitors from the university, or contribute to any research project at KAIST.
According to the letter, the boycott will continue until the researchers are assured that KAIST’s Research Center for the Convergence of National Defense and Artificial Intelligence won’t develop any “autonomous weapons lacking meaningful human control.”
According to a recent Reuters report, KAIST responded soon after the letter was published, stating that it had “no intention to engage in development of lethal autonomous weapons systems and killer robots.”
KAIST was formerly known as the Korea Advanced Institute of Science & Technology.
According to the letter, it opened the research centre on 20 February in partnership with the Hanwha Group, a company that makes explosives.
As noted by Reuters, KAIST President Sung-Chul Shin said the university was “significantly aware” of ethical issues around AI, and reaffirmed that it won’t conduct research on “autonomous weapons lacking meaningful human control.”
Autonomous weapons, and the risks of AI in general, have been topics of debate in tech circles for years.
Tesla CEO Elon Musk went so far as to say that AI was “more dangerous than nukes,” and that it needs a governing body to oversee its implementation.
Additionally, a recent report examined the potential malicious uses of AI, including the possibility that rogue nation-states or terrorists could use the technology for their own ends.
“If developed, autonomous weapons will be the third revolution in warfare,” the letter said.
“They will permit war to be fought faster and at a scale greater than ever before.”
In closing, the letter urges KAIST not to develop the weapons, and to “work instead on uses of AI to improve and not harm human lives.”
* Conner Forrest is a Senior Editor for TechRepublic. He tweets at @ConnerForrest.
This article first appeared at www.techrepublic.com.