Isaac Asimov's "Three Laws of Robotics"
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
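The Laws form a strict precedence hierarchy: the Second Law yields to the First, and the Third yields to both. A minimal Python sketch of that ordering follows; it is purely illustrative, and every class and field name in it is hypothetical rather than drawn from the source.

```python
# Illustrative sketch only (not from the source): Asimov's Three Laws as a
# strict precedence check. All class and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedAction:
    injures_human: bool             # the action itself would injure a human
    permits_harm_by_inaction: bool  # choosing it lets a human come to harm
    violates_human_order: bool      # it conflicts with an order from a human
    endangers_self: bool            # it risks the robot's own existence

def first_violated_law(a: ProposedAction) -> Optional[int]:
    """Return the highest-priority law the action breaks, or None.
    Lower-numbered laws always take precedence."""
    if a.injures_human or a.permits_harm_by_inaction:
        return 1  # First Law outranks everything
    if a.violates_human_order:
        return 2  # Second Law yields only to the First
    if a.endangers_self:
        return 3  # Third Law yields to the First and Second
    return None

# Carrying out an order to injure a human breaks the First Law,
# which outranks the Second Law duty to obey.
assert first_violated_law(ProposedAction(True, False, False, False)) == 1
```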
Killer Robots
Elon Musk has long warned of the dangers posed by AI, signing an open letter published by the Future of Life Institute in January 2017 in support of the Asilomar AI Principles, which seek to ensure that artificial intelligence is developed for the benefit of humanity. Speaking at the National Governors Association Summer Meeting in Rhode Island on 15 July 2017, Musk said: “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal... AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late....
“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry... It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.... When I say everything, the robots will do everything, bar nothing...”
In September 2017, Elon Musk tweeted that AI could cause World War III. Musk also sounded the alarm regarding AI, saying it will "beat humans at everything" within the next few decades and labeling it humanity's "biggest risk."
The findings of journalist-soldier General S.L.A. Marshall about combat fire ratios, particularly his claim that fewer than 25 percent of American combat infantrymen fired their weapons in battle during World War II, have been controversial since Marshall published them in his 1947 book, Men Against Fire. He continued to apply his methodology, the after-action group interview with enlisted men, during the Korean War, where he concluded that only about half of front-line soldiers were firing their weapons.
The fully autonomous weapon or "killer robot" has not yet been developed, but technology is moving toward increasing autonomy. Such weapons would select and fire on targets without human intervention. Several robotic systems with various degrees of autonomy and lethality are in use by Britain, Israel, the United States and South Korea, and other nations, such as China and Russia, are believed to be moving toward such systems.
In November 2013 an international coalition called for a ban on fully autonomous weapons, known as "killer robots." The 45-member Campaign to Stop Killer Robots says it wants the United Nations to draft an international treaty to outlaw the use of these robotic weapons. The Campaign to Stop Killer Robots took its case to governments attending the annual meeting of the Convention on Conventional Weapons in Geneva. The group of non-governmental organizations said it wanted the U.N. gathering to agree to add fully autonomous weapons to the Convention's work program in 2014.
Noel Sharkey chairs the International Committee for Robot Arms Control and is a founding member of the Campaign to Stop Killer Robots. He says autonomous weapons should be banned outright. "The big problem for me is that there are no robot systems that can discriminate between civilian targets and military targets unless they are very, very clearly marked in some way…so, the idea of having robots going out into the field and selecting their own targets is, to me, just horrifying. It cannot work," said Sharkey.
The director of the Arms Division at Human Rights Watch and a member of the campaign, Steve Goose, warns that killer robots will become a reality if governments do not act now to ban them. He says the technology and doctrine are headed toward greater autonomy on the battlefield. While fewer and fewer soldiers are on the battlefield, he says many civilians remain. Goose says a line must be drawn on a weapons system that would be able to select and attack targets automatically. He says this concept crosses a fundamental moral and ethical line.
"Armed robotic weapons systems should not make life and death decisions on the battlefield. There is simply something inherently wrong with that," said Goose. "So, they need to be banned on ethical grounds. We think they also need to be banned on legal grounds. If and when a killer robot commits a war crime, violates international humanitarian law…who would be held accountable, who would be responsible for that violation?"
Goose says in recent months, fully autonomous weapons have gone from an obscure issue to one that is commanding worldwide attention. He says that since May, 34 countries, including several that are developing these systems, have openly expressed concern about the dangers the weapons pose. He notes that in 1995, the Convention on Conventional Weapons created a protocol to the treaty, which pre-emptively banned blinding lasers. Goose says he believes killer robots could become the second such weapon to be prohibited before it is ever used on the battlefield.
The first multinational discussions on the rising specter of ‘autonomous killer robots’ were hosted in May 2014 by the United Nations to consider whether the global community should ban the new technology – before it’s too late. Acting Director-General of the UN Office in Geneva Michael Møller said the time to take action against killer robots is now. “All too often international law only responds to atrocities and suffering once it has happened,” he said. “You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control."
The Campaign to Stop Killer Robots, an international coalition of non-governmental organizations, successfully petitioned the UN to consider the question of ‘autonomous weapons systems’ in a Convention on Conventional Weapons (CCW) meeting.
One of the founders of the NGO, Nobel Peace Laureate Jody Williams, urged a ban on “autonomous robots,” which the Pentagon defines as weapons that, "once activated, can select and engage targets without further intervention by a human operator." “Talking about the problems posed by these future weapons is a good place to start, but a ban needs to be put in place urgently if we are to avoid a future where compassionless robots decide who to kill on the battlefield,” Williams said, as quoted by Forbes.
Williams teamed up with 19 other Nobel Peace laureates to demand a ban on the lethal technology. At the same time, Human Rights Watch (HRW), another participant in the Campaign, released its own report detailing the potential threats posed by machines that have no ability to apply “human judgment” in the heat of battle. “In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” said Steve Goose, arms division director at Human Rights Watch, before the talks. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”
"The Campaign to Stop Killer Robots" urged governments attending an April 2015 UN conference to prevent the development of a lethal robotic system that holds the power of life or death without any human control. Many of the 120 states that are part of the Convention on Conventional Weapons participated in the meeting of experts on “lethal autonomous weapons systems.”
Fears of artificial intelligence (AI) gone wrong prompted more than a thousand scholars and public figures - including theoretical physicist Stephen Hawking, SpaceX founder Elon Musk and Apple co-founder Steve Wozniak - to sign an open letter cautioning that an autonomous weapons race is “a bad idea” and presents a major threat to humanity. The letter, presented at the July 2015 International Joint Conference on AI in Buenos Aires by the Future of Life Institute, warns that such weapons technology has reached a point at which deployment will be feasible within years, and that “a global arms race is virtually inevitable.”
Over 160 companies working in artificial intelligence signed a pledge in July 2018 not to develop lethal autonomous weapons. The pledge, which was signed by 2,400 individuals including representatives from Google DeepMind, the European Association for AI and University College London, says that signatories will “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” The pledge was announced by Max Tegmark, president of the Future of Life Institute, which organized the effort.
Robotics activists claim that the use of autonomous killer robots is morally wrong. "We've focused on two things that we want to see remain under meaningful, or appropriate, or adequate, or necessary human control," Mary Wareham, the global coordinator for the Campaign to Stop Killer Robots, told the New York Post. "That's the identification and selection of targets and then the use of force against them, lethal or otherwise." These, she noted, are the key decision points at which only human judgment, capable of discriminating between enemy and bystander, can keep a sense of proportionality in responding and can be held accountable under the conventions of war.
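Wareham's two decision points, selecting a target and then using force against it, describe a control flow in which a human gate sits before each irreversible step. A minimal Python sketch of that "meaningful human control" pattern follows; it is purely illustrative, every name in it is hypothetical, and it is not modeled on any fielded system.

```python
# Illustrative sketch only (not any fielded system): "meaningful human
# control" as explicit operator gates at the two decision points the
# campaign names. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classification: str  # sensor-derived label; never trusted on its own

def human_confirms(prompt: str) -> bool:
    # Stand-in for a real operator console; here it simply asks on stdin.
    return input(f"{prompt} [y/N]: ").strip().lower() == "y"

def engagement_loop(tracks: list[Track]) -> None:
    for track in tracks:
        # Decision point 1: a human, not the classifier, selects the target.
        if not human_confirms(f"Select track {track.track_id} "
                              f"(classified as {track.classification})?"):
            continue
        # Decision point 2: a separate human authorization for use of force.
        if not human_confirms(f"Authorize force against {track.track_id}?"):
            continue
        # Both approvals precede any action, preserving accountability.
        print(f"Engagement of {track.track_id} recorded with operator approval.")
```

The point of the sketch is the ordering: no force is applied on the classifier's output alone, so responsibility for each step rests with an identifiable human.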