Image: Campaign to Stop Killer Robots/Flickr

Last month, the international “Campaign to Stop Killer Robots” was officially launched in London. The initiative, backed by groups from ten countries and several international NGOs, aims to pre-emptively ban autonomous robots, most importantly autonomous unmanned aerial vehicles (UAVs, or “drones”). In their press statement, the initiators claim that “[u]rgent action is needed to pre-emptively ban lethal robot weapons that would be able to select and attack targets without any human intervention”. They call for “an international treaty […] national laws and other measures.” Nobel Peace Laureate Jody Williams argues that “[a]llowing life or death decisions on the battlefield to be made by machines crosses a fundamental moral line and represents an unacceptable application of technology.”

There is, of course, nothing wrong with this demand. Who would not immediately applaud such an initiative? After all, no one likes the idea of armed terminator robots, running (or flying) around unsupervised, killing people at will.

The problem with the campaign, however, is precisely that no one likes the idea of autonomous UAVs. Fully autonomous military UAVs that can identify and engage targets without human supervision do not exist, and there are no known plans to build them. (The weapon that comes closest to this idea is an Israeli-built loitering munition, sometimes described as a “smart bomb”, called the Harpy. It can loiter for several hours over a designated area, scan the ground for enemy radar emitters and attack them when detected. The decision to launch the weapon, and over what area, is still made by humans.)

It seems unlikely that military or political decision-makers would really want to introduce a fully autonomous weapon to the battlefield. Most importantly, the military has no desire to render itself redundant. New York Times commentator Bill Keller claims that the advantage of an autonomous weapon is that it “can continue to fight after an enemy jams your communications, which is increasingly likely in the age of electromagnetic pulse and cyberattacks”. While this might be a valid point, wouldn’t it be considerably easier (and cheaper) to protect communication lines from being hacked or jammed? And does the military really prefer an unsupervised weapon with which no communication can be established over one that is merely jammed?

For political decision-makers, autonomous weapons would be even more problematic. At a time when civilian casualties can (for good reason) become political liabilities, which politician would want machines to take life-and-death decisions? Politicians are often held personally responsible for mistakes. In 2009, for instance, former German defence minister Franz Josef Jung had to step down over an airstrike in Afghanistan which caused the death of more than 100 civilians. Trying to hold a machine (or its inventor, engineer or designer?) responsible would be extremely difficult. For any politician in his or her right mind, this must sound like a nightmare scenario.

Still, the campaign’s initiators could argue, it is better to be safe than sorry. The campaign could ensure that supporters of autonomous robots, few as they may be, are dissuaded from pursuing the idea. The real risk lies elsewhere, though: the campaign might play into the hands of governments that want to acquire the equally contested, but actually existing, non-autonomous armed UAVs. At the moment, only three countries are known to possess armed UAVs (Israel, the UK and the US) and two more are assumed to have them (China and Iran). Several countries, most recently Germany, have announced that they are looking into acquiring armed drones. With emotions running high, what would be more convenient than supporting an initiative to ban worse, yet non-existent, weapons? It should not come as a surprise if several (European) governments decided to back the initiative. Doing so would allow them to demonstrate and underline their good intentions, while buying armed UAVs in the meantime.

Hence, while the intentions of the “Stop Killer Robots” campaign’s initiators should be applauded, they would be well advised not to concentrate only on what might be, but to spend more time on the moral and ethical issues already at stake.
