Nowa Fantastyka Jan 2014

War

The Terminator has made it into the pages of Science. The December 20 issue of that prestigious journal contains an article entitled “Scientists Campaign Against Killer Robots,” which summarizes the growing grass-roots movement against autonomous killing machines on the battlefield. Lest you think we’re talking about the burgeoning fleet of drones deployed with such enthusiasm by the US—you know, those weapons the Obama administration praises as so much “more precise” than conventional airstrikes, at least during those press conferences when they’re not expressing regrets over another Yemeni wedding party accidentally massacred in the latest Predator attack—let me bring you up to speed. Predators are puppets, not robots. Their pilots may be sipping coffee in some air-conditioned office in Arizona, running their vehicles by remote control, but at least the decision to turn kids into collateral is made by people.

Of course, the problem (okay: one of the problems) with running a puppet from 8,000 km away is that its strings can be jammed, or hacked. (You may have heard about those Iraqi insurgents who tapped into Predator video feeds using $26 worth of off-the-shelf parts from Radio Shack.) Wouldn’t it be nice if we didn’t need that umbilicus. Wouldn’t it be nice if our robots could think for themselves.

I’ve got the usual SF-writer’s hard-on for these sorts of issues (I even wrote a story on the subject—“Malak”—a couple of years back), so I keep an eye out for developments on that front. We’re told that we still have a long way to go before we have truly autonomous killing machines: robots that can tell friend from foe, assess relative threat potentials, decide to kill this target and leave that one alone. They’re coming, though. True, the Pentagon claimed in 2013 that it had “no completely autonomous systems in the pipeline or being considered”—but when was the last time anyone believed anything the Pentagon said, especially in light of a 2012 US Department of Defense Directive spelling out criteria for “development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms”1?

Root through that directive and you’ll find the usual mealy-mouthed assurances about keeping Humans In Ultimate Control. It’s considered paramount, for example, that “in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator”. But you don’t have to be Isaac Asimov to see how easy it would be to subvert that particular Rule of Robotics. Suppose a human operator does approve a target, just before contact with the drone is lost. The drone is now authorized to hunt that particular target on its own. How does it know that the target who just emerged from behind that rock is the same one who ducked behind it ten seconds earlier? Does it key on facial features? What happens if the target is wearing clothing that covers the face? Does it key on clothing? What happens if the target swaps hats with a friend?
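
Just to make the fragility concrete: here is a deliberately naive sketch of what “keying on clothing” might look like. It is a toy, not a description of any real targeting system; the features, the weights, and the engagement threshold are all invented for illustration. But it shows how little it takes to fool a machine that matches people by surface appearance.

```python
# Toy illustration only: a deliberately naive appearance matcher, not any real
# targeting system. Every feature, weight, and threshold here is invented.

from dataclasses import dataclass

@dataclass
class Sighting:
    hat_color: str
    jacket_color: str
    approx_height_cm: int

def similarity(a: Sighting, b: Sighting) -> float:
    """Crude weighted match on surface features -- exactly the kind of
    'keying on clothing' shortcut discussed above."""
    score = 0.0
    score += 0.5 if a.hat_color == b.hat_color else 0.0
    score += 0.3 if a.jacket_color == b.jacket_color else 0.0
    score += 0.2 if abs(a.approx_height_cm - b.approx_height_cm) < 10 else 0.0
    return score

REENGAGE_THRESHOLD = 0.6  # invented cutoff: above this, the machine "recognizes" its target

# The operator authorizes this target just before the comm link drops.
authorized = Sighting(hat_color="green", jacket_color="brown", approx_height_cm=175)

# Ten seconds later two people emerge from behind the rock:
# the original target and a friend, having swapped hats.
target_now = Sighting(hat_color="red", jacket_color="brown", approx_height_cm=175)
friend_now = Sighting(hat_color="green", jacket_color="grey", approx_height_cm=174)

for label, person in [("original target", target_now), ("friend", friend_now)]:
    s = similarity(authorized, person)
    verdict = "ENGAGE" if s >= REENGAGE_THRESHOLD else "hold fire"
    print(f"{label}: similarity={s:.1f} -> {verdict}")
```

Run it and the hat swap is enough: the friend sails over the engage threshold while the authorized target drops below it.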

According to Science, the fight against developing these machines—waged by bodies with names like the Convention on Certain Conventional Weapons and the International Committee for Robot Arms Control—centers on the argument that robots lack the ability to discriminate reliably between combatants and civilians in the heat of battle. I find this argument both troubling and unconvincing. The most obvious objection involves Moore’s Law: even if robots can’t do something today, there’s a damn good chance they can do it tomorrow. Another problem—one that can bite you in the ass right now, while you’re waiting for tomorrow to happen—is that even people can’t reliably distinguish between friend and foe all the time. North American cops, at least, routinely get a pass when they gun down some innocent civilian under the mistaken impression that their victim was going for a gun instead of a cell phone.

Does anyone truly believe that we’re going to hold machines to a higher standard than we hold ourselves? Or as Lin et al. put it back in 2008 in “Autonomous Military Robotics: Risk, Ethics, and Design”:

“An ethically-infallible machine ought not to be the goal. Our goal should be to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behaviour or war crimes.”

Ah, war crimes. My final point. Because it’s actually really hard to pin a war crime on a machine. If your garden-variety remote-controlled drone blows up a party of civilians, you can always charge the operator on the other side of the world, or the CO who ordered him to open fire (not that this ever happens, of course). But if a machine decides to massacre all those innocents, who do you blame? Those who authorized its deployment? Those who designed it? Some computer scientist who didn’t realize that her doctoral research on computer vision was going to get co-opted by a supervisor with a fat military contract?

Or does it stop being a war crime entirely, and turn into something less—objectionable? At what point does collateral damage become nothing more than a tragic industrial accident?

To me, the real threat is not the fallibility of robots, but the deliberate exploitation of that fallibility by the generals. If nobody can be convicted when an autonomous weapon massacres civilians, the military has every incentive not to limit the technology, not to improve its ability to discriminate foe from friend, but to deploy these fallible weapons as widely as possible. Back in 2008 a man named Stephen White wrote a paper called “Brave New World: Neurowarfare and the Limits of International Humanitarian Law.” It was about the legal and ethical implications of neuroweaponry, but its warning rings true for any technology that takes life-and-death decisions out of human hands:

“. . . international humanitarian law would create perverse incentives that would encourage the development of an entire class of weapons that the state could use to evade criminal penalties for even the most serious types of war crimes.”

Granted, the end result might not be so bad. Eventually the technology could improve to the point where robotic decisions aren’t just equal to but better than anything arising from the corruptible meat of human brains. Under those conditions it would be a war crime not to hand the kill switch over to machines. Under their perfected algorithms, combat losses would dwindle to a mere fraction of the toll inflicted under human command. We could ultimately end up in a better place.

Still. I’m betting we’ll spill rivers of blood in getting there.

1. https://www.hsdl.org/?abstract&did=726163