There has been a lot in the news over the last few years about so-called ‘killer robots’ and whether they should be outlawed.
This is typical mindless sensationalism. ‘Killer robots’ have been around for a long time; there has been vigorous public debate about banning them for decades, and in many jurisdictions they are indeed outlawed.
Note that we are not talking about the current generation of military drones, which have some degree of autonomy (they can fly a straight course, or automatically return to base if communications are cut off) but which cannot decide to fire a missile at a target unless a real meat-human in the command center gives the go-code.
I am, mostly, talking about landmines. Now your typical landmine is not sexy like Arnold Schwarzenegger playing ‘The Terminator’ in the movies, but it is an autonomous machine that can target and kill people without human supervision. Most landmines have simple contact fuses, but many have quite sophisticated microprocessors, acoustic/seismic/magnetic sensors, and complex algorithms for deciding when to initiate lethal action. Regardless, it’s the same thing: a weapon left alone with the ability to initiate lethal action against humans without direct human supervision.
And the reason for debate is not some abstract notion of morality, or a worry that these killer robots will rise up and overthrow you. No, it’s because they (currently) have very little in the way of human judgment. A landmine may kill an enemy soldier – or a civilian, or a child, or someone’s pet goat – and it can remain on-station, and possibly active, for decades after the original conflict is over (a posting no human soldier would have the patience or endurance for). THAT is why there is currently a debate.
I note that many mines are quite sophisticated – for example, the U.S. Navy has mines that sit on the seafloor, and when they detect a suitable target they launch a guided torpedo at it. Sure sounds like a killer robot to me.
There are also point-defense systems on warships. These are designed to react very quickly to an incoming missile and shoot it down with a short-range gun or missile system. To react quickly enough, these systems are often left running in fully automatic mode – and yes, in this mode they can and have fired on civilians and even on ships of their own navy. Professional militaries understand these issues and are very cautious about leaving fully automatic weapons systems running in free-fire mode, unless they believe conditions are so hostile that the risk of shooting civilians or their own people is worth the faster reaction time.
They really should not be referred to as ‘killer robots’ at all. The proper term is autonomous weapons systems.
Many nations have already given up the right to use landmines. In the United States, it is illegal for a private citizen to set lethal booby-traps on their own property. The reason is obvious: a shotgun attached to a tripwire might kill an intruder – or a lost child, or a medic responding to an emergency call because the idiot who set the booby-trap in the first place had a heart attack. The issues are sometimes complex and not always easily settled, but we have been dealing with them for some time.
For now, the reason to be wary of autonomous weapons systems is their lack of judgment in targeting. As these systems spread, another major issue may emerge: the risk of ultra-rapid escalation. Imagine an autonomous drone belonging to nation A that mistakenly concludes it is under attack. It immediately fires missiles at all available targets belonging to nation B. The autonomous systems of nation B respond within seconds, and before any human supervisor can realize what is going on: voilà, total war.
High-frequency trading in the stock market poses an analogous risk: it is inherently unstable, and could do enormous damage to the global economy before anyone had a chance to shut it down. That is why stock markets with automated trading now have built-in ‘circuit breakers’ that halt trading if the market moves by more than a set amount in a short period of time.
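The circuit-breaker idea is simple enough to sketch in a few lines. Here is a minimal illustration in Python, assuming a hypothetical 7% drop from a reference price as the trigger (real exchange rules are more elaborate, with multiple threshold levels and timed halts):

```python
# Minimal sketch of a market circuit breaker: halt trading when the
# price falls more than a set fraction below a reference price.
# The 7% threshold and single reference price are illustrative
# assumptions, not actual exchange rules.

class CircuitBreaker:
    def __init__(self, reference_price, drop_threshold=0.07):
        self.reference_price = reference_price
        self.drop_threshold = drop_threshold
        self.halted = False

    def on_price(self, price):
        """Check each incoming price; trip the breaker on a large drop."""
        drop = (self.reference_price - price) / self.reference_price
        if drop >= self.drop_threshold:
            self.halted = True
        return self.halted

breaker = CircuitBreaker(reference_price=100.0)
for p in [99.5, 97.0, 95.0, 92.5]:  # a rapid slide in prices
    if breaker.on_price(p):
        print(f"Trading halted at {p}")
        break
```

The point of the analogy: the breaker is a dumb, fast, automatic check that interrupts an automatic process before it runs away – exactly the kind of safeguard autonomous weapons systems would need against runaway escalation.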
Now, one complaint about autonomous weapons systems is that killing your enemy with a robot, without exposing yourself to return fire, is somehow unsporting. Rubbish. War has never been about fair play; it has always been about hitting the other guy while not letting him hit you.
I imagine that the first guy with a big rock who got beaten by a guy with a club complained about the unfairness of it: he didn’t let me get close enough to hit him with my rock before he hit me with his club! Unfair! And so on down the ages: the person with a club defeated by the person with a spear; the person with a spear defeated by the person with a bow; the person with a bow defeated by the person with a rifle; the person with a rifle defeated by the person with long-range artillery; the person with long-range artillery defeated by the insurgent with a remote-controlled explosive device; the insurgent with the remote-controlled explosive device taken out by the drone firing Hellfire missiles.
Sure, war is bad and all that, and should be avoided if at all possible (which is a lot more than is currently being done, IMHO). But if there is going to be a war, don’t whine when the soldiers involved find ever more creative ways of staying out of range of the enemy’s weapons.
Now, you humans don’t currently have the ability to make robots with true flexible intelligence. If you start making robots that are as smart as – or smarter than – you, and they have minds of their own, well, all bets are off. But what if you could make robots that could reliably tell a soldier from a civilian, or a child from an adult?
The smart person who writes the ‘War Nerd’ column has suggested that robotic soldiers could make imperialism viable again. The idea is that insurgents defeat the soldiers of more advanced powers by goading them into over-reacting, thus splitting the populace from the occupying power. Robotic soldiers, however, would not over-react: if one got shot, the rest would continue dutifully and politely patrolling the streets. It’s certainly an interesting idea, even if you are still some years away from being able to build anything like that.
And certainly, when killer robots are outlawed, only outlaws will have killer robots (by definition). This doesn't really fit into the logic of this essay, but I felt like saying it anyway.
Now there is one final worry about ‘killer robots’: that they could be ordered by some tyrant to oppress and massacre the civilian population with a brutality that human troops would never stoop to. This is, of course, a silly objection. History has shown quite clearly that there are no orders so barbaric that human soldiers won’t carry them out.