Only five years remain in Congress' mandate that a third of the military's ground vehicles and a third of its deep-strike aircraft become robotic by 2010, largely through a program called Future Combat Systems, according to The New York Times.
The Pentagon is facing a daunting task that appears to have no realistic end in sight.
But assuming the Pentagon can meet this goal and fully automate the armed forces by 2035 as it predicts, the nation will be left in an ethical and legal gray area.
The problem with implementing robotic warfare is twofold.
Who will accept ultimate responsibility for the robot’s actions, and how will expendable robots affect the willingness of government leaders to resort to the option of war?
The Geneva Conventions currently draw no distinction between robots and human beings; they speak only of combatants, who are responsible for their actions and held accountable for any violations.
But a robot would have no sense of responsibility. Suppose a robot given the command "kill enemies" identifies an enemy and opens fire. The enemy drops its weapons and surrenders, but the robot mistakes this for continued hostility and fires again, killing the enemy.
This entire sequence is caught on tape and immediately shown around the world.
Who or what is held responsible?
Can the programmer be held responsible?
The programmer was not directly involved in the death and can point to the code where the appropriate response was programmed in: take the surrendering enemy prisoner.
Would the robot then be responsible? That appears unlikely, as robots cannot stand trial.
It is commonly thought that robots have no understanding of their actions, much as young children, who are punished less harshly than adults for their crimes.
Even if the robot were found guilty, at worst the offending government would lose a piece of hardware that was bound to be lost in any war.
In effect, this would create a legal loophole: robots could be used to commit any violation, giving governments a convenient scapegoat for their actions. The expendable nature of robots leads to the second problem.
In the United States and other democratic societies, actual and potential war casualties are the primary motivators of grassroots movements against war. In war, the only casualties with real political impact are those on your own side. But when one side's losses are merely mechanical parts, will there ever be pressure to stop or prevent a war? It is doubtful, for who would complain?
It is for that reason that the military prevents the media from showing pictures of military coffins, under the guise of privacy. Reminders of our mortality affect us all, but reminders of broken machinery give no cause for concern.
The ethics of robotic warfare have not been fully considered. Regardless, the military machinery continues churning forward, building better, faster and deadlier weapon systems.
It is a scary concept, much like the negative precedent set by the Manhattan Project and its result, the atomic bomb. Like the atomic bomb at the end of World War II, robotic weapons will most likely be used before those who decide to deploy them fully understand the ramifications.
Now is the time to fund ethics research on this topic, both to prevent another horrific atrocity and to establish rules under which robotic warfare can be fought.
He can be reached at firstname.lastname@example.org.