Saturday, February 7, 2009

Robots and War

The Three Laws of Robotics (Isaac Asimov, "Runaround," 1942)
(I) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(II) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

(III) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
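The laws read like a strict priority ordering: each law binds only so far as it does not conflict with the ones above it. A toy sketch of that ordering, where every name and flag is a hypothetical stand-in and nothing more, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical flags describing a candidate action's consequences."""
    injures_human: bool = False            # would the action injure a human?
    allows_harm_by_inaction: bool = False  # would refusing to act let a human come to harm?
    disobeys_human_order: bool = False     # does the action defy a human order?
    endangers_robot: bool = False          # does the action put the robot at risk?

def permitted(a: Action, required_by_higher_law: bool = False) -> bool:
    """Check an action against the Three Laws, highest priority first."""
    # First Law: absolute -- no injury to a human, and no harm through inaction.
    if a.injures_human or a.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders; the only excuse is a First Law conflict,
    # which the check above has already ruled out for this action.
    if a.disobeys_human_order:
        return False
    # Third Law: preserve the robot's own existence, but only so far as that
    # does not conflict with the First or Second Law (a higher law may
    # require the robot to accept the risk).
    if a.endangers_robot and not required_by_higher_law:
        return False
    return True

# A self-sacrificing action is allowed only when a higher law demands it:
print(permitted(Action(endangers_robot=True)))                               # False
print(permitted(Action(endangers_robot=True), required_by_higher_law=True))  # True
```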
I remembered Asimov after recently reading "Military Robots and the Laws of War," an essay in The New Atlantis magazine, adapted from the new book Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century, by P. W. Singer, a senior fellow at the Brookings Institution and the director of the institution's 21st Century Defense Initiative.

Singer offers some interesting stats:
When U.S. forces went into Iraq, the original invasion had no robotic systems on the ground. By the end of 2004, there were 150 robots on the ground in Iraq; a year later there were 2,400; by the end of 2008, there were about 12,000 robots of nearly two dozen varieties operating on the ground in Iraq. As one retired Army officer put it, the “Army of the Grand Robotic” is taking shape.

It isn’t just on the ground: military robots have been taking to the skies—and the seas and space, too. And the field is rapidly advancing. The robotic systems now rolling out in prototype stage are far more capable, intelligent, and autonomous than ones already in service in Iraq and Afghanistan. But even they are just the start. As one robotics executive put it at a demonstration of new military prototypes a couple of years ago, “The robots you are seeing here today I like to think of as the Model T. These are not what you are going to see when they are actually deployed in the field. We are seeing the very first stages of this technology.” And just as the Model T exploded on the scene—selling only 239 cars in its first year and over one million a decade later—the demand for robotic warriors is growing very rapidly.
It's a long, sobering essay that concludes with three points of Singer's own:
In time, the international community may well decide that armed, autonomous robots are simply too difficult, or even abhorrent, to deal with. Like chemical weapons, they could be banned in general, for no other reason than the world doesn’t want them around. Yet for now, our laws are simply silent on whether autonomous robots can be armed with lethal weapons. Even more worrisome, the concept of keeping human beings in the loop is already being eroded by policymakers and by the technology itself, both of which are rapidly moving toward pushing humans out. We therefore must either enact a ban on such systems soon or start to develop some legal answers for how to deal with them.

If we do stay on this path and decide to make and use autonomous robots in war, the systems must still conform with the existing laws of war. These laws suggest a few principles that should guide the development of such systems.

First, since it will be very difficult to guarantee that autonomous robots can, as required by the laws of war, discriminate between civilian and military targets and avoid unnecessary suffering, they should be allowed the autonomous use only of non-lethal weapons. That is, while the very same robot might also carry lethal weapons, it should be programmed such that only a human can authorize their use.

Second, just as any human’s right to self-defense is limited, so too should be a robot’s. This sounds simple enough, but oddly the Pentagon has already pushed the legal interpretation that our drones have an inherent right to self-defense, including even to preemptively fire on potential threats, such as an anti-aircraft radar system that lights them up. There is a logic to this argument, but it leads down a very dark pathway; self-defense must not be permitted to trump other relevant ethical concerns.

Third, the human creators and operators of autonomous robots must be held accountable for the machines’ actions. (Dr. Frankenstein shouldn’t get a free pass for his monster’s misdeeds.) If a programmer gets an entire village blown up by mistake, he should be criminally prosecuted, not get away scot-free or merely be punished with a monetary fine his employer’s insurance company will end up paying. Similarly, if some future commander deploys an autonomous robot and it turns out that the commands or programs he authorized the robot to operate under somehow contributed to a violation of the laws of war, or if his robot were deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold the commander responsible.
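Singer's first principle amounts to an architectural constraint: the robot may act autonomously only with non-lethal means, and anything lethal requires an explicit human sign-off that can later be audited (which also serves his third point about accountability). Here is a minimal sketch of that kind of gate, with entirely hypothetical names (WeaponClass, engage, AuthorizationError) standing in for whatever a real system would use:

```python
from enum import Enum, auto
from typing import Optional

class WeaponClass(Enum):
    NON_LETHAL = auto()
    LETHAL = auto()

class AuthorizationError(Exception):
    """Raised when a lethal engagement is attempted without a human sign-off."""

def engage(weapon: WeaponClass, target_id: str,
           human_authorization: Optional[str] = None) -> str:
    """Engage a target only if the rules for the weapon class are satisfied.

    Non-lethal weapons may be used autonomously; a lethal weapon requires an
    explicit human authorization, recorded so responsibility can be traced.
    """
    if weapon is WeaponClass.LETHAL and not human_authorization:
        raise AuthorizationError(
            f"lethal engagement of {target_id} blocked: no human authorization on record"
        )
    actor = human_authorization or "autonomous-controller"
    return f"engaged {target_id} with {weapon.name} (authorized by {actor})"

# The robot may act on its own with non-lethal means...
print(engage(WeaponClass.NON_LETHAL, "target-07"))
# ...but a lethal engagement without a named human operator is refused.
try:
    engage(WeaponClass.LETHAL, "target-07")
except AuthorizationError as err:
    print(err)
```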
I like the brevity of Isaac Asimov's Three Laws of Robotics, but then again, he also needed many novels and short stories to explore their consequences and possibilities...
