Few countries pride themselves on as concrete and stringent a set of values as the United States has since its founding. That is not to say those values are never breached within its borders, even as they are consistently and forcefully advocated beyond them. Yet ethics are neither absolute nor enduring, as technological advancement in the context of war makes clear.
In this country, soldiers are supposed to be courageous and fearless, dedicating their lives to protecting the "freedoms" of citizens without asking questions. At the same time, civilians are assumed to endorse these wars unhesitatingly, and when they don't, they are accused of not supporting "our troops." In the societal act of war, we accept mass casualties and the deaths of people whose identities we will never know and whose faces we will never see. But war technology — both remote-controlled and robotic weapons systems — only widens the gap between killers and killed, pushing us to ask: How does this "revolution in military affairs" affect the values of military combat and of society in general? The ethical implications of these developments have been largely drowned out by the unquestioning pursuit of more advanced technology, but a consideration of their moral consequences is imperative.
In 2001 and 2003, the United States invaded Afghanistan and Iraq with only a handful of unmanned vehicles. Since then, the development of war technology has soared: By 2008, there were 5,331 drones in the U.S. military's inventory, along with about half that many other unmanned vehicles, to say nothing of its other robotic gadgets. This surge responds to a shift in tolerance for war deaths: While Americans are growing less tolerant of losing their own soldiers, they are becoming desensitized to other victims. The loss of an American life is treated as a tragedy, but the deaths of thousands of people of other nationalities are not worth a headline in a newspaper that runs cover stories about nasty words exchanged between presidential candidates.
In response, trillions of dollars in American technology funding are finally beginning to replace human presence on the battlefield. Pilots control unmanned aerial vehicles (UAVs) from trailers outside Las Vegas and kill people in rural Afghanistan more than seven thousand miles away, reducing another human being's fate to the motion of a joystick or the press of a button. This distance affects not only how controllers see their targets — consider how closely the situation mimics a video game — but also how they see themselves, allowing them to adopt a different personality as players in a game far removed from reality. Unmanned robots have also taken on an important role in war: They can be sent into dangerous areas where soldiers would otherwise risk their lives. Some work with human teams to identify and dismantle the improvised explosive devices that once claimed dozens of American lives a month. Others are sent to scout dangerous situations on the battlefield, and an increasing number are mounted with weapons — a break with Asimov's Three Laws of Robotics, as well as with some ethical and logical considerations.
I would argue that each of these technologies carries perils for the future of warfare — both how and when it is waged. Their developers emphasize that they lessen the risk to soldiers and actually decrease civilian casualties: Technology is "described as a way to reduce war's costs and passions." The argument is that UAVs allow for increased surveillance and thus more accurate identification of targets. Further, removing humans from the battlefield lets machines make calculated decisions supposedly free of human error — nervousness and surprise in combat, unsteadiness, impaired visibility. But to claim that these systems do not succumb to the same mistakes as individuals is to ignore their origins: They must be programmed by people, and they certainly malfunction. Who, then, is held accountable for those errors?
Those who question this technology are not saying that we should ignore tools that could save lives, but that we need a debate about their implications and ripple effects through society. Replacing soldiers with robots makes it considerably more likely that a country will go to war, because the cost is low relative to the value of human lives. People are also drifting further and further out of the "loop," as machines are allowed to make more decisions based on extensive computable information but without ethical or situational considerations. And because it is human nature to shield ourselves from emotional pain, putting a soldier behind a computer screen lets that reality filter develop subconsciously, so that decisions seem to carry no real-life consequences.
Is a war fought on one side primarily by technology still a "just war" that follows the inherent laws of morality? It certainly makes more likely the possibility of going to war without establishing jus ad bellum, the right to go to war based on cause and intentions. What does this astronomical investment in military technology say about the United States, or about other countries also trying to make war more "efficient"? Without even questioning the act of war itself — the pursuit of the deaths of groups of people whom a government has deemed "guilty" of some crime — consider our bent toward war and the implications of what military historian John Keegan calls the "impersonalization of battle."
If you'd like to find out more about this issue, and various others, come to the EPIIC symposium on "Conflict in the 21st Century" from Feb. 22–26.
--
Darcy Covert is a freshman who has not yet declared a major.



