Can Drones Have Ethics?

In this interview, Claire Richard and media studies professor Peter Asaro discuss the history of drone warfare and the troubling proliferation of new technologies that can surveil and kill from a distance. While we’re plenty familiar with drones thanks to the War on Terror, Asaro details the strategic rationale behind their use, along with the ethical ramifications of unmanned aerial vehicles controlled by soldiers thousands of miles away. But drone warfare is hardly the end—as Asaro points out, the increasing use of domestic, nonmilitary drones in the United States also poses significant challenges to modern notions of freedom and privacy. When being visible from the sky is legally considered being in public, a fence is no longer protection from prying eyes. And given the burgeoning use of drones by hobbyists, corporations, and police departments, how do we decide who should have access to such technology? These questions, Asaro insists, will only become more pressing as researchers develop autonomous drones and robots that may soon obviate the need for human control altogether. Asaro’s introduction to this brave new world of technological possibility suggests the urgent need to think through the political, legal, and ethical implications of drones for contemporary life.


Claire Richard (CR): Where does the US use drones, and who uses them?

 

Peter Asaro (PA): The US has the most drones and the most armed drones. There are 50 countries in the world that are currently flying drones within their militaries. The US, though, has multiple drone programs going on in different divisions—the Army, the Navy, the Air Force, and the Marines all have theirs. There’s also a program within the Central Intelligence Agency, but that one is very secretive and obscure, with no public record. The drones are mostly being used for operations in Iraq and Afghanistan, although more recently their activity has spread. With the armed drones, there have been strikes in Yemen against American citizens who have allegedly been involved with al-Qaeda. Drones were used in Libya during the NATO operations there. There are even some mixed reports coming from the Philippines, where it was clear that there were US drones involved in targeting a Philippine terrorist group (it wasn’t clear whether they actually fired on it or whether they were just directing the fire from the Philippine Air Force planes). We’ve seen this spread globally. The drones are also being used now quite extensively to monitor piracy around Somalia. The US also operates unarmed surveillance drones globally. Several were sent to Japan during the Fukushima crisis to monitor radiation levels above the power plant. The US is also using these drones domestically for border patrol on both the Canadian and Mexican borders.

 

CR: Can you elaborate on the difference, the legal difference, between strikes by the armed drones in Iraq and Afghanistan and the targeted killings in other territories?

 

PA: Most of the activities of these armed drones have been in Iraq and Afghanistan, generally in support of other military combat operations. The drones are particularly good at identifying roadside bombs because they have thermal imaging cameras that are able to see where the dirt has been dug up. So a lot of them are just scanning these roadsides looking for bombs and things like that. But any time soldiers on the ground get involved in combat, they can request air support, which can include a drone. The drone can see enemies, target them, and then support troops on the ground. Those are the most common uses in the military.

With other operations, it is much less clear who is actually operating the drones and how. Those operations started in Pakistan, with a number of drone strikes over the last decade. Several websites, such as the New America Foundation’s, are tracking them. There is a great deal of confusion because there is no transparency about who’s conducting these operations. It could be the military running Special Forces, commando-type operations, but technically Pakistan is not a war zone. Insofar as the people being targeted are supporting the war in Afghanistan—by supplying soldiers, training, arms, funding, planning, intelligence, any of those things—they qualify under the International Committee of the Red Cross’s definition of direct participation in armed conflict. And thus, even though they’re not in Afghanistan, they are a legitimate military target under international law. Many of those operations are being conducted and directed by the Central Intelligence Agency. The claim—at least in the most recent report from the Attorney General—is that there is some indication that the targets are terrorists, that they intend to commit terrorist acts, and that they cannot be apprehended any other way because the government where they are currently located is unwilling to try to apprehend them. The government claims this is the legal justification, although it is a reinterpretation of things like due process, even within US law, regarding how these sorts of operations have been conducted.

 

CR: Can you walk us through how the drones work? Who launches them? Where is the pilot? Explain the chain of command.

 

PA: There are really two kinds of armed drones that the US is primarily using: the Predator and the Reaper. The Reaper is an improved version of the Predator. The Predator carries two small Hellfire missiles, which were originally designed to be mounted on helicopters to shoot at tanks (so they are fairly small munitions); the Reaper can carry 16 of these missiles, or much larger munitions. Generally, if you’re flying missions in Afghanistan and Iraq, you’ll be launching those aircraft from bases in Afghanistan or Iraq, or nearby in Kuwait. The actual pilots, and the people who make decisions about whether to fire missiles, operate on bases mostly in Nevada, although there are some other bases in New York and Florida. But they could potentially be anywhere. Everything depends on satellite communications linking the aircraft to the station where the pilots are—usually housed in an air-conditioned semi-trailer, although you can also just set it up in a room, an office space, a cubicle.

Generally you have a base of operations that launches and lands the plane and then does all the maintenance: they arm it with missiles, fuel it up, and launch it, then transfer control to the center in Nevada, where there are two people in a virtual cockpit—the pilot and the sensor operator—along with a mission information coordinator. Potentially, you also have superior officers, mission planners, and other levels of command, which could also include military lawyers, information specialists, sensor specialists, and people who analyze the imagery, because the drones are still primarily for collecting and then analyzing imagery and intelligence. With the armed capability, they’re able to act immediately on the intelligence that’s being gathered. The aircraft are also useful to the military because they stay in the air for a very long time: the Predator and Reaper can fly 24 or 36 hours, which is much longer than human pilots in an airplane could ever fly. With drone pilots working in shifts, the military can continuously survey, for example, a location of suspected militants or terrorists for many, many hours. Then they can determine exactly when an important leader might be coming in, or make sure that the civilians—maybe the wife and children of a suspected militant—have left the house, and then choose the moment to strike.

 

CR: Who decides to fire?


PA: The military keeps a very tight control on the chain of command and the authority to use lethal force or to release a weapon or missile from one of these platforms. Generally, as a pilot or operator, you’re instructed to go to a target and destroy it. That’s been decided by somebody up the chain of command through the analysis of information and intelligence reports over some period of time. But, as the pilot, you have a certain amount of discretion that gives you the authority to fire: you still make the decision of when and how to fire, and the angle from which to fire. That has a big impact on civilian casualties. On the other hand, if you’re giving air support to ground troops in combat, you can see that the ground troops are under threat. And then you can request and receive authorization to fire from your superiors. There have been some friendly-fire incidents, where they mistakenly fired on friendly NATO forces. They make mistakes often, and you can’t always avoid civilian casualties. Sometimes you have to make very difficult judgments about whether a target is valuable enough to accept civilian casualties. Also, drones fly a mile or more away from the targets. They might launch a missile, but once it has been launched, they can’t divert it or recall it. Then some kids might run into the house—this happens. This is often a cause of things like post-traumatic stress disorder because the operators aren’t able to change the course of the missile once it’s launched, but they’re watching it all in real time on video.

 

CR: Given all these issues like civilian casualties, are there any studies or reliable evidence on the general benefit of drones compared to, say, a fighter plane or a tank?

 

PA: It’s difficult to prove the benefit of drones scientifically, because it’s hard to have a controlled experiment. The anecdotal evidence and the impressions of the people involved with these technologies are that they are more selective and more precise than traditional alternatives, which would include piloted fighter or bomber aircraft. The difference is that the pilots are traveling very fast in high-speed aircraft and they don’t have much time or much fuel to circle around, so they’re generally more dependent on intelligence that’s being gathered from another source. They don’t have time to notice a target coming and going from a house, for instance. Drones are also meant to replace Special Operations commandos who drop in and infiltrate a specific building and try to kill or capture whoever is there. In those situations, there are a lot of civilian casualties as well, because the people inside—even if they’re civilians and not involved as militants at all—generally have firearms for protection, especially in Afghanistan. When they hear shooting going on, they tend to come out of their houses carrying guns. And if they’re seen by commandos carrying a gun, they are going to be fired upon. [Drones] are one tool among many tools, and the military likes this tool because it tends to lower their risk: you don’t risk losing commandos, and it also reduces the civilian impact, to an extent.

There’s a corollary to this that the military often doesn’t talk about, which came up in debates about precision-guided munitions in the 1990s. From a military perspective, precision-guided munitions allow you to destroy more targets for less money. You have to use fewer bombs because they’re more precise, and you have to go back fewer times because you’re more likely to hit your target on the first attempt. That reduces the cost of destroying a target with a bomb. The immediate implication of this is not that you bomb fewer things—because it’s cheaper, you actually start bombing more things. So for any individual target, you’re less likely to create collateral damage, whether that’s civilian casualties or destruction to civilian property or infrastructure. But now you have a situation where the bombing has become really easy to do, so you do more and more of it. Even if each instance only has a little bit of collateral damage, if you start to add it all up you actually wind up with just as much collateral damage and civilian casualties in the long run, because you’ve selected or attacked more targets.
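To make the arithmetic of that trade-off concrete, here is a minimal sketch in Python with entirely hypothetical numbers (they are not drawn from the interview): if precision cuts the collateral damage of each strike by a factor of five, but the lower cost per target leads to five times as many strikes, the total collateral damage across the campaign is unchanged.

# A toy illustration of the precision-munitions argument; all numbers are hypothetical.
def total_collateral(strikes, collateral_per_strike):
    """Expected collateral damage across a whole campaign."""
    return strikes * collateral_per_strike

# Unguided bombing: fewer strikes, but each one is messier.
unguided = total_collateral(strikes=10, collateral_per_strike=5.0)

# Precision-guided bombing: each strike is cleaner, but cheaper strikes invite many more of them.
precision = total_collateral(strikes=50, collateral_per_strike=1.0)

print(unguided, precision)  # 50.0 50.0 -- the per-strike gain is cancelled by the added strikes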

MQ-1 Predator unmanned aircraft (2008). Wikimedia Commons / US Air Force photograph / Lt. Col. Leslie Pratt

CR: If you’re able to shoot someone from halfway around the globe through a screen, this seems to severely trivialize war. Is that true?

 

PA: There’s been a lot of discussion about PlayStation warriors, the video-game mentality of soldiers, or the various forms of this kind of moral buffer, the psychological detachment of having people in a little air-conditioned room in Nevada who are killing people in Afghanistan or Pakistan using what is essentially a video-game-like interface. The fear, of course, is that they become much more trigger-happy or have less regard for the value of human life. But the reality has been somewhat different—this has come out of some medical studies that the military has conducted on the kind of stress that these pilots experience. What they find is that because these pilots spend so many long hours staring at the video feeds coming from these drones, they actually have a very intimate relationship to it—they become emotionally involved. And if you have watched anybody play a video game who’s really into it, it’s very immersive, it’s not something that you’re detached about at all; you’re deeply embedded in it. They are very much aware that they are in a combat zone and that their actions have consequences. These drone operators are watching with a high-resolution camera that can distinguish what kind of shoes you’re wearing: they watch as the bodies are pulled from the rubble. They are very aware of the consequences of their choices and actions in a way that some other service personnel are not.

They also escape some other forms of stress, since they are not in combat all day (or all night) long. They go home to their families in the suburbs. They’re living a life in the day-to-day civilian world, and then at night they’re involved in intense combat. But this creates its own psychological pressures, if you think about the kind of esprit de corps that soldiers have in combat: there’s a bond that forms and provides a certain amount of emotional and social support. A lot of these pilots, on the other hand, are trying to lead normal lives during the day, with all the family life and the stresses that come with children and spouses and everything else. They’re trying to go back and forth between these worlds, and it is very difficult.

 

CR: Let’s move on to the question of automated weapons. For example, in the US, how far has the development of automated [weapons] progressed?

 

PA: Most of the systems being used now are under the control of humans, but they contain automated processes within them. The drones can take off by themselves, land themselves, fly to certain positions, and hold certain formations circling over a location—all on their own. In fact, it’s quite easy to automate other aspects, including firing weapons—automation is actually not a huge technical burden. The challenging part is replacing the decision-making that the humans actually do to decide what to target and when to target and why. Thankfully, for the most part, that still remains in human hands. In the military, they call it “human in the loop.” But the potential is very concerning to me—I actually founded an organization to try to ban autonomous lethal robots or any of these automated systems that would decide for themselves what to target and what to shoot.

We can think about the process that exists now: if you have essentially a fully automated, armed drone that’s circling over a city, processing video and looking for targets, at that point it becomes some kind of automated artificial intelligence system trying to identify uniformed soldiers or certain patterns of behavior, or maybe recognize the face of Osama bin Laden in a crowd. The question then becomes whether or not we would allow that system to decide to use lethal force against human beings, based on a software program. I think this is a critical question that the world community needs to address, because the technologies exist to do that, in very simplistic and really terrible ways. The stupidest end of the spectrum is the landmine: we’ve already seen an international ban on anti-personnel landmines, specifically because they’re very indiscriminate and they disproportionately kill children and civilians. We wouldn’t want to see anything like that happen with these more sophisticated systems, where it becomes morally or legally permissible to attack humans but we don’t really know who is being attacked or what the justifications are. There is wide room for violations of human rights. It’s also then very difficult to say who would be responsible, given that it’s an automated system—the failures could lie in camera systems or software or various other technologies. Apart from the general requirements for discrimination and proportionality in weapons, there’s nothing that specifically makes autonomous lethal weapons illegal under currently existing international law. We feel it is important to have an international discussion about banning autonomous lethal weapons and getting an international treaty. The group is the International Committee for Robot Arms Control, and we’ve been working for the last year or so to stimulate discussion and move towards this kind of treaty or ban.

 

CR: What are the implications of, say, delegating the decision to kill to a machine? Ron Arkin, for example, argues that if you can break down ethics into a set of rules and encode them in a program, you actually might stand a better chance of producing ethical behaviors because humans are irrational beings. What’s your take on that?

 

PA: I’ll start with the question of Ron Arkin’s proposal, which is that we can install “ethical governors,” as he calls them, into these systems, and that, in fact, fully autonomous systems with the proper kind of ethical constraints could potentially be better than humans. I think that he and I both look at the situation I was describing earlier—the targeted use of remote drones—as an opportunity to improve the selectivity of targets and weapons in war. But he extrapolates it in a different direction, one where we could continue to automate that process and replace humans’ ethical decision-making with automated systems that would do the ethical analysis and make sure that any targeting decisions conform to the Geneva Conventions, the laws of war, and any other ethical constraints before firing. While that might be possible in principle, it misunderstands what the laws are, how they operate, and how ethical rules operate.


The law and ethical rules are both designed to help humans make decisions, right? If we actually look at the Geneva Conventions, it’s a requirement that commanders take into account the implications of their actions. A proportionality decision is one that a commander has to make about the value of a target weighed against the cost to civilian lives and property around that military target—how do you make that evaluation? That’s not the kind of evaluation that computers are good at making. There is a lot of judgment and estimation and anticipation involved in really determining the value of a military target. Commanders have a sense of the relative importance of strategic factors, which is very difficult to encode and translate into computer programs or any kind of automated process, much less the relative value of the life of this or that civilian. Those kinds of judgment calls just shouldn’t be automated.

To get back to Ron’s point: if you look at the situation we have with remote-operated drones, what this really allows is more deliberation on the part of the humans. We can claim that it’s potentially more ethical in its application because it gives more time and space to make careful decisions with better information—for example, to more carefully time the use of lethal force—and it keeps the human implicated in those questions. This could potentially provide a larger ethical awareness; automating away the human, on the other hand, is not really ethics. It may be more efficient, but that’s not necessarily more ethical, because the ethics still reside in the human.

Then to return to this question of delegation—there’s a more fundamental question of human rights at stake. If you look at human law and morality around the world and in various cultures, there is a growing sense over the last few centuries of the importance of human rights. The decision to override somebody’s right to their own life should fall under human [discretion], right? Whether that’s a judicial process or an individual defending themselves or acting on behalf of another to defend them, there’s still a human making that decision about another human being’s life. We shouldn’t automate that process; there’s no rigid policy or computer program or automated technology that can or should replace that process. I call this the human right not to be killed by a machine. We cannot delegate [life and death] to an automatic process: this is a special class of decision-making that we cannot turn over to machines.

Group photo of aerial demonstrators at the 2005 Naval Unmanned Aerial Vehicle Air Demo (2005). Wikimedia Commons / US Navy photograph by Photographers Mate 2nd Class Daniel J. McLain

CR: How did we go from military drones to domestic drones, and can you define for us what domestic drones are?

 

PA: There are several aspects to the question. Part of it is the history of the United States over the last decade. When the [US] military invaded Iraq in 2003, I think they had 50 drones. Now they have close to ten thousand in Afghanistan and Iraq. And as troops withdraw from Afghanistan and Iraq, the military needs to bring all those drones back home. As it turns out, it’s actually quite difficult within the current legal framework for the military to even fly these drones over US airspace. There’s been an effort in the Pentagon for several years to transform the legal framework for certifying these drones for flight in civilian airspace, as well as how they interact with civilian air traffic in terms of flight paths and things like that. There’s a lot of commercial interest in this, but the leading interest is really the military, because they need to bring all these drones home. There’s also a large number of drones patrolling the borders and monitoring the drug war in Mexico; indeed, US drones are being shared cooperatively with the Mexican government. But there’s also a huge civilian base of home hobbyists—after all, as I mentioned at the beginning, these drones are essentially just remote-controlled airplanes. And as long as it’s under about 22 or 25 kilos, you don’t need a special license, provided you’re not using it for any commercial purposes. So you can take your remote-controlled airplane and put a video camera on it—that’s perfectly legal. They are sold ready-made in stores, and rather inexpensively now, just a couple hundred dollars. But these aircraft are also going to get larger and more sophisticated.

 

CR: For drones operated by police forces, what regulations currently exist, and is there a risk that the threshold for acceptable drone use will be lowered?

 

PA: There are police departments that have already been operating some small drones. There is one particular instance, recently reported, involving drones operated by US Customs and Border Protection. The Department of Homeland Security, for example, is patrolling the Canadian/US border with an unarmed Predator drone. But they have a prerogative, or mission statement, that says they should lend aid to other government agencies that request it when the drones have free time. This is the same type of aircraft that they’re using in Afghanistan and Iraq: it can survey thousands of square miles using high-precision cameras and zoom in from a great distance. There was an instance where some cattle rustlers—actually, some cattle—wandered onto a property belonging to people who were anti-government in their rhetoric. And the local authorities thought that if they went in as a police force, unassisted, the farmers might come out with guns. So the local sheriff’s department requested that the Department of Homeland Security have the Predator drone fly over and figure out where these people were on the property. [The drone did its job,] and then the police were able to go in and arrest the offenders without much incident. But that revealed to everybody that the drones being used to patrol the borders are also being utilized by local police departments. For the most part, the police departments have been operating small helicopter-like drones with cameras to monitor things like demonstrations, or to find out what’s going on in a particular place. If you look at the larger police departments of Los Angeles and New York, they operate a large number of helicopters, which are very expensive. If they were able to effectively replace those with unmanned versions that could do essentially the same work and not cost nearly as much in terms of pilots, fuel, and maintenance, then there’s a strong incentive for them to move in that direction.

To get back to your question about what then becomes legitimate to survey, and the legal framework for this: for the most part, there are no restrictions on watching things from the sky, any more than there are on walking around in public and watching things. Courts have determined that what is visible from the air is not legally protected, so if you’re visible from the sky, you’re in public, and viewing you is just like watching someone walk down the street. But realistically, even if you build a fence around your property so that people can’t see what you’re doing in your backyard, now anybody flying a drone can see what you’re doing there. This transforms the sense of privacy, of where you may be subject to being viewed, surveilled, and recorded.

 

CR: Do domestic drones afford new possibilities for activists? If drones can be acquired relatively easily, do they give activists more power to resist?

 

PA: There are opportunities for weaker powers to utilize drones for their own interests, to survey the state and others. And we’ve already seen this. Tim Pool, for example, who’s been live-streaming a lot of the Occupy Wall Street activities, has built a drone. And if you look at how live-streaming, cell phone imagery, and video have been used effectively in both the Arab Spring and in the various Occupy movements around the US, surveillance at the civilian level has been effective in keeping the police force and state power in check. Who’s watching the watchers? Well, now everybody is, right? And if you have a situation where only the police or only the state is allowed to use surveillance technologies or video cameras, that’s a very skewed use of power. But once civilians can also document everything, it levels that playing field. Then there’s the question of the drone itself as a sort of aerial surveillance version of that. To what extent is that really effective? Overall, I think that the utilization of the technology is going to favor large organizations that have the resources to organize and use it effectively, and also to control the laws and the airspace around it.

One area where drones have a clear and positive advantage is environmental activism, in part because the environment is often best seen from above. Drones have been used quite extensively by Greenpeace to track whaling ships, because it’s very hard to search the sea in a big boat and find another big boat. They can also be used to monitor crops and different land uses. There was a hobbyist who flew a drone in Texas and discovered a red creek behind a meat-packing facility, which turned out to be illegally dumped pigs’ blood. They were able to get the federal agencies in there and stop what was an ongoing environmental hazard. I think you’re going to see more effective use of these drones.

Featured image: A BQM-74E aerial target drone launches from the flight deck aboard the amphibious assault ship USS Boxer (2005). US Navy photograph by Photographer's Mate Airman Paul Polach / Wikimedia Commons