9

Wars Without War

“In such dangerous things as war, the errors which proceed from a spirit of benevolence are the worst,” observed the Prussian strategist Carl von Clausewitz.1 For Clausewitz, war is “an act of violence pushed to its utmost bounds, as one side dictates the law to the other,” thereby creating a “reciprocal action” in which any attempt at moderation or restraint by one side can only be counterproductive, since “he who uses force unsparingly, without reference to the bloodshed involved, must obtain a superiority if his adversary uses less vigour in its application.” From the Civil War onward, most American wars have been fought in accordance with these precepts. As George S. Patton once put it, “There is only one tactical principle which is never subject to change. It is to use the means at hand to inflict the maximum amount of wound, death, and destruction on the enemy in the minimum amount of time.”

America’s dazzling high-tech wars of the last decade of the twentieth century seemed to herald another possibility, in which wars could be fought and won with minimal levels of “wound, death, and destruction” for civilians and soldiers alike. These expectations were partly a consequence of America’s technological, tactical, and organizational superiority against enemies who could not even begin to retaliate in kind—an imbalance that naturally reduced the “reciprocal action” Clausewitz described. In a 2000 treatise on urban warfare written for the U.S. Army’s Command and General Staff College at Fort Leavenworth, the military historian Roger Spiller predicted, “The era of the iron force is over. The nation that will lead the military world this next century already produces and employs its coercive power differently from any army in history. Finesse is replacing weight as the basis of American military power.”2

To some American foreign-policy hawks and military strategists, this awesome technological power suggested that wars could be less “hellish” than those of Sherman and his successors—and also more palatable to the public as an instrument of policy. In Panama, Iraq, and Kosovo, such “coercive power” was directed toward the achievement of very specific political and strategic objectives, and American technological supremacy—coupled with the weakness of its opponents—made it possible to achieve them at very little cost while still maintaining the illusion of “surgical” and “humanitarian” war. At the beginning of the twenty-first century, however, the United States entered an era of permanent global war, which demonstrated once again the supremacy of its military machine, but which also revealed its limitations in a series of murky and often chaotic confrontations in which finesse was conspicuously absent.

The War on Terror

The catalyst for this transformation took place on September 11, 2001, when nineteen hijackers enacted a spectacle of destruction and mass killing that had not been seen on American soil since the Civil War. The attacks on New York and Washington were frequently compared to the Japanese attack on Pearl Harbor, and were immediately declared by the Bush administration to be acts of war, even though they were carried out by an amorphous nonstate terrorist organization that offered no commensurate military target. On September 20, 2001, George W. Bush promised the American people that the United States would respond to these attacks with a global “war on terror” that “begins with al Qaeda, but it does not end there. It will not end until every terrorist group of global reach has been found, stopped and defeated.”

In this war, Bush warned, “Americans should not expect one battle, but a lengthy campaign, unlike any we have ever seen. It may include dramatic strikes, visible on TV, and covert operations, secret even in success.” The most obvious object for a military response consisted of the “al Qaeda bases” in Afghanistan, but Bush also issued a wider warning that “every nation, in every region, now has a decision to make. Either you are with us, or you are with the terrorists.”

Even at this stage, the CIA had already identified a “worldwide attack matrix” of targets for covert and overt military operations in more than eighty countries. In his January 2002 State of the Union address, Bush identified Iran, North Korea, and Iraq as members of an “axis of evil” whose development of weapons of mass destruction posed an ongoing security threat to the United States. Other countries were listed by the administration and its supporters as “outposts of tyranny” that also constituted potential enemies of the United States. From the outset, therefore, the “global war on terror” (GWOT) became a recipe for a rolling series of wars, military interventions, and covert operations directed not only against al-Qaeda, but against an array of “terrorist” organizations considered by the United States to be in some kind of alliance with Osama bin Laden’s network and also against the states that were deemed to be their supporters.

This framework transformed the world into a global battle space for the limitless projection of U.S. military power. Bush administration officials and intelligence officers repeatedly warned that such a war would not be subject to moral or legal restraints, and these claims were reflected in the introduction of new measures and procedures that included torture, or “enhanced interrogations,” Special Forces assassination teams, trials of terrorist suspects by military courts, indefinite detention of “enemy combatants,” and kidnapping, or “rendition,” of suspects across international boundaries to secret prisons in countries with weak or nonexistent legal protections.

As in previous American wars, the determination to “take the gloves off” was justified on the basis that the United States faced an existential threat to its “way of life” comparable to Nazism or Communism from an enemy that did not fight by the same “rules.” The persistent references by the Bush administration to the “evil” of al-Qaeda and the “axis of evil” that was somehow connected to it echoed the crusading zeal that was also intrinsic to American warfare. During the 2004 presidential election campaign, Vice President Dick Cheney mocked Democratic nominee John Kerry’s proposal to wage a “more sensitive” war on terror, saying, “President Lincoln and General Grant did not wage sensitive warfare—nor did President Roosevelt, nor Generals Eisenhower and MacArthur.” A “sensitive war,” Cheney argued, would not “destroy the evil men who killed 3,000 Americans and who seek the chemical, nuclear, and biological weapons to kill hundreds of thousands more.”3

These comparisons were not entirely outlandish. The Lincoln administration also adopted an “iron fist” war policy that was regarded as a departure from conventional warfare, and it suspended certain civil liberties in order to pursue the war more effectively. But Cheney’s suggestion that al-Qaeda represented a threat to the national existence of the United States comparable to the Confederacy was more propaganda than history, and his invocation of Lincoln ignored the fact that the Union from the very beginning of the war had a very clear strategic goal—the surrender of the Confederacy and its reincorporation into the Union—that was realistic and achievable, however long it took to accomplish. By contrast, the Bush administration’s proposals to eliminate “every terrorist group of global reach” and “rid the world of evil” established a benchmark that was difficult if not impossible to achieve and defied long-established military principles, such as economy of force and the accurate assessment of the enemy’s strengths and vulnerabilities.

“No one starts a war—or rather, no one in his senses ought to do so,” declared Clausewitz, “without first being clear in his mind what he intends to achieve by that war and how he intends to conduct it.” This principle was entirely absent from the GWOT from its inception. In 2005 the Pentagon’s National Military Strategic Plan for the War on Terrorism analyzed the first four years of the GWOT and offered strategic guidance for its future conduct. On the one hand, its authors defined al-Qaeda as a “movement” and identified its primary “center of gravity” as its “extremist ideology,” which “motivates anger and resentment and justifies, in the extremists’ eyes, the use of violence to achieve strategic goals and objectives.”4

From a military point of view, the question of whether al-Qaeda was a “movement,” an “ideology,” or—as the Bush administration often insisted—a hierarchical military organization with a chain of command and control was important, because very different uses of force might be required depending on the answer. The Strategic Plan’s authors did not seem sure themselves about who or what should be targeted or how this might be done. In order to bring military power effectively to bear against al-Qaeda’s “center of gravity,” the authors made the obvious suggestion that “knowledge of indigenous population’s cultural and religious sensitivities and understanding of how the enemy uses the U.S. military’s actions against us should inform the way the U.S. military operates. Where U.S. military involvement is necessary, military planners should build efforts into the operation to reduce potential negative effects.”

These recommendations suggested a calibrated and careful use of military force coupled with a global ideological attempt to win over al-Qaeda’s potential constituency, but these guidelines were not always observed in the chaotic, incoherent swathe of global violence that ensued. Two full-scale wars and occupations, proxy wars like the Ethiopian invasion and occupation of Somalia, “targeted assassinations” by drones (unmanned aerial vehicles, or UAVs), and the global deployment of Special Forces death squads have generated “negative effects” that undermined the ideological justifications of the Bush administration’s antiterror crusade. In Somalia, the United States gave diplomatic and military support in 2006 to Ethiopia’s toppling of the Islamic Courts Union (ICU), a broad coalition of Islamist groups that opposed the corrupt and warlord-dominated Transitional Federal Government (TFG).

Regarded by the Bush administration as an affiliate of al-Qaeda, the ICU was followed by the far more radical al-Shabaab movement, which fought the Ethiopian army and the TFG in one of the bloodiest periods in Somalia’s recent history. In 2007 some two hundred thousand people were driven from Mogadishu as a result of the fighting between the Ethiopian army/TFG forces and al-Shabaab. Though al-Shabaab’s harsh version of Islam is now well known, its Islamist nationalist insurgency was fueled by the brutality of the Ethiopian security forces, which burned villages and shot, raped, and tortured Somali civilians with impunity.5

Somalia was not the only country where the war on terror generated more violence than it was supposedly intended to eliminate. And nowhere was this discrepancy between expectations and outcomes more glaring than in Iraq, where the U.S. military fought a war and occupation that many analysts have considered to be one of the worst disasters in American military history.6

Iraq Redux

On March 19, 2003, U.S.-led coalition planes and missiles struck targets throughout Iraq in the opening salvo of a war that, according to George W. Bush, was intended “to disarm Iraq, to free its people and to defend the world from grave danger.” The following day, the U.S. Third Infantry Division and First Marine Expeditionary Force crossed the Kuwait border and raced into the Iraqi hinterland in a rapid two-pronged advance, while their British allies concentrated on Basra and the south. In seventy-two hours, U.S. forces covered 240 miles in a classic example of “indirect,” fast-moving, deep territorial penetration in the tradition of Sherman’s and Patton’s armies.

In general, cities and urban areas were bypassed where possible; support troops were left to pin down their defenders and mop up, while most of the army drove north toward Baghdad. Despite occasionally stiff resistance and sporadic ambushes from the Iraqi army and the paramilitary Saddam Fedayeen, American firepower crushed the poorly armed and often badly led defenders. A sandstorm on March 26 restricted visibility, but the first U.S. units nevertheless reached the outskirts of Baghdad on April 4, three hundred miles from their starting point, and began conducting armored “thunder raids” into the heart of the city.

On April 9, Baghdad fell to coalition forces, and the Iraqi leadership went into hiding. The quick victory was a triumph for Bush and a vindication of his secretary of defense Donald Rumsfeld’s doctrine of a nimble, high-tech army that was able to achieve rapid results through a combination of airpower and minimal ground forces. Whereas the Union had once taken four years to conquer 6 million Southerners with more than a million soldiers, in Iraq fewer than 160,000 troops had conquered a country of 30 million in what Bush described as “one of the swiftest and most humane military campaigns in history.” Despite the frequent media references to the shock and awe that preceded the war, the bombing of Iraq owed more to Kosovo than to the first Gulf War. According to the U.S. Army’s official history of the invasion, its planners were “haunted” by “images of Berlin, Hue, and Grozny” and determined to avoid “wanton physical destruction, rampant human misery, and post-fighting devastation” whose “human, political, and financial cost . . . would be unacceptable in a campaign of liberation.”7

As on previous occasions, these aspirations were not always realized in practice. An estimated 3,750 Iraqi civilians were killed or injured by cluster munitions air-launched or ground-launched against Iraqi cities, in addition to some 9,200 combatant casualties. But the most violent phase of the war began after the iconic toppling of Saddam Hussein’s statue on April 9. Baghdad was subjected to a different form of “wanton physical destruction” as a result of the security vacuum created by the collapse of the Iraqi state. On April 10, crowds of looters robbed, stripped, and sometimes burned offices, shops, former Baathist headquarters, and cultural, medical, and educational institutions, destroying or robbing nearly 60 percent of the contents of both the National Museum and the National Library and Archive, a massive loss of irreplaceable books, sculptures, pottery, and documents, including cuneiform tablets dating back to Sumerian times.

The unwillingness of the occupying forces to provide the necessary security and protection mandated by international law was an alarming indication of the lack of planning for postinvasion chaos, a lack that was even more mystifying given the U.S. Army’s extensive experience with postcombat occupations. It is difficult to imagine that Sherman, with his very pragmatic attitude toward postwar stabilization and reconstruction in the South, would have approved of the catastrophic decision by the Coalition Provisional Authority to dissolve the Iraqi army, police, and bureaucracy overnight, and the disastrous consequences of this decision soon became even more glaringly apparent.

On May 1, George Bush announced from the deck of the carrier USS Abraham Lincoln that all major combat operations in Iraq were over. As the summer wore on, however, coalition troops found themselves subjected to daily attacks from an ad hoc array of demobilized soldiers and domestic and foreign organizations, using car bombs, suicide bombers, snipers, and the ubiquitous improvised explosive devices (IEDs), whose nineteenth-century precursors had so infuriated Sherman and his officers in Georgia. Some of the methods adopted to deal with this situation, such as the “porno interrogations” at Abu Ghraib and other detention centers, were historical novelties based on a catastrophically wrong-headed and often profoundly racist American understanding of Arab cultural norms, but others would have been entirely familiar to U.S. soldiers in the Philippines and Vietnam. First Lieutenant Paul Rieckhoff, a platoon leader in the First Brigade, Third Infantry Division, described the reaction of a Baghdad family to a no-knock raid on their home in search of weapons and insurgents. “They screamed and yelled in Arabic. We screamed and yelled in English—sweat pouring, babies wailing, muzzles swinging and hearts pounding. We huddled them into the far corner so we could cuff them, control them, and search the house. The women cried uncontrollably, the men were angry and proud—always glaring. . . . We occupied their home, invaded their personal space. . . . We stormed into their house like the Gestapo.”8

These “cordon-and-search” operations were a major factor in spreading the insurgency and were eventually replaced in many areas by milder knock-and-search variants or handed over to Iraqis. But many army commanders adopted a hard-line policy of collective punishment of Iraqi civilians to try to isolate the insurgents or simply get information, using well-established techniques of terror and coercion. “Hostile” villages and towns were cordoned off with barbed wire and dirt walls; the only entrances were controlled by soldiers; and electricity, food, and water were cut off for days in order to turn the people against the fighters in their midst.

In a press conference in March 2005, Jean Ziegler, the UN special rapporteur on the right to food, accused the coalition of “using hunger and deprivation of water as a weapon of war against the civilian population” in what he called a “flagrant violation of international humanitarian law.” In some cases, most of the people were forced to leave, while military-age men were ordered to remain in quarantined towns or neighborhoods designated as weapons-free zones that could be fired upon with impunity by U.S. troops. As in the Vietnam War, houses used by rebels were blown up, and crops and orchards were also destroyed as a punishment for “allowing” insurgents to use their property to fire on U.S. troops.

Despite strict rules of engagement designed to prevent indiscriminate retaliation, U.S. troops frequently responded to attacks by firing into densely packed neighborhoods. On March 31, 2004, insurgents at the resistance stronghold of Fallujah ambushed and mutilated four private military contractors. The killings provoked a wave of revulsion in the United States, and politicians and media commentators urged the military to “raze” a “sick” and “diseased” city and subject Fallujah to a Carthaginian punishment.9

On April 4, American units headed by the First Marine Expeditionary Force subjected the city to an aerial and ground assault involving seven hundred air strikes, attacks from AC-130 Spectre gunships equipped with Gatling guns firing 1,800 rounds per minute, and the prolific use of snipers. In addition to known or suspected insurgent houses and bases, U.S. forces fired on ambulances, hospitals, and civilians fleeing their homes, in some cases even while the victims were waving white flags, killing an estimated six hundred civilians before the operation was called off without bringing Fallujah under American control.

Following George Bush’s reelection in November, the administration resolved to bring Fallujah, which it regarded as a symbol of the insurgency, back under control of the Iraqi Interim Government. To prepare, U.S. troops surrounded Fallujah and ordered civilians to leave. Some 350,000 people obeyed these orders, leaving an estimated 30,000–50,000 noncombatants and 4,500 insurgents in the city. On November 8, U.S. forces subjected Fallujah to a massive aerial and ground bombardment, followed by a ground assault by U.S. Marines, with British and Iraqi forces in support, in which insurgent fighters were shot or blown up in houses, in mosques, and on the streets, and burned to death with white phosphorus in “shake and bake” bombings.

As in the first siege, there were reports of civilians shot in their homes, in the streets, and in hospitals, in some cases while waving white flags. After three weeks, the military announced that the city had been brought under control. By this time, much of it had been reduced to what a Reuters correspondent called a “sea of rubble and death.” In a visit to Fallujah in December, the New York Times reporter Erik Eckholm found “a desolate world of skeletal buildings, tank-blasted homes, weeping power lines, and severed palm trees.”10 In the aftermath of the battle, military planners talked of transforming the ruined city from a “bastion of militant anti-Americanism” into what one army officer described as “a benevolent and functional metropolis.”11

These predictions did not come true. By March 2006, less than 20 percent of Fallujah’s destroyed or damaged houses had been repaired, and only twenty-four out of eighty-one reconstruction projects in the city had been completed. Today, nearly ten years after the two sieges, doctors in the city continue to report a disturbing rise in the number of malformed children born to mothers in the city, which some attribute to the use of depleted-uranium antitank shells and other poisonous munitions by U.S. forces.

The “pacification” of Fallujah had no impact on the insurgency, but other cities were subjected to similar operations. In 2004 the epidemiologist Les Roberts watched “the shredding of entire blocks” by U.S. gunships in Sadr City in Baghdad, the base of support for Shiite leader Muqtada al-Sadr’s Mahdi Army. During the siege of Ramadi in June 2006, some 70 percent of the population fled the cordoned-off city before U.S. forces carried out air strikes and a ground assault in which eight blocks of the city were systematically demolished and U.S. snipers on rooftops fired down at pedestrians. “They bombed the power stations, water treatment facilities, and water pipes,” a local sheikh told the Inter Press Service reporter Brian Conley. “This house is destroyed, that house is destroyed. You will see poverty everywhere. The things that the simplest human in the world must have, you won’t have it here.”12

In other cities, civilians were killed and wounded during close air-support missions or shot dead by snipers while buying food during curfews. Despite the rules of engagement, civilians were regularly shot at random “flash” checkpoints or “tactical control points” that were set up without warning on roads and highways, either because they failed to understand orders to stop or because panicky American soldiers shot at them in anticipation of suicide bomb attacks.

Insurgent car bombs and suicide bombings wreaked havoc among Iraqi civilians and security forces alike, and civilian “soft targets” were attacked with merciless ferocity. American occupation officials frequently emphasized the military’s restraint and humanity in comparison with their “terrorist” opponents, but numerous testimonies from American soldiers, Iraqi civilians, and foreign journalists tell a story that recalls the Philippines and Vietnam, of an army permeated with contempt for Iraqi “hajis” and “camel jockeys” and terrified soldiers brutalized by a shockingly violent and unpredictable guerrilla war. In August 2006, a military court investigating the killing of three Iraqi prisoners at Samarra by soldiers from the 101st Airborne Division’s Third Brigade heard testimony from a member of the unit who described “a culture of racism and unrestrained violence” encouraged by its commanding officer, Colonel Michael Steele, who reportedly gave knives to his troops as rewards for their kills in an attempt to foster a competitive body count of insurgents.

In 2008, Iraq Veterans Against the War conducted Winter Soldier hearings in Washington that recalled the 1971 Winter Soldier hearings held in Detroit by Vietnam Veterans Against the War. More than two hundred veterans described a war that bore little relation to its altruistic intentions. Two soldiers from the First Cavalry Regiment told the audience of a “weapons-free” assault on the Abu Ghraib neighborhood in April 2004 in which their unit bulldozed dozens of buildings, crushed vehicles, and left the streets “littered” with seven or eight hundred corpses of humans and animals. Various soldiers claimed that their officers encouraged them to carry shovels or “drop weapons” on patrol, to be planted on the bodies of civilians they shot so that the dead would appear to have been combatants.13

Many U.S. soldiers respected Iraqis, believed in their mission, and conducted themselves in accordance with the U.S. military’s best traditions, but as in Vietnam the army’s willingness to use force to coerce civilians made for an atrocity-producing environment in which killing and destruction were always likely. In April 2004, a senior British army officer echoed Reginald Thompson’s criticisms in Korea when he anonymously condemned the American tendency to use overwhelming firepower in densely populated residential areas, on the grounds that “the Americans’ use of violence is not proportionate and is over-responsive to the threat they are facing. They don’t see the Iraqi people the way we see them. They view them as Untermenschen.”14

Whether such bigotry was rare or rampant, destruction also had a strategic purpose. During the second assault on Fallujah, an anonymous Pentagon official told the New York Times reporters Thom Shanker and Eric Schmitt, “If there are civilians dying in connection with these attacks, and with the destruction, the locals at some point have to make a decision. Do they want to harbor the insurgents and suffer the consequences that come with that, or do they want to get rid of the insurgents and have the benefits of not having them there?”15

Generals James Franklin Bell and William Westmoreland had offered Filipinos and Vietnamese a similar choice in their respective wars. In 2003 Lieutenant Colonel Nathan Sassaman, commander of the Fourth Infantry’s 1-8 Battalion, ordered his troops to cordon off the village of Abu Hishma with barbed wire after a soldier was shot. Interviewed by New York Times reporter Dexter Filkins, the former star quarterback memorably predicted, “With a heavy dose of fear and violence, and a lot of money for projects, I think we can convince these people that we are here to help them.”16

In 2004 Sassaman was relieved of his command after two of his soldiers drowned a detainee. He subsequently criticized his superior officers for losing “sight of our primary purpose—to destroy the enemy with overwhelming force at every opportunity” and described the confusing “duality” of a war in which “there were many nights spent hunting, fighting, and killing fanatical insurgents, followed by a swift transition into daytime reconstruction efforts with peaceful Arab citizens.”17 This “duality” was a persistent feature of the Iraq War from the earliest days of the occupation, as military engineers built or repaired schools, soccer fields, mosques, and other public buildings, and civil affairs teams were sent out to reestablish local administrations and neighborhood councils and provide Iraqis with information about the democratic process in “governance operations.”

But reconstruction was patchy and often nonexistent; war, neglect, monumental levels of corruption, and inappropriate development strategies designed to favor well-connected American companies compounded the decay of Iraq’s infrastructure and its education and hospital systems. Unlike in postwar Germany or Japan, military-driven reconstruction in Iraq never became a catalyst for a wider economic and social transformation, and its intended impact on the Iraqi population was negated by a war that allowed increasingly little space for the “humanitarian” components of “full-spectrum operations.”

The Carrot and Stick

In January 2005, an article in Newsweek claimed that Pentagon officials were preparing a new policy in Iraq called the Salvador Option, modeled on the “so-called death squads” that had hunted down and killed “rebel leaders and sympathizers” in El Salvador during the 1980s. The article quoted one official who justified such actions by saying, “The Sunni population is paying no price for the support it is giving to the terrorists. . . . From their point of view, it is cost-free. We have to change that equation.”18

Over the next few months, newspapers began reporting the kidnapping and murder of Iraqis by Shiite paramilitary groups, who dumped the bodies, bearing obvious marks of torture, as warnings in public places—the same methods used in the U.S.-backed war in El Salvador. Most of the victims were Sunni Muslims, and the militias were well armed and well equipped, with close connections to the Shia-dominated Ministry of the Interior in the Iraqi Interim Government. Their activities reached a peak of ferocity following the bomb attack on the Shia Golden Dome Mosque in Samarra in February 2006, when scores of Sunni civilians and insurgents were murdered by an array of “pop-up” militias.

As in El Salvador, American involvement in these operations is murky, but some paramilitary units, such as the Special Police Commandos, received training from U.S. military advisers, including Colonel James Steele, the former head of the U.S. military mission in El Salvador. The campaign of what can be described only as state terror carried out by the Shiite “pop-up” militias coincided with the escalation of raids carried out by Joint Special Operations Command (JSOC) hunter-killer teams and the British Special Air Service between 2006 and 2008, in which thousands of “terrorists” were captured or killed. Though lawyers were supposedly present during the target selection process, it has never been explained who these targets were or what determined the decision to capture or kill them. Whether these operations merely coincided with the sectarian violence directed by the Ministry of the Interior or were designed to complement it, the state-directed campaign of terror ended the last prospect of a national resistance to the occupation unified across sectarian lines, and it drove the Sunni population to regard the U.S. Army as the lesser of two evils, at least temporarily, to the point where its leaders were increasingly willing to turn to their former enemy for protection.

A 1994 counterinsurgency manual, Foreign Internal Defense Tactics, Techniques, and Procedures for Special Forces, which was used by U.S. forces in Iraq, emphasized the importance of “national programs to win insurgents over to the government side with offers of amnesty and rewards” in addition to the “persuasive power” that “pressure from the security forces” was able to provide.19 From 2005 onward, an influential group of military intellectuals centered on Major General David Petraeus, who had commanded the 101st Airborne Division in Mosul, argued that the army had concentrated too much on “enemy-centric” counterinsurgency and “kinetic operations” that emphasized killing insurgents rather than winning over the Iraqi population.

To the advocates of COIN, as this approach to counterinsurgency became known, the army had dangerously isolated itself from ordinary Iraqis by sealing itself up in fortified compounds and conducting large-scale operations with armored columns. Borrowing from a strategy developed by the French army in Algeria known as clear-hold-build, the “COINistas” advocated a return to small-unit operations in which U.S. soldiers would drive insurgents from a particular locality and “live amongst the people” in smaller forward operating bases in order to protect and gain the trust of the local population and bring them under the control of the Iraqi government.

COIN was given official status in the U.S. Army/Marine Corps Field Manual 3-24 (FM 3-24), which was presented to military officers and selected journalists with great fanfare in February 2006. Hundreds of delegates listened to Petraeus and the manual’s authors expound the principles of population protection, nation building, cultural sensitivity, and “measured” force. These ideas were not entirely new, and some commanders in Iraq already practiced them of their own volition. Nevertheless, some delegates were puzzled about how to fulfill FM 3-24’s requirements to “apply force without killing or crippling the enemy” and about maxims like “The best weapons for counterinsurgency do not fire bullets” and “Mounting an operation that kills 5 insurgents is futile if collateral damage leads to the recruitment of 50 more.” The former marine and military writer Francis West questioned the relevance of such methods in Iraq and exclaimed, “An insurgency—it’s war! The weapons we have, the reason people want us there, is we kill people!”20

In early 2007, Petraeus was appointed to command the Multi-National Force–Iraq (MNF-I), which was bolstered by twenty thousand extra troops in the Bush administration’s “surge.” Over the next two years, “population-centric” counterinsurgency appeared to achieve some success, as the overall level of violence declined by more than 90 percent and Sunni sheikhs and tribal leaders in the war-torn Anbar Province entered into negotiated agreements with U.S. forces and the Iraqi government. For Petraeus and his many admirers, the reduction in violence and the relative stabilization that followed was a triumph of COIN’s “non-kinetic” methodology.21

The idea that the Sunni population was won over by a gentler and more culturally attuned U.S. Army tended to overlook a number of contributing factors, however: the effective bribing of Sunni insurgents that accompanied the “Sunni Awakening”; the disenchantment of Sunni tribal sheikhs with the more extremist al-Qaeda-linked jihadist insurgents; the ethnic division of Baghdad and much of Iraq into Sunni and Shiite ghettoes, which made further violence unnecessary; and the continued reliance on hard military power that accompanied the advent of COIN. By the time the United States withdrew its forces from Iraq in 2011, the war and occupation had claimed the lives of 4,475 U.S. soldiers and wounded 32,220 more. The civilian death toll has been variously calculated at anywhere from the hundreds of thousands to a million. Such an outcome might not seem the most obvious cause for celebration. Nevertheless, after years of barely credible blunders that had brought the most powerful military in the world to the brink of strategic defeat, the surge success story was welcomed by the Bush administration and the American public, and COIN came to be regarded as the magic key that had unlocked the Iraq War and could unlock other wars as well.

Afghanistan

In 2009, the incoming Obama administration announced plans for a troop surge in Afghanistan in an attempt to defeat the long-running insurgency that had followed the toppling of the Taliban government in 2001–2. In June 2009, General Stanley McChrystal, who had headed the Joint Special Operations Command (JSOC) in Iraq, was appointed head of the International Security Assistance Force (ISAF) and U.S. Forces Afghanistan (USFOR-A) in preparation for the Afghan surge. The former director of JSOC hunter-killer teams seemed an unlikely practitioner of nonkinetic warfare. Nevertheless, McChrystal issued a tactical directive to American and ISAF military units shortly after his arrival, urging his commanders to adopt the COIN approach and “avoid the trap of winning tactical victories—but suffering strategic defeats—by causing civilian casualties or excessive damage and thus alienating the people.”22 Many U.S. officers conducted their operations in accordance with these principles. “I very deliberately told the Marines to focus on the population first and the enemy second. I will maintain that the firefight is a distraction that must be worked through in order to maintain contact with the population,” reported Third Reconnaissance Battalion commander Lieutenant Colonel Travis Homiak on marine operations in the Upper Sangin Valley.23

Social scientists, ethnologists, and anthropologists were also dispatched to Afghanistan as part of the Pentagon’s human terrain system (HTS) to provide the army with information on the cultural norms, conditions, and attitudes that would enable better-informed operational decisions in dealing with the local population. These developments were not met with universal enthusiasm. In a widely circulated letter written to the secretary of the army in August 2010, Colonel Harry Tunnell, commander of the Fifth Stryker Brigade Combat Team of the Second Infantry Division, accused “COIN dogma” of having “degraded our willingness to properly, effectively, and realistically train for combat” and claimed that “population-centric approaches to war have resulted in senior officers that are almost pacifistic in their approach to war.”24 These criticisms came from an infantry commander with a reputation for aggressive search-and-destroy tactics. In April 2011, a Rolling Stone investigation revealed that members of Tunnell’s brigade had run a secret “kill team” that murdered Afghan civilians and took mutilated body parts as souvenirs.25 Tunnell was not found to be directly responsible for these crimes, but he was relieved of his command after an internal army investigation concluded that he had fostered a culture of violence that might have influenced his soldiers.

To officers like Tunnell, killing was the essential task of the infantry. In an article in Joint Forces Quarterly in 2009, Colonel Gian Gentile, who had commanded a cavalry squadron in Iraq and was one of Petraeus’s most articulate critics, also claimed that the army’s “operational capability to conduct high-intensity fighting operations other than counterinsurgency has atrophied over the past 6 years” and called for a new balance between counterinsurgency and “operations at the higher end of the conflict spectrum.”26 In another critique of COIN, in Harper’s magazine in 2007, the political scientist Edward Luttwak denounced the “crippling ambivalence of occupiers who refuse to govern” and exhorted the U.S. military in Iraq to “out-terrorize the insurgents, the necessary and sufficient condition of a tranquil occupation.” An admirer of the scorched-earth “counterterror” policies of the Guatemalan army in the 1980s, Luttwak praised the reprisals and massacres carried out by Nazi armies in World War II as “very effective . . . in containing resistance with very few troops.”27

These criticisms ignored the fact that COIN was not quite the radical departure from American military practice that its opponents or its supporters claimed. In Afghanistan, as in Iraq, the concern with “protecting the population” and “living amongst the people” in towns and cities was accompanied by significant degrees of “kinetic” force, including an escalation in rural night raids conducted by Special Forces and Afghan troops, in which real or suspected Taliban were often killed on the spot. Supposedly aimed at “high-value” and “mid-level” Taliban, the raiders often failed to distinguish among militants, sympathizers, and civilians. Targets were often selected on the basis of mobile phone numbers, and noncombatants were shot simply because they happened to be in the wrong place at the wrong time. On February 12, 2010, a Special Forces unit raided a house in the village of Khataba in Paktia Province in the belief that a Taliban gathering was under way. Without warning, snipers opened fire and killed five guests and family members, including two pregnant women and a teenage girl; other troops beat up and arrested a number of male guests. The gathering was in fact a party to celebrate the naming of a newborn baby.

Similar incidents took place in other areas. In 2009 Matthew Hoh, a senior U.S. diplomat in Kabul and former marine, resigned in protest at what he called a “Special Operations form of attrition warfare” that had increased popular support for the Taliban. In April 2010, McChrystal admitted to his own officers, “We’ve shot an amazing number of people and killed a number and, to my knowledge, none has proven to have been a real threat to the force.” Since his arrival, the number of IED attacks had soared from 250 per month to more than 900. In July that year, McChrystal was replaced by Petraeus, who continued to combine nonkinetic and population-centric counterinsurgency with an intensification of air strikes and special-forces raids. In August 2011, Petraeus left Afghanistan and the army to become director of the CIA. At his retirement ceremony, Admiral Michael Mullen, chairman of the Joint Chiefs of Staff, described him as one of the “great battle captains of American history,” alongside Grant, Marshall, and Eisenhower, and claimed that “Afghanistan is now a more secure and hopeful place than a year ago” as a result of his efforts.

These claims did not go unchallenged. In a book-length critique of COIN, Gian Gentile accused Petraeus and his colleagues of misapplying an already flawed and overrated strategy in Afghanistan and promoting a seductive but delusional form of sanitized warfare designed to conceal the unavoidable reality of “death, destruction, and human suffering.”28 In a speech in 2013 to the Association of the United States Army, George Bush’s former defense secretary Robert Gates made a more oblique reference to COIN when he quoted Sherman’s famous observation that “every attempt to make war easy and safe can only end in humiliation and disaster” and rejected “idealized, triumphalist, or ethnocentric notions of future conflict that aspire to upend the immutable principles of war: where the enemy is killed, but our troops and innocent civilians are spared.”29

Long-Distance War

Such criticisms tended to ignore the fact that the “immutable principles” Gates outlined were not necessarily appropriate to the wars that U.S. armies were now fighting. Given America’s scaled-down professional army and a public with a limited appetite for foreign military adventures, the minimization of American casualties was even more of a priority than usual for U.S. military planners, and reducing the extent—or at least the visibility—of wartime destruction was an essential strategic component of “information operations.” These considerations have influenced the deployment of unmanned combat aerial vehicles (UCAVs) as an essential weapon of twenty-first-century American warfare. Initially used for surveillance during the Afghan War in 2001–2, these drones were subsequently fitted with missiles and have been used to kill hundreds of alleged terrorists and militants in countries including Iraq, Afghanistan, Somalia, Yemen, and Pakistan.

For the first time in military history, “pilots” thousands of miles away from the battle zone could observe, select, and kill specific persons in their cars, their homes, or public places, in town or countryside, in undeclared wars. From the perspective of the U.S. military and intelligence services, drones were the ideal weapon for media-managed warfare against a stateless enemy. They could simply kill people instead of trying to capture and arrest them, thereby eliminating the negative domestic political repercussions of taking casualties and the legal complications of due process and detention of “enemy combatants.”

Reaper and Predator drones are often presented to the public as the ideal instruments of “humane” and “surgical” warfare, minimizing physical destruction and making it possible to selectively eliminate “high-value” targets rather than civilians. In 2012 President Obama stated that “drones have not caused a huge number of civilian casualties,” describing them as a “targeted, focused effort at people who are on a list of active terrorists trying to go in and harm Americans” that was used only when there was a “near certainty” that civilian casualties could be avoided.30

Such claims are difficult to confirm, given the secrecy surrounding a target-selection process based on “signature strikes,” in which any male of military age behaving “suspiciously” in a certain area acquires “signatures” that qualify him for killing as a member of the Afghan or Pakistani Taliban or an al-Qaeda affiliate. In May 2012, the New York Times reported that President Obama personally signed off on targets from a kill list drawn up by agents of the CIA and other spy agencies, who gathered “every week or so, by secure video teleconference, to pore over terrorist suspects’ biographies and recommend to the president who should be the next to die.”31

The primary battleground in the U.S. drone wars has been the Federally Administered Tribal Areas (FATA) in northwestern Pakistan. Between 2004 and 2013, from 330 to 374 drone strikes were carried out in Waziristan, killing 2,000 to 4,700 people, including 400 to 900 civilians. Investigations carried out by Amnesty International and other NGOs in Waziristan have listed attacks on mosques, bakeries, weddings, funerals, houses, bus depots, and public places in which men, women, and children have been killed and wounded.32 On March 17, 2011, more than forty people were killed when two missiles were fired on a jirga, or tribal council, in the town of Datta Khel, North Waziristan. In October 2013, the son of sixty-seven-year-old midwife Momina Bibi told five members of the U.S. Congress how his mother had been blown to pieces by a drone-fired missile in front of her grandchildren while picking vegetables in her garden the previous year.33

Some civilians have been killed by drones as a result of faulty intelligence information or as an accidental consequence of strikes not specifically aimed at them. Others were killed as a result of a “double tap” policy, in which drone-fired missiles are directed at first responders coming to help victims of the initial strike. In 2010, the Department of Defense began posting videos of drone missions on YouTube, showing anonymous “militants” and “armed criminals” being blown up in Iraq and Afghanistan. This video-game footage of bad guys vanishing in puffs of smoke does not include the images of blasted homes, rubble, and civilian mourners taken by Pakistani photographer and antidrone campaigner Noor Behram, who has described the typical aftermath of a drone strike thus: “There are just pieces of flesh lying around. . . . You can’t find bodies. So the locals pick up the flesh and curse America. They say that America is killing us inside our own country, inside our own homes, and only because we are Muslims.”34

The Obama administration has paid scant attention to criticisms from the Pakistan government that these weapons may be creating more enemies than they kill, and it has rejected suggestions from lawyers and human-rights activists that drone killings may breach the international laws of armed conflict and international humanitarian law. A 2012 report by Stanford University researchers on the drone war in Waziristan describes a traumatized and exhausted population living in constant fear of the drones cruising out of sight and hearing above their heads, children who do not dare to go to school, and adults who avoid going to the mosque or any public gathering, who even avoid inviting guests to their own homes for fear of being targeted as hosts of Taliban conclaves. “We are always thinking that it is either going to attack our homes or whatever we do,” one survivor of a drone strike on his taxi told the researchers. “It’s going to strike us; it’s going to attack us. . . . No matter what we are doing, that fear is always inculcated in us.”35

The drone war has also damaged Pashtun society politically and economically, through the killing of influential tribal elders and family breadwinners and the destruction or disruption of businesses and transportation. Given these facts, it is hard to avoid the conclusion that the remote-control war in Waziristan is aimed not simply at militants but at the population as a whole, regardless of whether people support or merely tolerate the Taliban presence, and regardless of whether they have any choice in the matter.

Future war? Simulated drone strike from video installation, “5000 Feet Is the Best.” Courtesy of Omer Fast.

It’s symptomatic of the strategic incoherence at the heart of this “long war” that the United States has tried to fight terrorism by terrorizing other societies thousands of miles away on the basis of nebulous assumptions about their cultural behavior, such as carrying guns or gathering in groups, while anthropologists gather information on rural Afghans in an attempt to win their hearts and minds. In effect, technology appears to have defied Sherman’s dictum and proved that it is possible to make war “easy and safe”—for one side at least. In doing so, it has paved the way for a world in which fully autonomous drones may soon hover permanently over the world’s “lawless” spaces and select their own targets on the basis of computerized data without any human input.

Future War

The U.S. military’s embrace of drone warfare doesn’t mean it has relinquished the search for destructive supremacy that has always been intrinsic to America’s military objectives. In 2000, the Department of Defense published a remarkable document, Joint Vision 2020, which outlined its new grand strategy for future warfare. Elaborating on a strategic concept first mooted in a 1998 U.S. Space Command document, Vision for 2020, the DoD looked forward to the creation of “a force that is dominant in the full spectrum of military operations—persuasive in peace, decisive in war, preeminent in any form of conflict” that would make it possible for U.S. forces, “operating unilaterally or in combination with multinational and interagency partners, to defeat any adversary and control any situation across the full range of military operations” on land and sea, in the air, and in space.36

As a result of the “war on terror,” the United States has extended its visible military presence into Central Asia, Africa, and South America and made numerous less visible deployments of special operations forces and military advisers in countries throughout the world. These forces include what retired lieutenant colonel and leading COINista John Nagl has called an “industrial strength counterterrorism killing machine” run by the Joint Special Operations Command.37 In pursuit of this megalomaniacal vision of “full-spectrum dominance,” the Pentagon has used the soaring budgets of the past decade to expand its destructive arsenal with autonomous robots, laser weapons, thermobaric fuel bombs, bunker-busting mini-nukes, and new generations of UAVs, ranging from small hand-launched models to the Reaper-mounted Gorgon Stare wide-area surveillance system. In 2002 the DoD’s Quadrennial Defense Review outlined its intention to acquire “non-nuclear forces that can strike with precision at fixed and mobile targets throughout the depth of an adversary’s territory; active and passive defenses; and rapidly deployable and sustainable forces that can decisively defeat any adversary.”

For the past decade the military has been developing a “prompt global strike” system to make it possible for submarine-based missiles to “strike virtually anywhere on the face of the Earth within 60 minutes.” In the same period, the military’s cutting-edge Defense Advanced Research Projects Agency (DARPA) and a host of universities and military research institutes have continued to seek new technologies to extend the “revolution in military affairs” (RMA) into the indefinite future. Envisioned developments include autonomous self-sustaining robo-soldiers, laser-guided bullets, cyborg “insects” that can spy or kill, and a semimythical kinetic-bombardment weapon known as the rod of God or finger of God, which could destroy any target on earth with tungsten rods dropped from orbit.38

Where Sherman’s ragged and often barefoot armies tramped through mountains and swamps, the twenty-first-century U.S. Army envisages future generations of war fighters equipped with smart bulletproof nano-uniforms that could heal wounds, enable their wearers to scale walls and buildings, and incorporate load-bearing exoskeletons for super strength. Other research projects include the development of super-speed drugs to enable soldiers to go for days without sleeping and post-combat pills to eliminate PTSD. At the same time, the military is also carrying out research into nonlethal technologies designed to disable rather than kill, such as incapacitating foams, flight-inducing sounds or smells, shotgun Taser rounds, and pulsed-energy projectiles (PEPs) that use microwave beams to project exploding plasma at targets, causing temporary paralysis. At mock Middle Eastern cities in California and the Midwest, built to emulate the “global slums” and “feral cities” of the future, U.S. soldiers rehearse “military operations in urban terrain” (MOUT) in anticipation of military operations against a constantly widening range of enemies that includes terrorists, insurgents, drug cartels, and rogue states.

Whereas Sherman once feared that the United States might become “like Mexico” as a result of the Civil War, the projection of U.S. military power in the early twenty-first century is often presented as an essential bulwark against global chaos and the implosion of the international order. In The Pentagon’s New Map (2004), the military geostrategist Thomas P.M. Barnett compared the U.S. military to a “SWAT team within any metropolitan police force” that would “enter and exit crime scenes according to circumstances” against an array of rogue states, terrorists, and drug lords that belonged to what he called the “non-integrating gap.” Barnett lauded the U.S. military’s unique ability to kill and destroy “bad actors while leaving behind societies otherwise unimpaired; it will surgically remove unwanted tissue, not riddle the body politic with smoking holes.” These “surgical” interventions, Barnett argued, would facilitate postconflict integration of targeted countries into the international economic order and enable the United States to direct military force against “bad guys, using weapons with a real moral dimension, such as smart bombs and new nonlethal forms of warfare that target enemy systems without harming people.”39

Others have questioned whether war can—or should—be fought “without harming people.” Some have argued that the wars of the past decade have not been destructive enough and have undermined the military’s core task of destroying the enemy regardless of the consequences. In a discussion paper for the National Intelligence Council (NIC) 2020 project written in May 2004, the former U.S. intelligence officer and military pundit Ralph Peters argued that these wars proved that “there is no substitute for shedding the enemy’s blood in adequate quantities; that an enemy must be convinced practically and graphically that he is defeated.” For Peters, such “virtuous destruction” might include attacks against property and infrastructure, and also against “hostile populations,” who “must be broken down to an almost childlike state . . . before being built up again.”40

In May 2012, Wired magazine published a July 2011 PowerPoint presentation by Lieutenant Colonel Matthew Dooley, an instructor for a course on Islam at the Defense Department’s Joint Forces Staff College in Norfolk, Virginia. Dooley considered the possibility that the United States might wage “near total war” against the world’s Muslim population in order to defeat terrorism. In such a war, Dooley suggested, the 1949 Geneva Conventions might no longer be applicable, and the military might be obliged to emulate “the historical precedents of Dresden, Tokyo, Hiroshima, Nagasaki” and ensure that “Saudi Arabia [was] threatened with starvation, Mecca and Medina destroyed, Islam reduced to cult status.”41

This course was discontinued and denounced by the chairman of the Joint Chiefs of Staff, General Martin Dempsey, as “counter to our values of appreciation for religious freedom and cultural awareness.” But the precedents that Dooley referred to were also part of an American tradition that remains pertinent to the era of information warfare and knowledge-based war. In a 2007 study for the School of Oriental and African Studies on the potential consequences of a U.S. war on Iran, Dr. Dan Plesch and Martin Butcher claimed that the military was poised to hit over ten thousand targets in Iran with smart conventional weapons and that “hundreds of thousands” of Iranians would be killed because of the targets’ location in cities and other populated areas.42

The recent rapprochement between Iran and the United States suggests that this possibility may be averted, but the ability to inflict large-scale destruction both in the battle space and beyond it remains fundamental to America’s military preparedness. Nor are such threats restricted to “feral cities” and “rogue states.” A 2008 study by the Center for Strategic and Budgetary Assessments suggested that U.S. Special Forces might be required to conduct “large-scale, overt unconventional warfare operations” against China in order to defend U.S. interests in the “global commons” of space, cyberspace, land, and sea.43 In recent years, Southeast Asia and the Pacific region have been progressively remilitarized in an ongoing attempt to encircle and “deter” China as part of the Obama administration’s “Pacific pivot.”

For all the U.S. military’s concern with counterinsurgency, hyperwar, and high-tech wars based on limited, surgical destruction, these preparations suggest that major conventional wars remain as much a part of its contingency planning for the future as its rehearsals for policing operations in the “feral cities” of the Third World and the global “littoral.” Whether these conflicts are fought by human beings, robots, or remote-control machines and cyborg insects, they will inevitably be waged among and often against civilians, as well as soldiers, and fought in accordance with the central aim of war that Sherman once defined to his wife after the battle of Shiloh: “to produce results by death and slaughter.”44 For all the alluring rhetoric of humanitarian warfare, kinetic operations, and nonlethal force, the ability to inflict decisive destruction is likely to remain a central component of the American way of war, and Sherman’s name will continue to provide inspiration to those who wish to intensify these capabilities and escalate our wars.