WHEN A BIPARTISAN CHORUS of foreign policy professionals denounced Donald Trump’s candidacy during the 2016 campaign, Trump fired back promptly, calling them “nothing more than the failed Washington elite looking to hold onto their power, and it’s time they were held accountable for their actions.”1 Their concerns about Trump may have been valid, but so was his depiction of an out-of-touch community of foreign policy VIPs whose unthinking pursuit of liberal hegemony had produced few successes and many costly failures.
In a perfect world, the institutions responsible for conducting or shaping U.S. foreign policy would learn from experience and improve over time. Policies that worked poorly would be abandoned or revised, and approaches that proved successful would be continued. Individuals whose ideas had helped the United States become stronger, safer, or more prosperous would be recognized and rewarded, while officials whose actions had repeatedly backfired would not be given new opportunities to fail. Advisors whose counsel proved sound would rise to greater prominence; those whose recommendations were lacking—or, worse yet, disastrous—would be marginalized and ignored.
This notion may sound idealistic, but it is hardly far-fetched. Any organization striving to succeed must hold its members—especially its leaders—accountable for results. No corporation seeking to stay in business would stick with a management team that never met a quarterly target, and no baseball team would keep the same manager and lineup after finishing dead last five years running. In a competitive world, holding people accountable is just common sense.
But it doesn’t work this way in American politics, and especially not in foreign policy. Instead, failed policies often persist and discredited ideas frequently get revived, while error-prone experts “fail upward” and become more influential over time. U.S. leaders sometimes turn to the same people over and over, even when they have repeatedly failed to accomplish the tasks they were previously given. The reverse is sometimes true as well: people who do get things right can go unrecognized and unrewarded, and they may even pay a considerable price for bringing unpleasant truths to light.
In short, when it comes to foreign policy, F. Scott Fitzgerald had it exactly backward. Far from there being “no second acts in American lives,” foreign policy practitioners appear to possess an inexhaustible supply of second chances. This worrisome tendency applies both to ideas and policies and to the people who conceive and implement them.
WHY BAD IDEAS SURVIVE
We would like to think that the government gets wiser over time and that past blunders will not be repeated. And in some areas—such as public health, environmental protection, or transportation safety—there has been considerable progress. But the foreign policy learning curve is shallow, and bad ideas are remarkably resilient. Like crabgrass or kudzu, misguided notions are hard to eradicate, no matter how much trouble they cause or how much evidence is arrayed against them.
Consider, for example, the infamous “domino theory,” which has been kicking around since Dwight D. Eisenhower was president. During the Vietnam War, U.S. officials and influential pundits repeatedly claimed that withdrawal would undermine American credibility and produce a wave of realignments that would enhance Soviet power and, in the worst case, leave the United States isolated and under siege. The metaphor was evocative—assuming that states actually did behave like dominoes—and it played on fears that other states would flock to whichever superpower seemed most likely to triumph.2 Yet no significant dominoes fell after the United States withdrew from Vietnam in 1975; instead, it was the Soviet Union that collapsed some fourteen years later. Scholarly investigations of the concept found little evidence for its central claims, and these two events should have dealt the idea a fatal blow.3 Yet it reemerged, phoenixlike, in recent debates over Afghanistan, Syria, and the nuclear agreement with Iran. Americans were once again told that withdrawing from Afghanistan would call U.S. credibility into question, embolden U.S. opponents, and dishearten key U.S. allies.4 In the same way, President Obama’s reluctance to intervene in Syria and his decision to pursue a nuclear deal with Iran supposedly led Russian president Vladimir Putin to act more aggressively in Ukraine.5 Despite a dearth of supporting evidence, it seems nearly impossible to quash the fear of falling dominoes.
Similarly, the French and American experience in Vietnam might have taught us that occupying powers cannot do effective “nation-building” in poor and/or deeply divided societies, and that lesson might have made future presidents wary of attempting regime change in the developing world. The Soviet defeat in Afghanistan in the 1980s and the turmoil the United States confronted in Somalia after 1992 should have driven the lesson home even more powerfully. Yet the United States has now spent more than a decade and a half attempting regime change and nation-building in Iraq, Afghanistan, Libya, Yemen, and several other places—at considerable cost but with scant success. The futility of this task could hardly have been more obvious when Barack Obama took office in 2009, yet he still chose to escalate the war in Afghanistan, acquiesced in the ill-advised campaign to topple Muammar Gaddafi in Libya, and continued to interfere throughout the Arab and Islamic world despite abundant evidence that such actions strengthened anti-American extremism.
Why is it so hard for states to learn from mistakes? And on the rare occasions when they do learn, why are the key lessons so easily forgotten?
THE LIMITS OF KNOWLEDGE
Foreign policy is a complicated business, and observers invariably offer competing explanations for policy failures and draw different lessons from them. Did the United States lose in Vietnam because it employed the wrong military strategy, because its South Vietnamese clients were irredeemably corrupt and incompetent, or because media coverage undermined support back home? Did violence in Iraq decline in 2007 because “the surge worked,” because Al Qaeda overplayed its hand, or because prior ethnic cleansing had separated Sunnis from Shia and thus made it harder for either to target the other? Because policy implications depend on how the past is interpreted and explained, consensus on the proper “lessons” of a given policy initiative is often elusive.
“THIS TIME IS DIFFERENT”
The lessons drawn from past experience may also be discarded when policymakers believe that new knowledge, a new technology, or a clever new strategy will allow them to succeed where their predecessors failed. As Carmen Reinhart and Kenneth Rogoff showed in their prizewinning book This Time Is Different: Eight Centuries of Financial Folly, economists and financial professionals have repeatedly (and wrongly) concluded that they had devised new and foolproof ways to prevent financial panics, only to be surprised when the next one occurred.6
In much the same way, Vietnam taught a generation of U.S. leaders to be wary of counterinsurgency campaigns, but the lesson was forgotten as time passed and new technologies and doctrines made their way into the armed forces. The Vietnam experience had inspired the so-called Powell Doctrine, which prescribed that the United States intervene only when vital interests were at stake, rely on overwhelming force, and identify a clear exit strategy in advance.7 Yet after routing the Taliban in 2001, top U.S. officials convinced themselves that a combination of special operations troops, precision-guided munitions, and high-tech information management would enable the United States to overthrow enemy governments quickly and cheaply, avoiding lengthy occupations. The caution that informed the Powell Doctrine was cast aside, leading to new quagmires in Iraq and Afghanistan.
Those unhappy experiences guided Barack Obama’s more cautious approach to military intervention and his decision to rely on airpower and drones rather than ground troops in most instances. Yet the lesson of these earlier debacles was beginning to fade by 2014, as proponents of a more muscular foreign policy began insisting that the real problem was not the original decision to invade, but rather the decision to withdraw before total victory had been achieved.8 Senator Marco Rubio (R-FL) told an interviewer, “It was not a mistake to go into Iraq,” and Senator Lindsey Graham (R-SC) declared, “At the end of the day, I blame President Obama for the mess in Iraq and Syria, not President Bush.” In addition to masking culpability for the earlier blunder, such comments are intended to convince elites and the public to support more operations of this kind and, if necessary, to sustain them longer.9 To the extent that these efforts to rewrite history succeed, earlier lessons will be forgotten and the same mistakes will be repeated.
IF YOU’RE STRONG, YOU DON’T HAVE TO BE SMART
A wealthy country like the United States has an array of well-funded universities, think tanks, and intelligence agencies to analyze global issues and figure out how to deal with them. These same assets should also help the country learn from experience and correct policies that aren’t working. But because the United States is already powerful and secure, mistakes are rarely fatal and the need to learn is not as great as it would be if America’s position were more precarious.
The tendency to cling to questionable ideas or failed practices will be particularly strong when some set of policy initiatives is inextricably linked to America’s core values and identity. Consider the stubbornness with which U.S. leaders pursue democracy promotion, despite its discouraging track record. History shows that building stable and secure democracies is a long, contentious process, and foreign military intervention is usually the wrong way to do it.10 As discussed in chapters 1 and 2, U.S. efforts to export democracy, or do nation-building more generally, have failed far more often than they have succeeded. Nonetheless, a deep attachment to the ideals of liberty and democracy makes it hard for U.S. leaders to accept that other societies cannot be remade in America’s image.
When a large-scale upheaval like the Arab Spring occurs, therefore, U.S. leaders are quick to see it as a new opportunity to spread America’s creed. “Our national religion is democracy,” noted the Syria expert Joshua Landis in 2017, “when in doubt we revert to our democracy talking points … It is a matter of faith.”11 Even when U.S. leaders recognize that they cannot create “some sort of Central Asian Valhalla,” as former secretary of defense Robert Gates put it in 2009, they find it nearly impossible to stop trying.
CUI BONO?: BAD IDEAS DO NOT INVENT THEMSELVES
Lastly, bad ideas persist when powerful interests have an incentive to keep them alive. Although open debate is supposed to weed out dubious notions and allow facts and logic to guide the policy process, self-interested actors who are deeply committed to a particular agenda can interrupt this evaluative process. As Upton Sinclair once quipped, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”
The ability of self-interested individuals and groups to interfere in the policy process appears to be getting worse, in good part because of the growing number of think tanks and “research” organizations linked to special interests. Their raison d’être is not the pursuit of truth or the accumulation of new knowledge, but rather the marketing of policies favored by their sponsors. And as discussed at greater length below, these institutions can also make it harder to hold public officials fully accountable for major policy blunders.
For example, the disastrous war in Iraq should have discredited and sidelined the neoconservatives who conceived and sold it, as the war showed that most, if not all, of their assumptions about politics were deeply flawed. Once out of office, however, most of them returned to well-funded Washington sinecures and continued to promote the same highly militarized version of liberal hegemony they had implemented while in government. When key members of the foreign policy elite are insulated from their own errors and hardly anyone is held accountable for mistakes, learning from past failures becomes nearly impossible.
In some cases, in fact, influential groups or individuals can intervene to silence or suppress views with which they disagree. In 2017, for example, the U.S. Holocaust Memorial Museum sponsored a careful scholarly study of the Obama administration’s handling of the Syrian civil war, which questioned whether greater U.S. involvement could have significantly reduced violence there. The 193-page report had no political agenda and was carefully done, but well-placed individuals who had previously called for the United States to intervene were outraged by the study’s findings and convinced the museum’s directors to withdraw it.12
Even in a liberal democracy, therefore, there is no guarantee that unsuccessful policies will be properly assessed and the ideas that informed them permanently discredited. Not surprisingly, the same principle applies to the people who devise and defend them.
FAILING UPWARD
U.S. foreign policy would work better if the political system rewarded success and penalized failure. Ideally, people who performed well would gain greater authority and influence and those who did poorly would remain on the margins. But this straightforward management principle does not operate very consistently in the realm of politics, including foreign policy. Instead of holding officials to account and weeding out poor performers, the system often displays a remarkable indifference to accountability.
TOO BIG TO FAIL?
Aversion to accountability begins at the top, where malfeasance at the highest levels of government is routinely excused. After the 9/11 attacks, for example, the Bush administration and the Republican-controlled Congress reluctantly agreed to appoint an independent, bipartisan commission to investigate the incident and make recommendations. But it was clear from the start that leading politicians did not really want a serious inquiry: the commission’s initial budget was a paltry $3 million (later increased to $14 million), and Bush administration officials repeatedly stonewalled the commission’s investigations.13
Moreover, although one of the commission’s key tasks was exploring possible errors by the Clinton and Bush administrations, the cochairs, Thomas Kean and Lee Hamilton, chose the historian Philip Zelikow as executive director, despite his long association with then–national security advisor Condoleezza Rice, his role on Bush’s transition team, and his under-the-radar involvement with the administration itself.14
The commission eventually produced a riveting account of the 9/11 plot, but it declined to pass judgment on any U.S. officials. It was the worst attack on U.S. soil since Pearl Harbor and more than twenty-eight hundred people had died, yet apparently no one in the U.S. government was guilty of even so much as a lapse in judgment. As Evan Thomas of Newsweek later commented, “Not wanting to point fingers and name names … the 9/11 Commission shied away from holding anyone personally accountable” and “ended up blaming structural flaws for the government’s failure to protect the nation.” The historian Ernest May, who helped write the commission’s report and defended its efforts, later acknowledged that responsibility was assigned solely to institutions (such as the FBI or CIA), described the report as “too balanced,” and admitted that “individuals, especially the two presidents and their intimate advisors, received even more indulgent treatment.”15
A similar whitewashing occurred following the revelations that U.S. soldiers abused and tortured Iraqi prisoners of war at Abu Ghraib prison. Top civilian officials were directly responsible for the migration of “enhanced interrogation” techniques from the detention facility at Guantanamo to Abu Ghraib, as well as for the lax conditions that prevailed at the latter facility. Yet even though “the lawlessness and cruelty on the ground in Iraq clearly stemmed from the policies at the top of the Bush administration,”16 a series of internal reports—by Major General Antonio Taguba, by the U.S. Army’s Office of the Inspector General, and by a team of former officials appointed by Rumsfeld and headed by former secretary of defense James Schlesinger—assigned blame entirely to local commanders or enlisted personnel.17
In particular, the army inspector general’s report blamed the abuses on “unauthorized actions undertaken by a few individuals,” a conclusion the New York Times editorial board termed a “300-page whitewash.”18 The Schlesinger Report referred briefly to “institutional and personal responsibility at higher levels” but exonerated all the top civilians. In fact, one member of the panel, the retired air force general Charles Horner, explicitly cautioned against assigning blame for the abuses, saying, “Any attempt by the press to say so-and-so is guilty and should resign or things of this nature, they have an inhibiting effect upon this department finding the correct way to do things in the future.”19 And at the press conference releasing the report, Schlesinger—a longtime Washington insider—openly stated that Secretary of Defense Donald Rumsfeld’s resignation “would be a boon to all of America’s enemies.”20 In the end, a handful of enlisted personnel were convicted of minor offenses, one army general received a reprimand and was retired at lower rank, and none of the civilian officials overseeing their activities were sanctioned at all. As analysts at Human Rights Watch later concluded, these reports “shied away from the logical conclusion that high-level military and civilian officials should be investigated for their role in the crimes committed at Abu Ghraib and elsewhere.”21 Instead, the officials whose careers suffered were those who tried to bring these facts to light. In particular, Major General Taguba was falsely accused of leaking his report, shunned by many of his army colleagues, and subsequently ordered to retire sooner than he had intended.22
The Obama administration’s decision not to investigate or prosecute Bush administration officials accused of violating U.S. domestic laws regarding torture and committing war crimes fits this pattern as well. Despite considerable evidence that President Bush and Vice President Cheney authorized torture, the Justice Department declined to appoint a special prosecutor to investigate whether they or other top officials had violated U.S. or international law.23
President Obama justified this decision by saying “we need to look forward as opposed to looking backward,” and the political costs of such an investigation might well have outweighed the gains.24 Nonetheless, his decision to defer the day of reckoning for perpetrators of torture makes future recurrences more likely and casts doubt on America’s professed commitment to defend human rights and the rule of law.25
At this late date, pointing out that U.S. officials were never held accountable for serious violations of U.S. and international law is not exactly a revelation. The more important point is that such occurrences are part of a larger pattern.
THE NINE LIVES OF NEOCONSERVATISM
When it comes to U.S. foreign policy, the unchallenged world record holders for “second chances” and “failing upward” are America’s neoconservatives. Beginning in the mid-1990s, this influential network of hard-line pundits, journalists, think tank analysts, and government officials developed, purveyed, and promoted an expansive vision of American power as a positive force in world affairs. They conceived and sold the idea of invading Iraq and toppling Saddam Hussein and insisted that this bold move would enable the United States to transform much of the Middle East into a sea of pro-American democracies.
What has become of the brilliant strategists who led the nation into such a disastrous debacle? None of their rosy visions have come to pass, and if holding people to account were a guiding principle inside the foreign policy community, these individuals would now be marginal figures commanding roughly the same influence that Charles Lindbergh enjoyed after making naïve and somewhat sympathetic statements about Adolf Hitler in the 1930s.
That’s not quite what happened to the neocons. Consider the fate of William Kristol, for instance, who argued tirelessly for the Iraq War in his capacity as editor of the Weekly Standard and as cofounder of the Project for the New American Century. Despite a remarkable record of inaccurate forecasts and questionable political advice (including the notion that Sarah Palin would be an ideal running mate for John McCain in 2008), Kristol is still editor of the Weekly Standard and has been at various times a columnist for The Washington Post and The New York Times and a regular contributor to Fox News and ABC’s This Week.26
Similarly, although Deputy Secretary of Defense Paul Wolfowitz misjudged both the costs and the consequences of invading Iraq and helped bungle the post-invasion occupation, President Bush subsequently nominated him to serve as president of the World Bank in 2005. His tenure at the bank was no more successful, and he resigned two years later amid accusations of ethical lapses.27 Wolfowitz decamped to a sinecure at the American Enterprise Institute and was appointed chair of the State Department’s International Security Advisory Board during Bush’s last year as president.
The checkered career of Elliott Abrams is, if anything, more disturbing for those who believe that officials should be accountable and advancement should be based on merit. Abrams pleaded guilty to withholding information from Congress in the 1980s, after giving false testimony about the infamous Iran-Contra affair. He received a pardon from President George H. W. Bush in December 1992, and his earlier misconduct did not stop George W. Bush from appointing him to a senior position on the National Security Council, focusing on the Middle East.28
Then, after failing to anticipate Hamas’s victory in the Palestinian legislative elections in 2006, Abrams helped foment an abortive armed coup in Gaza by Mohammed Dahlan, a member of the rival Palestinian faction Fatah. This harebrained ploy backfired completely: Hamas soon learned of the scheme and struck first, easily routing Dahlan’s forces and expelling Fatah from Gaza. Instead of crippling Hamas, Abrams’s machinations left it in full control of the area.29
Despite this dubious résumé, Abrams subsequently landed a plum job as a senior fellow at the Council on Foreign Relations, where his questionable conduct continued. In 2013 he tried to derail the appointment of the decorated Vietnam veteran and former senator Chuck Hagel as secretary of defense by declaring that Hagel had “some kind of problem with Jews.” This baseless smear led the CFR president Richard Haass to publicly distance the council from Abrams’s action, but Haass took no other steps to reprimand him.30 Yet, apparently, the only thing that stopped the neophyte secretary of state Rex Tillerson from appointing Abrams as deputy secretary of state in 2017 was President Donald Trump’s irritation at some critical comments Abrams had voiced during the 2016 campaign.31
In an open society, neoconservatives and other proponents of liberal hegemony should be as free as anyone else to express their views on contemporary policy issues. But exercising that freedom doesn’t require the rest of society to pay attention, especially not to individuals who have made repeated and costly blunders. Yet neoconservatives continue to advise prominent politicians and occupy influential positions at the commanding heights of American media, including the editorial pages of The Wall Street Journal, The New York Times, and The Washington Post. This continued prominence is even more remarkable given that hardly any of them have been willing to acknowledge past errors or reconsider the worldview that produced so many mistakes.32
MIDDLE EAST PEACE PROCESSORS: A REVOLVING DOOR
Accountability has been equally absent from U.S. stewardship of the long Israeli-Palestinian “peace process.” Ending the long and bitter conflict between Israel and the Palestinians would be good for the United States, for Israel, and for the Palestinians, but the two-state solution Washington has long favored is now moribund despite repeated and time-consuming efforts by Republican and Democratic administrations alike. Yet presidents from both parties have continued to appoint the same familiar faces to key positions and have gotten the same dismal results each time.
During the first Bush administration, for example, Secretary of State James Baker’s primary advisors on Israel-Palestine issues were Dennis Ross, Aaron David Miller, and Daniel Kurtzer. Baker and his team did convene the 1991 Madrid Peace Conference—a positive step that laid the groundwork for future negotiations—but they failed to halt Israeli settlement construction or begin direct talks for a formal peace deal. Together with Martin Indyk and Robert Malley, these same individuals formed the heart of the Clinton administration’s Middle East team and were responsible for the fruitless effort to achieve a final status agreement between 1993 and 2000.
As Miller later acknowledged, in these years the United States acted not as an evenhanded mediator, but rather as “Israel’s lawyer.” U.S. peace proposals were cleared with Israel in advance, and Israeli proposals were often presented to the Palestinians as if they were American initiatives.33 Small wonder that Palestinian leaders had little confidence in U.S. bona fides and little reason to believe U.S. assurances that their interests would be protected.
This unsuccessful past was prologue to an even less successful future. After spending the Bush years as counselor for the Washington Institute for Near East Policy (WINEP), a prominent pro-Israel think tank, Dennis Ross joined Obama’s presidential campaign in 2008 and returned to the National Security Council during Obama’s first term. Originally assigned to work on U.S. policy toward Iran, over time Ross became more and more heavily involved in Israel-Palestine issues, reportedly clashing with Obama’s designated Middle East envoy, former senator George Mitchell.34 Ross was also deeply skeptical about a possible nuclear deal with Iran, and significant progress toward the 2015 agreement took place only after he left the White House at the end of Obama’s first term.35
Similarly, Indyk spent the Bush years as founding director of the Saban Center for Middle East Policy at Brookings, where he openly backed the Iraq War in 2003.36 When Secretary of State John Kerry decided to make a new push for an agreement in 2013, he picked not a fresh face with new ideas, but the well-worn Indyk, who in turn chose as his deputy David Makovsky, a hawkish neoconservative from WINEP who had coauthored a book with Ross in 2008.37
Revealingly, the one member of Clinton’s Middle East team who had trouble returning to government service was Robert Malley, who was also the most skeptical of the traditional U.S. approach. Malley was briefly affiliated with Obama’s campaign in 2008, only to be dropped after it was revealed that he had met with representatives of Hamas in the context of his duties at the nongovernmental International Crisis Group (ICG). These activities should not have disqualified him from advising a candidate—he was not serving in the U.S. government at that time, and communicating with Hamas was an integral part of his work at ICG—but the political liability was too great, and Obama quickly distanced himself. Malley returned to the NSC during Obama’s second term, but his duties were confined to Iran and the Gulf.
Resolving this long, bitter conflict would be a challenging task for anyone, and an entirely different set of U.S. officials might have failed to achieve an agreement between 1993 and 2016. One might also argue that only experienced diplomats with deep knowledge of the issues and the key players would stand any chance at all of reaching an agreement. Even so, the willingness of presidents and secretaries of state to recycle the same unsuccessful negotiators is troubling. The individuals who repeatedly failed to make peace were hardly the only people in America with intimate knowledge of these issues, and had Clinton, Bush, or Obama put this problem in the hands of experts who had a fresh and more evenhanded outlook, America’s long stewardship of the peace process might have been more successful. Given where the conflict was in 1993 and where it is today, and given the potential leverage the United States had over the protagonists, Washington could hardly have done worse.
INSIDERS ON INTELLIGENCE
The same reluctance to hold individuals and organizations accountable can also be found in the management and oversight of America’s vast intelligence community. By 2016 it was obvious to even casual observers that oversight of the intelligence agencies had gone badly awry. These organizations not only failed to detect or prevent the 9/11 attacks—despite numerous warning signs—they also played a supporting role in the Bush administration’s fairy tales about Iraq’s WMD programs and Saddam Hussein’s supposed connections to Al Qaeda.38 U.S. intelligence agencies suffered a further blow when a supposed informant (who turned out to be a double agent) detonated a suicide bomb that killed seven CIA employees and contractors in Afghanistan in December 2009. It took U.S. intelligence nearly a decade to find Osama bin Laden, and it also failed to anticipate the Arab Spring, the Maidan uprising in Ukraine, or Russia’s seizure of Crimea in 2014. And in January 2018 The New York Times revealed that a former CIA officer had been arrested for providing China with the names of more than a dozen CIA informants, in what it called “one of the American government’s worst intelligence failures in recent years.”39
Last but not least, the vast trove of information on the NSA’s electronic surveillance programs leaked by former contractor Edward Snowden revealed serious security lapses within the agency and numerous violations of U.S. law. Subsequent revelations about NSA foreign surveillance activities (such as the hacking of German chancellor Angela Merkel’s cell phone) suggested that the NSA was now acting with scant regard for the potential risks or political fallout.
Yet despite these repeated lapses and abuses of power, no one in the intelligence community was held to account. In 2011, in fact, a lengthy investigation of CIA personnel policies by the Associated Press revealed “a disciplinary system that takes years to make decisions, hands down reprimands inconsistently, and is viewed inside the agency as prone to favoritism and manipulation.” Among other things, the investigation found that even after an internal review board had recommended disciplinary action for an analyst whose mistaken identification had led to an innocent German being kidnapped and held at a secret prison in Afghanistan for five months, the employee in question was promoted to a top job at the CIA’s counterterrorism center. Other officials involved in the deaths of prisoners in Afghanistan went undisciplined and received promotions instead. On the rare occasions when agency personnel were forced to resign, they sometimes returned to work as independent contractors.40
Immunity increases as one rises to the top. In March 2013 the director of national intelligence James Clapper told a congressional oversight committee that the NSA was not “wittingly” collecting data on U.S. citizens, a statement he later conceded was false after Snowden’s files revealed that the NSA had been doing exactly that.41 Lying to Congress is a criminal offense, but Clapper was not investigated. On the contrary, a White House spokesperson soon confirmed that President Obama had “full confidence” in him.
The career of former CIA director John Brennan exhibited a similar Teflon-like quality. Brennan was reportedly Obama’s first choice as CIA director in 2009 but was passed over because his prior involvement in Bush-era interrogation and detention practices made Senate confirmation questionable. He joined the White House staff instead, where he managed the administration’s “kill list” of individuals deemed eligible for lethal “signature strikes.”42 In that capacity, Brennan gave a well-publicized speech in June 2011 defending the administration’s policy, claiming, in response to a question from the audience, that “for nearly the past year there hasn’t been a single collateral death [from counterterrorist drone strikes] due to the exceptional proficiency, precision of the capabilities we’ve been able to develop.”43
According to the independent Bureau of Investigative Journalism, however, a CIA drone strike in Pakistan had killed forty-two people attending a tribal meeting just three months earlier. The Pakistani government had issued a strong public protest, casting serious doubt on Brennan’s claim that he “had no information” about civilians being killed. Nonetheless, Obama nominated him to head the CIA in January 2013, and the Senate promptly confirmed his appointment.
Then, in March 2014, the Senate Intelligence Committee chairwoman Dianne Feinstein accused the CIA of monitoring the computers used by congressional staff members who were investigating the CIA’s role in the detention and torture of terrorist suspects and in other illegal activities. Such shenanigans were not entirely new, insofar as CIA officials had previously destroyed ninety-two videotapes documenting acts of torture, a move almost certainly intended to protect the perpetrators from further investigation or prosecution.44 Other reports suggested that CIA officials were also monitoring emails between Daniel Meyer, the intelligence community official responsible for whistle-blower cases, and Senator Chuck Grassley, a leading advocate of whistle-blower protection.45
The obvious intent behind these actions was to keep Senate investigators from holding the CIA accountable for acts of torture or other illegal conduct. Brennan vehemently denied the accusations and the Department of Justice declined to investigate them, but a subsequent investigation by the CIA’s own inspector general confirmed the bulk of Feinstein’s original charges.46
In response, Brennan made a limited apology and appointed an internal review board to consider disciplinary actions.47 A few months later, the review board attributed the problem to “miscommunication” and exonerated all CIA personnel involved of any wrongdoing.48 Despite these well-founded concerns about Brennan’s truthfulness, as well as the evidence that reliance on “enhanced interrogation” (i.e., torture) had done considerable damage to America’s reputation and strategic position, Obama reaffirmed his “full confidence” in him, just as he had previously done with DNI Clapper.49
Because secrecy is pervasive, maintaining effective oversight and accountability over the intelligence community is a perennial challenge. Although the Senate and House Select Committees on Intelligence are supposed to provide this oversight, they lack the resources, staff, or electoral incentive to perform this task on a consistent basis. Instead, Congress tends to get seriously involved only after significant abuses come to light, and it inevitably faces stiff resistance from the agencies it is supposed to be monitoring. Under the circumstances, effective oversight and genuine accountability are bound to be rare to nonexistent.50
Adding to the difficulty is the incestuous nature of the intelligence community itself. Clapper was a former U.S. Air Force officer who subsequently worked for the Defense Intelligence Agency, directed the National Geospatial-Intelligence Agency (NGA), and served as undersecretary of defense for intelligence, overseeing the NSA, the NGA, and the National Reconnaissance Office (NRO). Brennan was a twenty-five-year CIA veteran who had held top jobs under Republicans and Democrats and ran the interagency National Counterterrorism Center before working at the White House and being appointed CIA director. One of Brennan’s predecessors at the CIA, Michael Hayden, was a retired air force general who had previously directed the National Security Agency. Former NSA director Keith Alexander held a variety of intelligence posts in the army and ran the Central Security Service and the U.S. Cyber Command. And former secretary of defense Robert Gates spent most of his career at the CIA, eventually rising to the post of deputy director before moving to the Pentagon under George W. Bush.51
There are obvious benefits to having experienced hands in these positions, and replacing veteran intelligence experts with untrained amateurs could easily make things worse. But relying so heavily on “company men” (and women) inevitably creates a cadre of leaders who are strongly inclined to protect the organization and opposed to strict accountability. Thus, Gina Haspel, who replaced CIA director Mike Pompeo following the latter’s appointment as secretary of state, helped oversee the Bush-era torture program and reportedly authorized the shredding of videotapes documenting these illegal activities. As one associate later described her: “She went to bat for the agency and the bottom line is her loyalty is impeccable.”52 The inbred and self-protective nature of the intelligence world may have its virtues, but it is not without significant vices as well.
The combination of pervasive secrecy and a semipermanent caste of national security managers goes a long way to explaining the remarkable continuity between the Bush and Obama administrations, as well as the latter’s reluctance to hold Bush or his lieutenants responsible for possible transgressions and failures. When the same people are making policy and advising both Republican and Democratic presidents, when the public has little independent information about their activities, and when congressional oversight is resisted at every turn, bad judgment and serious misconduct can go undetected and unpunished for a long time. This failing might not be a serious problem if these agencies and their top leaders were as omniscient as they pretend to be, and if they were reliably committed to genuine external oversight and rigorous internal accountability, but the history of the past several decades suggests otherwise. As with the rest of the foreign policy community, accountability in the world of intelligence is the exception rather than the rule.
THE MILITARY
Lives are on the line whenever the United States goes to war. We might therefore expect the U.S. military to be a highly meritocratic enterprise that does not tolerate poor performance and holds its members strictly accountable. There are clearly cases where this principle holds true, as in the U.S. Navy’s recent decision to discipline the commander and a dozen crew members of the USS Fitzgerald after a collision with a merchant ship cost the lives of seven crew members.53
Unfortunately, like the rest of the foreign policy establishment, the U.S. military has become less accountable over time, and this trend has compromised its ability to fulfill its assigned missions.54 Secretaries of defense are fond of saying that the United States “has the best military in the world,” but this well-trained and well-equipped fighting force has compiled a mostly losing record since the 1991 Gulf War. The United States has fought half a dozen wars since 1990, and apart from some gross mismatches (Iraq in 1991 and 2003 and Kosovo in 1999), its performance has not been impressive.55 The historian and retired army colonel Andrew Bacevich sums it up well: “Having been ‘at war’ for virtually the entire twenty-first century, the United States military is still looking for its first win.”56
For starters, consider the number of scandals that have embarrassed the armed services in recent years. Official Pentagon reports have revealed an epidemic of sexual assault inside military ranks, with an estimated nineteen thousand cases of rape or unwanted sexual contact (against both male and female personnel) occurring every year.57 This same period also saw several prominent cheating scandals, as when thirty-four ICBM launch control officers colluded to falsify scores on their proficiency exams. The abuses at Abu Ghraib prison are well-known, but U.S. military personnel have also committed other war crimes and atrocities, including the killing of sixteen Afghan civilians by Staff Sergeant Robert Bales in 2012.58
Moreover, for all the technological sophistication, tactical proficiency, and individual gallantry displayed by U.S. personnel in recent decades, they have repeatedly failed to achieve victory. The United States did not achieve its stated goal of either a stable, democratic Iraq or a stable, democratic Afghanistan, despite spending trillions of dollars and losing thousands of soldiers’ lives. It has been unable to create effective security forces in Afghanistan despite devoting years of effort and spending billions of dollars. A daring U.S. raid eventually found and killed bin Laden, but a decade of drone strikes and targeted killings in more than half a dozen countries has not eliminated the terrorist threat—and may have made it worse.59
Yet, as Thomas Ricks points out, “despite these persistent problems with leadership, one of the obvious remedies—relief of poor commanders—remained exceedingly rare.”60 Instead, the most frequent reason for relieving military officers of command is sexual misconduct, which accounted for roughly one out of every three commanders fired after 2005.61 But the military’s losing record in its recent wars suggests that its commanders are either not leading well or not advising their civilian counterparts to end wars of choice that cannot be won.
Nor are they being held accountable. During the initial phases of the Afghan War, for example, the commanding general Tommy Franks failed to commit U.S. Army Rangers at the Battle of Tora Bora, a blunder that allowed Osama bin Laden—the key target of the entire U.S. invasion—to escape into Pakistan.62 A few months later, a similar error during Operation Anaconda allowed several hundred Al Qaeda members to evade capture as well. Yet Franks was subsequently chosen to command the invasion of Iraq in 2003. His performance there was no better: the outmatched Iraqis were quickly defeated, but Franks’s failure to prepare for the post-invasion phase contributed to the full-blown insurgency that erupted after 2004.63
Even worse, the military has sometimes failed to hold officers and enlisted personnel fully accountable for more serious misconduct. In January 2004, troops under the command of the army lieutenant colonel Nathan Sassaman forced two handcuffed Iraqi prisoners to jump into the Tigris River, where one of them drowned. Sassaman was not present when the incident occurred, but he later ordered soldiers under his command to obstruct the army investigation of the incident. When the truth surfaced, the divisional commander Ray Odierno issued a written reprimand describing Sassaman’s conduct as “wrongful” and “criminal,” but did not relieve him of command. Although his once-promising career soon ended, Sassaman “was allowed to retire quietly.”64
Similarly, even after Staff Sergeant Frank Wuterich, the Marine Corps squad leader whose troops killed twenty-four unarmed Iraqi civilians at Haditha, admitted he had told his men to “shoot first and ask questions later,” a deal with military prosecutors led to his pleading guilty to a single charge of “neglectful dereliction of duty.” His rank was reduced to private, but he served no time in the brig and eventually received a “general discharge under honorable conditions” that left him eligible for full veterans’ benefits. None of the other eight marines charged in the case were ever tried.65
Even the careers of such highly decorated commanders as Generals David Petraeus and Stanley McChrystal illustrate a certain reluctance to hold prominent commanders fully accountable. A talented soldier with a flair for public relations, Petraeus enjoyed a glowing reputation as the driving force behind the 2007 “surge” in Iraq. McChrystal was also hailed as a hard-charging counterinsurgency expert whose leadership had helped turn the tide in Iraq and was going to do the same in Afghanistan. Both generals eventually suffered embarrassing personal setbacks: McChrystal was relieved of command after a Rolling Stone article described him and his staff making disparaging remarks about President Obama and Vice President Joe Biden; and Petraeus later resigned as director of the CIA after an extramarital affair with his biographer became public. Petraeus subsequently pleaded guilty to a misdemeanor charge of mishandling the classified information he had shared with her, but he was given probation and a fine and served no jail time.
These missteps did not hold either man back for long. Petraeus joined a private equity firm, became a nonresident senior fellow at Harvard’s Kennedy School, cochaired a task force at the Council on Foreign Relations, and taught a course at the City University of New York. By 2016 he was back in the public eye: making regular media appearances, testifying on Capitol Hill, appearing in the Financial Times’ weekly profile “Lunch with the FT,” and being interviewed as a potential candidate for secretary of state in the Trump administration. McChrystal decamped to Yale, where he taught courses on leadership to carefully screened undergraduates. Both men also received lucrative speakers’ fees in retirement, as other former officers have.
What went largely unnoticed in the glare of their individual indiscretions were their limited accomplishments as military leaders. Like the other U.S. commanders in Iraq and Afghanistan, Petraeus and McChrystal failed to achieve victory. The much-heralded surge in Iraq in 2007 was a tactical success but a strategic failure, for the political reconciliation it was intended to foster never materialized.66 No workable political order could be created absent that reconciliation, and so the ramped-up U.S. effort was largely for naught.67 Similarly, McChrystal’s short-lived tenure in Afghanistan did not reverse the course of the war, and the escalation he helped force on a reluctant Obama did not produce a stable Afghanistan either.
To be sure, it is doubtful that any strategy could have brought the United States victory in Iraq or in Afghanistan after 2004, and neither Petraeus nor McChrystal bears primary responsibility for these failures. As Bacevich notes, holding commanders accountable during protracted counterinsurgency wars is more difficult “because traditional standards for measuring generalship lose their salience.”68 But like their predecessors in Vietnam, Petraeus, McChrystal, and other U.S. commanders do bear responsibility for not explaining these realities to their civilian overseers or to the American people. On the contrary, both men consistently presented upbeat (if carefully hedged) assessments of the U.S. effort in both countries and repeatedly advocated continuing the war, offering assurances that victory was achievable provided the United States did not withdraw prematurely.69
More recent events suggest that little has changed. In November 2017, the current U.S. commander in Afghanistan, General John Nicholson, announced that the United States had finally “turned the corner,” even though the Taliban were now in control of more territory than at any time since the original U.S. invasion.70 Unfortunately, that corner had been turned many times previously: commanding general Dan K. McNeill had spoken of “great progress” in 2007, and David Petraeus, Barack Obama, and Secretary of Defense Leon Panetta had all claimed that the United States had “turned the corner” back in 2011 and 2012.71 Meanwhile, the U.S. Special Inspector General for Afghanistan Reconstruction reported that military officials in Kabul had begun classifying performance data on Afghan casualties and military readiness, making it harder for outsiders to determine if the war was going well and even more difficult to determine if commanders in the field are performing well or not.72
These anecdotes—and the larger pattern that they illustrate—do not mean that accountability is completely absent. Former secretary of defense Robert Gates relieved his first commander two months after taking office and continued to fire incompetent military leaders throughout his tenure.73 More recently, the commander of the U.S. Seventh Fleet, Admiral Joseph Aucoin, was relieved after a series of collisions and accidents involving U.S. warships. Senior military officials have also expressed their own concerns about eroding ethical standards and are said to be trying to address them.74 On the whole, however, the U.S. military exhibits the same reluctance to hold leaders accountable as the rest of the foreign policy community.
This combination of chronic failure and lack of accountability has repeatedly compromised the nation-building efforts that liberal hegemony encourages. As of 2016, for example, the United States had spent more than $110 billion on assorted reconstruction projects in Afghanistan, as part of the broader effort to help the Afghan people, strengthen the Kabul government, and marginalize the Taliban. Unfortunately, audits by the Special Inspector General for Afghanistan Reconstruction (SIGAR) documented a depressing record of waste, fraud, and mismanagement, along with numerous projects that failed to achieve most of their stated objectives.75 Yet as Special Inspector General John Sopko told reporters in 2015, “nobody in our government’s been held accountable, nobody’s lost a pay raise, nobody’s lost a promotion. That’s a problem.”76
In fairness, these failures are not due primarily to those who have commanded or fought in America’s recent wars, and the U.S. Armed Forces are still capable of impressive military operations. Rather, this poor record reflects the type of wars that liberal hegemony requires—namely, long counterinsurgency campaigns in countries of modest strategic value. The fault lies not with the men and women who were sent to fight, but with the civilian leaders and pundits who insisted that these wars were both necessary and winnable.
ACCOUNTABILITY IN THE MEDIA
As discussed in previous chapters, a vigorous marketplace of ideas depends on a vigilant, skeptical, and independent media to ensure that diverse views are heard and to inform the public about how well their government is performing. This mission requires journalists and media organizations to be held accountable as well, so that errors, biases, or questionable journalistic practices do not corrupt public understanding of key issues.
One might think that the explosion of new media outlets produced by the digital revolution would multiply checks on government power and that increased competition among different news outlets might encourage them to adopt higher standards. The reverse seems to be true, alas: instead of an ever-more vigilant “fourth estate,” the growing role of cable news channels, the Internet, online publishing, the blogosphere, and social media seems to be making the media environment less accountable than ever before. Citizens can now choose among a nearly infinite number of “realities” to read, listen to, or watch. Anonymous individuals and foreign intelligence agencies disseminate “fake news” that is all too often taken seriously, and such “news” sites as Breitbart, the Drudge Report, and InfoWars compete for viewers not by working harder to ferret out the truth, but by trafficking in rumors, unsupported accusations, and conspiracy theories. Leading politicians—most notoriously, Donald Trump himself—have given these outlets greater credibility by repeating their claims while simultaneously disparaging established media organizations as biased and unreliable.77
The net effect is to discredit any source of information that challenges one’s own version of events. If enough people genuinely believe “The New York Times is fake news,” as former congressman Newt Gingrich said in 2016, then all sources of information become equally valid and a key pillar of democracy is effectively neutered.78 When all news is suspect, the public has no idea what to believe, and some people will accept whatever they are told by the one with the biggest megaphone (or largest number of Twitter followers).
Unfortunately, the commanding heights of American journalism have contributed to this problem by making major errors on some critical foreign policy issues and by failing to hold themselves accountable for these mistakes. These episodes have undermined their own credibility and opened the door for less reliable and more unscrupulous rivals.
The most prominent recent example of mainstream media malfeasance is the role prestigious news organizations played in the run-up to the 2003 Iraq War. Both The Washington Post and The New York Times published false stories about Iraq’s alleged WMD programs, based almost entirely on fictitious material provided by sources in the Bush administration. As the Times’ editors later acknowledged, the stories were poorly reported and fact-checked, containing numerous errors, and they undoubtedly facilitated the Bush administration’s efforts to sell the war.79
But the Times and the Post were not alone: the vaunted New Yorker magazine also published a lengthy article by the journalist Jeffrey Goldberg describing supposed links between Osama bin Laden and the Iraqi dictator Saddam Hussein, connections that turned out to be wholly imaginary.80 A host of other prominent media figures—including Richard Cohen, Fred Hiatt, and Charles Krauthammer of The Washington Post; Bill Keller and Thomas L. Friedman of The New York Times; Paul Gigot of The Wall Street Journal; Fred Barnes and Sean Hannity of Fox News; and Joe Scarborough of MSNBC—all jumped on the pro-war bandwagon along with mass-market radio hosts like Rush Limbaugh.
Yet with the sole exception of the Times reporter Judith Miller—who wrote several of the false stories and eventually left the newspaper in 2005 with her reputation in tatters—none of the reporters or pundits who helped sell the war paid any price for their blunders.81 Goldberg switched from hyping the threat from Iraq to issuing equally inaccurate warnings about a coming war with Iran, but these and other questionable journalistic acts did not prevent him from becoming editor in chief of The Atlantic in 2016.82 Other pro-war journalists continued to defend the war for years from lofty positions within the media hierarchy, apparently feeling no responsibility or guilt for having helped engineer a war in which thousands died.83 And in the rare cases where one of them did admit to being wrong—as the Times’ Bill Keller eventually did—the mea culpa was accompanied by a cloud of excuses and a reminder that lots of other people got it wrong too.84
The situation was no better at The Washington Post. After taking over the editorial page in 2000, Fred Hiatt hired a string of hard-line neoconservatives and transformed it, in the words of James Carden and Jacob Heilbrunn, into “a megaphone for unrepentant warrior intellectuals.”85 The Post enthusiastically promoted the invasion of Iraq in 2003 (by one count printing twenty-seven separate editorials advocating the war), and it described Secretary of State Colin Powell’s tendentious and error-filled presentation to the UN Security Council as “irrefutable.” Its editorial writers saw the invasion as a triumph, writing in May 2004, “It’s impossible not to conclude that the United States and its allies have performed a great service for Iraq’s 23 million people,” and expressing confidence that Iraq’s nonexistent WMD would eventually be found.86 The Post defended the decision to invade for years afterward, with the deputy editorial page editor Jackson Diehl opining that the real cost of the war wasn’t the lives lost or the trillions of dollars squandered, but rather the possibility that the experience might discourage Washington from intervening elsewhere in the future.87
Yet the Post’s disturbing record was not confined to Iraq. The editorial board led the successful campaign to derail the nomination of Ambassador Chas W. Freeman to head the National Intelligence Council in 2009 and the unsuccessful effort to block Chuck Hagel’s nomination as secretary of defense in 2013, in both cases by distorting Freeman’s and Hagel’s past records and present views. A 2010 editorial scorned Obama for believing “the radical clique in Tehran will eventually agree to negotiate” over its nuclear program, which is precisely what Iran eventually did.88 The Post columnist Marc Thiessen denied that waterboarding was torture and said it was permissible under Catholic teachings, and Thiessen later received “Three Pinocchios” from the Post’s own in-house fact-checker for a 2012 column falsely accusing President Obama of skipping his daily intelligence briefings. Then, in 2014, Thiessen wrote an alarmist column suggesting that terrorists might deliberately infect themselves with Ebola and fly to the United States in order to infect Americans, a claim quickly dismissed by knowledgeable experts.89
As the most prominent newspaper in the nation’s capital, the Post has a significant impact on elite opinion. If even a modest degree of accountability prevailed there, Hiatt’s performance in this important gatekeeping role would have led to his dismissal long ago. And if the Post’s leadership were genuinely interested in publishing a diverse range of opinion on its op-ed pages, its stable of regular columnists would look rather different from the current lineup. But that is not how major news organizations operate in the Land of the Free.
What does get prominent media figures into trouble? As the cases of Jayson Blair, Stephen Glass, and Janet Cooke reveal, outright fabrication of stories or sources can end a journalist’s career. Similarly, the NBC anchor Brian Williams lost his job after falsely claiming that a helicopter he was traveling in had come under fire in Iraq (though he was eventually given a news slot on the MSNBC cable channel), and the Fox News host Bill O’Reilly, the Today show host Matt Lauer, and the MSNBC political analyst Mark Halperin were all dismissed after credible accounts of persistent sexual harassment came to light.90 Making openly racist, sexist, homophobic, or obscene comments can be grounds for dismissal, and so can statements deemed overly critical of Israel, as the veteran White House correspondent Helen Thomas and CNN’s Jim Clancy and Octavia Nasr all learned to their sorrow.91
Being overtly committed to peace and skeptical of military intervention may be a problem too. In early 2003, for example, MSNBC canceled the talk show legend Phil Donahue’s program, allegedly because his willingness to give airtime to antiwar voices unsettled executives who believed the network should do more “flag-waving” in the wake of 9/11.92 But being consistently wrong or flagrantly biased does not seem to be a barrier to continued employment and professional advancement, even at some of America’s most prestigious publications.
As some of the sources I have relied upon in this book demonstrate, many contemporary journalists produce reportage and commentary that challenges official policy and tries to hold government officials to account. Yet accountability in the media remains erratic, and questionable journalistic practices continue to this day. When combined with the emergence of alternative outlets such as Breitbart, not to mention even more extreme purveyors of “fake news,” it is no wonder that public trust in the mainstream media is at an all-time low.93 This situation is a serious threat to our democratic order, for if citizens do not trust the information they glean from outside official circles, it will be even easier for those in power to conceal their mistakes and manipulate what the public believes.
PROPHETS WITHOUT HONOR: WHAT HAPPENS WHEN YOU’RE RIGHT?
The failure to hold error-prone people accountable has a flip side—namely, a tendency to ignore or marginalize those outside the consensus even when their analysis or policy advice is subsequently vindicated by events. Being repeatedly wrong carries few penalties, and being right often brings few rewards.
In September 2002, for example, thirty-three international security scholars paid for a quarter-page advertisement on The New York Times’ op-ed page, declaring “War with Iraq Is Not in the U.S. National Interest.”94 Published at a moment when most of the inside-the-Beltway establishment strongly favored war, the ad warned that invading Iraq would divert resources from defeating Al Qaeda and pointed out that the United States had no plausible exit strategy and might be stuck in Iraq for years. In the sixteen-plus years since the ad was printed, none of its signatories have been asked to serve in government or advise a presidential campaign. None are members of elite foreign policy groups such as the Aspen Strategy Group, and none have spoken at the annual meetings of the Council on Foreign Relations or the Aspen Security Forum. Many of these individuals hold prominent academic positions and continue to participate in public discourse on international affairs, but their prescience in 2002 went largely unnoticed.
The case of U.S. Army colonel Paul Yingling teaches a similar lesson. Yingling served two tours in Iraq, the second as deputy commander of the 3rd Armored Cavalry Regiment. His experiences there inspired him to write a hard-hitting critique of senior army leadership, published in the Armed Forces Journal in 2007 under the title “A Failure in Generalship.” As Yingling put it in a subsequent article, “Bad advice and bad decisions are not accidents, but the results of a system that rewards bad behavior.” The article identified recurring command failures in Iraq and became required reading at the Army War College, the Command and General Staff College, and a number of other U.S. military institutions, yet Yingling only narrowly won promotion to full colonel in 2010. After being passed over for assignment to the Army War College (a sign that his prospects for further advancement were bleak), he retired from the army to become a high school teacher.95
The career trajectories of Flynt and Hillary Mann Leverett illustrate the same problem in a different guise. Until 2003 the Leveretts were well-placed figures in the foreign policy establishment. Armed with a Ph.D. from Princeton, Flynt Leverett was the author of several well-regarded scholarly works and had worked as a senior analyst at the CIA, as a member of the State Department’s Policy Planning Staff, and as senior director for Middle East affairs on the National Security Council from 2002 to 2003. After leaving government, he worked briefly at the Saban Center at Brookings before moving to the American Strategy Program at the New America Foundation. Hillary Mann graduated from Brandeis and Harvard Law School, worked briefly at AIPAC, and held a number of State Department posts during the 1990s. The two met during their government service and were married in 2003.
Disillusioned by the Iraq War and the general direction of U.S. Middle East policy, the Leveretts soon became forceful advocates for a fundamentally different U.S. approach to Iran. In addition to making frequent media appearances and starting a website that dealt extensively with events in Iran, in 2013 they published a provocative book entitled Going to Tehran: Why the United States Must Come to Terms with the Islamic Republic of Iran.96
Going to Tehran recommended that the United States abandon the goal of regime change and make a sustained effort to reach out to Iran. It challenged the prevailing U.S. belief that Iran’s government had scant popular support and that tighter economic sanctions would compel it to give up its entire nuclear research program. Most controversial of all, their analysis of public opinion polls and voting results led the Leveretts to conclude that the incumbent president Mahmoud Ahmadinejad had won the disputed Iranian presidential election of 2009 and that the anti-Ahmadinejad Green Movement that emerged in the election’s wake did not have majority support.
The Leveretts did not deny that there were irregularities in the election or that many Iranians opposed the clerical regime, and they described the suppression of the Greens (in which roughly a hundred people died) as involving “criminal acts” by the regime, with opponents “physically abused” or in some cases deliberately murdered. Yet they insisted that the election results were consistent with a wide array of preelection polls and that Ahmadinejad would have won even had no fraud occurred—albeit by a smaller margin.
As one might expect, the Leveretts’ departure from Washington orthodoxy provoked a furious response. Critics denounced them as apologists for Tehran, accused them of being in its pay, and portrayed the pair as callously indifferent to the fate of the protesters who were killed or arrested in the postelection demonstrations. Yet the backlash against the Leveretts occurred not because they had made repeated analytic or predictive errors; they became pariahs because they had challenged the consensus view that the Islamic Republic was deeply unpopular at home and therefore vulnerable to U.S. pressure.
In 2010, for example, an otherwise critical profile of the pair in The New Republic conceded that “it’s not obvious that [the Leveretts’] analysis is wrong,” and another critic, Daniel Drezner of the Fletcher School, later acknowledged that they had correctly anticipated that the Green Movement would not succeed.97 The Leveretts also argued that Iran would never agree to dismantle its entire nuclear enrichment capability—and it didn’t—and their insistence that the regime was not on the brink of collapse despite increasingly strict sanctions has been borne out as well. They correctly questioned whether the outcome of Iran’s 2013 election was preordained and suggested that the eventual victor—Hassan Rouhani—had a real chance, even though other prominent experts had downplayed his prospects.98
The point is not that the Leveretts are always right or that their critics are always wrong.99 Rather, it is that they are now marginal figures even though their record as analysts is no worse than that of their critics, and in some cases better, largely because they had the temerity to challenge the pervasive demonization of Iran’s government. The Leveretts’ own combativeness may have alienated potential allies and contributed to their outsider status as well, though they are hardly the only people in Washington with sharp elbows.100 Meanwhile, those who have remained within the familiar anti-Iran consensus are viewed as reliable authorities despite repeated analytical errors, and they still enjoy prominent positions at mainstream foreign policy organizations and remain eligible for government service should the political winds blow their way.
A world that took accountability seriously—instead of preferring people who were simply loyal and adept at staying “within the lines”—would look for people who had the courage of their convictions, were willing to challenge authority when appropriate, and had expressed views that were subsequently vindicated by events. In such a world, a reluctant dissident such as Matthew Hoh might have had a rather different career. A former Marine Corps captain and State Department official who had served two tours in Iraq, Hoh first attracted public notice when he resigned his position as the senior civilian authority in Afghanistan’s Zabul province in 2009, having become convinced that the U.S. effort there could not succeed. In his words: “I have lost understanding of and confidence in the strategic purposes of the United States’ presence in Afghanistan … my resignation is based not upon how we are pursuing this war, but why and to what end.” His superiors viewed Hoh as a talented and dedicated officer and tried to persuade him to stay on, but he held firm to his decision and eventually landed a short-term post as staff director at the New America Foundation’s Afghanistan Study Group, which favored a rapid U.S. disengagement from the war.
Subsequent events have shown that Hoh’s skepticism about U.S. prospects in Afghanistan was correct. The Council on Foreign Relations highlighted his resignation letter as an “essential document” about the Afghan War, and Hoh received the Nation Institute’s Ridenhour Prize for Truth-Telling in 2010. But instead of being rewarded for his foresight and political courage, he found the “Washington national security and foreign policy establishment” effectively closed to him—“no matter how right he was.”101 Beset by lingering post-traumatic stress disorder and other problems from his combat experience, Hoh ended up unemployed for several years. Meanwhile, those who had promoted and defended the unsuccessful Afghan “surge”—thereby prolonging the war to little purpose—received prestigious posts in government, think tanks, the private sector, and academia.
In some ways, Hoh’s case parallels those of other recent dissenters and whistle-blowers, including Jesselyn Radack, Peter Van Buren, Thomas Drake, John Kiriakou, and, most famous of all, Edward Snowden and Chelsea Manning. But unlike Snowden and Manning, whose actions broke the law, Hoh committed no offense: his only “error” was having the courage to go public with his doubts about U.S. strategy.102
These (and other) examples raise a fundamental question: If the people who repeatedly get important foreign policy issues wrong face little or no penalty for their mistakes while those who get the same issues right are largely excluded from positions of responsibility and power, how can Americans expect to do better in the future?
CONCLUSION
To be clear, U.S. foreign policy would not become foolproof if a few editors and pundits were replaced, if more generals were relieved for poor performance, or if advisors whose advice had proved faulty were denied additional opportunities to fail. Foreign policy is a complicated and uncertain activity, and no one who wrestles with world affairs ever gets everything right.
Moreover, the desire to hold people accountable could be taken too far. We do not want to oust government officials at the first sign of trouble or fire a reporter who gets some elements of a complicated story wrong. No one is infallible, and people often learn from their mistakes and get better over time. And if we want to encourage public officials to innovate, to take intelligent chances, and to consider outside-the-box initiatives, we have to accept that they will sometimes fail. Instead of ostracizing people at the first mistake, a better course would be to identify the ideas, individuals, or policies that led to trouble and to acknowledge the errors openly. But when blunders occur repeatedly and the people who make them cannot or will not admit it, we should look to someone else to do the job.
Unfortunately, the present system does not encourage systematic learning, and it does not hold people to account even when mistakes recur with depressing frequency. As discussed in chapter 2, a permissive condition for this absence of accountability is America’s fortuitous combination of power and security, which insulates the country from the worst consequences of its policy mistakes and allows follies to go uncorrected.
But perhaps the greatest barrier to genuine accountability is the self-interest of the foreign policy establishment itself. Its members are reluctant to judge one another harshly and are quick to forgive mistakes lest they be judged themselves. Even when prominent insiders break the law, they have little trouble persuading well-connected friends and former associates to organize campaigns for acquittal or clemency.103
“To get along, go along” is an old political adage, and it goes a long way to explain why the foreign policy establishment tolerates both honest mistakes and less innocent acts of misconduct. Strict accountability would jeopardize friendships—especially in a town as inbred as Washington, D.C.—and going public with criticisms or blowing the whistle on serious abuses carries a high price in a world where loyalty counts for more than competence or integrity. Provided they don’t buck the consensus, challenge taboos, or throw too many elbows, established members of the foreign policy community can be confident of remaining on the inside no matter how they perform.
The neophyte senator Elizabeth Warren (D-MA) offered a revealing look at this phenomenon in her 2014 book A Fighting Chance, recounting the advice on how to be effective in Washington that she had received from Lawrence Summers, a former Treasury secretary with a lengthy Washington résumé: “He teed it up this way: I had a choice. I could be an insider or I could be an outsider. Outsiders can say whatever they want. But people on the inside don’t listen to them. Insiders, however, get lots of access and a chance to push their ideas. People—powerful people—listen to what they have to say. But insiders also understand one unbreakable rule: They don’t criticize other insiders.”104
Until Trump. A wealthy New York real estate developer and reality show host who inherited a fortune is hardly a genuine outsider, but Trump’s campaign, transition period, and early months in office showed scant respect for established figures in either political party and displayed particular contempt for the foreign policy establishment and many of its core beliefs. Trump’s skepticism was understandable, perhaps, even if his own ideas seemed ill-informed and his own character deeply worrisome.
But a critical question remained unanswered: Could an impulsive, Twitter-wielding president and a group of untested advisors make a clean break with liberal hegemony? Would they be able to overcome the reflexive opposition of the foreign policy community, or would it eventually contain and co-opt them? If Trump tried to challenge the foreign policy Blob, would he be able to put a better strategy in place or just make things worse? The next chapter describes what Trump did and how he fared.
Spoiler alert: the results are not pretty.