The disintegration of the Soviet Union at the end of 1991 removed the main obstacle that had prevented the United States from building a fully global empire after 1945. For a brief moment it even looked as if the forces that had swept away communism in Russia might do the same to the ruling Communist Party in China. Thus, US governments after President Ronald Reagan (1981–89) found themselves in a unique position of global power about which their predecessors could only dream.
Very few expected this unipolar moment to be permanent. Other powers, it was widely assumed, would eventually rise to the point where they could challenge American dominance, but that was expected to be some way in the future. In the meantime, there was a window of opportunity to strengthen the global rules and institutions in such a way that US interests would be securely protected while using the nation’s military dominance to punish those governments or nonstate actors (NSAs) that stepped out of line.
The first challenge was to build a truly global system of free market capitalism that would sweep away the last remaining barriers to the penetration of American investment. The Cold War had ended not just with the collapse of the Soviet Union but also with an ideological victory for free markets over state control of production. Translating this victory into a global framework that provides a secure position for private enterprise has been a constant theme of all recent US presidential administrations.
This new wave of globalization has coincided with trends in capitalism that privilege information and communications technology (ICT). Ensuring that American companies occupy a dominant position in these new ICT markets has been a high priority for all US governments since the end of the Cold War. The results have been spectacularly successful, with American firms establishing a hegemonic position by the start of the new millennium across all the subsectors that make up ICT.
Those hoping for a peace dividend after the end of the Cold War were swiftly disappointed. The technological gap between the US military and the rest of the world grew wider, so that the decline in the quantity of those in the armed forces was offset by an increase in the quality of equipment used. US military bases mushroomed around the world now that Cold War restrictions were removed. And space became a new area in which the United States sought dominance.
US military invasions—with or without UN approval—have been frequent since the end of the Cold War, but other forms of intervention (such as the use of drones to target individuals) have also become common. In particular, sanctions have become a weapon of choice for all presidents in the projection of US power. And forcing individuals and companies to pay taxes owed to the federal government has led to even greater use of extraterritorial powers by American administrations.
Much has been made of the differences among post–Cold War administrations in relation to foreign policy. Yet these can be exaggerated. It is true that President Barack Obama (2009–17)—unlike President George W. Bush (2001–9)—would not have invaded Iraq in 2003 if he had been in power at the time. However, Presidents Bill Clinton (1993–2001), George H. W. Bush (1989–93), and Donald Trump (2017–) probably would have. And on the principle of the use of force against America’s enemies without the backing of international law, there has been no divergence among the five heads of state. All administrations have been committed to the preservation of American empire, although they may have differed over tactics in relation to particular issues.
The unipolar moment provided the United States with an opportunity to exert imperial power in an unprecedented fashion. America moved almost seamlessly from a Cold War world in which it faced certain limits to one in which it was virtually unconstrained. Many Americans concluded that the projection of their empire was the best way to secure world order, and that belief was reinforced rather than undermined by the horrors of 9/11. It was now America’s “destiny,” as many members of the foreign policy elite were quick to point out, to impose order on a chaotic world, and many Americans were only too happy to accept this new imperial burden.
Globalization in the modern era was well under way even before the end of the Cold War. However, there were many barriers to the liberalization of trade in goods and services while international capital flows were also restricted in numerous ways. With the launch of the Uruguay Round in 1986, US negotiators pushed a new agenda designed to remove these obstacles, but they faced considerable resistance from the governments of many other countries.
The end of the Cold War broke this resistance. The American ideological victory was captured in a seminal, if hubristic, article by Francis Fukuyama: “The triumph of the West, of the Western idea, is evident first of all in the total exhaustion of viable systematic alternatives to Western liberalism. . . . What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of post-war history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.”1
The Uruguay Round was completed in 1994, and the World Trade Organization (WTO) replaced GATT the following year. The United States now had a truly global institution that met its requirements for promoting free market capitalism. All trade in goods and services was covered (with only minor exceptions), most obstacles to foreign investment were removed, and intellectual property was heavily protected. Disputes could now be settled through the WTO without countries being able to block arbitration panels, and the US government would no longer need to rely so heavily on domestic legislation to secure its global interests.
Just as important, the United States now had an institution which almost all countries outside the WTO aspired to join.2 The most important excluded countries were the Russian Federation (successor state to the USSR) and the People’s Republic of China (PRC), but there were many others. Since WTO decisions on new members are by consensus, the United States was able to impose the conditions under which these countries could participate.3 As Joseph Stiglitz explained in the case of developing countries:
The U.S. has bargained with dozens of countries. It knows what are likely to be sensitive clauses, provisions that can have large effects, either in terms of benefits or costs. It has large staffs which can write, review, and analyze such agreements, clause by clause.
Because of the size of its economy, it has virtually every industry that might be affected by the agreement, and they are, in effect, at the table . . . trade negotiators from several developing countries engaged in “negotiations” with the U.S. have repeatedly said these are not negotiations in any meaningful sense. There can be some negotiations around the edges . . . but on any core issue, there is no give.4
The results were impressive, with the WTO presiding over a huge increase in world trade and cross-border investments in its first two decades despite occasional financial crises.5 By 2017 membership had grown to 164 countries, with Iran as the largest economy still outside the WTO. China secured membership in 2001 after fifteen years of negotiations (it originally applied to join GATT) but was forced to sit alongside Taiwan.6 Russia took even longer to gain entry but was finally admitted in 2012.7
The dispute settlement process under the WTO was a source of particular satisfaction for the United States. Frustrated by what happened (or more often did not happen) under GATT rules, Congress in 1974 passed Section 301 of the Trade Act that threatened unilateral sanctions against “unreasonable and unjustifiable” barriers to US exports. The threat was usually sufficient to secure what the US government wanted, but the outcome could leave a sour taste.
The WTO dispute settlement procedures are different from those of GATT, as the process is nonvoluntary and binding. This might be seen as a threat to US sovereignty, but the opposite has been the case. WTO panels can rule that US practice is illegal, but only the US legislature can change the law. If the United States refuses to comply, a foreign government can retaliate against American exports. However, the asymmetry in size between the United States and other economies means that this is a very risky strategy and has only rarely been carried out. In the words of Robert Lawrence, an expert on trade policy,
Far from limiting U.S. sovereignty, this form of retaliation actually favors the United States. The United States has a very large market and its ability to grant concessions makes it unusually influential in a system in which agreements are based on reciprocity. The ability to retaliate also provides the United States . . . the greatest ability, first, to bargain in the shadow of the law, i.e., use the implicit threat of bringing cases to induce compliance from other countries; second, to enforce rulings against other countries in the event of noncompliance; and third, to withstand retaliation in the event that others are authorized to take action against it. While all WTO members are formally equal, the system’s design gives some countries more power than others.8
Success in establishing the WTO, however, came at a price. Civil society in member states (including the United States) became increasingly dissatisfied with the way the WTO operated and in particular with the disdain with which its decisions appeared to treat environmental and labor concerns. Protestors followed WTO negotiators around the world,9 but US administrations ignored them and called for a deepening of the WTO process with the launch of the Doha Round in 2001.10 This was too ambitious, however, and it soon ran into difficulties that made its completion impossible.11
BOX 8.1
THE US DOLLAR: EMPIRE’S GREATEST ASSET?
National currencies play a key part in imperial expansion, and the role of the US dollar in expanding the American empire is no exception. The dollar emerged from the Second World War as the world’s main reserve currency. This gave an enormous advantage to the US economy, but it could not at first be fully exploited by US governments as there was an obligation under the Bretton Woods system to sell gold at a fixed dollar price. This placed restraints on US fiscal and monetary policy.
These restraints disappeared with the collapse of the Bretton Woods system in 1971. From then onward, the US economy could increase both balance of payments and fiscal deficits almost without limit in the knowledge that the deficits would be financed by issuing debt in US dollars. Countries with surpluses would be obliged to recycle these back to US capital markets as long as the dollar remained the international reserve currency of choice. As a result, the total debt surged to nearly $20 trillion at the end of fiscal year 2017, with a third of it held outside the United States.
Globalization deepened US capital markets and removed restrictions on the free movement of capital around the world. American financial institutions facilitated the movement of funds, earning huge profits in the process. Quantitative easing after the financial crisis in 2008 then led to such low rates of interest that borrowing in US dollars became almost costless.
Access to cheap finance from the rest of the world in US dollars has allowed the US government to have more of both “guns and butter” rather than choosing between them, as happens in other countries. Military spending has expanded without a commensurate reduction in consumer spending. As one observer has commented, “[T]oday’s international economic architecture ensures that the normal operation of world market forces . . . tends to yield disproportionate benefits to Americans, and confers autonomy on US policymakers while curbing the autonomy of others. . . . The economic benefits that accrue to the United States as a consequence of the normal working of market forces within this particular framework provide the financial basis of American military supremacy” (Wade, “The Invisible Hand of the American Empire,” 64).
It was fortunate, therefore, for the United States that it had been pursuing a twin-track strategy toward globalization for many years. Initially suspicious of a regional approach to globalization,12 America had taken its first steps in this direction in 1985 with the signing of the Israel-US Free Trade Agreement. This was seen at the time as justified on strategic rather than economic grounds, but it paved the way for the Canada-US Free Trade Agreement (CUFTA) that came into force in January 1989.
Canada and the United States were each other’s largest trade partners at the time. However, the US market was far more important to the Canadian economy than the other way around. When Congress started preparing the ground for an even tougher version of Section 301 in the form of the Super 301 provisions of the Omnibus Trade and Competitiveness Act (finally passed in 1988), the Canadian government looked for ways to avoid unilateral threats from the US government to its exports and proposed a bilateral agreement.
The result was CUFTA, which went way beyond what was being discussed in the Uruguay Round and gave Canada the assurances it needed that its exports would not be subject to unilateral measures. Canada, of course, secured access to the US market in almost all goods and services. In return, however, Canada had to accept much more liberal measures on cross-border investments and strict rules on expropriation of US-controlled firms.13
CUFTA would now become the template for numerous free trade agreements (FTAs) signed by the United States with other countries or regions. Each was more ambitious than the last, with a particular focus by US negotiators on protection of intellectual property, public procurement, protection of foreign investments against “unfair” expropriation, and regular reviews of trading partners’ labor and environmental practices. Overall this network of FTAs, with the United States at the center, secured for the US government a framework for free market capitalism that is at the heart of its vision of globalization.
CUFTA was succeeded by NAFTA—the North American Free Trade Agreement—that was signed in 1992 by Canada, Mexico, and the United States. It was the first FTA between developed and developing countries and gave Mexico almost unrestricted access to its most important markets. However, it gave the US government something that was much more important: a guarantee that Mexico would commit to free market capitalism as long as NAFTA survived. The implications of this for the rest of Latin America, and indeed the rest of the developing world, were huge.
NAFTA also included strict rules for investor-state dispute settlement (ISDS) under Chapter 11 of the treaty, which would become the model for other FTAs. These procedures would also be written into bilateral investment treaties (BITs) and international investment agreements (IIAs). American companies have made frequent (and often successful) use of these provisions, but foreign companies have been much less successful in the United States itself. Indeed, the US trade representative (USTR) could proudly claim in 2015, “ISDS in U.S. trade agreements is significantly better defined and restricted than in other countries’ agreements. . . . Because of the safeguards in U.S. agreements and because of the high standards of our legal system, foreign investors rarely pursue arbitration against the United States and have never been successful when they have done so. . . . Over the past 25 years, under the 50 agreements the U.S. has which include ISDS, the United States has faced only 17 ISDS cases, 13 of which were brought to conclusion. . . . Though the U.S. government regularly loses cases in domestic court, we have never once lost an ISDS case.”14
Officials working for the US government negotiated the network of agreements that underpinned free market capitalism in a globalized world. However, they did not work alone and were supported at every stage by a plethora of US multinational enterprises (MNEs). These companies, as was shown in Chapter 6, had organized themselves through institutions such as the Business Roundtable to push for public policies more favorable to corporate interests. Globalization, and in particular the liberalization of financial markets, was at the top of the agenda.
Their efforts were highly successful, even sweeping away domestic restrictions imposed during the New Deal to avoid future financial crises.15 As a result, the financial sector became increasingly dominant, with its share of domestic corporate profits rising from 16 percent in 1985 to over 40 percent three decades later. Not surprisingly, donations from financial institutions to political candidates far outstripped those of other corporate sectors.16
A revolving door began to operate between Wall Street and Washington, DC, giving finance even greater influence over key policy decisions. Ultimately this would lead inexorably to the US financial crisis that began in 2008 and spread to other parts of the world. The precrisis atmosphere was captured by Simon Johnson, chief economist (2007–8) at the IMF, when he stated,
The American financial industry gained political power by amassing a kind of cultural capital—a belief system. Once, perhaps, what was good for General Motors was good for the country. Over the past decade, the attitude took hold that what was good for Wall Street was good for the country. The banking-and-securities industry has become one of the top contributors to political campaigns, but at the peak of its influence, it did not have to buy favors the way, for example, the tobacco companies or military contractors might have to. Instead, it benefited from the fact that Washington insiders already believed that large financial institutions and free-flowing capital markets were crucial to America’s position in the world.17
The lobbying efforts by American MNEs paid off and helped to bring about a globalized world that privileged cross-border capital flows. The stock of US foreign direct investment (FDI) abroad rose from $0.7 trillion in 1990 to $6.3 trillion in 2014. Although the US share of world FDI has fallen (hardly surprising in view of the rapid growth of MNEs outside the United States), the share in 2014 was still nearly one quarter. And the total stock was four times greater than that of the country ranked second (Germany).18
It was not only MNEs that helped to spread the US version of free market capitalism across the world. The US-controlled institutions and NSAs outlined in Chapters 5 and 6 also played a key role. Indeed, even before the end of the Cold War the IMF and the World Bank had promoted a package of reforms that came to be known as the Washington Consensus. This was then applied to all those countries in need of assistance from the Washington-based institutions.
At first, the package was relatively uncontroversial and concentrated mainly on macroeconomic stabilization. By the end of the Cold War, however, the reforms embraced deregulation, privatization, and the liberalization of financial markets. These reforms were then promoted in the countries of the former Soviet Union, where the IMF and World Bank were assisted by a new institution (the European Bank for Reconstruction and Development—EBRD) that had been created with US support specifically for this purpose.19
The impact of this enlarged package—dubbed by its critics “stabilize, liberalize, and privatize”20—was dramatic. It affected not only those countries in Latin America, Africa, and Asia already inside the US sphere of influence but also the countries of the former Soviet Union and its allies such as Vietnam. In Russia itself the reforms, encouraged by the Harvard Institute for International Development (HIID), with funds from USAID, led to a fire sale of state-owned assets for a fraction of their value and the rapid privatization of the economy.21
Inevitably, there was a backlash against the Washington Consensus and the neoliberal agenda promoted by the US government. Indeed, the backlash constitutes part of the retreat from empire analyzed in Part III of this book. However, it is important to emphasize how successful the US government had been in seizing the opportunity provided by the unipolar moment to spread its vision of free market capitalism across the world. Many countries embraced the vision with enthusiasm and sought closer ties with the United States through trade and investment. And even in those countries that experienced the most hostile reaction, the reforms promoted by the United States and its institutional allies were not totally eclipsed.
The United States enjoyed a commanding lead over all other countries in communications during most of the Cold War. The launch by the Soviet Union in 1957 of Sputnik 1—the world’s first artificial satellite—may have come as a huge shock to most Americans, but the response of the administrations of Presidents Dwight D. Eisenhower (1953–61) and John F. Kennedy (1961–63) was swift and ultimately successful. The National Aeronautics and Space Administration (NASA) was established in 1958 and the United States quickly caught up, and then overtook, the Soviet Union in terms of prowess in space.
With the collapse of the Soviet Union, the Russian Federation was unable to sustain its expensive space program at the same level as before and US governments looked to a future in which US leadership in space would be unchallenged, its commercial interests assured, and its right to police space unquestioned. It was the very model of an imperial policy for the post–Cold War period. In its statement of national space policy in 2010, the US government affirmed all these assumptions: “The United States is committed to encouraging and facilitating the growth of a U.S. commercial space sector that supports U.S. needs, is globally competitive, and advances U.S. leadership in the generation of new markets and innovation-driven entrepreneurship. . . . The United States will employ a variety of measures to help assure the use of space for all responsible parties, and, consistent with the inherent right of self-defense, deter others from interference and attack, defend our space systems and contribute to the defense of allied space systems, and, if deterrence fails, defeat efforts to attack them.”22
Broadly speaking, US administrations achieved their goals for space during the unipolar moment. In 2015, according to the Union of Concerned Scientists, the United States had nearly half of all operating satellites in space, four times as many as either Russia or China. Of the 549 US satellites in operation, 250 were commercial (the rest were government, military, or civilian), so that space became an important arena for US business. And when US domination of space became increasingly contested, the US government responded with the establishment of US Cyber Command (see Section 8.3) to protect the nation’s interests against cyberwarfare while refining its capability to carry out such attacks itself.
The launch of Sputnik 1 had other far-reaching implications for US hegemonic pretensions. President Eisenhower had responded immediately to Sputnik by establishing in 1958 the Advanced Research Projects Agency, later renamed the Defense Advanced Research Projects Agency (DARPA), as a branch of the Department of Defense.23 DARPA focused on space until its expertise in this area was transferred to NASA, at which point it started to invest in technologies that would transform the US economy and lead to the creation of the giant technology companies that are so dominant globally today.
DARPA has always seen its role as giving the US global leadership in the technologies that matter to national security. With a large budget and a small staff, that meant working with researchers in universities, companies, and government departments. The results, it is fair to say, have been extraordinary and have given the US leadership in many fields that have commercial as well as military benefits. In the agency’s own words, “Working with innovators inside and outside of government, DARPA has repeatedly delivered on [its] mission, transforming revolutionary concepts and even seeming impossibilities into practical capabilities such as precision weapons and stealth technology, but also such icons of modern civilian society such as the Internet, automated voice recognition and language translation, and Global Positioning System receivers small enough to embed in myriad consumer devices.”24
DARPA invested heavily in computers and funded computer science departments across the country. Its activities in this area were overseen by its Information Processing Techniques Office, established in 1962, which contributed to the fabrication of computer chips in the 1970s. At the same time US authorities were increasingly concerned about the risk to communications from a nuclear attack. This led DARPA to invest heavily in a decentralized network of communication stations that would become known as ARPANET.
Controlled at first by the US military, ARPANET was gradually opened up to nonmilitary users in the 1980s to become the Internet. With the invention of the World Wide Web by Tim Berners-Lee in 1989, the Internet was poised to take off.25 At first its management was assigned to a single researcher based in California under the watchful eye of the US Department of Commerce (DoC), but the Internet’s growth was so rapid that this soon had to change. In 1998, therefore, a private not-for-profit organization was registered in California operating under a memorandum of understanding with the DoC.
The Internet Corporation for Assigned Names and Numbers (ICANN), as the organization is called, has always been based in Los Angeles. It operated under the benevolent tutelage of the DoC until the US government was confident that its mode of working was consistent with US strategic interests. Although the contract with the DoC has now ended, the US government has been able to resist calls for ICANN to be replaced with an interstate body operating under the auspices of the UN or the International Telecommunications Union.
American entrepreneurs, helped by the twin advantages of a huge domestic market and a very supportive government, have been quick to exploit the enormous commercial possibilities opened up both by personal computing and the Internet. And, unlike their multinational predecessors a century ago, there has been virtually no time lag between the launch of their operations in the domestic and foreign markets. The Internet has in fact made their services accessible to foreigners often at the same time as Americans.
These companies therefore enjoyed a dominant global position almost from the beginning. A few companies not based in the United States eventually caught up, but as late as 2015 six out of the top ten information technology companies ranked by revenue were American (including Apple, HP, and Microsoft).26 And in the case of those companies where most business is done over the Internet, the dominance of US companies in 2015 was even more remarkable. No fewer than thirteen out of the top twenty ranked by revenue were US companies, and the list includes some of the most famous corporate names in the world such as Amazon, Google, eBay, Facebook, and Twitter.27
Internet companies are responsible for almost all communications today.28 This means the United States enjoys a dominant role in global communications by virtue of the size of its Internet companies. Yet US corporate dominance is even greater than these figures imply. Of the seven Internet companies in the top twenty that are not American, five are Chinese.29 These Chinese companies are enormous, but their services are accessible mainly to those who understand Chinese characters. Thus, consumers outside China are almost entirely dependent on US companies.
This is a degree of global dominance about which earlier generations of US MNEs could only dream. Even consumers in Russia have come to rely heavily on American Internet giants. Not surprisingly, it has brought accusations of excessive power, and the European Union in particular has used its legislative authority to try and curb the quasi-monopolistic position of some of these companies. Yet, despite this, the companies have remained strong and have not had to alter their behavior in significant ways.
The rapid growth of global electronic communications, much of which passed through servers located in the United States, created extraordinary temptations for the US intelligence services. After rampant illegality during the Vietnam War, the FBI, the CIA, and army intelligence had been the subject of a Senate investigation known as the Church Committee Report.30 This had led to the Foreign Intelligence Surveillance Act (FISA) in 1978, which sought to end the previous abuses while giving the intelligence services a great deal of flexibility in dealing with non-US persons31 and external threats.
BOX 8.2
GOOGLE SEARCH
When Google became a subsidiary of Alphabet in 2015, it signaled how much the company had diversified since its humble origins in 1998 when Larry Page and Sergey Brin incorporated Google in a garage in California. However, Google Search with its famous PageRank algorithm remains the jewel in the crown.
The majority of searches in virtually every country are done on Google (the main exceptions are China, Japan, Russia, and South Korea). Indeed, Google has more than 90 percent market share in many countries and a deal on search cooperation with Yahoo makes the effective share even greater. Thus, Google Search has a huge impact on global perceptions of just about everything including history, current affairs, and the United States itself.
Google has been accused of many things, including bias toward its own content, monopoly power, and collaborating with state censorship in China. And when small companies successfully use search engine optimization (SEO) to build up a presence on the Web, they can find their footprint disappear overnight when Google modifies its PageRank algorithm.
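The sensitivity of rankings to such changes can be seen in the original PageRank idea that Page and Brin published: each page’s score is distributed to the pages it links to, with a damping factor modeling a “random surfer,” so even a modest change to the parameters or to the link graph can reorder results. The sketch below is a minimal illustration of that published concept only, not Google’s production system, which is proprietary and far more elaborate; the three-page web and the 0.85 damping value are purely illustrative assumptions.

```python
# Minimal PageRank sketch (illustrative only): a page's rank depends on the
# ranks of the pages linking to it, so altering the link graph or the damping
# parameter can reshuffle the ordering of results.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: B and C both link to A, so A ranks highest.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))
```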
As a gigantic MNE with a Russian-born president (of Alphabet) and an Indian-born CEO (of Google), the company might seem to epitomize an international spirit that rises above nationalism. However, Google is very much a US company and its search results are anything but neutral. Frequently, the user is directed to other large US companies or influential US think tanks that seem to have mastered SEO more successfully than those from other countries.
The top of the search results is often Wikipedia, an encyclopedia operated by a US-based foundation, which invariably adds editorial comment challenging the veracity of articles highly critical of US policy. An article on cyberwarfare by the United States, for example, carries a health warning. An article on free markets, on the other hand, merely encourages the reader to add more references.
Search can never be entirely neutral, so it is no surprise that Google often reveals an American bias. Its market dominance, however, makes it a prized asset for US governments and a useful tool in the preservation of American empire.
This might have left unclear what could be done about electronic surveillance inside the United States for foreign intelligence purposes. FISA, however, established a Foreign Intelligence Surveillance Court (FISC) and required government agencies to seek a warrant from the court for such activities. In addition, Executive Order 12333, issued in 1981, set out guidelines “requiring each element of the Intelligence Community to have in place procedures prescribing how it can collect, retain, and disseminate intelligence about US persons.”32
It is fair to say, therefore, that the intelligence services were sufficiently well placed to exploit the opportunities provided by the growth of global electronic communications after the birth of the Internet in a perfectly legal fashion. However, the failure of the intelligence agencies to prevent the terrorist attacks of September 11, 2001, led to a wave of national panic that swept away many of the remaining constraints on their activities.
The new powers were contained in the Patriot Act. Among other things, it amended FISA by adding Section 215 that authorizes the FISC to force private companies to turn over to the government “any tangible things including books, records, papers, documents, and other items.” This could include financial information about a US citizen or telephone calling data, but it was assumed by the American public at large that it did not include access to all their traffic across the Internet.
This assumption proved to be false. The firewall that FISA had tried to build between the communications of US and non-US persons was almost impossible to maintain in the age of the Internet. Indeed, President George W. Bush immediately after 9/11 had secretly authorized the National Security Agency (NSAg) “to conduct foreign intelligence surveillance of individuals who were inside the United States without complying with FISA.”33
When this came to light in 2005, Congress tried to reimpose the firewall by passing the Protect America Act in 2007 and the FISA Amendments Act of 2008. The latter included Section 702, which authorized the FISC to approve annual certifications on surveillance while leaving it to the NSAg to decide which individuals to target. These laws gave the intelligence services unrestrained freedom to intercept the communications of non-US persons. However, they failed to protect US persons, as was made clear by the revelations starting in 2013 of Edward Snowden, an employee of a private firm subcontracted by the NSAg.
Snowden, through a number of journalists he trusted, revealed the existence of a clandestine surveillance program run by the NSAg called PRISM. The program collected data from all the main US Internet companies under Section 702 of the amended FISA. According to Snowden, Microsoft was the first to come on board (in 2007), followed by Yahoo (2008); Google, Facebook, and Paltalk (2009); YouTube (2010); AOL and Skype (2011); and Apple (2012). This was a broad range of companies, but Snowden also revealed that 98 percent of PRISM production came from just three companies (Microsoft, Yahoo, and Google).
PRISM gave the intelligence services access to information across the globe on US and non-US persons that their predecessors in the Cold War could barely have imagined was possible. Once revealed, it caused outrage around the world and led to a number of judicial challenges in the US courts themselves. President Obama reacted by establishing a Review Group on Intelligence and Communications Technologies that made a number of recommendations.34 However, the Obama administration consistently defended the legality of PRISM, and Congress had little appetite to make anything other than minor changes to existing laws. The result was the USA Freedom Act, passed in June 2015, which still leaves the NSAg with enormous powers of surveillance against both US and non-US persons.35
The imperial state had created the conditions under which US companies would come to dominate global communications, and now those same companies were helping the state to protect its empire by allowing it to spy on domestic and foreign targets. It seemed a match made in heaven. Technology, however, is a double-edged sword, and the US government as well as some US companies soon discovered that they were vulnerable to the clandestine surveillance of other states and NSAs.
Most of these cyberattacks came from outside the United States. However, one of the most damaging came from inside when a member of the US armed forces passed classified information to WikiLeaks, a website specializing in the release of government and corporate secrets. Cablegate, as it came to be called, was made up of 251,287 cables sent from 274 US embassies and consulates around the world, covering the years 2003–10.36
Although clearly a threat to national security, there is no doubt that Cablegate offered a unique perspective on the nature of American empire and its mode of operation after the end of the Cold War. While the cables revealed an empire in robust shape, able and willing to intervene against governments not acting in US interests, it also showed a diplomatic corps that was knowledgeable—at least in the larger countries—about local conditions and sending reasonably accurate information back to Washington, DC, where the key decisions would then be made.
The US government was embarrassed by both Cablegate and Edward Snowden’s revelations but did not significantly change its behavior. As an empire in an age of global communications, it had little choice in the matter. Julian Assange, the founder of WikiLeaks, was only slightly exaggerating when he wrote, “Carved into stone or inked into parchment, empires from Babylon to the Ming dynasty left records of the organizational center communicating with its peripheries. However, by the 1950s students of historical empires realized that somehow the communications medium was the empire. Its methods for organizing the inscription, transportation, indexing and storage of its communications, and for designating who was authorized to read and write them, in a real sense constituted the empire. When the methods an empire used to communicate changed, the empire also changed.”37
The collapse of the Soviet Union and the dissolution of the Warsaw Pact38 left the United States in a very privileged position. Yet so sudden was the collapse that it took US authorities some time to adjust to the new realities. Consequently, the administration of President George H. W. Bush, whose key figures were steeped in the legacy of the Cold War, struggled to develop a strategy that matched the moment.
President Bill Clinton came to office under very different circumstances from those of his predecessor. Not only had the USSR imploded but the successor state (the Russian Federation) had seen its economy shrink dramatically as a result of the painful shift from a planned to a market economy. With the deepening of economic reforms in China, the United States now had an opportunity to spread its vision of a free market global economy more widely than ever before. In the words of President Clinton’s first National Security Strategy (NSS) in 1995,
[W]e have unparalleled opportunities to make our nation safer and more prosperous. Our military might is unparalleled. We now have a truly global economy linked by an instantaneous communications network, which offers growing scope for American jobs and American investment. . . .
Never has American leadership been more essential—to navigate the shoals of the world’s new dangers and to capitalize on its opportunities. American assets are unique: our military strength, our dynamic economy, our powerful ideals and, above all, our people.
The establishment of a world free market was a long-standing ambition of American empire. So was the need for a military with global reach. What was new this time was the growing belief that US empire would best be served by the promotion of democracy abroad—or at least an American version of democracy—on the grounds that US security, free market economies and democracies are mutually reinforcing. As the NSS stated, “Secure nations are much more likely to support free trade and maintain democratic structures. Nations with growing economies and strong trade ties are more likely to feel secure and to work toward freedom. And democratic states are less likely to threaten our interests and more likely to cooperate with the U.S. to meet security threats and promote sustainable development.”39
The promotion of free market capitalism and democracy around the globe placed a huge burden on the US military, which had seen its budget and manpower cut since the mid-1980s. However, the end of the Cold War coincided with the so-called Revolution in Military Affairs (RMA) that put more emphasis on information superiority and precision-guided weaponry. The RMA played to US strengths and contributed to the formation of a new military-industrial complex in which US companies would provide the technologies that would leave the nation’s forces with no serious rival. In Joint Vision 2020, a document compiled in 2000 by all branches of the military, it was said: “The overarching focus of this vision is full spectrum dominance—achieved through the interdependent application of dominant maneuver, precision engagement, focused logistics, and full dimensional protection. Attaining that goal requires the steady infusion of new technology and modernization and replacement of equipment. However, material superiority alone is not sufficient. Of greater importance is the development of doctrine, organizations, training and education, leaders, and people that effectively take advantage of the technology.” 40
“Full spectrum dominance” included a role for nuclear weapons as outlined by the first Nuclear Posture Review (NPR) in 1994. This role was expected to be less important than in the past, but the following year the Clinton administration successfully pushed for an indefinite extension of the nuclear Non-Proliferation Treaty (NPT). This held out the prospect of a permanent monopoly of nuclear weapons by the officially recognized nuclear states (China, France, Russia, the United Kingdom, and the United States). With France and the United Kingdom as NATO allies, Russia in steep decline, and China focused on a “peaceful rise” through rapid economic growth, the United States was able to concentrate its efforts on ensuring that other states did not acquire nuclear weapons.41
Full spectrum dominance would not have been possible in the age of the Soviet Union. To ensure that its successor would never be able to match the military prowess of the USSR, the US authorities had engaged in a series of negotiations that had proceeded very smoothly as a result of the asymmetry in power between the two states. The Cooperative Threat Reduction Program, sponsored by Senators Sam Nunn and Richard Lugar, provided funds and expertise to decommission nuclear, biological, and chemical weapons in the states of the former Soviet Union (FSU).42 And President George H. W. Bush himself had, in the last days of his presidency, signed the second Strategic Arms Reduction Treaty (START II) with President Boris Yeltsin in January 1993.43
NATO, the military alliance established in 1949, now had no clear purpose. Logically, it should have been dissolved. Yet this ignored the key role that NATO played in US imperial designs, and the exact opposite happened. In 1994 the Clinton administration had succeeded in securing a commitment in favor of expansion at the NATO Summit. By 2009 twelve new states in Central and Eastern Europe had joined,44 pushing the boundaries of NATO right up to the Russian Federation in defiance of what had previously been promised.45
The risks of NATO expansion were recognized at the time and were articulated in an open letter to President Clinton by fifty former senators, cabinet secretaries, and ambassadors.46 Others had written privately, warning that expansion “risked endangering the long-term viability of NATO, significantly exacerbating the instability that now exists in the zone that lies between Germany and Russia, and convincing most Russians that the United States and the West [were] attempting to isolate, encircle, and subordinate them, rather than integrating them into a new European system of collective security.” 47 All such concerns, however, were swept aside by the Clinton administration as it sought to assert US hegemony across the globe.
Although President Clinton made extensive use of the opportunities for expanding the US empire after the end of the Cold War, intervening abroad on frequent occasions (see Section 8.4), there were some influential Americans for whom his administration’s imperial efforts were insufficient. A warning signal was fired by an article in 1996 in Foreign Affairs, the flagship journal of the Council on Foreign Relations, in which William Kristol and Robert Kagan outlined a US role for the long-term: “What should that role be? Benevolent global hegemony. Having defeated the ‘evil empire,’ the United States enjoys strategic and ideological predominance. . . . The aspiration to benevolent hegemony might strike some as either hubristic or morally suspect. But a hegemon is nothing more or less than a leader with a preponderant influence and authority over all others in its domain. This is America’s position in the world today.” 48
These prophetic words then launched the Project for the New American Century (PNAC), a Washington-based think tank that outlined its neoconservative vision for the next administration in September 2000. Among other ambitions, it established four core missions for US military forces: “Defend the US homeland; fight and decisively win multiple, simultaneous major theater wars; perform the ‘constabulary’ duties associated with shaping the security environment in critical regions; and transform US forces to exploit the ‘revolution in military affairs.’ ” 49
It also advocated the maintenance of nuclear strategic superiority, the restoration of military strength, the repositioning of US forces around the globe, the development and deployment of global missile defenses, the control of the new “international commons” of space and cyberspace, and a sharp increase in defense spending as a share of GDP.
These ideas proved so influential that many of the leading figures in the administration of President George W. Bush were drawn from the ranks of the PNAC, including Vice President Richard Cheney, Defense Secretary Donald Rumsfeld, Defense Policy Board chairman Richard Perle, and Deputy Defense Secretary Paul Wolfowitz. Other members of the PNAC who joined the administration in less prominent capacities included John Bolton, Elliott Abrams, and Robert Zoellick.
There can be little doubt that the Bush administration, with the PNAC and other neoconservative groups behind it, would have enhanced the American imperial project in numerous ways even without the terrorist attacks of 9/11. Those atrocities, however, accelerated the process and provided a much greater level of public support than would otherwise have been the case. A new strategy rapidly took shape that was outlined in the 2002 NSS. In that document, the Bush administration laid out the framework for US empire with a special emphasis on the use of “preemption” to forestall the possibility of future attacks and the need for an expansion of the US military presence across the globe: “The United States has long maintained the option of preemptive actions to counter a sufficient threat to our national security. . . . To forestall or prevent such hostile acts by our adversaries, the United States will, if necessary, act preemptively. . . . To contend with uncertainty and to meet the many security challenges we face, the United States will require bases and stations within and beyond Western Europe and Northeast Asia, as well as temporary access arrangements for the long-distance deployment of U.S. forces.”50
President Bush was correct to remind readers of NSS 2002 that preemption has a long history in the United States. From the Indian Wars of the eighteenth century onward, the US government had often fought preemptive “wars of choice” as part of its imperial expansion. Preemption, of course, is more difficult if the state is constrained by international treaties, which is why US history has been replete with interstate agreements that failed to secure ratification. Yet this did not sit easily after the Second World War with a semiglobal empire that depended so heavily on international arrangements to constrain other states’ behavior.
The Bush administration resolved this dilemma after 9/11 through unilateral action. As early as December 2001, it withdrew from the Anti-Ballistic Missile Treaty (this had major implications for arms control as Russia then withdrew from START II).51 It then “unsigned” the Rome Statute of the International Criminal Court (ICC) in 2002 and instead signed bilateral agreements with dozens of states to exempt US military and government personnel from the ICC’s jurisdiction.52 In the same year it refused to ratify the Optional Protocol to the Convention against Torture, and in 2008 did the same with the Convention on Cluster Munitions.
NSS 2002 had also referred to the need to expand the geographical scope of US military bases. This duly happened, and by the end of the Bush presidency the United States was operating 909 military facilities in countries and territories outside the fifty states with 190,000 troops and 115,000 civilian employees.53 The bases now spread deep into the FSU and its allies with facilities in the Czech Republic, Poland, Bosnia & Herzegovina, Kosovo, Macedonia, Bulgaria, Romania, Georgia, Uzbekistan, Tajikistan, and Kyrgyzstan. There was also a new emphasis on Africa, with facilities in some twenty different countries.
Even this did not tell the full story, as some of the overseas expansion took place in small bases known colloquially as lily pads that did not always show up in the official figures. These facilities have few if any US troops and sometimes rely on private contractors, which operate drones and surveillance aircraft.54 There have also been “access agreements,” giving the US forces the right to use foreign airfields, ports, and military bases without the need to establish a US facility. In the first fifteen years after the Cold War, “the number of agreements permitting the presence of U.S. troops on foreign soil more than doubled, from forty-five to more than ninety.”55
The expansion of the US military presence abroad required a reorganization of the command structure (there are now nine commands in total, six being geographical and three thematic). Every independent state in the world is assigned to one of the six geographical commands, and two new ones were created after 9/11. The first was United States Northern Command (USNORTHCOM) with responsibility for air, land and sea approaches to North America out to five hundred nautical miles. The second was United States Africa Command (USAFRICOM), with responsibility for all fifty-four independent African states.
BOX 8.3
GUANTÁNAMO BAY
The US government had set its sights on Guantánamo Bay in Cuba as a suitable base for its navy long before the Spanish-American War in 1898. It was not until Spain’s surrender, however, that plans could be put into action.
Under two agreements in 1903, the second of which was replaced by a new one in 1934, the United States leased indefinitely an area on the south coast of Cuba “for the purposes of coaling and naval stations” with the proviso that “no corporation shall be permitted to establish or maintain a commercial, industrial or other enterprise.”
The rent was set at $2,000 per year, but Cuba was required to purchase all private lands and real property in the area. The United States agreed to advance Cuba the money to do so on condition that the funds be deducted from future rents. Thus, it is very unlikely that Cuba received any rental income for many years.
The rent was subsequently doubled, and at the time of the Cuban Revolution the United States was paying $4,085 per year. The first check in 1959 was cashed (due to “confusion,” according to Fidel Castro), but none were afterward. Thus, the US Navy has had free use of Guantánamo Bay since 1960, although the Department of Defense in its Base Structure Report for 2015 estimated the value of the facilities at $3.65 billion.
The base has had many uses since the Cuban Revolution, including as a temporary home for Haitian refugees, as a prison for many captured in the “global war on terror,” and as a place where suspects can be tortured without breaching US law (Guantánamo Bay is sovereign Cuban territory and therefore not subject to US Supreme Court rulings).
Cuba regularly calls for the return of the base but can do nothing as that requires the agreement of both sides. If the case were to go to an international tribunal, however, Cuba would certainly win, as the base is no longer used exclusively for its original purposes while commercial activities (McDonald’s, Burger King, etc.) take place there. In any case, normalization of the bilateral relationship—a publicly stated goal of both sides following the restoration of diplomatic relations in 2015—will require the return of the base to Cuba at some point.
The three thematic commands were created before 9/11, but their responsibilities changed afterward. In particular, in 2009 United States Strategic Command (USSTRATCOM) created a subcommand for cyberspace called United States Cyber Command (USCYBERCOM), with a mission to carry forward US imperial ambitions for the control of cyberspace. The importance of USCYBERCOM cannot be overstated, and it has been tasked with preemptive strikes if needed. In its second strategy paper in 2015, it was stated: “If directed by the President or the Secretary of Defense, the U.S. military may conduct cyber operations to counter an imminent or on-going attack against the U.S. homeland or U.S. interests in cyberspace. The purpose of such a defensive measure is to blunt an attack and prevent the destruction of property or the loss of life. [The Department of Defense] seeks to synchronize its capabilities with other government agencies to develop a range of options and methods for disrupting cyberattacks of significant consequence before they can have an impact, to include law enforcement, intelligence, and diplomatic tools.”56
Throughout the period of the semiglobal empire, US military spending was enormous both in absolute and relative terms. Although spending as a share of GDP fell after the Cold War, the downward trend was reversed after 9/11. At its peak in 2011, spending on the military reached $711 billion, which was 4.6 percent of GDP and represented nearly 40 percent of global defense expenditure. And this high figure excluded many items, such as veterans’ benefits and services, which might be considered part of national defense. No other country comes even close to these levels of defense spending and in only a very few cases is military expenditure higher as a share of GDP.57
The largest part of the military budget is consumed by personnel pay and the operation of bases (inside and outside the United States). However, a large part ($166 billion in 2014–15) is spent each year on procurement and research and development. Nearly all this money goes to thousands of defense contractors in the United States that make up the new military-industrial complex. Yet most of it goes to a small number of firms, including Lockheed Martin, Boeing, Raytheon, Northrop Grumman, and General Dynamics, that all contribute generously to candidates on both sides of the political divide.
The military-industrial complex also includes the private military contractors to whom defense is increasingly outsourced. These companies played an especially important role in Iraq, where the numbers employed are estimated to have reached nearly 100,000. As a result of controversies surrounding their lack of accountability, some have changed their name, while others have adopted different corporate structures.58 However, they still play a key role in US security planning even if the issue of accountability is far from being resolved.
The end of the Cold War removed many of the constraints that had previously held back the United States from intervening abroad. As a result, interventions have been frequent under all presidential administrations since 1990. All five presidents (Bush Sr., Clinton, Bush Jr., Obama, and Trump) have used both multilateral and unilateral forms of intervention, and none of them has been unduly concerned whether their actions were consistent with international law. While there have undoubtedly been differences in the approach taken to intervention by each president, none has questioned the imperial project and the conviction that the United States is the world’s leader.
The “gold standard” for US intervention has been multilateral action through the UN and therefore with the blessing of international law. The two states (China and Russia) most likely to block US initiatives in the United Nations Security Council (UNSC) during the Cold War were now much less likely to do so—albeit for very different reasons.59 This gave the UN a new raison d’être for American administrations, as UNSC resolutions could now be drafted by US diplomats with much less risk of a veto being applied by one of the permanent members.
In the first forty-five years of UN operations, roughly 650 UNSC resolutions were passed—an average of fewer than 15 per year. In the next twenty-five years there would be roughly 1,600—an average of 64 per year. Furthermore, many of the resolutions passed after the Cold War ended were adopted under Chapter 7 of the UN Charter, which allows for coercion by member states—especially under Article 41 (sanctions) and Article 42 (military intervention).
UN-supported sanctions were only used twice during the Cold War. Since then, they have been adopted some twenty-five times. Not all of these resolutions were initiated by the United States, but most of them were. The first was passed against Iraq following its invasion of Kuwait in August 1990. There then followed numerous UN sanctions designed to deal with conflicts in the former Yugoslavia (1991–96), Haiti (1993–94), Somalia (1992 onward), Liberia (1992–2001), Angola (1993–2002), Rwanda (1994–2008), Sierra Leone (1997–2010), and Kosovo (1998–2001).
American administrations also pushed successfully for UN-sponsored sanctions to deal with two issues of special US concern. The first, nuclear proliferation, was of long standing, and UN sanctions were duly adopted against North Korea and Iran. The second, international terrorism, became a top priority in the 1990s after a series of bombings and assassinations threatened US interests and after the formation of al-Qaeda by Osama bin Laden. UN sanctions were adopted even before 9/11 against Libya (under President Muammar al-Gaddhafi), Sudan (under President Omar al-Bashir), Afghanistan (under the Taliban), and al-Qaeda itself.60
Sanctions could do terrible damage to the civilian population at large. US administrations, with others, therefore pushed for targeted sanctions designed to punish the culpable while protecting the innocent. These “smart” sanctions were much more efficacious, but even so they did not always work. Thus, US administrations needed to consider alternatives to sanctions, and their preferred option was usually a UNSC resolution under Article 42 since this complied with international law.
All US presidents since the end of the Cold War have adopted this strategy.61 Following the failure of sanctions to persuade Saddam Hussein to withdraw his forces from Kuwait, the administration of President George H. W. Bush in November 1990 was able to secure a resolution (UNSCR 678) authorizing the use of military force against Iraq.62 A coalition of twenty-eight countries was then put together led by the US military, which supplied nearly 75 percent of all troops. The Iraqi Army was crushed and Kuwait regained its independence, but the sanctions remained in place against Iraq until Saddam Hussein was overthrown in 2003.
UNSCR 678 contained the crucial phrase “all necessary means” that was effectively code for the use of military force. This was the same form of words used in UNSCR 794 in November 1992 authorizing action in Somalia. This time, however, the resolution explicitly recognized that the US military would take the lead. A Unified Task Force (UNITAF) was then dispatched to Mogadishu under US leadership to tackle the warlords and distribute humanitarian assistance. It handed back authority to the UN in May 1993.
US intervention in Somalia was always controversial domestically in view of the lack of clarity in the mission and numerous setbacks.63 By contrast, there was little opposition to US intervention in Haiti in 1994 following adoption of UNSCR 940 with the stated purpose of deposing the military leaders who had overthrown President Jean-Bertrand Aristide in 1991. The same was even more true of UNSCR 1368, adopted on September 12, 2001, which authorized the United States to use “all necessary means” in response to the terrorist attacks the day before.64 The US-led invasion of Afghanistan followed a month later.
President Obama came to power promising a different kind of foreign policy. However, it was not long before he, too, turned to the UN to seek approval for “all necessary means” to protect civilians in Libya. UNSCR 1973 was duly passed in March 2011, but with five abstentions (Brazil, China, Germany, India, and Russia). The high level of abstentions was due to the (justified) fear that the United States and its allies would use the resolution to remove Colonel Gaddhafi from power.65
All administrations after the end of the Cold War therefore succeeded in gaining the support of the UN for US military action. Sometimes, however, that support was ambiguous or even nonexistent. This issue came to the fore in the former Yugoslavia as it broke apart into a series of separate states. Determined to prevent this breakup, Serbia had committed war crimes (especially in Bosnia & Herzegovina), to which the UNSC had responded with sanctions, the authorization of peacekeepers, and the application of a no-fly zone.
How to enforce these UN resolutions in the face of Serbian intransigence exercised US administrations in the 1990s. Gradually, NATO emerged as the preferred instrument, and the US-controlled organization carried out a series of bombings that played a crucial role in bringing President Slobodan Milosevic to the negotiating table in 1995.66 NATO bombings were so effective that the UN secretary-general eventually gave the UN military commander the authority to request NATO air strikes, thus putting the legality of NATO actions beyond doubt.
When Kosovo sought its independence from Serbia a few years later, it looked at first as if history would repeat itself. However, China and Russia made it clear they would use their veto in the UNSC to block the use of force against the Serbian military. NATO, at the request of the Clinton administration, then carried out a series of bombing raids that were not authorized by the UN. This meant the NATO action was illegal, although its defenders would still claim it was “legitimate.”67 NATO, led as always by US forces, would then go on to play a key role in counterinsurgency operations in Afghanistan after the fall of the Taliban, in suppressing piracy off the coast of East Africa, and in the overthrow of Colonel Gaddhafi in Libya.
The Clinton administration was unconcerned by accusations of illegality, as US governments have never felt constrained by international law. This was made abundantly clear in early 2003, when President George W. Bush failed to secure a resolution in the UNSC authorizing “all necessary means” to overthrow Saddam Hussein in Iraq. Since NATO was also divided, the Bush administration went ahead with a “coalition of the willing” in defiance of international law after securing overwhelming support from Congress.
BOX 8.4
US EXTRATERRITORIALITY
Extraterritoriality—defined both as the application of national laws outside the nation’s territory and as the immunity of a nation’s citizens from foreign laws—has always been practiced by imperial powers, including the United States. However, there was a big increase in extraterritoriality during the unipolar moment.
The first examples come from legislation aimed at Cuba. The 1992 Cuban Democracy (“Torricelli”) Act prohibited foreign subsidiaries of US companies from trading with Cuba. The 1996 Helms-Burton Act went further and penalized all foreign companies “trafficking” in properties formerly owned by US citizens or Cubans who later became US citizens. Directors of such companies were unable to visit the United States, and many “establishment” figures were affected (including a deputy director of the Bank of England).
The spread of US bases, private security companies, and US Special Forces around the world also led to a big increase in extraterritoriality as US administrations used their imperial authority to ensure that their citizens were not subject to local laws while operating in these countries. In particular, the administration of President George W. Bush (2001–9) put enormous pressure on countries to sign “Article 98” agreements to prevent US forces from coming under the scope of the International Criminal Court when in action abroad.
The “war on drugs” became another area for the growth of extraterritoriality. Non-US citizens have frequently been arrested by American agents on foreign soil, and countries have been pressured into signing shiprider agreements. US law also forces foreign companies to disassociate themselves from named kingpins or face sanctions of various forms.
The pursuit of America’s own citizens for tax evasion has also led to a huge extension of extraterritoriality by US governments. The 2010 Foreign Account Tax Compliance Act (FATCA), for example, requires non-US financial institutions all over the world to provide information to the Internal Revenue Service or be subject to sanctions on their US operations.
US administrations now had a new multilateral instrument with which to intervene abroad when UN and/or NATO approval was not forthcoming. A “coalition of the willing” was next employed by President Bush in 2004, when Canada and France joined forces with the United States in Haiti to force President Aristide into exile. A peacekeeping mission (MINUSTAH) was subsequently authorized by the UNSC, but this could not disguise the illegal nature of the overthrow of Aristide himself.
“Coalitions of the willing” proved popular during the Obama administration as well. They have been used in the Middle East, where UN- or NATO-sponsored actions have not been an option.68 In particular, US intervention in Syria after 2011 (training and arming rebels opposed to President Bashar al-Assad) has been part of an informal coalition. President Obama also used a “coalition of the willing” to bomb Islamic State of Iraq and Syria (IS) targets in Iraq and Syria after IS made big territorial gains in 2014.69
When multilateral intervention—with or without UN approval—is not possible, US governments have never hesitated to employ unilateral action. This was true during the Cold War, and it continued afterward. Indeed, the list of unilateral interventions is a very long one, and still expanding, although there are almost certainly some episodes of which we remain unaware because the relevant information has not yet been declassified.
Some of the unilateral interventions are well known. They include sanctions against individual states, of which the most enduring have been against Cuba despite the fact that the US government is almost completely isolated in the United Nations on the issue.70 Examples of other unilateral sanctions applied during the unipolar moment include those against Belarus, Burma, Iran, Russia, and Venezuela.71 Unlike the Cuban ones, these sanctions were aimed at key individuals or strategic sectors of the economy rather than applied across the board.
The opposition of US governments to the trade in illegal narcotics has also produced a swath of unilateral interventions. Although the “war on drugs” goes back to 1971, there has been a huge increase in unilateralism since the end of the Cold War as a result of changes in legislation. These include an annual survey by the US State Department that can lead to the “decertification” of a country deemed not to be cooperating with US antidrug agencies.72 They also include the 1999 Foreign Narcotics Kingpin Designation Act, which has been used to target not only individuals but also businesses.
US presidents may have complained bitterly about cyberwarfare against some of the nation’s agencies and companies, but this form of unilateral intervention has almost certainly been carried out by the US government itself. Indeed, as the National Commission for the Review of the Research and Development Programs of the United States Intelligence Community stated in 2013, “Failure to properly resource and use our own R&D to appraise, exploit, and counter the scientific and technical developments of our adversaries—including both state and non-state actors—may have more immediate and catastrophic consequences than failure in any other field of intelligence.”73
It is therefore no surprise to learn that US agencies have hacked into Chinese servers on numerous occasions. More controversially, US programmers are likely to have been behind the Stuxnet cyberworm that attacked Iran’s nuclear facilities at Natanz in 2010 and which may have set back Iranian nuclear ambitions by several years.
Unilateral intervention in the Cold War had been mainly about destabilizing the governments of foreign states. In the aftermath of the Cold War, however, it was a range of NSAs that led to a huge expansion in US intervention. International terrorist groups figured prominently among these NSAs, and the Clinton administration in particular had taken unilateral action against the best known (al-Qaeda) even before 9/11.
Following 9/11, legislation was swiftly passed giving future US governments almost unrestricted freedom of action in the “war on global terrorism.” The 2001 Authorization for Use of Military Force (AUMF) stated, “The President is authorized to use all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks that occurred on September 11, 2001, or harbored such organizations or persons, in order to prevent any future acts of international terrorism against the United States by such nations, organizations or persons.”74
This domestic law has been used extensively by Presidents Bush, Obama, and Trump to intervene unilaterally in different parts of the world in ways that were largely unavailable to previous commanders in chief. These interventions have included the use of unmanned aerial vehicles, usually called drones, for targeted assassinations, including assassinations of US citizens.75 The main countries affected have been Afghanistan, Pakistan, Somalia, Syria, and Yemen, and the numbers killed, including civilians, have been extensive. Targeted assassination (almost certainly illegal under international law) had been outlawed in 1976 by President Gerald R. Ford (1974–77) but returned with a vengeance after 9/11.
There has also been a significant expansion in US Special Forces since 2001, of which the most important is Joint Special Operations Command (JSOC). US Special Operations Command (SOCOM) operates in dozens of countries (between 70 and 90 on any given day, according to its own version of events) and has operated in nearly 150 since 9/11. Unlike the “covert” operations practiced by the CIA, the “clandestine” operations of US Special Forces do not require presidential approval or regular reports to Congress, as they are covered by a blanket executive order.76
This use of “hard” power stirred up anti-Americanism not only in the countries where it was carried out but also in many of those that were not directly affected. At the same time, “soft” power, defined as the ability to attract and co-opt rather than coerce, use force, or give money as a means of persuasion, was clearly not working. Gradually, the idea of “smart” power began to take shape, in which the “hard” and “soft” forms would be combined to secure US interests without provoking so much hostility.77 As a result, the State Department produced the first Quadrennial Diplomacy and Development Review (QDDR) in 2010.
It was clear from the first QDDR that cooperation among different agencies was still going to lead to an imperial agenda in which smart power would very often be exercised unilaterally:
When the work of these agencies is aligned, it protects America’s interests and projects our leadership. We help prevent fragile states from descending into chaos, spur economic growth abroad, secure investments for American business, open new markets for American goods, promote trade overseas, and create jobs here at home. . . . We support civil society groups in countries around the world in their work to choose their governments and hold those governments accountable. . . . This is an affirmative American agenda—a global agenda—that is uncompromising in its defense of our security but equally committed to advancing our prosperity and standing up for our values.78
Smart power therefore changed nothing. It was essentially a more sophisticated way of promoting US imperial interests without, it was hoped, attracting as much hostility as through the use of hard power alone. Intervention, both unilateral and multilateral, therefore continued very much as before and to many—including most anti-imperialists—it seemed as if the empire was expanding. And yet the opposite was in fact the case. The empire was already in full retreat, and had been for some years. This is the subject of Part III.