1
The expanding scope of cybersecurity

This chapter explores the history of cyberspace and related security concerns, from the creation of computers to the present day. The development of computers has been spurred on by human innovation, from mechanical tabulators to digital machines linked together in networks. Security concerns have played a central role in this history. The governments of the United States and the United Kingdom funded early research into computers during the Second World War to improve anti-aircraft defense and codebreaking. In the aftermath of the Second World War, the US Department of Defense (DoD) sponsored much of the early research into computing and the development of computer networks to advance US capabilities and power. This stream of government funding allowed scientists in government, academia and industry to develop computers and networking as collaborative tools to share resources and information. Openness and transparency – two values that are essential to the scientific process – guided their work, and helped to shape the early development of what would become the Internet.

Growing access to computers spurred the growth of a number of online communities, allowing humans to express themselves and interact in a new domain: cyberspace. Human activities such as commerce and entertainment, but also espionage and theft, found new expressions online. The 1990s witnessed a rise in cyber threats affecting all levels of society and government. Cyberspace became a cause of concern for governments, which feared that the sensitive data they kept on computers could be stolen, manipulated and potentially weaponized to damage national defense, the economy or even the social fabric of society. As human reliance on computers has expanded, cyberspace has provided new opportunities to exert power, and to threaten the domestic and international order.

Today, cyberspace is a complex socio-technical-economic environment embraced by 3 billion individuals, and millions of groups and communities. All these actors benefit from the opportunities, facilitated by cyberspace, to share resources and information. Yet digital networks host an ever-growing number of cyber threats, which can disrupt human activities in both the digital and physical worlds. Cyberspace is now a global security issue that transcends national, social and cultural boundaries. As a result, cybersecurity has begun to emerge as an important issue in the field of International Relations, where researchers and practitioners are only now starting to consider the nature of cyber threats, and the most appropriate frameworks for mitigating them.

A brief history of the computer

The history of the computer can be traced back to nineteenth-century England, when mathematics professor Charles Babbage designed his analytical engine. Machines like the analytical engine relied on mechanical components such as levers and gears to compute complex calculations. What distinguished Babbage’s machine from others – at least on paper, since his engine never came close to being built – was that it was programmable. While earlier machines could only perform a single function, computers like Babbage’s could be programmed to perform multiple functions.1 New applications for mechanical computers emerged in the following decades. American inventor Herman Hollerith developed a method of storing information as holes punched into cards. His machine was used to tabulate the 1890 US census.2 In the 1930s, the US military developed mechanical computers to improve the use of bombsights on military aircraft. A bomb aimer would input parameters such as speed, altitude and direction, and the bombsight calculated the point at which to aim.3 Human “computers” continued to direct the “program” for these machines. This dependence on humans limited the speed at which mechanical computers could support their activities.

The first generation of electronic computers emerged in the 1930s. This new technology used electric relays and switches to make calculations. The Atanasoff–Berry Computer (ABC), created by physics professor John Vincent Atanasoff and his graduate assistant Clifford Berry, is often considered to be the first electronic digital computer. The ABC was designed to solve systems of linear equations using binary numbers. The first prototype of the ABC was built in 1939, with the final version weighing more than 700 pounds. The outbreak of the Second World War redirected research into electronic computers toward military applications. In the United States, researchers working at the Bell Laboratories developed an anti-aircraft gun director, the M-9, that could aim itself without human intervention, using data fed by radar tracking. On the other side of the Atlantic, British codebreakers built another set of electronic computers, called Colossus, to decode text generated by the Lorenz cipher, which was used by the German army to protect high-level messages. The use of these computers was characterized by a high level of secrecy, which limited any subsequent transfer of knowledge to the research and commercial sectors.4 Storybox 1.1 presents a brief history of the ENIAC (figure 1.1), one of the first electronic general-purpose computers ever made, which is sometimes presented as the prototype from which most modern computers evolved.

Storybox 1.1 The ENIAC

In the aftermath of the Second World War, the US government and a group of researchers at the University of Pennsylvania completed the Electronic Numerical Integrator and Computer (ENIAC). Unlike the ABC or Colossus, the ENIAC ran calculations at electronic speed, without being slowed by any mechanical parts. The ENIAC was also the first general-purpose computer, able to solve a large set of numerical problems through re-programming. Despite its versatility, this new computer was primarily used to calculate artillery firing tables, taking over the work of the hundreds of human “computers” previously employed to produce these tables.5 The development of the ENIAC inspired a number of similar projects seeking to develop programmable computers that could store information.6


Figure 1.1 ENIAC, c.1946 (University of Pennsylvania Archives)

The next generation of computers emerged following the invention of the transistor in 1947; the use of transistors increased their reliability.7 This second generation was the first to be used for commercial purposes, leading, most notably, to the success of International Business Machines (IBM), a company aiming to produce computers for all. By the end of the 1950s, dozens of companies contributed to an emerging computer industry in the United States and the United Kingdom, but also in continental Europe and Japan. The development of general-purpose computers required programs to give them functionality. The development of computer programming languages (to facilitate the tasking of computers) and operating systems (to manage the flow of work of a computer), throughout the 1950s, eventually led to the distinction between hardware and software.8 The US DoD maintained an essential role, sponsoring specific computer languages and operating systems, and indirectly driving the industry.9 Computers remained extremely expensive, which limited the market for them and, by extension, their uses. By 1962, there were around 10,000 computers worldwide, most of them located in the United States.

The invention of the integrated circuit allowed for the development of a third generation of computers. The minicomputers that arose in the mid-1960s were smaller, more powerful and more reliable. A new digital (r)evolution was taking place. Computers moved from being a government machine, to an esoteric hobby and then, finally, to a household item. A number of inventions facilitated this evolution. Human–computer interactions improved following the invention of multiple tiled windows on computer screens, text-editing software, and the mouse.10 New financial and manufacturing applications contributed to the emergence of a market for software and further expanded the uses of computers.11 Developments in hardware, specifically the creation of affordable personal computers (PCs), significantly widened access to computers, inspired new uses and forged communities of practice. In 1977, Steve Wozniak and Steve Jobs announced the Apple II computer, an off-the-shelf computer targeted directly at consumers and small businesses. A few years later, IBM introduced its first PC. At a cost of $1,565, this PC was not beyond the reach of small companies and hobbyists. The percentage of American households with computers more than doubled in the 1980s, and continued to expand exponentially in the 1990s, eventually exceeding 40%.12 In 2014, 80% of adults in the United States, 78% in Russia, 59% in China, 55% in Brazil and (only) 11% in India had a working computer in their household.13 In a little over half a century, government support and scientific innovation transformed the digital computer from an advanced research tool to a household item. The increasingly prominent role played by computers in modern society has opened up countless opportunities for individual pursuits.

An open network of networks

The convergence between computing and communication is a defining feature of the digital age. Networked communication over wires has existed since the telegraph (1844), and evolved with the creation of the telephone (1876) and the teletype writer (1914). These machines all rely on circuits to communicate signals, voice or written messages from one point to another. Their development led to the installation of wired connections across continents, forming vast communication networks. In the aftermath of the Second World War, scientists established connections between digital computers relying on a single line of communication. If the line between two computers broke down, the communication ended. The practice of time-sharing allowed multiple users to access a computer simultaneously and use its processing power. Only a limited number of powerful research computers existed in the United States, and many researchers located far from these machines sought to gain access to them.14 As a result, accessing computers typically required extensive organization and travel. One solution to this problem was to connect computers together as a part of a larger network. Such a network would allow users to access specific computers and data remotely, and would ensure greater reliability than a simple connection between two computers. If one computer stopped functioning properly, other computers could keep the network running.

Computer networking flourished in the United States thanks to a robust and open research culture and substantive government support.15 Most of the support behind the development of early computer networks came from the US DoD and its Advanced Research Projects Agency (ARPA), an organization established in 1958 to fund high-risk, high-gain research. ARPA was particularly keen to develop computer networking to facilitate the sharing of research findings at universities and other government-sponsored laboratories throughout the nation. In 1969, ARPA funding helped establish the first computer network and ancestor of the Internet: ARPANET. Reflecting on the role of the US military in the maturation of computer networking, national security scholar Derek Reveron notes that “there has always been an implicit national security purpose for the Internet.”16

Networking digital computers created a new means for humans to communicate with each other. ARPANET first served as a platform for researchers to develop and refine networking technologies that linked a handful of universities and research centers. On 29 October 1969, researchers at the University of California, Los Angeles tried to log in to a computer at the Stanford Research Institute, and began typing the required “log in” command. Their computer crashed after the first two letters; the first message ever sent on the ARPANET was “lo.” Figure 1.2 shows how the original ARPA network grew from 2 nodes in 1969, to 55 in 1977.17 Each of these nodes represents a processor, which can be considered a router. The two maps show the type of institutions hosting these nodes: research universities and institutes, national laboratories, military and intelligence agencies. The 1977 map shows the nascent complexity of a network that connected processors through both landlines and satellite connections. On the other side of the Atlantic, French and British researchers were developing similar networks (CYCLADES in France, and the Mark I project at the UK’s National Physical Laboratory), but funding difficulties limited their progress.18


Figure 1.2 ARPANET in December 1969 (top) and July 1977 (bottom)

Source: Frank Heart, Alex McKenzie, John McQuillian and David Walden, ARPANET Completion Report, Bolt, Beranek and Newman Inc., Burlington, MA, January 4, 1978, III-79-III-89.

In the United States, more and more universities and research centers joined or wanted to join ARPANET. To help users connect together, Vint Cerf, a researcher at Stanford University, and Robert Kahn from ARPA developed a common transmission protocol (Transmission Control Protocol or TCP) that specified how to establish a connection and transmit messages on the network. A separate protocol was then developed to couple existing networks together. This protocol, the Internet Protocol (IP), facilitated the internetworking of digital computers. The complete description of these two protocols and the rights to use them were freely available to all. Any network that wanted to join simply had to follow the Transmission Control and Internet Protocols. This open-access policy – inspired by the academic practice of information sharing – defined the architecture of the early Internet and allowed the network to grow in scale.19 Information technology pioneer and former ARPA employee Barry Leiner and his colleagues explained that the early Internet was developed as “a general infrastructure on which new applications could be conceived.”20 This approach stood in stark contrast to the way in which the US DoD (still the main funder of the Internet at the time) conceived of information sharing. In the defense and national security communities, the dissemination of information tends to be tightly controlled to prevent unauthorized disclosures. An important implication of the open architecture of the Internet is that no single entity could control access to and uses of the network. While this approach spurred a tremendous growth in network users, it would also pose a number of challenges. Malicious actors soon started to use the Internet to steal resources and information, and to mislead network users.

As networking became increasingly common, its value increased and researchers developed new applications, beyond sharing machines for their processing power. Initial Internet applications focused on means of communication. Researchers started posting messages and replying to each other’s comments on editable pages, or bulletin boards, accessible online. In 1972, computer programmer Ray Tomlinson wrote a program for reading, composing and sending messages from one computer to another. Electronic mail was born. Within a year, this new application generated 73 percent of all ARPANET traffic.21 Soon the value of networked communication became clear to everyone involved, and an increasing number of universities and US government agencies joined the web of networks. Yet the early Internet continued to be mostly populated by researchers, supported by investment from the US government.

The spread of personal computers in the 1980s fed a growing interest in internetworking, and fostered a shift toward an increasingly social network, away from its previous principal application as a research hub. As well as a technological evolution, the growth of internetworking was a socio-cultural phenomenon. For Ceruzzi, social and cultural factors differentiate the Internet from previous networks. The emergence of PC hobbyists and communities of practice focusing on computers – for example, gamers – fostered internetworking from the bottom up in the 1980s.22 It was in this context that science fiction writer William Gibson coined the term “cyberspace.” Gibson imagined cyberspace as a virtual world of data linked by computer networks and accessible through consoles.23

The core infrastructure of the ARPANET had to expand to host new users. The network expanded beyond the United States, with new nodes established at the NORSAR research laboratory in Norway and at University College London in 1973.24 As more and more researchers, scientists and engineers joined the ARPANET, the military community developed its own network (MILNET) in 1983, to separate sensitive military traffic from civilian traffic and implement stricter security requirements. ARPANET remained the preferred platform for industry, academia and government research. In 1985, the US National Science Foundation (NSF) launched its own network (NSFNET) to connect the supercomputing centers it funded across the country and promote advanced research and education on networking.25 Any university receiving funding for an Internet connection was required to use NSFNET and provide access for users. Within a few years, NSFNET took over as the main hub for internetworking, and ARPANET was decommissioned in 1990. The data routes developed for this scientific network eventually formed a major part of the Internet’s backbone.26 Growing commercial interest in ARPANET and then NSFNET changed the nature of internetworking, transforming the research hub into a popular communication platform. In 1994, the US government decided to turn over official control of the regional backbone connections of NSFNET to private interests. Commercial Internet Service Providers (ISPs) started offering their services to individual users, who could connect online using their phone landline.27 As the Internet expanded, commercial providers of hardware (e.g. computers and modems) and software (e.g. browsers) multiplied.

The spread of the Internet

Internetworking became mainstream in the 1990s with the invention of the World Wide Web, which facilitated interpersonal communications in cyberspace. In 1990, a group of scientists based at the European Organization for Nuclear Research (CERN) laboratory in Switzerland created a new document format, the Hypertext Markup Language (HTML), to present information in a set of linked computer documents, and a protocol, the Hypertext Transfer Protocol (HTTP), to transmit these documents between computers. These innovations allowed users to link each document to another through specific words, phrases or images. In 1991, they spurred the creation of a new application which compiled and linked multimedia documents and made them available to any network user: the World Wide Web (WWW). A rapidly growing number of interlinked hypertext documents became accessible online thanks to a system that identified them: the Uniform Resource Locator (URL), colloquially known as a web address. Subsequently, researchers at the University of Illinois developed browser software named Mosaic, which simplified public access to the web.28

Internet usage expanded dramatically in the following years, spurred by the convergence of increasingly cheap and powerful computers, the multiplication of modems facilitating Internet connection through phone lines, and new browsers like Internet Explorer and Netscape (1995).29 At the dawn of the twenty-first century, further technological developments, such as the spread of wireless broadband technology, commonly known as Wi-Fi, and the rise of smartphones, facilitated further access to the Internet.30 Figure 1.3 shows how the number of Internet users exploded from 2.6 million in 1990 to over 3.4 billion in 2016 (that is, roughly half of the world population). This rapid expansion has been matched by an exponential growth in the number of websites and applications. Amazon, Craigslist and eBay were all established in 1995; the Chinese e-commerce giant Alibaba was founded a few years later, in 1999. Together, they have reshaped the way millions of humans shop and access consumer goods. In the last two decades, the Internet has become an increasingly social space, connecting individuals online through communities of interest. American companies such as LinkedIn (2002), Facebook (2004) and Twitter (2006) have provided new platforms for networks of “friends” to form. The Chinese Renren Network (2005) and Russian platform VKontakte (2006) developed similar services. The advent of these social media platforms and other online applications empowered Internet users to shape and diversify the content of the “Web 2.0”.31


Figure 1.3 Internet users since 1990 (Max Roser, OurWorldinData.org)

The web has now become a central part of millions of lives across the globe. With the multiplication of social platforms and of mobile and wearable technologies like smartphones and smart watches, cyberspace is now a central arena for life in the twenty-first century. Statistics reveal that more Internet users are now located in Asia (China and India account for more than 1.1 billion users) than in Europe and North America. Today, more than 70 percent of Internet users – some 2.5 billion people – interact on social media. The advent of the Internet has generated countless opportunities to learn from, and connect with, humans from across the globe. The borderless nature of this digital realm has opened space for millions of individuals to develop communities of interest and express their ideas and identity. Large Internet communities have formed around common interests like sports, cats and dogs, or environmental protection. The Internet is reshaping human lives, social bonds and politics, and challenging traditional, state-centric ideas about human interactions across the globe.32 As the Internet and broader cyberspace become ever more prominent in the daily routines of billions of humans, vulnerability to cyberattacks and accidental disruptions has expanded dramatically.

The rise of cyber threats

The rise of cyber threats is directly linked to the growing complexity of computers and the scope of human activities in cyberspace, from research and military affairs to economic and social activities.33 Given that the US government played a central role in the early development of computers and computer networks, the pre-history of cyber threats is American-oriented. Michael Warner, the historian of the US Cyber Command, distinguishes four successive insights that marked the history of national cybersecurity in the United States. Early US concerns about cyber threats can be traced back to the practice of multiprogramming, which broadened access to the information stored on computers. Warner finds that Congress started expressing concerns about this practice in the 1960s when government officials realized that computers could spill sensitive data (first insight). These concerns were relatively limited in the early days of networking when the overall number of ARPANET users was in the hundreds. In 1972, US intelligence agencies tested the security of their networks and found that a number of sensitive databases could be accessed from a single computer. The second key insight reached by the US government was that sensitive government data could be stolen, perhaps even manipulated, from a single point of access. The threats posed by such cyber intrusion rose to a new level when computers became essential to modern weapon systems and military decision-making. The networking of computers, and the open architecture that characterized internetworking, created vulnerabilities in government and military systems which could, for example, increase the risk that adversaries would disrupt military command and control systems used for missile warning. The threat of intrusion was sufficiently serious to push President Reagan to sign a National Security Decision Directive warning about the risk of “interception, unauthorized electronic access, and related forms of technical exploitation” to telecommunications and automated information processing systems. The directive put the National Security Agency (NSA) in charge of monitoring all “government telecommunications systems and automated information systems.”34

A number of incidents confirmed growing US government concern in the following years. In 1986, a hacker penetrated sensitive systems connecting computers at the Lawrence Berkeley National Laboratory and MITRE Corporation. This remote attack relied on the ARPANET and MILNET to infiltrate dozens of computers at the Pentagon and a number of other military bases. Clifford Stoll, a systems administrator at Lawrence Berkeley Lab, first identified the attack and began to investigate its origins with the help of the Federal Bureau of Investigation (FBI) and the telecommunications company AT&T. The subsequent investigation revealed that hackers had sold the information they collected on the US networks to the Soviet Committee for State Security (the KGB). Stoll subsequently wrote a book that publicized what is often considered to be the first publicly known case of cyberespionage.35 In 1988, malicious code spread on the Internet and slowed thousands of computers used by US government and private organizations. This malware, nicknamed the Morris worm, demonstrated the inherent vulnerability of computer networks. The ability of cyber threats to spread across networks reinforced the need for cooperation between users. The effects of the Morris worm prompted the Defense Advanced Research Projects Agency (ARPA changed its name to DARPA in 1972) to establish a Computer Emergency Response Team (CERT) in order to coordinate information and responses to such computer vulnerabilities.

The public debate on cyber threats took a new turn in the early 1990s when experts started to discuss the military implications of cyberattacks. The public debate on cyber war, which we will examine in chapter 6, can be traced back to the early 1990s as analysts forecasted what war would look like in the twenty-first century.36 At the time, cyber war was perceived as a form of information warfare, which sought to decapitate and exploit the command and control structure of the enemy forces. Cyberattacks provided new, electronic means to disrupt computer systems or corrupt the data they hosted. Computers, Warner notes, had become a weapon of war (third insight). The inclusion of computer attack in the US military arsenal would soon lead to a fourth insight: other countries may utilize computers for a similar purpose.37

In 1997, the US DoD ran an exercise codenamed ELIGIBLE RECEIVER, and found that a moderately sophisticated adversary could inflict considerable damage on sensitive US government networks. During this exercise, a small team of government hackers working at the NSA attacked the computer systems of multiple US military commands, gained administrative access to them, tampered with email messages, and disrupted operational systems. Government networks seemed wide open to electronic attacks. Shortly after this exercise, the US government suffered a series of real attacks that confirmed the existence of significant weaknesses. In February 1998, a series of cyberattacks targeted DoD unclassified computer networks at multiple Air Force bases, the National Aeronautics and Space Administration (NASA), and federal laboratories associated with the military. This operation, codenamed SOLAR SUNRISE, showed that the DoD detection systems were insufficient to protect government networks against unwanted cyber intrusions. Worse still, investigations into the attacks revealed that the attackers were not a foreign intelligence service, but two California teenagers who had been directed by Israeli hacker Ehud Tenenbaum.38

The rise of cyber threats has affected not only government networks and capabilities but also society. Originally, hackers were computer geeks motivated by curiosity and entertainment. The term “hacking” itself initially meant playing with machines, and hackers sought to demonstrate their aptitude for programming. A 1981 New York Times article describes how “skilled, often young, computer programmers” would “probe the defenses of a computer system, searching out the limits and the possibilities of the machine.”39 Computer industry pioneer Bill Gates, the founder of Microsoft, revealed on the BBC that he hacked into the computer system of his school as a teen so that he could attend classes with mostly girls.40 Other hackers were driven by more nefarious purposes. In 1981, a hacker who used the nickname Captain Zap was convicted of breaking into AT&T computers and changing the company’s billing system so that customers received discounted rates during business hours.

The threat hackers posed to the corporate world, and by extension to the economy, generated increasing media coverage in the 1980s, eventually drawing the attention of legislators and entrepreneurs.41 The US Congress passed the Computer Fraud and Abuse Act in 1986 to prohibit unauthorized access to federal computers and trafficking in computer passwords. A few years later, the Parliament of the United Kingdom passed the Computer Misuse Act in response to a 1988 court case targeting two hackers who had gained unauthorized access to British Telecom’s viewdata service (an early teletext network). This act, and the growing scope of cyber threats, inspired a number of other countries, from Ireland to Canada, to draft legislation on cybercrime in the following years.42

In the 1990s, concern with cybersecurity deepened as it became a security issue of societal proportions. Most of the cyber threats that are commonly known today – viruses, worms, trojans, denial-of-service (DoS) attacks – emerged at the same time that the Internet spread and computer networking became an increasingly prominent part of modern life. The rise of the World Wide Web diversified the uses of the Internet and expanded the scope of cyber threats. The growth of e-commerce, for example, created new opportunities for criminals to profit online. The possibility of making online payments also led to new types of scams relying, for example, on email. Confronted with a rise in cyber threats, the corporate world developed new answers. The computer security company McAfee was established in 1987, and released its VirusScan software the same year. An anti-virus software industry developed in the following years to serve not only the government and private sectors, but also users accessing the Internet from home.43

Critical infrastructure and key sectors of modern society such as agriculture, banking, healthcare, transportation, water and power rely on computer systems that supervise industrial processes and control data streams (known as supervisory control and data acquisition, or SCADA, systems). In 2007, the Idaho National Laboratory ran an experiment, the Aurora Generator Test, demonstrating that a computer program could disrupt a diesel generator used as a part of an electric grid, and cause it to explode.44 Some experts fear that such an explosion could generate a cascading failure of an entire power grid. Growing concern about cyberattacks on critical infrastructure has pushed a number of governments to devise strategies to protect the cyber infrastructure on which society relies. The 2003 US “National Strategy to Secure Cyberspace” noted that “threats in cyberspace have risen dramatically” and emphasized the need for public–private engagement to secure cyberspace.45 Since critical infrastructure is privately owned in most advanced countries, working across the public–private divide is essential to ensure national cybersecurity. As such, dozens of countries have adopted a similar approach to their national cybersecurity and developed specific organizations and strategies to counter cyber threats to the public and private sectors.46

A global security issue

The dawn of the twenty-first century saw the advent of increasingly sophisticated attacks and malware that spread around the world. The internationalization of cyberspace transformed cybersecurity into a global security issue. While most cyberattacks came from the United States and Europe in the 1990s, by the early 2000s they had become a truly international phenomenon. In 2000, the Love Bug or “ILOVEYOU” computer worm originated in the Philippines and spread across the World Wide Web (initially relying on an email attachment). The virus overwrote files hosted by tens of millions of computers, causing an estimated loss of $10 billion in work hours.47 The Zotob worm originated in Morocco in August 2005, and caused trouble at CNN, the New York Times, the US Senate, and the Centers for Disease Control and Prevention in the United States.48

Research shows the increasingly global character of cyber threats. The Center for Strategic and International Studies (CSIS), a renowned think tank based in Washington DC, has compiled a list of significant cyber incidents since 2006. While this list does not represent the whole spectrum of cyber threats, it illustrates their growing scope. The current list (2018) compiles 296 cyber incidents targeting government agencies, defense and high-tech companies, or involving economic crimes with losses of more than a million dollars. Figure 1.4 shows a rise in the number of significant cyber incidents since 2006, and figure 1.5 shows the diverse geographic spread of the attacks (by the continent where victims are located). Most of the significant cyber incidents identified by CSIS researchers have targeted victims in North America, Europe and Asia. This should come as no surprise since the penetration rate of the Internet is highest on these three continents.


Figure 1.4 The growth of significant cyber incidents, 2006–2017 (CSIS)


Figure 1.5 Geographic spread of significant cyber incidents by continent, 2006–2017 (CSIS)

In the last decade, the most sophisticated threats, generally developed by well-resourced state actors, have affected computer systems across the public–private divide. A widely consulted report released in 2013 by cybersecurity company Mandiant exposed how one of China’s cyber espionage units stole “hundreds of terabytes of data from at least 141 organizations across a diverse set of industries beginning as early as 2006”.49 This Advanced Persistent Threat (APT) targeted intellectual property – the bedrock of the modern economy – in a number of sectors, including information technology, financial services, construction and manufacturing, the chemical and energy industries, and aerospace, to name a few. Altogether, victims were located in 15 countries ranging from the United States and Canada, to Belgium, France and the United Kingdom, as well as Israel, the United Arab Emirates, India, Japan, Taiwan and South Africa. The fact that a company authored this widely discussed report further illustrates the development of a robust market for cybersecurity, which serves government as well as private-sector clients.

Given the global scope of cyberspace and cyber threats (see figure 1.5), social scientific research on cybersecurity should strive to consider cases beyond the United States. While the United States has historically played a leading role in the development of computers and internetworking, and thus continues to attract many cyberattacks, cybersecurity is truly a global security concern. This book includes a number of case studies and examples from other countries, as well as non-state actors, to reflect this global scope. Cybersecurity is a global problem that requires a global approach. Yet much of the physical layer of cyberspace is located, and much of the governance of security threats takes place, at the nation-state level, and national governments have developed the most advanced offensive and defensive capabilities in cyberspace. As a result, multiple levels of analysis (individual, organizational, societal, national and global) can help in understanding cybersecurity.

Discussion questions

1. In what ways has the development of cyberspace affected contemporary threats to cybersecurity?

2. What are the key turning points in the history of cybersecurity?

3. Read the latest CSIS list of significant cyber incidents and pick three events that caught your attention. What is the significance of these cyber incidents?

Further resources

Internet history timeline, see: www.computerhistory.org/internethistory.

Paul E. Ceruzzi, Computing: A Concise History (Cambridge, MA: MIT Press, 2012).

Werner Herzog, “Lo and Behold: Reveries of the Connected World,” video documentary (available on Netflix).

Barry M. Leiner et al., “The Past and Future History of the Internet,” Communications of the ACM 40/2 (1997): 102–8.

Michael Warner, “Cybersecurity: A Pre-history,” Intelligence and National Security 27/5 (2012): 781–99.

Notes