You ask what I am? . . . I am all the people who thought of me and planned me and built me and set me running . . . I am all the things they wanted to be and perhaps could not be, so they built a great child, a wondrous toy.
—RAY BRADBURY, I Sing the Body Electric!
“WHAT IS INTERNET, ANYWAY?”
It was 1994, and Today show cohost Bryant Gumbel was struggling on live TV to read the “internet address” for NBC’s newly unveiled email hotline. In confusion, the anchorman turned from the teleprompter. Was “Internet” something you could mail letters to? he asked. Cohost Katie Couric didn’t know. Eventually, an offscreen producer bailed them out. “Internet is the massive computer network,” he shouted, “the one that’s becoming really big now.”
Today, their exchange seems quaint. Roughly half the world’s population is now linked by this computer network. It is not just “really big”; it is the beating heart of international communication and commerce. It supports and spreads global news, information, innovation, and discovery of every kind and in every place. Indeed, it has become woven into almost everything we do, at home, at work, and, as we shall see, at war. In the United States, not only is internet usage near-universal, but one-fifth of Americans now admit that they essentially never stop being online. The only peoples who have remained truly ignorant of the internet’s reach are a few un-networked tribes of the Amazon and New Guinea. And for them, it is just a question of time.
But using the internet isn’t really the same as understanding it. The “internet” isn’t just a series of apps and websites. Nor is it merely a creature of the fiber-optic cable and boxy servers that provide its backbone. It is also a galaxy of billions of ideas, spreading through vast social media platforms that each pulse with their own entropic rhythm. At the same time, it is a globe-spanning community vaster and more diverse than anything before it, yet governed by a handful of Silicon Valley oligarchs.
As revolutionary as the internet may seem, it is also bound by history. Its development has followed the familiar patterns etched by the printing press, telegraph, television, and other communications media before it. To truly understand the internet—the most consequential battlefield of the twenty-first century—one must understand how it works, why it was made, and whom it has empowered.
In other words, “What is Internet, anyway?”
The answer starts with a memo that few read at the time, for the very reason that there was no internet when it was written.
“In a few years, men will be able to communicate more effectively through a machine than face to face. That is a rather startling thing to say, but it is our conclusion.”
That was the prediction of J.C.R. Licklider and Robert W. Taylor, two psychologists who had seen their careers pulled into the relatively new field of computer science that had sprung up during the desperate days of World War II. The “computers” of their day were essentially giant calculators, multistory behemoths of punch cards and then electric switches and vacuum tubes, applied to the hard math of code breaking, nuclear bomb yields, and rocket trajectories.
That all changed in 1968, the year Licklider and Taylor wrote a paper titled “The Computer as a Communication Device.” It posited a future in which computers could be used to capture and share information instead of just calculating equations. They envisioned not just one or two computers linked together, but a vast constellation of them, spread around the globe. They called it the Intergalactic Computer Network.
Reflecting their past study of the human mind, Licklider and Taylor went even further. They prophesied how this network would affect the people who used it. It would create new kinds of jobs, build new “interactive communities,” and even give people a new sense of place, which Licklider and Taylor described as being “on line.” So long as this technology was made available to the masses, they wrote, “surely the boon to humankind would be beyond measure.”
The information that could be transmitted through this prospective network would also be fundamentally different from any communication before. It would be the most essential form of information itself. The “binary digit,” or “bit,” had first been proposed in 1948 by Claude Shannon of Bell Labs. The bit was the smallest possible unit of data, existing in either an “on” or “off” state. By stringing bits together, complex instructions could be sent through computers, with perfect accuracy. As acclaimed physicist John Archibald Wheeler observed, breaking information into bits made it possible to convey anything. “Every particle, every field of force, even the space-time continuum itself,” Wheeler wrote, “derives its function, its meaning, its very existence, from answers to yes-no questions, binary choices, bits.”
By arranging bits into “packets” of information and establishing a system for sending and receiving them (packet switching), it was theoretically possible for computers to relay instructions instantly, over any conceivable distance. With the right software, the two men foretold, these bits could be used to query a database, type a word, even (in theory) generate an image or display video. Years before the internet would see its first proof of concept, its theoretical foundation had been laid. Not everyone recognized the use for such a system. When a team of researchers proposed a similar idea to AT&T in 1965, they were bluntly rejected. “Damn it!” exclaimed one executive. “We’re not going to set up a competitor to ourselves!”
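To make the mechanics concrete, here is a minimal sketch in Python (a modern illustration, not anything the ARPANET engineers wrote) of the core idea: a message is chopped into small, numbered packets that can travel independently and arrive in any order, and the receiver uses the sequence numbers to reassemble the original.

```python
# A toy illustration of packet switching: split a message into small,
# numbered packets, let them arrive in any order, then reassemble them.
# This is a conceptual sketch only, not historical ARPANET code.
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int        # sequence number, so the receiver can restore order
    payload: bytes  # a small chunk of the original message

def packetize(message: str, size: int = 8) -> list[Packet]:
    data = message.encode("utf-8")
    count = (len(data) + size - 1) // size
    return [Packet(i, data[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets: list[Packet]) -> str:
    ordered = sorted(packets, key=lambda p: p.seq)
    return b"".join(p.payload for p in ordered).decode("utf-8")

if __name__ == "__main__":
    packets = packetize("LOGIN: the first word (almost) ever sent between networked computers.")
    random.shuffle(packets)      # packets may take different routes and arrive out of order
    print(reassemble(packets))   # the receiver restores the original message
```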
Luckily, Licklider and Taylor were in a position to help make their vision of computers exchanging bits of information a reality. The two were employees of the Pentagon’s Advanced Research Projects Agency (ARPA). After the surprise of the Sputnik space satellite launch in 1957, ARPA was established in 1958 to maintain U.S. parity with Soviet science and technology research. For the U.S. military, the potential of an interconnected (then called “internetted”) communications system was that it could banish its greatest nightmare: the prospect of the Soviet Union being able to decapitate U.S. command and control with a single nuclear strike. But the real selling point for the other scientists working for ARPA was something else. Linking up computers would be a useful way to share what was then an incredibly rare and costly commodity: computer time. A network could spread the load and make it easier on everyone. So a project was funded to transform the Intergalactic Computer Network into reality. It was called ARPANET.
On October 29, 1969, ARPANET came to life when a computer at UCLA was linked to a computer at the Stanford Research Institute. The bits of information on one machine were broken down into packets and then sent to another machine, traveling across 350 miles of leased telephone wire. There, the packets were re-formed into a message. As the scientists excitedly watched, one word slowly appeared on the screen: “LO.”
Far from being the start of a profound statement such as “Lo and behold,” the message was supposed to read “LOGIN.” But the system crashed before the transmission could be completed. Fittingly, the first message in internet history was a miscommunication.
Despite the crash, the ARPANET team had accomplished something truly historic. They hadn’t just linked two computers together. They had finished a race that had spanned 5,000 years and continually reshaped war and politics along the way.
The use of technology to communicate started in ancient Mesopotamia sometime around 3100 BCE, when the first written words were pressed into clay tablets. Soon information would be captured and communicated via both the most permanent forms of marble and metal and in the more fleeting forms of papyrus and paper.
But all this information couldn’t be transferred easily. Any copy had to be painstakingly done by hand, usually with errors creeping in along the way. A scribe working at his maximum speed, for instance, could only produce two Bibles a year. This limitation made information, in any form, the scarcest of commodities.
The status quo stood for more than 4,000 years, until the advent of the printing press. Although movable type was first invented in China, the Chinese script—with its roughly 80,000 characters—was too cumbersome to reproduce at scale. Instead, the printing press revolution began in Europe around 1438, thanks to a former goldsmith, Johannes Gutenberg, who began experimenting with movable type. By the mid-1450s, he was peddling his mass-produced Bibles across Germany and France. Predictably, the powers of the day tried to control this disruptive new technology. The monks and scribes, who had spent decades honing their hand-copying techniques, called for rulers to ban it, arguing that mass production would strangle the “spirituality” of the copying process. Their campaign failed, however, when, short on time and money, they resorted to printing their own pamphlet on one of Gutenberg’s new inventions. Within a century, the press became both accessible and indispensable. Once a rare commodity, books—some 200 million of them—were now circulating widely in Europe.
In what would become a familiar pattern, the new technology transformed not just communications but war, politics, and the world. In 1517, a geopolitical match was struck when a German monk named Martin Luther wrote a letter laying out 95 problems in the Catholic Church. Where Luther’s arguments might once have been ignored, the printing press allowed his ideas to reach well beyond the bishop to whom he penned the letter. By the time the pope heard about the troublesome monk and sought to excommunicate him, Luther had reproduced his 95 complaints in 30 different pamphlets and sold 300,000 copies. The result was the Protestant Reformation, which would fuel two centuries of war and reshape the map of Europe.
The technology would also create new powers in society and place the old ones under unwelcome scrutiny. In 1605, a German printer named Johann Carolus found a way to use his press’s downtime by publishing a weekly collection of “news advice.” In publishing the first newspaper, Carolus created a new profession. The “press” sold information itself to customers, creating a popular market model that had never before existed. But in the search to profit from news, truth sometimes fell by the wayside. For instance, the New-England Courant, among the first American newspapers, published a series of especially witty letters by a “Mrs. Silence Dogood” in 1722. The actual writer was a 16-year-old apprentice named Benjamin Franklin, making him—among many other things—the founding father of fake news in America.
Yet the spread of information, true or false, was limited by the prevailing transportation of the day. In ancient Greece, the warrior Pheidippides famously ran 25 miles from Marathon to Athens to deliver word of the Greeks’ victory over the Persian army. (The 26.2-mile distance of the modern “marathon” dates from the 1908 Olympics, where the British royal family insisted on extending the route to meet their viewing stands.) It was a race to share the news that would literally kill him. When Pheidippides finally sprinted up the steps of Athens’s grand Acropolis, he cried out to the anxious city leaders gathered there, “Nikomen! We are victorious!” And then, as the poet Robert Browning would write 2,400 years later, he collapsed and died, “joy in his blood bursting his heart.”
From the dawn of history, any such messages, important or not, could only be delivered by hand or word of mouth (save the occasional adventure with carrier pigeons). This placed a sharp upper bound on the speed of communication. The Roman postal service (cursus publicus, “the public way”), established at the beginning of the first millennium CE, set a record of roughly fifty miles per day that would stand essentially unchallenged until the advent of the railroad. News of world-changing events—the death of an emperor or start of a war—could only travel as fast as a horse could gallop or a ship could sail. As late as 1815, thousands of British soldiers would be mowed down at the Battle of New Orleans simply because tidings of a peace treaty ending the War of 1812—signed two weeks earlier—had yet to traverse the Atlantic.
The world changed decisively in 1844, the year Samuel Morse successfully tested his telegraph (from the Greek words meaning “far writer”). By harnessing the emerging science of electricity, the telegraph ended the tyranny of distance. It also showed the important role of government in any communication technology now able to extend across political boundaries. Morse spent years lobbying the U.S. Congress for the $30,000 needed to lay the thirty-eight miles of wire between Washington, DC, and Baltimore for his first public test. Critics suggested the money might be better spent testing hypnotism as a means of long-distance communication. Fortunately, the telegraph won—by just six votes.
This was the start of a telecommunications revolution. By 1850, there were 12,000 miles of telegraph wire and some 20 telegraph companies in the United States alone. By 1880, there would be 650,000 miles of wire worldwide—30,000 miles under the ocean—that stretched from San Francisco to Bombay. This was the world that Morse’s brother had prophesied in a letter written while the telegraph was still under development: “The surface of the earth will be networked with wire, and every wire will be a nerve. The earth will become a huge animal with ten million hands, and in every hand a pen to record whatever the directing soul may dictate!”
Morse would be applauded as “the peacemaker of the age,” the inventor of “the greatest instrument of power over earth which the ages of human history have revealed.” As observers pondered the prospect of a more interconnected world, they assumed it would be a gentler one. President James Buchanan captured the feeling best when marking the laying of the first transatlantic cable between the United States and Britain, in 1858. He expressed the belief that the telegraph would “prove to be a bond of perpetual peace and friendship between the kindred nations, and an instrument designed . . . to diffuse religion, liberty, and law throughout the world.” Within days, that transatlantic cable of perpetual peace was instead being used to send military orders.
Like the printing press before it, the telegraph quickly became an important new tool of conflict, which would also transform it. Beginning in the Crimean War (1853–1856), broad instructions, traveling weeks by sea, were replaced—to the lament of officers in the field—by micromanaging battle orders sent by cable from the tearooms of London to the battlefields of Russia. Some militaries proved more effective in exploiting the new technology than others. In the Wars of German Unification (1864–1871), Prussian generals masterfully coordinated far-flung forces to the bafflement of their foes, using real-time communications by telegraph wire to replace horseback couriers. As a result, the telegraph also spurred huge growth in war’s reach and scale. In the American Civil War (1861–1865), Confederate and Union soldiers, each seeking an edge over the other, laid some 15,000 miles of telegraph wire.
The telegraph also reshaped the public experience and perception of conflict. One journalist marveled, “It gives you the news before the circumstances have had time to alter . . . A battle is fought three thousand miles away, and we have the particulars while they are taking the wounded to the hospital.”
This intimacy could be manipulated, however. A new generation of newspaper tycoons arose, who turned sensationalism into an art form, led by Harvard dropout turned newspaper baron William Randolph Hearst. His “yellow journalism” (named for the tint of the comics in two competing New York dailies, Hearst’s New York Journal and Joseph Pulitzer’s New York World) was the kind of wild rumormongering American readers couldn’t get enough of—and that helped spark the Spanish-American War of 1898. When one of his illustrators begged to return home from Spanish-controlled Cuba because nothing was happening, Hearst cabled back: “Please remain. You furnish the pictures and I’ll furnish the war.” Concern over the issue of telegraphed “fake news” grew so great that the St. Paul Globe even changed its motto that year: “Live News, Latest News, Reliable News—No Fake War News.”
The electric wire of the telegraph, though, could only speak in dots and dashes. To use them required not just the infrastructure of a telegraph office, but a trained expert to operate the machine and translate its coded messages for you. Alexander Graham Bell, an amateur tinkerer whose day job was teaching the deaf, changed this with the telephone in 1876. Sending sound by wire meant users could communicate with each other, even in their offices and homes. Within a year of its invention, the first phone was put in the White House. The number to call President Rutherford B. Hayes was “1,” as the only other phone line linked to it was at the Treasury Department. The telephone also empowered a new class of oligarchs. Bell’s invention was patented and soon monopolized by Bell Telephone, later renamed the American Telephone and Telegraph Company (AT&T). Nearly all phone conversations in the United States would be routed through this one company for the next century.
Telegraphs and phones had a crucial flaw, though. They shrank the time and simplified the means by which a message could travel a great distance, but they did so only between two points, linked by wire. Guglielmo Marconi, a 20-year-old Irish-Italian tinkering in a secret lab in his parents’ attic, would be the first to build a working “wireless telegraphy” system, in 1894.
Marconi’s radio made him a conflicted man. He claimed that radio would be “a herald of peace and civilization between nations.” At the same time, he aggressively peddled it to every military he could. He sold it to the British navy in 1901 and convinced the Belgian government to use it in the brutal colonization of the Congo. In the 1904–1905 Russo-Japanese War, both sides used Marconi radios.
Radio’s promise, however, went well beyond connecting two points across land or sea. By eliminating the need for wire, radio freed communications in a manner akin to the printing press. One person could speak to thousands or even millions of people at once. Unlike the telegraph, which conveyed just dots and dashes, radio waves could carry the human voice and the entire musical spectrum, turning them into the conveyor of not only mass information but also mass entertainment.
The first radio “broadcast” took place in 1906, when the Canadian-born engineer Reginald Fessenden played “O Holy Night” on his violin. By 1924, there were an estimated 3 million radio sets and 20 million radio listeners in the United States alone. Quite quickly, radio waves collided with politics. Smart politicians began to realize that radio had shattered the old political norms. Speeches over the airwaves became a new kind of performance art crossed with politics. The average length of a political campaign speech in the United States fell from an hour to just ten minutes.
And there was no better performer than Franklin Delano Roosevelt, elected president in 1932. He used his famous Fireside Chats to reach directly into the homes of millions of citizens. (After the December 7, 1941, Pearl Harbor attack, four-fifths of American households listened to his speech live.) In so doing, he successfully went over the heads of the political bosses and newspaper editors who fought to deny him a third and fourth term. So powerful were FDR’s speeches that on the night of an important speech intended to rally listeners against Germany, the Nazis launched a heavy bombing raid against London to try to divert the news.
But radio also unleashed new political horrors. “It would not have been possible for us to take power or to use it in the ways we have without the radio,” said Joseph Goebbels, who himself was something new to government, the minister of propaganda for Nazi Germany. Goebbels employed nearly a thousand propagandists to push Adolf Hitler’s brutal, incendiary, enrapturing speeches. Aiding the effort was a giveaway with a catch: German citizens were gifted special radios marked with swastikas, which could only receive Nazi-broadcast frequencies.
Just like the telegraph, radio would be used to foment war and become a new tool for fighting it. On the eve of the 1939 German invasion of Poland, Hitler told his generals, “I will provide a propagandistic casus belli. Its credibility doesn’t matter. The victor will not be asked whether he told the truth.” The following six years of World War II saw not just tanks, planes, and warships linking up by radio, but both sides battling back and forth over the airwaves to reshape what the opposing population knew and thought. As Robert D. Leigh, director of the Foreign Broadcast Intelligence Service, testified before Congress in 1944:
Around the world at this hour and every hour of the 24, there is a constant battle on the ether waves for the possession of man’s thoughts, emotions, and attitudes—influencing his will to fight, to stop fighting, to work hard, to stop working, to resist and sabotage, to doubt, to grumble, to stand fast in faith and loyalty . . . We estimate that by short wave alone, you as a citizen of this radio world are being assailed by 2,000 words per minute in 40–45 different languages and dialects.
Yet the reach and power of radio was soon surpassed by a technology that brought compelling imagery into broadcasts. The first working television in 1925 showed the face of a ventriloquist’s dummy named Stooky Bill. From these humble beginnings, television soon rewired what people knew, what they thought, and even how they voted. By 1960, television sets were in nine of ten American homes, showing everything from The Howdy Doody Show to that infamous presidential debate between Richard M. Nixon and John F. Kennedy, won by the more “telegenic” candidate. In the United States, television forged a new sense of cultural identity. With a limited number of broadcasts to choose from, millions of families watched the same events and news anchors; they saw the same shows and gossiped eagerly about them the next day.
Television also changed what military victory and defeat looked like. In 1968, the Vietcong launched the Tet Offensive against South Vietnam and its American allies. The surprise operation quickly turned into a massive battlefield failure for the attackers. Half of the 80,000-strong Vietcong were killed or wounded; they captured little territory and held none of what they did. But that wasn’t what American families watching in their dens back home saw. Instead, 50 million viewers saw clips of U.S. Marines in disarray; scenes of bloody retribution and bodies stacked deep. The most dramatic moment came when the U.S. embassy in Saigon was put under siege. Although the main building was never penetrated and the attackers were quickly defeated, the footage was mesmerizing—and, for many, deeply troubling.
Unfolding across a hundred South Vietnamese cities and towns, Tet was the biggest battle of the Vietnam War. But the war’s real turning point came a month later and 8,000 miles away.
Legendary journalist Walter Cronkite was the anchor of the CBS Evening News and deemed “the most trusted man in America.” In a three-minute monologue, Cronkite declared that the Vietnam War was never going to be the victory the politicians and generals had promised. In the White House, President Lyndon B. Johnson watched. Forlorn, he purportedly told his staff, “If I’ve lost Cronkite, I’ve lost Middle America.” Such was the power of moving images and sound, interspersed with dramatic narration and beamed into tens of millions of households. Not only did video provide a new level of emotional resonance, but it was also hard to dispute. When the government claimed one thing and the networks showed another, the networks usually won.
As television ranged into even wider territory with the advent of real-time satellite coverage, the story seemed complete. From words pressed into Mesopotamian clay to broadcasts from the moon, the steady march of technological innovation had overcome the obstacles of time and distance. With each step, communications technology had altered the politics of the day, subverting some powers while crowning new ones in their place. Despite its originators’ unfailing optimism about social promise and universal peace, each technology had also been effectively turned to the ends of war.
But one important limitation still bound all these technologies. Users could form a direct link, conversing one-to-one, as with the telegraph or telephone. Or a single voice could reach many at once, as with the printing press or radio and television broadcasts. Each technology, however, was locked into one mode or the other.
No technology could do it all—until ARPANET.
The first computer network grew quickly. Within weeks of the computers at UCLA and the Stanford Research Institute connecting in October 1969, a computer in Santa Barbara and then another in Utah joined the party. By 1971, fifteen university computer labs had been stitched together. In 1973, the network made its first international connection, incorporating computers at the Norwegian Seismic Array, which tracked earthquakes and nuclear tests.
With the bold idea of computer communication now proven workable, more and more universities and labs were linked together. But instead of joining the ARPANET, many forged their own mini-networks. One connected computers in Hawaii (delightfully called the ALOHAnet and MENEHUNE); another did so in Europe. These mini-networks presented an unexpected problem. Rather than forming one “galactic” network, computer communication was splintering into a bunch of little clusters. Worse, each network had its own infrastructure and governing authority, setting its own rules for everything from how to maintain the network to how to communicate within it. That meant the networks couldn’t easily link up. Unless a common protocol could be established to govern a “network of networks” (or “internet”), the spread of information would be held back. This is when Vint Cerf entered the scene.
While figures like J.C.R. Licklider and Robert W. Taylor had conceived ARPANET, Cerf is rightfully known as the “father of the internet.” As a teenager, he learned to code computer software by writing programs to test rocket engines. As a young researcher, he was part of the UCLA team that made the first connection on the Pentagon’s new network.
Recognizing that the problem of compatibility would keep computerized communication from ever scaling, Cerf set out to find a solution. Working with his friend Robert Kahn, he designed TCP/IP (Transmission Control Protocol/Internet Protocol), an adaptable framework that could track and regulate the transmission of data across an exponentially expanding network. Essentially, it is what allowed the original ARPANET to bind together all the mini-networks at universities around the world. It remains the backbone of the internet to this day.
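For a sense of what Cerf and Kahn’s protocol suite looks like from a programmer’s chair today, here is a minimal sketch using Python’s standard socket module. The two endpoints simply send and receive bytes over a local TCP/IP connection, while the operating system’s TCP/IP implementation handles the packet tracking, ordering, and retransmission underneath. The little echo server and the test message are, of course, only illustrative.

```python
# A minimal sketch of two programs exchanging bytes over TCP/IP.
# Python's standard socket module exposes the protocol directly; the OS
# handles packetizing, ordering, and retransmission beneath this interface.
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()   # wait for one TCP connection
    with conn:
        data = conn.recv(1024)           # read the client's bytes
        conn.sendall(data.upper())       # reply over the same connection

if __name__ == "__main__":
    # Listen on the local machine on an ephemeral port (illustration only).
    server = socket.create_server(("127.0.0.1", 0))
    port = server.getsockname()[1]
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    # The "client": open a TCP/IP connection, send a message, read the reply.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"lo and behold")
        print(client.recv(1024).decode())   # -> LO AND BEHOLD
```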
Over the following years, Cerf moved to work at ARPA and helped set many of the rules and procedures for how the network would evolve. He was aware of the futuristic visions that his predecessors had laid out. But it was hard to connect that vision to what was still just a way for scientists to share computing time. Considerations of the internet’s social or political impact seemed the stuff of fantasy.
This changed one day in 1979 when Cerf logged on to his workstation to find an unopened message from the recently developed “electronic mail” system. Because more than one person was using each computer, the scientists had conceived of “e-mail” (now commonly styled “email”) as a way to share information, not just between computers but also from one person to another. But, just as with regular mail, they needed a system of “addresses” to send and receive the messages. The “@” symbol was chosen as a convenient “hack” to save typing time and scarce computer memory.
The message on Cerf’s screen wasn’t a technical request, however. The email subject was “SF-lovers.” And it hadn’t been sent just to him. Instead, Cerf and his colleagues scattered across the United States were all asked to respond with a list of their favorite science fiction authors. Because the message had gone out to the entire network, everybody’s answers could then be seen and responded to by everybody else. Or users could send their replies to just one person or subgroup, generating scores of smaller discussions that eventually fed back into the whole.
Over forty years later, Cerf still recalls the moment he realized the internet would be something more than every other communications technology before it. “It was clear we had a social medium on our hands,” he said.
The thread was a hit. After SF-lovers came Yumyum, a mailing list to debate the quality of restaurants in Silicon Valley. Soon the network was also used to share not just opinions but news about both science and science fiction, such as plans for a movie revival of the 1960s TV show Star Trek.
The U.S. military budgeters wanted to ban all this idle chatter from their expensive new network. However, they relented when engineers convinced them that the message traffic was actually a good stress test for ARPANET’s machinery. Chain letters and freewheeling discussions soon proliferated across the network. ARPANET’s original function had been remote computer use and file transfer, but soon email was devouring two-thirds of the available bandwidth. No longer was the internet simply improving the transfer of files from one database to another. Now it was creating those “interactive communities” that Licklider and Taylor had once envisioned, transforming what entire groups of people thought and knew. Soon enough, it would even change how they spoke to each other.
Perhaps no one—engineers included—understood by how much. At precisely 11:44 A.M. EST on September 19, 1982, computer scientist Scott Fahlman changed history forever. In the midst of an argument over a joke made on email, he wrote:
I propose that [sic] the following character sequence for joke markers:
:-)
Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use
:-(
And so the humble emoticon was born. But it illustrated something more.
For all its promise, ARPANET was not the internet as we know it. It was a kingdom ruled by the U.S. government. And, as shown by the formal creation of the emoticon in the midst of an argument among nerds, its population was mostly PhDs in a handful of technical fields. Even the early social platforms these computer scientists produced were just digital re-creations of old and familiar things: the postal service, bulletin boards, and newspapers. The internet remained in its infancy.
But it was growing fast. By 1980, there were 70 institutions and nearly 5,000 users hooked up to ARPANET. The U.S. military came to believe that the computer network its budget was paying for had expanded too far beyond its needs or interests. After an unsuccessful attempt to sell ARPANET to a commercial buyer (for the second time, AT&T said “no thanks”), the government split the internet in two. ARPANET would continue as a chaotic and fast-growing research experiment, while the military would use the new, secure MILNET. For a time, the worlds of war and the internet went their separate ways.
This arrangement also paved the way for the internet to become a civilian—and eventually a commercial—enterprise. The National Science Foundation took over from the Pentagon and moved to create a more efficient version of ARPANET. Called NSFNET, it proved faster by an order of magnitude and brought in new consortiums of users. The 28,000 internet users in 1987 grew to nearly 160,000 by 1989. The next year, the now outdated ARPANET was quietly retired. Vint Cerf was there to deliver the eulogy. “It was the first, and being first, was best, / But now we lay it down to ever rest. / . . . / Of faithful service, duty done, I weep. / Lay down thy packet, now, o friend, and sleep.”
While the internet and the military were ostensibly dividing, other worlds were on the brink of colliding. Back in 1980, the British physicist Tim Berners-Lee had developed a prototype of something called “hypertext.” This was a long-theorized system of “hyperlinks” that could bind digital information together in unprecedented ways. Called ENQUIRE, the system was a massive database where items were indexed based on their relationships to each other. It resembled a very early version of Wikipedia. There was a crucial difference, however. ENQUIRE wasn’t actually part of the internet. The computers running this revolutionary indexing program couldn’t yet talk to each other.
Berners-Lee kept at it. In 1990, he began designing a new index that could run across a network of computers. In the process, he and his team invented much of the digital shorthand still in use today. They wrote a new code to bind the databases together. Hypertext Markup Language (HTML) defined the structure of each item, could display images and video, and, most important, allowed anything to link to anything else. Hypertext Transfer Protocol (HTTP) determined how hypertext was sent between internet nodes. To give it an easy-to-find location, every item was then assigned a unique URI (uniform resource identifier), more commonly known as a URL (uniform resource locator). Berners-Lee dubbed his creation the World Wide Web.
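A brief sketch, using only Python’s standard library, of how those three inventions still interlock: a URL names a page, an HTTP request fetches it, and the HTML that comes back carries the hyperlinks that bind the web together. The address example.org is a placeholder host reserved for documentation, not a site discussed in this chapter.

```python
# URL, HTTP, and HTML working together: parse an address, fetch the page
# over HTTP, then pull out the hyperlinks embedded in the returned HTML.
from urllib.parse import urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag in an HTML document."""
    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs
                              if name == "href" and value)

if __name__ == "__main__":
    url = "http://example.org/"                 # a URL: scheme, host, and path
    print(urlparse(url))                        # the parts a browser reads
    with urlopen(url) as response:              # an HTTP GET request
        html = response.read().decode("utf-8")  # the HTML the server returns
    collector = LinkCollector()
    collector.feed(html)
    print(collector.links)                      # the hyperlinks found on the page
```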
Just as ARPANET had shaped the systems that made online communication possible and Cerf and Kahn’s protocol had allowed the creation of a network of networks that spanned the globe, the World Wide Web—the layer on top that we now call the “internet”—shaped what this communication looked like. Forward-thinking entrepreneurs quickly set to building the first internet “browsers,” software that translated the World Wide Web into a series of visual “pages.” This helped make the internet usable for the masses; it could now be navigated by anyone with a mouse and a keyboard. During this same period, the U.S. government continued investment in academic research and infrastructure development, with the goal of creating an “information superhighway.” The most prominent sponsor of these initiatives was Senator Al Gore, leading to the infamous claim that he “invented” the internet. More accurately, he helped speed its development.
The advent of the World Wide Web matched up perfectly with another key development that mirrored technology past: the introduction of profit-seeking. In 1993, early internet architects gathered to take their biggest step yet: privatizing the entire system and tying independent internet operators—of which there were thousands—into a single, giant network. At the same time, they also took steps to establish a common system of internet governance, premised on the idea that no one nation should control it. In 1995, NSFNET formally closed, and a longstanding ban on online commercial activity was lifted.
The internet took off like a rocket. In 1990, there were 3 million computers connected to the internet. Five years later, there were 16 million. That number reached 360 million by the turn of the millennium.
As with the technologies of previous eras, the internet’s commercialization and rapid growth paved the way for a gold rush. Huge amounts of money were to be made, not just in owning the infrastructure of the network but also in all the new business that sprang from it. Among the earliest to profit were the creators of Netscape Navigator, the easy-to-use browser of choice for three-quarters of all internet users. When Netscape went public in 1995, the company was worth $3 billion by the end of its first day, despite having never turned a profit. At that moment, the internet ceased to be the plaything of academics.
Amid the flurry of new connections and ventures, the parallel world of the internet began to grow so fast that it became too vast for any one person to explore, much less understand. It was fortunate, then, that nobody needed to understand it. The explorers who would catalog the internet’s most distant reaches would not be people but “bots,” special programs built to “crawl” and index the web’s endless expanse. The first bots were constructed by researchers as fun lab experiments. But as millions of users piled online, web search became the next big business. The most successful venture was born in 1996, created by two Stanford graduate students, Larry Page and Sergey Brin. Their company’s name was taken from a mathematical term for the number 1 followed by 100 zeros. “Google” symbolized their idea to “organize a seemingly infinite amount of information on the web.”
As the web continued its blistering growth, it began to attract a radically different user base, one far removed from the university labs and tech enclaves of Silicon Valley. For these new digital arrivals, the internet wasn’t simply a curiosity or even a business opportunity. It was the difference between life and death.
In early 1994, a ragtag force of 4,000 disenfranchised workers and farmers rose up in Mexico’s poor southern state of Chiapas. They called themselves the Zapatista National Liberation Army (EZLN). The revolutionaries occupied a few towns and vowed to march on Mexico City. The government wasn’t impressed. Twelve thousand soldiers were deployed, backed by tanks and air strikes, in a swift and merciless offensive. The EZLN quickly retreated to the jungle. The rebellion teetered on the brink of destruction. But then, twelve days after it began—as the Mexican military stood ready to crush the remnant—the government declared a sudden halt to combat. For students of war, this was a head-scratcher.
But upon closer inspection, there was nothing conventional about this conflict. More than just fighting, members of the EZLN had been talking online. They shared their manifesto with like-minded leftists in other countries, declared solidarity with international labor movements protesting free trade (their revolution had begun the day the North American Free Trade Agreement, or NAFTA, went into effect), established contact with international organizations like the Red Cross, and urged every journalist they could find to come and observe the cruelty of the Mexican military firsthand. Cut off from many traditional means of communication, they turned en masse to the new and largely untested power of the internet.
The gambit worked. Their revolution was joined in solidarity by tens of thousands of liberal activists in more than 130 countries, organizing in 15 different languages. Global pressure to end the small war in Chiapas built swiftly on the Mexican government. Moreover, it seemed to come from every direction, all at once. Mexico relented.
Yet this new offensive didn’t stop after the shooting had ceased. Instead, the war became a bloodless political struggle, sustained by the support of a global network of enthusiasts and admirers, most of whom had never even heard of Chiapas before the call to action went out. In the years that followed, this network would push and cajole the Mexican government into reforms the local fighters hadn’t been able to obtain on their own. “The shots lasted ten days,” lamented the Mexican foreign minister, José Ángel Gurría, in 1995, but “ever since the war has been a war of ink, of written word, a war on the internet.”
Everywhere, there were signs that the internet’s relentless pace of innovation was changing the social and political fabric of the real world. There was the invention of the webcam and the launch of eBay and Amazon; the birth of online dating; even the first internet-abetted scandals and crimes, one of which resulted in a presidential impeachment, stemming from a rumor first reported online. In 1996, Manuel Castells, among the world’s foremost sociologists, made a bold prediction: “The internet’s integration of print, radio, and audiovisual modalities into a single system promises an impact on society comparable to that of the alphabet.”
Yet the most forward-thinking of these internet visionaries wasn’t an academic at all. In 1999, musician David Bowie sat for an interview with the BBC. Rather than promote his albums, Bowie waxed philosophical about technology’s future. The internet wouldn’t just bring people together, he explained; it would also tear them apart. “Up until at least the mid-1970s, we really felt that we were still living under the guise of a single, absolute, created society—where there were known truths and known lies and there was no kind of duplicity or pluralism about the things that we believed in,” the artist once known as Ziggy Stardust said. “[Then] the singularity disappeared. And that I believe has produced such a medium as the internet, which absolutely establishes and shows us that we are living in total fragmentation.”
The interviewer was mystified by Bowie’s surety about the internet’s powers. “You’ve got to think that some of the claims being made for it are hugely exaggerated,” he countered.
Bowie shook his head. “No, you see, I don’t agree. I don’t think we’ve even seen the tip of the iceberg. I think the potential of what the internet is going to do to society, both good and bad, is unimaginable. I think we’re actually on the cusp of something exhilarating and terrifying . . . It’s going to crush our ideas of what mediums are all about.”
“The goal wasn’t to create an online community, but a mirror of what existed in real life.”
It’s a grainy 2005 video of a college-age kid sitting on a sofa in a den, red plastic cup in hand. He’s trying to describe what his new invention is and—more important—what it isn’t. It isn’t going to be just a place to hang out online, a young Mark Zuckerberg explains. It’s going to be a lot more than that.
Zuckerberg was part of the first generation to be born into a world where the internet was available to the masses. By the age of 12, he had built ZuckNet, a chat service that networked his dad’s dental practice with the family computer. Before he finished high school, he took a graduate-level course in computer science. And then one night in 2003, as a 19-year-old sophomore at Harvard, Zuckerberg began an ambitious new project. But he did so with no intention of changing the world.
At the time, each Harvard house had “facebooks” containing student pictures. The students used them as a guide to their new classmates, as well as fodder for dorm room debates about who was hot or not. Originally printed as booklets, the facebooks had recently been posted on the internet. Zuckerberg had discovered that the online version could be easily hacked and the student portraits downloaded. So over a frenzied week of coding, he wrote a program that allowed users to rate which of two randomly selected student portraits was more attractive. He called his masterpiece “Facemash.” Visitors to the website were greeted with a bold, crude proclamation: “Were we let in on our looks? No. Will we be judged on them? Yes.”
Facemash appeared online on a Sunday evening. It spread like wildfire, with some 22,000 votes being cast in the first few hours. Student outrage spread just as quickly. As angry emails clogged his inbox, Zuckerberg was compelled to issue a flurry of apologies. Hauled before a university disciplinary committee, he was slapped with a stern warning for his poor taste and violation of privacy. Zuckerberg was embarrassed—but also newly famous.
Soon after, Zuckerberg was recruited to build a college dating site. He secretly channeled much of his energy in a different direction, designing a platform that would combine elements of the planned dating site with the lessons he’d learned from Facemash. On January 11, 2004, he formally registered TheFacebook.com as a new domain. Within a month, 20,000 students at elite universities around the country signed up, with tens of thousands more clamoring for Facebook to be made available at their schools.
For those fortunate enough to have it, the elegant mix of personal profiles, public postings, instant messaging, and common-friend groups made the experience feel both intimate and wholly unique. Early users also experienced a new kind of feeling: addiction to Facebook. “I’ve been paralyzed in front of the computer ever since signing up,” one college freshman confessed to his student newspaper. That summer, Zuckerberg filed for a leave of absence from Harvard and boarded a plane to Silicon Valley. He would be a millionaire before he set foot on campus again and a billionaire soon thereafter.
Talented and entrepreneurial though Zuckerberg was, these qualities alone aren’t enough to explain his success. What he had in greatest abundance—what all of history’s great inventors have had—was perfect timing.
Facebook, after all, was hardly the first social network. From the humble beginnings of SF-lovers, the 1980s and early 1990s had seen all manner of online bulletin boards and message groups. As the internet went commercial and growth exploded, people started to explore how to profit from our willingness, and perhaps our need, to share. This was the start of “social media”—platforms designed around the idea that an expanding network of users might create and share the content in an endless (and endlessly lucrative) cycle.
The first company dedicated to creating an online platform for personal relationships was launched in 1997. Six Degrees was based on the idea first proposed in sociology studies that there were no more than six degrees of separation between any two people in the world. On this new site, you could maintain lists of friends, post items to a shared bulletin board, and even expand your network to second- and third-degree connections. At its peak, Six Degrees boasted 3.5 million registered members. But internet use was still too scattered for the network to grow at scale, and web browsers were still too primitive to realize many of the architects’ loftiest ambitions.
Through the late 1990s, a wave of such new online services was created. Early dating sites like Match.com essentially applied the eBay model to the world of dating, where you would shop in a marketplace for potential mates. Massively multiplayer online role-playing games (MMORPGs) rose to prominence with 1997’s Ultima Online, allowing users to form warring clans. And in 1999, a young programmer launched LiveJournal, which offered access to dynamic, online diaries. These journals would first be called “weblogs,” a term soon shortened to “blogs.” All of these networks would bloom (within ten years, there were more than 100 million active blogs), but in each of them, the social connections were a side effect rather than the main feature. The golden moment had yet to arrive.
And then came Armageddon. In 2000, the “dot-com” bubble burst, and $2.5 trillion of Silicon Valley investment was obliterated in a few weeks. Hundreds of companies teetered and collapsed. Yet the crash also had the same regenerative effect as a forest fire. It paved the way for a new generation of digital services, building atop the charred remains of the old.
Even as Wall Street retreated from Silicon Valley, the internet continued its extraordinary growth. The 360 million internet users at the turn of the millennium had grown to roughly 820 million in 2004, when Facebook was launched. Meanwhile, connection speed was improving by about 50 percent each year. The telephone-based modem, whose iconic dial-up sound had characterized and frustrated users’ experiences, was dying a welcome death, replaced by fast broadband. Pictures and video, which had once taken minutes or even hours to download, now took seconds.
Most important of all was the steady evolution of HTML and other web development languages that governed the internet’s fundamental capabilities. The first internet browsers were pieces of software that provided a gateway into an essentially static World Wide Web. Visitors could jump from page to page of a website, but rarely could they change what was written on those pages. Increasingly, however, websites could process user commands; access and update vast databases; and even customize users’ experience based on hundreds or thousands of variables. To input a Google search, for instance, was essentially to borrow one of the world’s mightiest supercomputers and have it spin up server farms across the world, just to help you find out who was the voice of Salem the cat in Sabrina, the Teenage Witch (it was Nick Bakay). The internet was becoming not just faster but more visual. It was both user-friendly and, increasingly, user-controlled. Media entrepreneur Tim O’Reilly dubbed this new, improved internet “Web 2.0.”
An apt illustration of the Web 2.0 revolution in action was Wikipedia, launched in 2001. Since the very first encyclopedia was assembled in the first century CE by Pliny the Elder, these compendiums of knowledge had been curated by a single source and held in libraries or peddled door-to-door. By contrast, Wikipedia was an encyclopedia for the digital age, constructed of “wikis”—website templates that allowed anyone to edit pages or add new ones. The result was a user-administered, endlessly multiplying network of knowledge—essentially, a smaller version of the internet. By 2007, the English-language Wikipedia had amassed more than 2 million articles, making it the largest encyclopedia in history.
Wikipedia was just for knowledge, though. The first Web 2.0 site to focus on social networks of friends was launched in 2002. The name “Friendster” was a riff on Napster, the free (and notorious) peer-to-peer file-sharing system that gloriously let users swap music with each other. Borrowing the idea of a network built from its own users, Friendster linked peer groups of friends instead of music pirates. Within a few months, it had 3 million users.
A series of social media companies seeking to profit from the new promise of online social networking quickly jumped in. Myspace offered a multimedia extravaganza: customizable profiles and embeddable music, peppered with all manner of other colorful options. Fronted by musicians and marketed to teenagers, Myspace unceremoniously knocked Friendster from its perch. Then there was LinkedIn, the staid professional social network for adults, which survives to this day. And finally came Photobucket, a “lite” social media service that offered the once-unthinkable: free, near-limitless image storage.
At this crucial moment, Facebook entered the scrum. Originally, a user had to be invited into the social network. Within a year, the service spread to 800 college campuses and had more than a million active accounts, buoyed by the kind of demand that could only come with the allure of exclusivity. When Facebook stripped away its original barriers to entry, more people stampeded into the online club. By the end of 2007, the service numbered 58 million users.
As exponential growth continued, Zuckerberg sought to make his creation ever more useful and indispensable. He introduced the News Feed, a dynamic scroll of status updates and “shares” that transformed Facebook from a static web service into a living, breathing world. It also made Facebook a driver of what the world knew about everything from a user’s personal life to global news. For all the sharing Facebook brought to the world, however, little was known about the algorithm that governed the visibility, and thus the importance, of items in the News Feed. That was, like so much of Facebook’s inner workings, shared only between Zuckerberg and his employees.
As Facebook’s power and scale expanded beyond his wildest dreams, Zuckerberg began to grow more introspective about its (and his) potential place in history. Not only did Facebook offer a way to share news, he realized, but it also promised to shape and create mass narratives. As Zuckerberg explained in 2007, “You can start weaving together real events into stories. As these start to approach being stories, we turn into a massive publisher. Twenty to 30 snippets of information or stories a day, that’s like 300 million stories a day. It gets to the point where we are publishing more in a day than most other publications have in the history of their whole existence.”
Within a decade, Facebook would boast 2 billion users, a community larger than any nation on earth. The volume of conversation recorded each day on Facebook’s servers would dwarf the accumulated writings of human history. Zuckerberg himself would be like William Randolph Hearst transposed to the global stage, entertaining visiting ministers and dignitaries from his beige cubicle in Menlo Park, California. He would show off a solar-powered Facebook drone to the pope and arbitrate the pleas of armed groups battling it out in Ukraine. In his hands lay more power and influence than that young teen or any of the internet’s pioneers could have imagined.
But that future hadn’t arrived quite yet. It would take one final revolution before Facebook and its ilk—the new face of the internet—could swallow the world.
On January 9, 2007, Apple cofounder and CEO Steve Jobs donned his signature black turtleneck and stepped onstage to introduce the world to a new technology. “Today, Apple is reinventing the phone!” Jobs gleefully announced. Although nobody knew it at the time, the introduction of the iPhone also marked a moment of destruction. Family dinners, vacations, awkward elevator conversations, and even basic notions of privacy—all would soon be endangered by the glossy black rectangle Jobs held triumphantly in his hand.
The iPhone wasn’t the first mobile phone, of course. That honor belonged to a foot-long, three-pound Motorola monstrosity invented in 1973 and first sold a decade later for the modest price of (in today’s dollars) $10,000. Nor was the iPhone the first internet-capable smartphone. Ericsson engineers had built one of those as far back as 1997, complete with touchscreen and full working keyboard. Only 200 were produced. The technology of the time was simply too clunky and slow.
By comparison, the iPhone was sexy. And the reason wasn’t just the sleek design. Internet access was no longer a gimmick or afterthought; instead, it was central to the iPhone’s identity. The packed auditorium at the 2007 Macworld Expo whooped with excitement as Jobs ran through the list of features: a touchscreen; handheld integration of movies, television, and music; a high-quality camera; and major advances in call reception and voicemail. The iPhone’s most radical innovation, though, was a speedy, next-generation browser that could shrink and reshuffle websites, making the entire internet mobile-friendly.
A year later, Apple officially unveiled its App Store. This marked another epochal shift. For more than a decade, a smartphone could be used only as a phone, calculator, clock, calendar, and address book. Suddenly, the floodgates were thrown open to all manner of new uses, so long as they were channeled through a central marketplace. Developers eagerly launched their own internet-enabled games and utilities, built atop the iPhone’s sturdy hardware. (There are roughly 2.5 million such apps today.) With the launch of Google’s Android operating system and its competing app marketplace (then called the Android Market) that same year, smartphones ceased to be the niche of tech enthusiasts, and the underlying business of the internet soon changed.
By 2013, there were some 2 billion mobile broadband subscriptions worldwide; by 2018, 6 billion. By 2020, that number is expected to reach 8 billion. In the United States, where three-quarters of Americans own a smartphone, these devices have long since replaced televisions as the most commonly used piece of technology.
The smartphone combined with social media to clear the last major hurdle in the race started thousands of years ago. Previously, even if internet services worked perfectly, users faced a choice. They could be in real life but away from the internet. Or they could tend to their digital lives in quiet isolation, with only a computer screen to keep them company. Now, with an internet-capable device in their pocket, it became possible for people to maintain both identities simultaneously. Any thought spoken aloud could be just as easily shared in a quick post. A snapshot of a breathtaking sunset or plate of food (especially food) could fly thousands of miles away before darkness had fallen or the meal was over. With the advent of mobile livestreaming, online and offline observers could watch the same event unfold in parallel.
One of the earliest beneficiaries of the smartphone was Twitter. The company was founded in 2006 by Silicon Valley veterans and hard-core free speech advocates. They envisioned a platform with millions of public voices spinning the story of their lives in 140-character bursts (the number came from the 160-character limit imposed by SMS mobile texting, minus 20 characters reserved for the sender’s username). This reflected the new sense that it was the network, rather than the content on it, that mattered. As Twitter cofounder Jack Dorsey explained, “We looked in the dictionary . . . and we came across the word ‘twitter,’ and it was just perfect. The definition was a ‘short burst of inconsequential information,’ and ‘chirps of birds.’ And that’s exactly what the product was.”
As smartphone use grew, so did Twitter. In 2007, its users were sending 5,000 tweets per day. By 2010, that number was up to 50 million; by 2015, 500 million. Better web technology then offered users the chance to embed hyperlinks, images, and video in their updates.
Soon enough, Twitter was transforming the news—not just how it was experienced (as with Michael Jackson’s death in 2009), but how it was reported. Journalists took to using social media to record notes and trade information, sharing the construction of their stories in real time. As they received instant feedback, Twitter became, in the words of tech reporter Farhad Manjoo, “a place.” It was akin to a clubhouse for everyone, “where many journalists unconsciously build and gut-check a worldview.” The social network had become where people decided what merited news coverage and what didn’t.
Twitter also offered a means for those being reported on to bypass journalists. Politicians and celebrities alike turned to it to get their own messages out. Donald Trump likened his Twitter account to “owning your own newspaper,” albeit one drastically improved by featuring only a single perfect voice: his own. In just a few years, Twitter would become the engine driving political reporting across much of the world, even with a relatively “small” population of 330 million users.
Blistering advancements in smartphone camera quality and mobile bandwidth also began to change what a social network could look like. Instagram launched in 2010—a next-generation photo-sharing service that combined user profiles, hashtags, and a range of attractive image filters. By 2017, Instagram was adding more than 60 million photographs to its archives each day. It was gobbled up by Zuckerberg’s Facebook in 2012, just as its video counterpart, YouTube, had been scooped up by Google six years earlier.
Before almost anyone realized it, mobile tech, carefully policed app stores, and corporate consolidation had effected another massive change in the internet—who controlled it. After decades of freewheeling growth, companies that had been startups just a few years earlier had rapidly risen to rule vast digital empires, playing host to hundreds of millions of virtual residents. Even more important, this handful of companies provided the pillars on which almost all the millions of other online services depended.
And it seems likely that things will stay this way for some time. Would-be rivals have been bought up with ungodly amounts of cash that only the titans can afford to throw around. WhatsApp, for instance, was bought by Facebook in 2014 for $19 billion, the largest acquisition of a venture-backed company in history. Even if smaller companies retain their independence, these titans now control the primary gateways through which hundreds of millions of people access the web. In countries like Thailand and the Philippines, Facebook literally is the internet. For all the internet’s creative chaos, it has come to be ruled by a handful of digital kings.
The outcome is an internet that is at once familiar and unrecognizable to its founders, with deep ramifications not just for the web’s future, but for the future of politics and war as well. As Tim Berners-Lee has written, “The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared . . . What’s more, the fact that power is concentrated among so few companies has made it possible to [weaponize] the web at scale.”
It’s an echo of how earlier tech revolutions created new classes of tycoons, as well as new powers deployed into conflict. But it differs, markedly, in the sheer breadth of the current companies’ control. Guglielmo Marconi, for instance, invented radio and tried to monopolize it. But he was unable to contain the technology’s spread or exert control over the emerging network of radio-based media companies. He could scarcely have imagined determining what messages politicians or militaries could send via the airwaves, nor seizing the entire global market for ads on them. Similarly, the inventions of Samuel Morse and then Alexander Graham Bell spawned AT&T, the most successful communications monopoly of the twentieth century. But neither they nor their corporate inheritors ever came close to exercising the political and economic influence wielded by today’s top tech founders.
There is one more difference between this and earlier tech revolutions: not all of these new kings live in the West. WeChat, a truly remarkable social media model, arose in 2011, unnoticed by many Westerners. Engineered to meet the unique requirements of the enormous but largely isolated Chinese internet, WeChat may be a model for the wider internet’s future. Known as a “super app,” it is a combination of social media and marketplace, the equivalent of companies like Facebook, Twitter, Amazon, Yelp, Uber, and eBay all fused into one, sustaining and steering a network of nearly a billion users. On WeChat, one can find and review businesses; order food and clothing; receive payments; hail a car; post a video; and, of course, talk to friends, family, and everyone else. It is an app so essential to modern living that Chinese citizens quite literally can’t do without it: they’re not allowed to delete their accounts.
Put simply, the internet has left adolescence.
In the span of a generation, it has blossomed from a handful of scientists huddled around consoles in two university computer labs into a network that encompasses half the world’s population. Behind this growth lies a remarkable expansion of the demographics of this new community. The typical internet user is no longer a white, male, American computer scientist from California. More than half of all users are now in Asia, with another 15 percent in Africa. As Jakob Nielsen, a pioneer in web interface design and usability, once observed of the changes afoot in the “who” of the internet, “Statistically, we’re likely talking about a 24-year-old woman in Shanghai.”
Half of the world’s population is online, and the other half is quickly following. Hundreds of millions of new internet users are projected to join this vast digital ecosystem each year. This is happening for the most part in the developing world, where two-thirds of the online population now resides. There, internet growth has overtaken the expansion of basic infrastructure. In sub-Saharan Africa, rapid smartphone adoption will see the number of mobile broadband subscriptions double in the next five years. According to U.S. National Intelligence Council estimates, more people in sub-Saharan Africa and South Asia have access to the internet than to reliable electricity.
As a result, the internet is also now inescapable. Anyone seeking to go beyond its reach is essentially out of luck. Remote outposts in Afghanistan and Congo offer Wi-Fi. At the Mount Everest base camp, 17,500 feet above sea level, bored climbers can duck into a fully functional cybercafe. Meanwhile, hundreds of feet beneath the earth’s surface, the U.S. Air Force has begun to rework the communications systems of its nuclear missile bunkers. Among the upgrades is making sure the men and women standing watch for Armageddon can access Facebook.
All these people, in all these places, navigate an online world that has grown unfathomably vast. While the number of websites passed 1 billion sometime in 2014, unknowable millions more lurk in the “deep web,” hidden from the prying eyes of Google and other search indexes. If one counts all the pieces of content with a unique web address that have been created across all the various social networks, the number of internet nodes rises into the high trillions.
In some ways, the internet has gone the way of all communications mediums past. After decades of unbridled expansion, the web has fallen under the control of a few giant corporations that are essentially too big to fail, or at least too big to fail without taking down vast portions of global business with them.
But in other obvious ways, the internet is nothing like its precursors. A single online message can traverse the globe at the speed of light, leaving the likes of poor Pheidippides in the dust. It requires no cumbersome wires or operators. Indeed, it can leap language barriers at the press of a button.
Yet that very same message is also a tool of mass transmission, almost infinitely faster than the printing press and unleashed in a way that radio and television never were. And each and every one of those transmissions joins millions of others every minute, colliding and building upon each other in a manner that bears little resemblance to the information flow of centuries past.
This is what the internet has become. It is the most consequential communications development since the advent of the written word. Yet, like its precursors, it is inextricably tied to the age-old human experiences of politics and war. Indeed, it is bound more closely than any platform before it. For it has also become a colossal information battlefield, one that has obliterated centuries’ worth of conventional wisdom about what is secret and what is known. It is to this revolution that we turn next.