I
The Internet story begins with a familiar figure in American history: the Yankee inventor. Vannevar Bush (no relation to the political family of the same name) was born into a middle-class family in Chelsea, Massachusetts, in March 1890. Adept with numbers and fascinated by gadgets, he studied engineering at Tufts University, where, in 1913, he invented a device to measure distances over uneven ground. Made from a bicycle wheel, a rotating drum, some gears and a pen, this contraption, which was called a Profile Tracer, earned Bush a master’s degree and his first patent. Two years later, Harvard and the Massachusetts Institute of Technology jointly awarded him a Ph.D. for his research into how electrical currents behave in power lines. Bush then moved into the private sector. He returned to academic life in 1919, joining the electrical engineering faculty at MIT, where he was to remain until the Second World War.
During the interwar years, a series of revolutionary inventions transformed daily life. For the first time, ordinary Americans gained access to electricity, motor cars, refrigerators, and radios. Economists spoke of a “New Era” of technology-based prosperity. The stock market crash of October 1929 and the subsequent Great Depression put paid to such language, but scientific progress continued unabated, especially in Bush’s field of electrical engineering, where useful applications seemed to emerge from the laboratory every week. MIT has always had close links to industry, and Bush lent his expertise to numerous manufacturing ventures, including firms that later became part of Texas Instruments and the Raytheon Corporation.
In 1931, Bush and his students invented an unwieldy contraption consisting of levers, gears, shafts, wheels, and discs, all mounted on a large metal frame, which they called a “differential analyzer.” The first digital computer, the ENIAC, was still a decade and a half away; Bush’s machine was an analog, mechanical computer that automated the solution of differential equations, a mathematical problem that had tormented students for decades. It cemented his reputation as an ingenious engineer. During the Second World War, the U.S. Navy used an updated version of the differential analyzer, one weighing a hundred tons, to help it calculate missile flight trajectories.
In 1938, Bush was appointed president of the Carnegie Institution, a prestigious research institute based in Washington. After moving to the capital, he became friendly with Harry Hopkins, a senior aide to President Roosevelt. Hopkins and Roosevelt were keen to mobilize the nation’s scientists and engineers behind the war effort, and they picked Bush as the man for the job. In 1941, Roosevelt named Bush as director of the newly formed Office of Scientific Research and Development, which operated independently of the Pentagon and the civilian agencies. Bush quickly justified his selection. Riding roughshod over bureaucrats, military and civilian alike, he granted military research contracts to Harvard, MIT, Caltech, and other leading universities. At the war’s peak, he had more than six thousand scientists working for him. The research carried out by these scientists facilitated the development of the proximity fuse, radar, and the atomic bomb.
In 1944, Roosevelt asked Bush to explore how scientists and the government could work together when the war was over. The following year, Bush published Science, the Endless Frontier, in which he proposed federal financing of basic scientific research, especially in the fields of health and national security. This report led directly to the creation of the National Science Foundation, which became the main government agency for supporting scientific research. Bush also dusted off an essay about the future that he had written before the war, when he had been doing some research into information retrieval systems. He did a bit of rewriting and published the essay in the July 1945 edition of Atlantic Monthly under the title “As We May Think.”
According to Bush, the biggest problem facing scientists after the war would be information overload. “The investigator is staggered by the findings of thousands and thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.”1 Fortunately, Bush went on, the very phenomenon that had caused the problem—the explosion of scientific research—also provided two potential solutions to it: microphotography and the cathode ray tube. The former could reduce the Encyclopaedia Britannica to the volume of a matchbox. The latter could be used to display text and pictures on glass screens. Put them together, Bush wrote, and you could construct a device “in which an individual stores all his books, records, and communications, and which is mechanized, so that it may be consulted with exceeding speed and flexibility.”2
Bush called his proposed machine a “memex.” He said it would feature “translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.”3 All entries in the memex would be indexed by title and by subject, just as in a regular library, but users could also move between items of interest more directly, via what Bush called “trails.” Each time a researcher created a new file, he would be able, by tapping in a code, to link it to a second file of his choosing, which, in turn, could be linked to a third, and so on. Thereafter, anybody looking at the first file could call up the other files by pressing a couple of buttons. The great attraction of this filing system, Bush argued, was that it would mimic the human mind and work by “association.” Vast amounts of related information would be grouped together in an easily accessible format. “The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities,” Bush explained. “The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology.”4
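Bush never specified the machinery behind his “trails,” but the idea translates naturally into a simple linked structure. The sketch below is purely illustrative, with invented record names loosely modeled on the physician example in the passage above; the real memex was to be an electromechanical desk, not a program.

```python
# Illustrative only: invented records and an invented trail, showing how a
# "trail" amounts to a named chain of links between items of interest.

records = {
    "case_17":     "Notes on a patient with puzzling reactions",
    "case_1935":   "An earlier, similar case history",
    "anatomy_ch4": "A classic text on the pertinent anatomy",
}

# A trail records the order in which the researcher tied the items together.
trails = {
    "puzzling_patient": ["case_17", "case_1935", "anatomy_ch4"],
}

def follow(trail_name):
    """Call up, in turn, every record linked on a trail."""
    for key in trails[trail_name]:
        print(key, "->", records[key])

follow("puzzling_patient")
```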
Bush’s memex was never built, but his information “trails” were the intellectual forerunner of the World Wide Web, although the road between them was long and circuitous. In the early 1960s, Ted Nelson, an iconoclastic New Yorker who decided as a schoolchild that “most people are stupid, most authority is malignant, God does not exist and everything is wrong,” realized that digital computers, with their enormous storage capacity, provided the technical means to make Bush’s ideas a reality.5 Nelson envisioned all the knowledge in the world being collected in an enormous database, where it would be available to anybody who could use a computer. The information would be presented in what Nelson called “hypertext”—the first use of the word—and computer users would be able to jump from one hypertext file to another, just as in Bush’s memex. Nelson called his ambitious project Xanadu. It was never implemented, but the idea of hypertext caught on. In 1968, Doug Engelbart, a researcher at the Stanford Research Institute who had come across “As We May Think” while stationed in the Philippines, demonstrated the first working hypertext program, the “oN-Line System.” Engelbart, who dedicated his life to creating software designed to augment human intelligence, was years ahead of his time. His oN-Line System allowed users to browse through multiple windows of text using a small block of wood mounted on a ball to direct the cursor—a prototype mouse.
The early hypertext systems inspired a devoted band of enthusiasts, but it would be another twenty-five years before the invention of the World Wide Web. The reasons for the delay were technological and sociological. In the late 1960s, computers were big, expensive, difficult to program, and largely incompatible. They didn’t communicate easily, and neither did many of their users, who tended to be young men with long greasy hair, thick glasses, and an obsessive interest in science fiction. The computer “geeks” tended to congregate in university science departments, where, for reasons of economy and camaraderie, they often worked through the night. Many of them held the outside world in contempt, and the feeling was generally reciprocated.
It took the emergence of the personal computer to bring down the barriers between the computer literate and the rest of society. In 1971, Ted Hoff, an engineer at Intel, a technology firm based in Northern California, invented the microprocessor—a computer on a silicon chip the size of a thumbnail. By today’s standards, Hoff’s microchip was primitive—it contained 2,300 transistors, had a memory of 1,024 bits, and ran at 0.5 megahertz; Intel’s 1995 Pentium chip contained more than 3 million transistors, had a memory of 256 megabytes, and ran at 150 megahertz—but it was a truly historic invention. Once Hoff had demonstrated that a computer didn’t have to be the size of a room, the profit motive did the rest. In 1975, Ed Roberts, the founder of MITS, a calculator company based in Albuquerque, New Mexico, put a microchip in a box with a panel of switches and lights and called his invention the Altair, after a destination in an episode of Star Trek. That same year, Bill Gates, a Harvard dropout, and his school friend Paul Allen wrote a version of the BASIC programming language for the Altair and set up a company to market it, which they called Microsoft. The following year, Steve Jobs and Steve Wozniak, two college dropouts in Northern California, followed the Altair with the Apple I, the forerunner of the Apple II, one of the first commercially successful microcomputers. In 1981, Gates and Allen licensed an operating system, MS-DOS, to IBM, the mainframe computer giant, which used it in a new product line: the IBM Personal Computer.
For all the wealth and praise subsequently heaped on Gates and Jobs, their technical achievements were modest. Much more important were the scientists at the Xerox Palo Alto Research Center who developed both the graphical user interface that Apple used and a commercial version of Engelbart’s mouse. These two inventions replaced the blinking cursor with point-and-click computing. The other essential component of the PC age is Moore’s law, named after Gordon Moore, a cofounder of Intel, which holds that the processing capacity of microchips doubles every eighteen months or so. No immutable physical principle decrees that Moore’s law must hold, but it has worked for almost forty years. As a result, computing gets cheaper all the time. In 1960, the cost of carrying out a million operations on a computer was about seventy-five dollars; by 1990, the cost had fallen to less than a thousandth of a cent. But despite the technical leaps of the 1970s and 1980s, the PC remained a limited tool—useful for processing words and crunching numbers, but little else besides. When all was said and done, it was still an isolated, lifeless box.
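The doubling rule can be checked roughly against the two chips mentioned above. The sketch below uses the text’s round figures, 2,300 transistors in 1971 and about 3 million in 1995, not precise specifications of either chip; it simply asks how often capacity must have doubled to get from one to the other.

```python
import math

# The text's two data points: Intel's first microprocessor (1971, ~2,300
# transistors) and the 1995 Pentium (~3 million transistors).
t_1971, t_1995 = 2_300, 3_000_000
years = 1995 - 1971

doublings = math.log2(t_1995 / t_1971)        # about 10.3 doublings
months_per_doubling = 12 * years / doublings  # about 28 months

print(f"{doublings:.1f} doublings in {years} years, "
      f"or one roughly every {months_per_doubling:.0f} months")
```

Twenty-eight months is slower than the eighteen months quoted above, a reminder that the “law” is a rough empirical trend rather than a precise rule.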
II
It was the Pentagon that brought the PC to life. For decades, strategists had been thinking about how to enable the U.S. military’s communications system to survive a nuclear first strike, which would, in all likelihood, knock out large parts of the network. In the early 1960s, Paul Baran, a Polish-born engineer at the Rand Corporation, a Pentagon-financed research institute in Santa Monica, California, came up with a potential solution. Baran proposed to allow individual units on the ground to communicate with each other without going through a centralized command and control structure. The key to this idea was something called “packet switching.” In the old days (pre-Baran), most communications networks, such as the Bell phone system, were based on analog circuits. When somebody dialed a number, a switch at a local telephone exchange assigned a wire to the call. When the caller talked into his receiver, his voice was converted into an analogous electrical signal (hence “analog”), which traveled along the wire to another local exchange and then on to the recipient’s handset, where it was reconverted into sound waves. Throughout the call, a single connection was maintained. (These days, the principle is the same but the practice is more complicated. Hundreds of calls can be sent, or “multiplexed,” down the same wire at one time.) Circuit technology worked well most of the time, but if the circuit was broken at any point, as might well happen during a nuclear attack, the call would be lost. To get around this problem, Baran was forced to forgo the analog world and think in terms of digital technology.
Digitization was one of the great inventions of the twentieth century. It involves converting any form of information—text, voice, pictures, music, video—to numbers (hence the term “digital”). In digitizing a black-and-white photograph, for example, the number 0 may be ascribed to white and 1,000 to black, with each shade of gray somewhere between the two. If a fine grid is superimposed on the photograph, each square in the grid can be given a number, and the resultant string of numbers represents the digital version of the photograph. If these numbers are sent down a phone line, then reconverted to shades of gray at the other end, the result is an almost perfect impression of the original. For technical reasons, the numbers used are not the decimal ones we see every day, which are built on powers of ten, but binary numbers, which are built on powers of two. In the binary number system, any number consists of just two digits: 0 and 1. For example, the binary expression of the decimal number 4 is 100. With only two digits, binary numbers can be transmitted as electrical impulses. “Off” means 0; “on” means 1. Each 1 or 0 is usually referred to as a “bit.” A string of eight bits is called a “byte.” A million bytes is called a “megabyte.”
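A minimal sketch of the process just described: a tiny invented “photograph” is reduced to a grid of gray levels, and each level is then written out as a string of bits. The three-by-three grid and the 0-to-255 scale (which fits each square into a single byte) are illustrative choices made for this sketch.

```python
# A toy version of digitizing a black-and-white photograph: superimpose a
# grid, give each square a number for its shade of gray (0 = white here,
# 255 = black), then express each number as eight binary digits -- one byte.

gray_levels = [
    [0,   128, 255],
    [64,  128, 192],
    [255, 255, 0],
]

for row in gray_levels:
    # format(value, "08b") writes the number as eight bits
    print(" ".join(format(value, "08b") for value in row))

# Sent down a wire as off/on pulses and reconverted to shades of gray at the
# far end, these bits reproduce the original picture almost exactly.
```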
Baran combined digitization with a new way of sending information called “packet switching.” Once a stream of voice or data had been converted into digital form, it would be split into a number of small pieces, called “packets.” Each packet would include a “header,” or address, to tell the routing switches on the network which direction to send it in. The packets would be dispatched across the network individually, and they wouldn’t necessarily travel by the same route. If one route was blocked, perhaps because it had been destroyed, the switches would choose another route. They would keep sending each packet until it had been received. At the final address, a piece of software would gather all the packets together and reassemble them in the original order.
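In modern terms, the scheme can be sketched in a few lines of code. Nothing below is Baran’s actual design; it is a toy model of the essentials: the message is chopped into packets, each packet carries a header with an address and a sequence number, the packets may arrive in any order, and the receiving end puts them back together.

```python
# A toy model of packet switching (illustrative only -- real networks add
# routing, checksums, and retransmission, none of which is shown here).
import random

def to_packets(message, destination, size=8):
    """Split a message into packets, each with a small header."""
    return [
        {"to": destination, "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    """Sort the packets back into their original order and rejoin the data."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Lay down thy packet, now O friend, and sleep.", "UCLA")
random.shuffle(packets)   # packets may take different routes and arrive out of order
print(reassemble(packets))
```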
Baran’s fellow scientists hailed his design, which was published in 1964, but it took the U.S. government several years to realize its implications. In 1968, the Advanced Research Projects Agency (ARPA), a generously funded Pentagon body that was set up in the wake of the Sputnik launch, decided to build a computer network linking several university departments that carried out research for the Department of Defense. ARPA’s motives in commissioning the network were not purely military. Since the early 1960s, when J. C. R. Licklider, an MIT computer scientist who had started out as an experimental psychologist, headed it up, ARPA’s computing division had financed interesting research wherever it arose. One of Licklider’s successors, Bob Taylor, was particularly interested in getting computers to work together more effectively. The obvious answer, circuit switching (direct lines between every pair of computers), was problematic because on a big network there are so many possible connections. Eventually, somebody remembered Baran’s idea for a packet-switched network. Bolt, Beranek, and Newman, a technology firm with close links to MIT, landed the contract to build the new network. Honeywell Corporation supplied the computer hardware. AT&T provided the telephone cables. On October 1, 1969, the ARPANET went online. A researcher at the University of California at Los Angeles tried to send a message to the Stanford Research Institute. In one of history’s little ironies, the network crashed, but the Pentagon persisted. Within months, the ARPANET was delivering messages between UCLA, Stanford, the University of California at Santa Barbara, and the University of Utah. Eventually, the network was expanded to other universities across the country.
The ARPANET was an important step forward, but it still wasn’t easy for computers to communicate, especially if they were on different networks. They used many languages, and they didn’t even follow the same rules of conversation. For example, they sometimes tried to talk at the same time. In 1973, Vinton Cerf, of Stanford, and Robert Kahn, of ARPA, addressed these problems by writing a new rule book for how computers should communicate on the ARPANET and elsewhere. Cerf and Kahn had two guiding principles: their rules would allow different networks to be linked together in an internetwork, or “internet”; and all types of communications would be treated equally. An order from the Pentagon to a battalion commander would be passed across the network in the same way as a digitized picture of Niagara Falls being sent to National Geographic. Cerf and Kahn encapsulated these ideas in two new standards: the Transmission Control Protocol (TCP), which detailed how information should be split into packets and reassembled at the destination; and the Internet Protocol (IP), which specified how the packets should be sent across the network. Different networks would be tied together via “gateways,” special computers that would redirect the packets from one network to another, translating them as necessary. The Cerf and Kahn design, which was later modified by other computer scientists, proved flexible enough to incorporate vastly different networks and robust enough to withstand rapid growth. It allowed the ARPANET, a single network, to develop into the Internet—an internetwork of hundreds of thousands of networks.
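The division of labor between the two protocols can be shown schematically. The network names, addresses, and one-line “gateway table” below are invented for illustration; real IP addresses and real gateways are far more elaborate, but the principle, forwarding each packet toward the network named in its address, is the same. Splitting and reassembling the data, the TCP-like half of the job, works as in the earlier packet-switching sketch.

```python
# Illustrative only: a toy gateway that forwards packets between two
# otherwise separate networks, based on an invented two-part address.

GATEWAY_TABLE = {
    "arpanet": "arpanet-gateway",   # which gateway handles each network
    "csnet":   "csnet-gateway",
}

def route(packet):
    """Forward a packet toward the network named in its address."""
    network = packet["to"].split(".")[0]   # e.g. "csnet.duke" -> "csnet"
    return GATEWAY_TABLE[network]

packet = {"to": "csnet.duke", "seq": 0, "data": "hello"}
print(route(packet))   # -> csnet-gateway
```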
This transformation didn’t take place overnight, but the ARPANET proved popular with scientists, who used it to exchange informal messages as well as research. Ray Tomlinson, an MIT-trained computer scientist, designed the first e-mail program for ARPANET users, using the symbol “@” to separate the name of the sender from his or her network address. E-mail facilitated the development of online communities that had nothing to do with scientific research. One of the most popular mailing lists among early ARPANET users was devoted to science fiction. By the start of 1982, there were more than a dozen networks connected to the ARPANET. A year later, the ARPANET was split in two. One half of it (the MILNET) was reserved for military users. The other half (the ARPANET) was given over to scientific researchers. Rival networks were also growing up. In 1981, the National Science Foundation launched CSNET, the Computer Science Network, which was open to any research institution willing to pay the dues and ban commercial use. It didn’t make much sense for the federal government to be financing two different networks. In the late 1980s, Congress approved the construction of the NSFNET, a new high-speed national network that would replace the ARPANET, which was becoming technically obsolete, as the Internet’s backbone. A number of regional networks were also built and linked to the NSFNET, such as BARRNet in Northern California and NYSERNet in the Northeast. On February 28, 1990, the Pentagon closed down the ARPANET, leaving the Internet in the hands of the National Science Foundation. To mark the occasion, Vinton Cerf penned a poem that ended with the line “Lay down thy packet, now O friend, and sleep.”6
Computer networking was no longer esoteric. Many big companies had their own internal networks, which were called Local Area Networks (LANs). When these LANs were connected to the Internet, the network expanded dramatically. There were also a number of networks that had been built by independent “hackers,” a term that didn’t have the negative connotations it does now. In 1978, two Chicago computer hobbyists, Randy Suess and Ward Christensen, created the first dial-up bulletin board system, which allowed computer users with modems to exchange files and messages without going through a host system. A year later, two graduate students, Tom Truscott at Duke University and Steve Bellovin at the nearby University of North Carolina, modified the popular Unix operating system so that people could exchange files over regular telephone wires. This led to the growth of the USENET, a community of online bulletin boards that was sometimes referred to as “a poor man’s ARPANET.” In 1983, Tom Jennings, a California software developer, wrote bulletin board software for PCs and went on to set up FIDONET, the first big PC network. By 1991, FIDONET had tens of thousands of users around the world, many of them in countries where freedom of communication was previously unknown.
At the end of the 1980s, according to some estimates, several million computers were connected to networks. On the Internet alone, there were more than 800 networks and more than 150,000 registered addresses. However, for all but the expert, logging onto the Internet remained a fraught exercise. In order to retrieve a file from a computer on the network, a person might have to use one program to access the other computer, another program to locate the file he wanted, and a third program to translate it into something his own computer could understand, if that was even possible. Not surprisingly, online communication remained largely the preserve of computer geeks who liked the technical challenge. Things would have stayed this way for longer if it hadn’t been for a reserved Englishman living in Switzerland.
III
Tim Berners-Lee was brought up in London. His mother and father were computer scientists. After high school, he studied physics at Oxford University, where he built his own personal computer with an old television, a microprocessor, and a soldering iron. Berners-Lee left Oxford in 1976 and spent a few years as a software engineer. In 1980, he landed a temporary post at CERN, the European particle physics laboratory that nestles beneath Mont Blanc just outside Geneva. The scientists at CERN came from all over Europe, and they brought their own computers and software with them, which made it difficult to record their activities. Berners-Lee wrote some software to keep track of what the scientists were up to and which computer programs they were using. He called his own program Enquire, short for Enquire Within Upon Everything, a Victorian book he had read as a child that offered advice on everything from cleaning clothes to investing. Enquire was simple, little more than an address book, really, but it reflected a bigger ambition on Berners-Lee’s part, as he later recalled: “Suppose all the information stored on computers everywhere were linked, I thought. Suppose I could program my computer to create a space in which every computer at CERN, and on the planet, would be available to me and to anyone else. There would be a single, global information space.”7
In 1984, Berners-Lee returned to CERN, this time as a full-time researcher. A colleague directed him to the Internet, which was then little known in Europe. Berners-Lee was impressed by the decentralized nature of the American network, and the fact that it could be accessed by various operating systems, but he found it cumbersome to use. How much simpler things would be if each file on the Internet had its own label and if users could jump between related files using hypertext, just as Vannevar Bush and Ted Nelson had envisaged. Berners-Lee believed such a system was now technically feasible, and he set out to write some software to prove it. By 1990, he had made enough progress to try and name his project. His first thought was “The Information Mesh,” but that sounded too messy. “The Information Mine” was too egocentric. (Its acronym was TIM.) In the end, Berners-Lee settled on “The World Wide Web.”
Berners-Lee’s design was deceptively simple. (That was its great strength.) The World Wide Web would sit on top of the Internet, utilizing its communications protocols and packet-switching technology, as well as existing computers and phone lines. It would consist of just three elements: (1) A computer language for formatting hypertext files, which Berners-Lee called Hypertext Markup Language (HTML); (2) A method of jumping between files, which Berners-Lee called the Hypertext Transfer Protocol (HTTP); (3) A unique address code attached to each file that could be used to call up any file on the Web instantly. Berners-Lee called his address code a Universal Resource Identifier (URI). The word “identifier” was later replaced with “locator,” leading to the acronym that is used today, “URL.”
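The three elements can be seen in miniature below. The page, the address, and the request are invented for illustration, and a modern Python library is used to pull the address apart; a real exchange carries more headers, but the shape is the same: an HTML file, a URL that names it, and an HTTP request asking the machine holding the file to send it.

```python
from urllib.parse import urlparse

# (1) HTML: a scrap of hypertext; the <a href="..."> tag is a link to another file.
page = '<html><body><a href="http://info.cern.ch/">About the project</a></body></html>'

# (3) URL: an address naming one file on one machine (an invented example).
url = urlparse("http://example.org/notes/overview.html")
print(url.scheme, url.netloc, url.path)   # -> http example.org /notes/overview.html

# (2) HTTP: the request a browser sends to fetch the file named by that address.
request = f"GET {url.path} HTTP/1.0\r\nHost: {url.netloc}\r\n\r\n"
print(request)
```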
Once Berners-Lee had drawn up the basic architecture of the World Wide Web, he needed some content to show how it would work. Using his own computer as the host, he created the first Web site, info.cern.ch, which contained a description of HTML, HTTP, and URI, together with some notes. With the help of a Belgian colleague, Robert Cailliau, and a visiting English student, Nicola Pellow, Berners-Lee also designed a primitive Web browser that allowed users to read and edit files. By the end of 1990, the World Wide Web was running on several computers at CERN. The following spring, Berners-Lee demonstrated his creation to a seminar of computer scientists. A few months later, in August 1991, he posted details of the World Wide Web at several locations on the Internet, including alt.hypertext, a bulletin board favored by hypertext enthusiasts. The posting directed users to the CERN Web site, where they could download software to set up their own site. Four months later, Paul Kunz, a physicist at the Stanford Linear Accelerator Center who had visited CERN, launched the first American Web site.
To begin with, less than a hundred people a day visited Berners-Lee’s site, but the number gradually increased. Every three months or so, it doubled. In the summer of 1992, when Berners-Lee plotted the “hits” on a graph, he ended up with a curved line that got steeper as it moved from left to right. This pattern, which mathematicians call an exponential curve, was to become a basic feature of the online world. For a long time it would seem that any statistic to do with the World Wide Web—the number of Web sites, the number of networks attached to the Web, the number of Web users—followed the same uplifting pattern. Eventually, the exponential curve would achieve talismanic status, especially on Wall Street. It would get to the stage where no presentation about the Internet was complete without a graph showing the magic pattern; and entire business plans would be based on creating one and riding it out.
Despite its growing popularity, Berners-Lee’s invention initially confused many people. The World Wide Web was not a place or a thing. It didn’t have a single piece of hardware at the center of it or an organization overseeing it. Essentially, it was just a set of conventions for exchanging information over the Internet. “I told people that the Web was like a market economy,” Berners-Lee would later write. “In a market economy, anybody can trade with anybody, and they don’t have to go to a market square to do it. What they do need, however, are a few practices everyone has to agree to, such as the currency used for trade, and the rules of fair-trading. The equivalent of rules for fair-trading, on the Web, are the rules about what a URI means as an address, and the language the computers use—HTTP—whose rules define things like which one speaks first, and how they speak in turn.”8
The comparison of the World Wide Web to a market economy was prophetic. But before Berners-Lee’s invention could be developed commercially it would have to become easier and more pleasant to use. In the early days, the World Wide Web was a sea of gray text, which Berners-Lee’s browser displayed one line at a time. A number of scientists developed more powerful Web browsers, but none of them really took off. The most popular way of navigating the Internet at the time was Gopher, designed by a team at the University of Minnesota led by Mark McCahill. Gopher located files via a system of menus and grouped them together in related topics. By early 1993 it had become so popular that the University of Minnesota decided to charge an annual license fee to nonacademic users. “This was an act of treason in the academic community and the Internet community,” Berners-Lee noted. “Even if the university never charged anyone a dime, the fact that the school had announced it was reserving the right to charge people for the use of Gopher protocols meant it had crossed the line.”9 Internet enthusiasts were used to using the network free of charge. Professional software developers feared being sued if they worked on what had become proprietary technology. Usage of Gopher fell dramatically and never recovered.
The same thing could easily have happened to the World Wide Web. CERN owned the intellectual property rights to Berners-Lee’s programs, and it was unclear what it would do with them. Berners-Lee pressed his bosses to release the rights and, after some hesitation, they agreed. On April 30, 1993, CERN announced that from now on anybody anywhere in the world could use the World Wide Web protocols without paying a royalty or observing any legal constraints. This farsighted move created a global electronic commons where people could come together, talk, play, and, before too long, buy refreshments.
IV
At the start of 1993, the Internet had more than a million computers attached to it, but Internet commerce was still an oxymoron. In order to prevent companies from profiting from taxpayer subsidies, the U.S. government had proscribed using the Internet for commercial activities. Anybody who wanted access to the NSFNET, the Internet’s backbone, had to abide by an “Acceptable Use Policy,” which restricted it to “research and education.” A few specialist bookstores had skirted this ban by posting discreet advertisements on online bulletin boards, but by and large the Internet remained commerce-free—often militantly so. Whenever somebody tried to use the Internet to make money the online community reacted by deluging the offender with aggressive e-mails, a practice known as “flaming.”
In a few years, all this changed and the Internet was transformed from an idealistic community of technology enthusiasts to a global bazaar. The key figure behind this transformation was a little-known public official, Stephen S. Wolff, the program director for computer networking at the National Science Foundation, which, following the shutdown of the ARPANET, was responsible for running the Internet’s backbone. In 1990 and 1991, Wolff initiated a series of discussions with other government departments and representatives from the private sector about what to do next with the Internet. Back in 1987 the NSF had awarded a five-year contract to build and operate the NSFNET to the Michigan Educational Research Information Triad (MERIT), a nonprofit consortium of Michigan universities. This contract was coming up for renewal, and a number of commercial network providers were pressing the NSF to let them enter the Internet market. Some of these companies, such as Performance Systems International (later known as PSINet) and Advanced Network Services (ANS), were new ventures that had grown out of regional not-for-profit networks. Others were established telecommunications carriers, such as MCI, AT&T, and Sprint, which were building their own high-speed networks.
Wolff, who would later become a senior executive at Cisco Systems, the biggest manufacturer of networking gear for the Internet, had no interest in running the Internet indefinitely as a public utility, although this is what some Internet activists were calling for. Wolff believed the government’s role was to foster important technologies, then let the private sector take over. The “NSF recognizes limitations and only has so much money,” he argued. “If you don’t stop doing old things, then you can’t start any new things. And when something gets to the point that it becomes a commodity product there is no reason for the NSF to be supporting it.”10 As far back as the mid-1970s, Pentagon officials had thought about allowing a private operator to take over the ARPANET, but they couldn’t find any companies interested. Now the private sector was keen to exploit what it saw as a big potential market. Wolff decided the best thing would be to allow several companies to build and operate backbone networks. This would prevent any single company from dominating the Internet, while allowing the NSF to bow out.
In November 1991, the NSF issued a plan reflecting Wolff’s thinking, which called for shutting down the NSFNET over the following few years and replacing it with competing commercial networks. These networks would be linked by a series of electronic gateways, so it would appear to the individual Internet user that he or she was using a single network. The government would create a separate “very-high-speed Backbone Network Service” restricted to scientific researchers and supported by taxpayers’ money. All other Internet users would have to sign up with a commercial Internet Service Provider (ISP). Since each company would own and operate its own network, the Acceptable Use Policy would become redundant, and the Internet would turn into a capitalist free-for-all. There was remarkably little public discussion about this plan, which revolutionized the Internet. A few hearings were held on Capitol Hill, but they were attended mainly by lobbyists from the telecommunications industry who supported privatization. The NSF asked companies to submit bids to operate the new system, and it awarded contracts to a number of them, including MCI, Sprint, and Bellcore. On April 30, 1995, the NSFNET was closed down. The Internet was now a private-sector enterprise.