chapter 3: information superhighway

I

In March 1999, CNN anchorman Wolf Blitzer asked Al Gore what set him apart from Bill Bradley, his main rival for the Democratic nomination for the presidency. As part of his answer, Gore said: “During my service in the United States Congress, I took the initiative in creating the Internet.”1 The statement dogged Gore for the rest of the campaign, which was hardly surprising, as the Internet had been created years before he arrived in Washington. But Gore has always been something of a geek. In the late 1970s, he called for the construction of a high-speed data network to link the nation’s major libraries, although nobody took much notice of the freshman Congressman from Tennessee. In 1985, when Gore entered the Senate, he persuaded Congress to finance a study into the feasibility of building a national network of supercomputers (high-performance machines that can perform billions of calculations a second). The Reagan administration resisted Gore’s plans, but in 1991 President George Bush signed into law the High Performance Computing Act, a five-year program that allocated $1.7 billion for research into high-performance computer networks. Gore was the bill’s primary sponsor in the Senate. None of this amounts to taking “the initiative in creating the Internet,” but it wasn’t totally insignificant. And Gore also made another contribution: he introduced the phrase “information superhighway” to the political lexicon. In July 1990, in a lengthy piece in the Outlook section of The Washington Post, Gore wrote:

Just as the Interstate highway system made sense for a post-war America with lots of automobiles clogging the crooked two-lane roads, a nationwide network of information superhighways now is needed to move the vast quantities of data that are creating a kind of information gridlock.2

And he added:

If we had the information superhighway we need, a school child could plug into the Library of Congress every afternoon and explore a universe of knowledge, jumping from one subject to another, according to the curiosity of the moment. A doctor in Carthage, Tennessee, could consult with doctors at the Mayo Clinic in Minnesota on a patient’s CAT scan in the middle of an emergency. Teams of scientists and engineers working on the same problem in different locations could work together in a ‘co-laboratory’ if their supercomputers were linked.

As the column inches devoted to the information superhighway multiplied, this sort of passage would become commonplace, with only the details varying. The vignettes made the reader say “Wow,” but they didn’t prompt much further thought. More than a moment’s consideration might have led to some uncomfortable questions for the technology boosters, such as: Why would an eighth-grader need to “plug into” the Library of Congress every afternoon? Wouldn’t she be better off studying her algebra and English grammar? And how would the information superhighway differ from the Internet, which had been in existence for twenty years?

During the 1992 presidential campaign, Gore and Bill Clinton promised to wire up every classroom in the country if they were victorious, and they must have repeated the term “information superhighway” thousands of times. Even when they entered the White House, there was no relief. In February 1993, the president and vice president visited Silicon Valley to launch what was billed as a $17 billion high-technology program. (The fine print revealed that less than $500 million would go toward construction of the information superhighway.) Later in 1993, the White House published an Agenda for Action aimed at ensuring universal access to the information superhighway. In 1994, the White House created an interagency National Information Infrastructure Task Force, which included officials from the Commerce Department, the Treasury, the Pentagon, the Office of Management and Budget, and the Internal Revenue Service. None of this activity meant much, but it kept journalists busy. According to a search of Nexis, the online database, in 1994 the phrase “information superhighway” cropped up almost nine thousand times in major newspapers and magazines. Thereafter, as media interest switched to the Internet, and the Clinton administration became consumed by other subjects, the number of stories mentioning the information superhighway fell sharply. By 2000, it merited just fifteen hundred mentions in major newspapers and magazines, many of them in stories about Gore’s past.

In many ways, the hubbub over the information superhighway helped to create the soggy environment in which the Internet tulips sprouted and took root. In both cases a promising new technology was hyped out of all proportion, prompting a flurry of media coverage and a blaze of Wall Street deal making that ended in heavy losses. Far less money was lost in the information superhighway mania than in the Internet boom, but the cast of characters was similar: self-promoting technology “gurus,” credulous journalists, wily investment bankers, ambitious entrepreneurs, and gullible investors. And what exactly was the information superhighway? Despite all the ink spilled on the subject, it is still impossible to say for sure. Some, such as Gore, saw the information superhighway as a research tool. Others saw it as a high-bandwidth entertainment medium that would replace the clunky, text-based Internet. Yet others saw it as a new way of life. Vagueness was the key to the project’s appeal. As long as the information superhighway couldn’t be pinned down to a specific design, it couldn’t be evaluated objectively and its promise remained limitless. As a senior AT&T executive pointed out in 1994: “The information superhighway is a convenient metaphor, but it’s largely a mythical thing. In a strict sense, it doesn’t exist and will never exist. And in another sense, it’s here already.”3

II

Al Gore wasn’t the only person smitten with digital technology. During the late 1980s and the early 1990s, an unlikely alliance of liberal and conservative thinkers promoted computer networks as the key to human salvation. “We are seeing a revitalization of society,” Michael Hauben, the coauthor of Netizens: On the History and Impact of Usenet and the Internet, wrote. “The frameworks are being redesigned from the bottom up. A new more democratic world is becoming possible.”4 Hauben teaches communications at Columbia Teachers College. He and other liberal intellectuals were particularly taken by the fact that neither governments nor corporations could control the Internet. “People now have the ability to broadcast [their] observations or questions around the world and [have] other people respond,” Hauben wrote. “The computer networks form a new grassroots connection that allows the excluded sections of society to have a voice.”5 This optimistic view of technology seemed to be confirmed in 1991 when Russian liberals used the Internet to help foil an attempted Communist coup.

Closer to home, online bulletin boards, such as the Well, based in San Francisco, provided a virtual meeting place for activists and dreamers of all descriptions. The intelligentsia’s infatuation with computers dates back to the 1984 publication of William Gibson’s Neuromancer, though this science fiction novel was hardly a manifesto for technology-based liberation.6 In Gibson’s nightmarish vision of the future, the world was being run by anonymous corporations, the environment had been stricken by catastrophe, and many people were living in a criminal underworld, renting out parts of their brains as storage devices. Neuromancer was highly influential. At a time when the old frames of reference seemed to be worn out, the concept of “cyberspace” that Gibson introduced proved attractive to many artists and writers. In the “cyberpunk” movement, which emerged soon after, they used computers to interpret the world around them in new ways, with, it must be said, varying degrees of success.

The feminist Donna Haraway seized upon technology as a means of superseding gender differences. In her 1991 book Simians, Cyborgs, and Women: The Reinvention of Nature, Haraway argued that, because in cyberspace a person’s sex didn’t make any difference, there was an opportunity to create an entirely new relationship between men and women.7 The writer Susie Bright, in her book Susie Bright’s Book of Virtual Sex, and the editors of Mondo 2000, a glossy magazine aimed at the computer fetishist, saw the sexual potential of cyberspace in more graphic terms. The theme of personal liberation even attracted some icons of the 1960s counterculture, notably Timothy Leary, the former Harvard professor who had famously advised his students to “Turn on, tune in, drop out,” before taking his own advice and repairing to an ashram in upstate New York. In the late 1980s, Leary reemerged in Southern California as an apostle of cyberpunk. Leary still advocated stimulation of the senses, but this time he encouraged people to get high by booting up rather than by taking LSD.

The conservative technology lovers were equally guilty of hyperbole. But largely because they had more money behind them than the liberals did, they were more effective at spreading their message. George Gilder, a political-speechwriter-turned-journalist, was the high priest of the conservative technology cult, although, in many ways, he was an unlikely futurist. The scion of an old New England family, Gilder was brought up partly on a Massachusetts farm and partly in a Manhattan rooming house that his mother kept after his father was killed in World War II. His father’s roommate at Harvard, David Rockefeller, paid for Gilder to be educated at Exeter, an exclusive private school, where he finished last in his class, and at Harvard, where he had an equally undistinguished academic career. After college, Gilder wrote speeches for a number of moderate Republicans, including Jacob Javits and Nelson Rockefeller, then moved sharply to the right. During the 1970s he wrote three political books, the last of which, Visible Man, sold 578 copies.

Gilder seemed destined for a life as a little-known ideologue when, in 1981, at the age of forty-one, he published Wealth and Poverty, a defense of supply-side economics that lambasted Adam Smith and William Buckley for being insufficiently supportive of capitalism. The timing couldn’t have been better. Ronald Reagan had just entered the White House promising big tax cuts for the rich. Gilder provided the new administration with a moral, as well as an economic, argument for trickle-down economics, namely that it provided the greatest wealth and liberty for the greatest number of people. David Stockman, Reagan’s budget director, lauded Gilder’s “Promethean” vision, and William J. Casey, the director of the Central Intelligence Agency, said Wealth and Poverty would “serve as an inspiration and guide for the new Administration.” Even the president himself, while not going so far as to read Gilder’s book—that would have been asking a lot—gave copies to his friends. All the attention turned Wealth and Poverty into a bestseller and Gilder into a public figure.

While Gilder was writing Wealth and Poverty, Peter Sprague, the chairman of National Semiconductor, introduced him to the microprocessor and to Moore’s law. The idea of thousands of minute electrical circuits being arranged on a piece of silicon the size of a pinhead struck Gilder like a revelation. “It just seemed to me to be the most important thing that was going on,” he later recounted. “I didn’t think our literature was especially impressive, or our poetry, art, or films. They were not as representative of what this era was accomplishing as the microchip. The microchip is the Gothic Cathedral of our time.”8 In 1989, Gilder published Microcosm, in which he exalted the microchip. The following year, in Life After Television, he tried to demonstrate how “the microchip will reshape not only the television and computer industries but also the telecommunications industry and all other information services. It will transform business, education and art. It can renew our entire culture.”9

Gilder’s embrace of the computer had almost as much to do with religion as with technology. A devout Christian, who also believes in ESP, he has long viewed America as a fallen society, corrupted by the “depraved” medium of broadcast television, which appealed to the prurience of tens of millions of viewers. But television’s hegemony was based on the scarcity of the electromagnetic spectrum, which could support only a few channels. In the coming digital age, there would be thousands of channels to choose from. “The position of the broadcasters parallels the stance of mass magazines before the rise of television and the proliferation of a thousand specialized magazines,” Gilder exulted. “The TV networks are the Look and Life, the Collier’s and Saturday Evening Post, of the current cultural scene.”10 In the near future, the “dumb terminal” in the corner of the living room would be replaced by a “telecomputer, a personal computer adapted for video processing and connected by fiber-optic threads to other telecomputers all around the world.”

Gilder continued:

Tired of watching TV? With artful programming of telecomputers, you could spend a day interacting with Henry Kissinger, Kim Basinger, or Billy Graham. Celebrities could produce and sell their own software or make themselves available for two-way personal video communication. You could take a fully interactive course in physics or computer science with the world’s most exciting professors, who respond to your questions and let you move at your own learning speed. You could have a fully interactive workday without commuting to the office or run a global corporation without ever getting on a plane.
    You could watch your child play baseball at a high school across the county, view the Super Bowl from any point in the stadium that you choose, or soar above the basket with Michael Jordan. You could fly an aeroplane over the Alps or climb Mount Everest—all on a powerful high-resolution display.11

The fact that this passage reads like Gore’s Washington Post article is no coincidence. The allure of the information superhighway went beyond party politics. The concept appealed to the adventurous little boy or little girl inside every grown-up. Having heard all the things that this fabulous invention could do, who could resist it?

III

With so many exciting things apparently about to happen, the digital revolution was crying out for a chronicler. He arrived in the form of Louis Rossetto, a forty-something magazine editor who had spent most of the 1980s working on computer industry publications like Electric Word and Language Technology. Rossetto sported a ponytail, smoked pot, and had written two books: Takeover, a political thriller that he published under his own name; and Ultimate Porno, which was inspired by the film Caligula, and came out under a pseudonym. In the early 1990s, Rossetto and his girlfriend, Jane Metcalfe, moved to San Francisco and created Wired, a glossy magazine aimed at the “Vanguard of the Digital Revolution.” The first issue, which appeared in January 1993, was financed by a number of investors, including Nicholas Negroponte, the head of the Media Lab at MIT, and Charlie Jackson, a Silicon Valley entrepreneur. Wired was meant to be a lifestyle magazine as well as a technology guide. Sections like “Fetish” and “Street Cred” told readers which new gadgets to buy, while “Idées Fortes” and “Jargon Watch” told them what to think and say. The most striking thing about Wired was its garish design. Rossetto had no time for the “old media” idea of readability. His new magazine sometimes featured yellow text on a purple background, with indecipherable graphics as a bonus.

Rossetto hoped the first few issues of Wired would attract a few thousand upscale readers, but he underestimated his potential market. After only a year, Wired had attracted 30,000 subscribers and a substantial investment from S. I. Newhouse, the chairman of Condé Nast Publications, which owns Vogue and Vanity Fair. After two years, Wired had a monthly circulation of about 200,000 and a National Magazine Award to its credit. Like all successful magazines, Wired helped to create the culture that it reported on. It made no pretense at objectivity or historical perspective. In his first editor’s note, Rossetto said the digital revolution was creating “social changes so profound that their only parallel is probably the discovery of fire.”12 He told a reporter from The New York Times Magazine that “within ten or twenty years the world will be completely transformed. Everything we know will be different. Not just a change from L.B.J. to Nixon, but whether there will be a President at all. I think Alvin Toffler’s basically right: we’re in a phase change of civilizations.”13

Toffler, the author of Future Shock and The Third Wave, became part of a right-wing intellectual coterie that surrounded Wired, and so did Gilder. Both of them granted long interviews to the magazine. “It’s very hard to think outside the boxes—cultural box, institutional box, political box, religious box—that we are all, every one of us, imprisoned in,” Toffler lamented.14 Wired liked to see itself as above politics, but in reality it was the editorial voice of the rich, highly educated capitalists who created Silicon Valley. Such men had little time for history and even less for government. One of them, Mitchell Kapor, the founder of Lotus Development Corporation, helped set up the Electronic Frontier Foundation to lobby against any form of government interference in the new digital media. The fact that the federal government had built the Internet and operated it for more than twenty years didn’t dampen Kapor’s libertarian ardor. In Wired’s third issue he wrote a long article under the headline: “Where Is the Digital Highway Really Heading?”15 The piece called on the government to step aside and let the private sector build and operate the new network.

Rossetto also employed some fiction writers. Michael Crichton contributed an article predicting the demise of newspapers. Douglas Coupland wrote a thinly disguised story about the internal culture of Microsoft, which he subsequently turned into a best-selling book, Microserfs. Negroponte, who wrote a regular back-page column, was one of the magazine’s most popular contributors. The dapper son of a Greek shipping family, educated at an exclusive Swiss school, Negroponte was the perfect spokesman for the rootless culture of technology-driven global capitalism that Wired was promoting. He went to MIT in 1961 to study architecture, and a few years later he joined the faculty as a professor specializing in computers and design. In 1985, Negroponte and Jerome Wiesner, a former president of MIT, set up the Media Lab, an interdisciplinary research department whose initial staff included a filmmaker, a graphic designer, and a composer, as well as a famous physicist and two mathematicians. With the explosive growth of the PC industry, Negroponte had little difficulty attracting corporate sponsors for his creation. Apple Computer, Time Warner, and NEC all gave money.

The Media Lab supported a number of interesting projects, but its best-known product was Negroponte himself. He did a lot of flying around the world, giving speeches and rubbing shoulders with CEOs and politicians. In futurology, modesty and intellectual caution are false virtues. Negroponte wasn’t overburdened with either, as he made clear in Being Digital, a book that he published in 1995 based on his columns in Wired. “When I met with the Japanese Prime Minister Kiichi Miyazawa in 1992, he was startled to learn that Hi-vision”—high-definition television—“was obsolete,” Negroponte wrote. “Margaret Thatcher, however, did listen to me.”16

Being Digital was translated into thirty languages, and it established Negroponte as the world’s leading technology guru, a role he reveled in. The book wasn’t all dubious. Its early chapters, which explained the meaning of “bits,” “bandwidth,” and other technical terms, were admirably clear. But when Negroponte went on to describe what life would be like in the not-too-distant future, he sounded as if he had just emerged from Timothy Leary’s ashram. The following extracts were fairly typical:

The idea that twenty years from now you will be talking to a group of eight-inch-high holographic assistants walking across your desk is not farfetched.17

    My VCR of the future will say to me when I come home, “Nicholas, I looked at five thousand hours of television and recorded six segments for you, which total forty minutes. Your old high-school classmate was on the ‘Today’ show, there was a documentary on the Dodecanese Islands, etc.”18

    Future rooms will know that you just sat down to eat, that you have gone to sleep, just stepped into the shower, took the dog for a walk. A phone would never ring. If you are not there, it won’t ring because you are not there. If you are there and the digital butler decides to connect you, the nearest doorknob may say, “Excuse me, Madam,” and make the connection.19

Negroponte appeared to be deadly serious. He predicted that within a few years The New York Times and The Boston Globe would be usurped by The Daily Me, an online newspaper that would scour the Internet for stories of interest to the individual reader. Video stores like Blockbuster “will go out of business in less than ten years,” Negroponte asserted. Michael Crichton will “make more money selling his books direct” to readers online. And the information superhighway, while it “may be mostly hype today,” will prove to be “an understatement about tomorrow. It will exist beyond people’s wildest predictions.”

Hyperbole aside, the most striking thing in retrospect about the Wired crowd is how they missed the significance of the Internet. Caught up in elusive visions of telecomputers and information superhighways, they looked down on the anarchic text-based network. In Gilder’s Life After Television the Internet didn’t appear at all. The first few issues of Wired hardly mentioned it either. The index to Being Digital included five entries for “interactive television” and five for “information superhighway” but none at all for “Tim Berners-Lee” or “World Wide Web.” After Berners-Lee’s creation took off, Rossetto and his colleagues scrambled to catch up, and the myth was created that they had been in the vanguard of the Internet revolution. In fact, they were just as surprised by the World Wide Web’s remarkable growth as everybody else was.

IV

If Al Gore ever really intended that the federal government should build the information superhighway in the same way it built the interstate highways during the 1950s, he had changed his mind by the time he entered the White House. The cost of constructing a nationwide high-speed network was estimated at anywhere between $500 billion and $1 trillion. For an administration intent on reducing government deficits, spending such a sum was out of the question. In March 1993 Gore moved to lower expectations, telling The National Journal, “The idea of the federal government constructing, owning and operating a nationwide fiber network to the home is a straw man. No one—at least nobody associated with me—has made such a proposal.”20 This was inaccurate—in his 1990 article in The Washington Post Gore had said the new network would be “supported by the federal government”—but at least it cleared things up. The Clinton administration would act as a cheerleader for the information superhighway, but that would be all. The task of getting the new network up and running would fall to private industry, just as Mitch Kapor and others wanted.

This meant that the telephone companies and cable television companies would have to do the job. Both of these industries already had wires going into tens of millions of homes, but they each faced big challenges. The long-distance telephone companies, such as AT&T and MCI, had built their own high-capacity fiber-optic networks, but they depended on the regional Baby Bells, which had been created when AT&T was broken up, to reach their customers. The Baby Bells, such as Bell Atlantic and BellSouth, still largely relied on old-fashioned copper wires. If the telephone network was to be transformed into the information superhighway, these wires would have to be upgraded, which would cost a fortune.

The cable companies, for their part, could transport video and sound into homes, but they lacked the switching technology and powerful computer servers necessary for interactive television of the type Gilder had envisaged. The set-top boxes of the early 1990s were primitive devices that worked by blocking certain channels, depending on which service package the customer had purchased. They couldn’t transmit signals from the viewer to the cable provider. If the cable industry were to deliver interactivity, it would have to invest heavily in computer technology. But most cable companies were already heavily indebted from the cost of laying their cables.

The strongest cable company was Time Warner. Early in 1993, Gerald Levin, Time Warner’s chairman, decided to build a pilot version of the information superhighway in Orlando, Florida, offering video-on-demand, games, and online shopping. Scientific-Atlanta and Toshiba agreed to build the interactive set-top boxes, Silicon Graphics agreed to provide powerful network servers, and Hewlett-Packard promised to deliver specially modified color printers, which would allow viewers to print out on-screen advertisements. The Orlando system, which Levin christened “the Full Service Network,” would initially be restricted to 4,000 upscale households, but Time Warner told advertisers that within five years it could be expanded to 14 million homes. “Our new electronic superhighway will change the way people use television,” Levin said. “By having the consumer access unlimited services, the Full Service Network will render irrelevant the notion of sequential channels on a TV set.”21 Time Warner’s announcement prompted a mad scramble to enter a market that John Sculley, the chairman of Apple Computer, claimed could be as large as $3.5 trillion early in the twenty-first century. (This estimate, which amounted to about half the Gross Domestic Product, was ludicrous.) AT&T teamed up with Viacom, the owner of MTV and Nickelodeon, to build a rival interactive television system in Castro Valley, California. Microsoft joined Tele-Communications Inc. (TCI), the nation’s biggest cable company, to set up a pilot system in Seattle. And US West, a Baby Bell based in Denver, combined with computer manufacturer Digital Equipment Corporation to launch a similar project in Omaha, Nebraska.

This spurt of activity delighted the investment bankers on Wall Street, who helped to broker many of the deals. Whenever a new industry emerges, or an old one goes into decline, investment bankers rub their hands. There are always lucrative fees to be earned in bringing companies together and breaking them up. Wall Street research analysts helped to promote deals by publicly debating which old media companies would end up as “roadkill” on the information superhighway. The early conventional wisdom was “content is king,” and this adage helped to fuel a bidding war for Paramount Pictures, one of Hollywood’s oldest film studios. The week after Labor Day of 1993, Viacom agreed to pay $8.2 billion for Paramount. Two weeks later, QVC, a home-shopping network based in Pennsylvania, upped Viacom’s bid by more than a billion dollars. QVC’s main asset was its chairman, Barry Diller, a Hollywood executive with a well-earned reputation as a wheeler-dealer. The siege of Paramount lasted for six months and ensnared more than half a dozen other big companies. On Viacom’s side, they included Nynex and BellSouth, two Baby Bells, and Blockbuster. In QVC’s camp were Comcast and Cox, two cable providers. After the last lawsuit had been filed, and the final investment banker had received a down payment on his summer house, the Viacom-led alliance emerged victorious.

While the battle for Paramount was going on, Bell Atlantic, another Baby Bell, agreed to merge with TCI. The deal, if it had gone through, would have been worth about $26 billion, making it the biggest corporate merger yet. “This is a perfect information-age marriage,” Ray Smith, the chairman of Bell Atlantic, declared. “Together we will help make the information superhighway a reality.”22 These words proved to be empty. The prospect of a big telephone monopoly merging with a big cable monopoly raised objections in Washington. Telephone and cable companies were supposed to be competing to build the information superhighway, not joining together. Bell Atlantic, when it inspected Tele-Communications Inc.’s books, also had second thoughts. TCI wasn’t the high-technology colossus that its mercurial boss, John Malone, claimed it was. In reality, it was a sprawling assembly of aging cable systems, most of which needed heavy investment if they were to provide interactive television service. In early 1994, Bell Atlantic called off the merger. It wouldn’t be building the information superhighway after all.

Meanwhile, down in Florida, the launch of the Full Service Network was being repeatedly delayed. The set-top boxes and the network server didn’t work properly together, and the launch date slipped from the beginning of 1994 to the end of the year. When the experiment did finally get up and running, only a handful of viewers were online. In the middle of 1995, there were still fewer than fifty homes on the network, and many of the promised services, such as news on demand, online banking, and home delivery of fast food, remained unavailable. Finally, at the start of 1996, four thousand homes were connected, and viewers were given the option to order pizza. Even then, things didn’t go as planned. Video-on-demand proved popular, but other services languished, and technical problems continued. Time Warner started to distance itself from the Full Service Network, describing it as just one of several options for interactive television that the firm was pursuing. In May 1997, when the Orlando network had been fully operational for little more than a year, Time Warner announced its closure. The media giant wouldn’t say how much it had spent, but some estimates put the total cost at more than $150 million. One by one, the other interactive television trials were also shut down. By then, though, hardly anybody noticed. Wall Street and Silicon Valley had finally realized that the information superhighway already existed and it was called the Internet. Even George Gilder had caught on.