6

The Internet and the World Wide Web

As these events were unfolding in Silicon Valley, the ARPANET continued expanding. The nineteen nodes connected in 1971 doubled within two years. By 1977, the year the Apple II was introduced, there were about sixty nodes. (A “node” could be a computer connected to the network or a simple terminal, such as the Teletype Model ASR-33.)1 The Stanford laboratory where Douglas Engelbart invented the mouse was one of the first to have an ARPANET connection by 1970, and the Xerox Palo Alto Research Lab had a connection by 1979. In 1983 ARPANET switched from the original protocol for routing packets to a new set that was better able to handle the growing network. The new protocol suite was divided into two parts: the Transmission Control Protocol, to manage the assembly of packets into messages and ensure that the original message was received, and the Internet Protocol, to pass packets from one node to another.2 The combination, TCP/IP, was primarily the work of two scientists, Vint Cerf and Robert Kahn, and it remains the basis of the Internet to this day.
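
As a rough illustration of that division of labor, here is a minimal sketch in Python (my own example, not the 1983 protocol code; the host name is a placeholder). The application opens a TCP connection and simply writes to and reads from a reliable stream; the movement of the individual packets by IP happens below this code, inside the operating system and the network.

# Minimal sketch of the TCP/IP division of labor, seen from an application.
# TCP presents a reliable byte stream; IP carries the packets node to node,
# invisibly to this program.
import socket

HOST = "example.org"    # placeholder host; any reachable web server would do
PORT = 80

with socket.create_connection((HOST, PORT), timeout=10) as conn:    # TCP sets up the connection
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")    # write to the stream
    reply = conn.recv(1024)                                          # read the reply
    print(reply.decode("ascii", errors="replace").splitlines()[0])   # e.g. "HTTP/1.0 200 OK"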

This decision to split the protocol was based in part on a need to ensure robustness in a military environment. The initial ARPANET was designed to ensure reliable communications during a time of international crisis or even nuclear war—recall the November 1962 conference that J. C. R. Licklider attended right after the Cuban missile crisis. One of the originators of the concept of packet switching, Paul Baran of the RAND Corporation, explicitly stated the need to communicate during a crisis as a goal. The modern Internet evolved from those initial decisions made when the ARPANET was being built. The Internet is much more than that, of course. It is a social and political as well as a technical entity. It relies on TCP/IP, but nearly everything else about it is different.

ARPANET nodes were connected by communication lines leased from AT&T or other commercial suppliers at speeds of about 50,000 bits per second. Those lines gave way to microwave radio and satellite links. Traffic is now carried mainly on fiber-optic lines with orders of magnitude greater speed. Likewise, the mainframes and minicomputers of the ARPANET evolved into massive server farms consisting of large arrays of microprocessor-based systems similar to desktop personal computers. The mesh topology of the early network evolved into a set of high-speed “backbones” crossing the globe, with a hierarchy of subnetworks eventually connecting personal devices at the individual or office level. Although the Internet is truly a global phenomenon, the east-to-west backbones across the United States dominate traffic. The topology of this network dovetailed with an innovation not foreseen by the ARPANET pioneers: local area networks, especially Ethernet. Outside the home, users access the Internet through a desktop computer or workstation connected to Ethernet, which is connected to a regional network and then to a backbone.

The governance of the Internet has also evolved. In 1983 the Department of Defense established a network for its internal use (MILNET). It turned over the management of the rest of the network to the National Science Foundation (NSF), which in turn let contracts to commercial suppliers to build a network for research and academic use. By this time, commercial networks using packet switching began to appear. Within a few years the “Internet,” as it was now being called, was fully open to commercial use.3 Since about 1995, the U.S. Department of Commerce has held authority for Internet governance, including the assignment of names, although Commerce waited until 2005 to state this formally.4

The main difference between the ARPANET and the Internet is the social and cultural component. Once again the forces that drove the personal computer phenomenon, coexisting alongside the ARPA-funded work, come into play. Early adopters of PCs used them for games and, later, spreadsheets and word processing, but not for communicating. Nevertheless, the PC had the potential to be networked. Like everything else about PCs, networking required effort by PC owners. Networking was not as well publicized as other applications, but evidence of its significance can be found in some of the publications that appeared in the early days of personal computing. Beginning in 1977, the first of many editions of The Complete Handbook of Personal Computer Communications, by Alfred Glossbrenner, appeared. The author argued that those who were not taking advantage of the many online services then available were missing the most important dimension of the entire personal computer revolution.5 He listed dozens of databases that users could access over their home telephone lines. These databases began as specialized services for corporate or professional customers and were now being made available to individuals to amortize the costs of establishing them and broaden their market. It was a positive feedback cycle: as more people accessed these services, the providers saw a reason to offer more types of information, which in turn spurred more customers. Glossbrenner was not alone in his evangelism. Alan Kay, of the Xerox Palo Alto Research Center, remarked that “a computer is a communications device first, second, and third.” And Stewart Brand, of the Whole Earth Catalog, stated, “‘Telecommunicating’ is our founding domain.”6 In 1977, however, the PC was not first and foremost a communications device. It was just barely a computer at all. Just as it took visionaries like Steve Jobs and Steve Wozniak to make the PC a viable machine, it also took visionaries like Brand, Glossbrenner, and Kay to make telecommunicating a reality for those outside the privileged world of the ARPANET.

To get a PC online, an individual connected a modem, which translated computer signals into audio tones the telephone network could carry, and vice versa. The connection to the phone network was made by placing the handset into an acoustic cradle; later this was replaced by a direct wire. The user then dialed a local number and connected to a service. In addition to commercial services, one could dial into a personal local “bulletin board,” which might be nothing more than a personal computer, augmented with a hard disk drive and run by a neighbor. If the service was not located in the local calling zone, the user would be reluctant to dial a long-distance number given the rate structure of long-distance calls in those days of the AT&T monopoly. Bulletin boards got around this restriction by saving messages or files and forwarding them to distant bulletin boards late at night, when calling rates were lower. Commercial services took another route: they established local phone numbers in metropolitan areas and connected the lines to one another over their own private network. AT&T made no distinction between the sounds of a voice and the staccato tones of data sent over the phone, so a local data connection for an individual appeared to be “free.” The services established different pricing schemes. For example, CompuServe charged $19.95 for an initial setup and one hour of connection to its service, plus an additional $15.00 an hour during business hours, dropping to $12.50 an hour on evenings and weekends. The Source charged a $10.00 monthly fee, plus an hourly connect charge that depended on the connection speed, and so on.7 NEXIS charged $50.00 a month, plus $9.00 to $18.00 per search during business hours (this service was obviously not intended for casual, personal use).8 Most of these services maintained a central database on mainframe computers.9 The topology of these services was that of a hub and spokes, harking back to the time-sharing services from which they descended.

Years before the term cyberspace became fashionable, Glossbrenner recognized that the Source, operating out of northern Virginia, was creating a new realm of social interaction, one that Licklider envisioned only among scientists and the military. Every day, as millions of people look at and update their Facebook accounts, the Source lives on. After its demise in 1989, Glossbrenner noted, “It was the system that started it all. It made the mistakes and experienced the successes that paved the way for those who followed. Above all, it contributed a vision of what an online utility could and should be.”10

The Source was the brainchild of William von Meister, whose ideas ran ahead of his ability to implement them. In 1979, von Meister was pushed out of the service he founded.11 Undeterred, he founded another service to deliver songs to consumers over the phone, and then another company, Control Video Corporation, whose intent was to allow owners of Atari game consoles to play interactively with others online.12 The service attracted the notice of Steve Case, a marketer from Pizza Hut. Case was converted after logging on with a Kaypro computer from his apartment in Wichita, Kansas. He joined Control Video and tried to revive it. In 1985, with funding from a local Washington entrepreneur, he formed Quantum Computer Services, later renamed America Online (AOL).13 By the late 1990s it connected more individuals to the Internet—as many as 30 million—than any other service.

AOL was not alone. In 1984 Prodigy was founded, with joint support from CBS, IBM, and Sears. It pioneered the use of computer graphics at a time when PCs ran the text-only DOS operating system and when modems had very low speeds. Prodigy preloaded graphics onto the user’s computer to get around those barriers. The graphics not only made the service more attractive; they also gave advertisers a way to place ads before subscribers, and those ads offset most of the costs. Thus, Prodigy anticipated the business model of the World Wide Web. Prodigy suffered from its users’ heavy use of e-mail and discussion forums, which used a lot of connect time but did not generate ad views. The service also attempted to censor the discussions, which led to a hostile reaction from subscribers. America Online had a more freewheeling policy in its chat rooms, appointing volunteers who monitored the discussions with a very light touch of censorship. Many of the chat rooms focused on dating, meeting partners, or flirting (helped by the fact that one’s physical appearance did not factor into the discussions, and also that one could disguise one’s gender or sexual preference while chatting). As with Facebook twenty years later, a major reason for AOL’s success was hidden in plain sight.

AOL’s business model, if it was understood by the NSF at all, would not have been something the NSF would have advertised. One must be constantly reminded that these efforts by Glossbrenner, Brand, and the others were operating independently of the NSF-sponsored Internet. For example, Stewart Brand’s path-breaking Whole Earth Software Catalog, published in 1984, did not mention the Internet at all.14 Neither did Glossbrenner in the early editions of his book, and in later editions he mentions “the internet” (lowercase) as one of many networks then in existence.

Halfway between the Internet and these personal connections were several networks that also contributed to the final social mix of cyberspace. One was BITNET, built in 1981 to connect IBM mainframes.15 The service was often the first entree to networking for liberal arts students and faculty, who otherwise had no access.16 One of its features was the “Listserv”: a discussion forum similar to those provided by the Source and CompuServe and still in use today. Another service, Usenet, began the same way in 1980, as an interconnection among computers that used the Unix operating system. Unix, developed at Bell Laboratories, had an enthusiastic base of programmers among minicomputer users. The name was a subtle jab at the operating system Multics, which had been developed for Project MAC and had experienced problems getting into production. Usenet was not at first connected to the Internet but had a large impact nonetheless. Thanks to ARPA funding, the TCP/IP protocols were bundled into a distribution of Unix, which became a standard for Internet software by the mid-1980s. Eventually personal computers using the Microsoft Windows operating system could also connect to the Internet, but people writing networking software had to be fluent in Unix, and they spent a lot of time discussing programming and related technical topics on Usenet groups. Usenet groups tended to be more freewheeling than those on BITNET, reflecting the subculture of Unix programmers.

The Role of the National Science Foundation

The NSF became involved in networking because it wanted to provide access to supercomputers: expensive devices that were to be installed in a few places with NSF support. By 1986 the NSF linked five supercomputer centers, and it made three crucial decisions that would affect later history. The first was to adopt the TCP/IP protocols promulgated by ARPA; although this seems obvious in hindsight, it was not at the time.17 The second was to create a general-purpose network, available to researchers in general, not just to a specific discipline. The third was to fund the construction of a high-speed backbone, to which not only these centers but also local and regional networks would connect.18 In 1987 the NSF awarded a contract to replace the original backbone with a new one, at a speed known as T1: 1.5 million bits per second (Mbps). In 1992 the NSFNET was upgraded to 45 Mbps—T3. (Compare these speeds to the few hundreds of bits per second available to home computer users with modems connected to their telephones.)
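
To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python (the one-megabyte file size and the 300 bits-per-second home modem rate are chosen purely for illustration):

# Rough transfer times for a one-megabyte file at the speeds mentioned in the text.
FILE_BITS = 1_000_000 * 8    # one megabyte, expressed in bits

rates_bps = {
    "home modem (300 bps)": 300,
    "ARPANET leased line (50,000 bps)": 50_000,
    "NSFNET T1 backbone (1.5 Mbps)": 1_500_000,
    "NSFNET T3 backbone (45 Mbps)": 45_000_000,
}

for name, rate in rates_bps.items():
    print(f"{name:35s} {FILE_BITS / rate:10.1f} seconds")
# The modem takes roughly seven and a half hours; T1 about 5 seconds; T3 well under a second.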

One of the major recipients of these contracts was MCI, which to this day (now as a subsidiary of Verizon) is the principal carrier of Internet backbone traffic. MCI descended from a company founded in 1963 to provide a private microwave link between Chicago and St. Louis. AT&T fought MCI at every step, and only in 1971 was MCI able to offer this service. Having fought that battle and won, MCI changed course. The company saw that voice traffic was becoming a low-profit commodity, while packet-switched data traffic was growing exponentially, so it began to focus on providing backbone service to data networks. Contractors delivered a T1 backbone to the NSF by mid-1988. By 1990 the net connected about 200 universities, as well as other government networks, including those operated by NASA and the Department of Energy. It was growing rapidly. BITNET and Usenet established connections, as did many other domestic and several international networks. With these connections in place, the original ARPANET became obsolete and was decommissioned in 1990.

Commercial networks were allowed to connect to the backbone, but that traffic was constrained by what the NSF called an “acceptable use policy.” Congress could not allow the NSF to support a network that others were using to make a profit. For-profit companies could connect to the network but not use it for commercial purposes.19 In 1988 the NSF allowed MCI to connect its MCI e-mail service, initially for “research purposes”—to explore the feasibility of connecting a commercial mail service.20 That decision was probably helped along by Vint Cerf, who had worked on developing MCI Mail while employed there. The MCI connection gave its customers access to the growing Internet, and not long after, CompuServe and Sprint got a similar connection. Under the rubric of “research,” a lot could be accomplished.

The topology of the NSF network encouraged regional networks to connect to the NSF’s backbone. That allowed entrepreneurs to establish private networks and begin selling services, hoping to use a connection to the backbone to gain a national and eventually a global reach. The NSF hoped that the acceptable use policy would encourage commercial entities to fund the creation of other backbones and allow the NSF to focus on its mission of funding scientific research. By 1995 all Internet backbone services were operated by commercial entities. By then the acceptable use policy had been relaxed, a milestone contained in an amendment to the Scientific and Technology Act of 1992, which pertained to the authorization of the NSF. The legislation passed and was signed into law by President George H. W. Bush on November 23, 1992.21 Paragraph g reads, in full:

In carrying out subsection (a) (4) of this section, the Foundation is authorized to foster and support access by the research and education communities to computer networks which may be used substantially for purposes in addition to research and education in the sciences and engineering, if the additional uses will tend to increase the overall capabilities of the networks to support such research and education activities [emphasis added].22

With those three words, “in addition to,” the modern Internet was born, and the NSF’s role in it receded.23

The World Wide Web

For the layperson, the Internet is synonymous with a program, introduced in 1991, that runs on it: the World Wide Web. The progression of networking helps explain the confusion. The ARPANET was led by military people who wondered at times whether even personal e-mail messages would be permitted over it. Commercial airline pilots and air traffic controllers, for example, are forbidden to use their radios for anything except the business of flying the aircraft when landing or taking off. That was the model that ARPANET funders had in mind. Companies like IBM and DEC had their own proprietary networks, which were used mainly for commercial purposes. Hobbyists who ran networks from their homes embraced the same idealism that drove the personal computer. The Internet subsumed all of those models, in part by virtue of its open protocols, lack of proprietary standards, and ability to interconnect (hence the name) existing networks of various designs. The Web, developed by Tim Berners-Lee and Robert Cailliau at the European Council for Nuclear Research (CERN) near Geneva, continued this trend by allowing diverse kinds of information to be shared seamlessly over the Internet.24 The software was, and remains, free.

Berners-Lee described his inspiration as coming from the diverse groups of physicists at CERN who would meet randomly at strategic places in the hallways of the center and exchange information with one another. Computer networks had places for a structured exchange of information, but they lacked that serendipity, which no one can force but which Berners-Lee felt he could facilitate. He was aware of a concept known as hypertext, or nonlinear writing, that Vannevar Bush had hinted at in his 1945 Atlantic essay, and which Apple had implemented as a stand-alone program, HyperCard, for its Macintosh line of computers. Doug Engelbart had developed a system that used such texts, selected with his invention, the mouse. Hypertext had also been promoted by Ted Nelson, a neighbor of Stewart Brand in northern California and author of the self-published manifesto Computer Lib. As he saw the growing amounts of data on the Internet, Berners-Lee explored a way of porting those concepts to the online world. His invention had three basic components. The first was a uniform resource locator (URL), which took a computer to any location on the Internet—across the globe, down the hall, or even on one’s own hard drive—with equal facility: a “flat” means of access. The second was a protocol, called hypertext transfer protocol (http), which rode on top of the Internet protocols and facilitated the exchange of files from a variety of sources, regardless of the machines on which they resided. The third was a simple hypertext markup language (HTML), a subset of a formatting language already in use on IBM mainframes. That HTML was simple to learn made it easy for novices to build Web pages. With administrative and technical support from Robert Cailliau, the system was working on machines at CERN on Christmas Day 1990, and the following year it began to spread around the globe.25
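
To make the three components concrete, here is a small illustrative sketch in Python (the address, file names, and page content are invented for the example; this is not CERN's original code). It pulls a URL apart into its pieces, prints the text of the HTTP request a browser would send for it, and shows the sort of bare-bones HTML page an early author might have written by hand.

# Illustrative sketch of the Web's three components: URL, HTTP, and HTML.
from urllib.parse import urlparse

# 1. A uniform resource locator names a resource anywhere on the Internet.
url = "http://info.example.org/physics/seminars.html"    # invented address
parts = urlparse(url)
print(parts.scheme, parts.netloc, parts.path)             # http info.example.org /physics/seminars.html

# 2. HTTP: the simple text request a browser would send for that resource,
#    riding on top of a TCP/IP connection to the server.
request = f"GET {parts.path} HTTP/1.0\r\nHost: {parts.netloc}\r\n\r\n"
print(repr(request))

# 3. HTML: a minimal page of the kind a novice could write by hand.
page = """<html>
  <head><title>Seminar Schedule</title></head>
  <body>
    <h1>High-Energy Physics Seminars</h1>
    <p>Next week's talks are on the
       <a href="http://info.example.org/physics/agenda.html">agenda</a> page.</p>
  </body>
</html>"""
print(page)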

Berners-Lee also wrote a program, called a browser, that users installed on their computers to decode and properly display the information transmitted over the Web (the term may have come from Apple’s HyperCard). In 1992, as the Web began spreading across university and government laboratories in the United States, a group at an NSF-sponsored supercomputer center in Illinois took the browser concept further, adding rich graphics capabilities and integrating it seamlessly with the mouse and with icons. That browser was called Mosaic; its creators later moved to Silicon Valley and founded a company, Netscape, to market a similar commercial version of it. Microsoft would later license another descendant of Mosaic for its own browser, Internet Explorer. One significant feature of Netscape’s browser that was not present in others was a way to transmit information, especially credit card numbers, securely by encrypting the data stream. The method, called Secure Sockets Layer (SSL), opened up the Internet to commercial transactions. That phenomenon was soon dominated by firms like Amazon and eBay, both launched in 1995. Another feature that Netscape introduced was a way of tracking a user’s interaction with a Web site over successive screens, an obvious feature but something not designed into the Web architecture. Netscape’s browser did so by means of an identifying number called a “cookie” (the name may have come from the television program Sesame Street). When Netscape made an initial public offering of its stock in August 1995, the resulting frenzy set off a bubble that made all the other Silicon Valley doings pale by comparison. The bubble eventually burst, but in its wake, the Web had established itself as not only an information service but also a viable method of doing business.
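
A minimal sketch of how a cookie ties successive screens together (the session identifier and its attributes are invented; only the standard Set-Cookie and Cookie header names are real): the server hands the browser an identifier in one response, and the browser returns it with every later request so the site can recognize the same visitor.

# Illustrative sketch of cookie-based session tracking; the values are made up.
from http.cookies import SimpleCookie

# 1. The server's first response assigns the visitor an identifier.
set_cookie_header = "session_id=ab12cd34; Path=/; Max-Age=3600"

# 2. The browser stores the cookie it was handed...
jar = SimpleCookie()
jar.load(set_cookie_header)

# 3. ...and sends it back with each subsequent request, letting the server
#    string the visitor's successive page views into a single session.
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print("Cookie:", cookie_header)    # Cookie: session_id=ab12cd34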

By 1995 many American households were gaining access to the Internet. Initially the process was difficult. It required loading communications and other software on one’s computer, establishing an account with an Internet service provider, and accessing the network by dialing a number on one’s voice telephone line. As the decade progressed, the process became easier, with much of the software loaded onto a personal computer at purchase. In most urban areas, dial-up telephone connections were replaced by faster digital subscriber line (DSL) service or by a connection over the cable television wire. Toward the end of the 1990s, a new method of access appeared, called Wi-Fi, which began as an ad hoc system that took advantage of little-used portions of the electromagnetic spectrum. Later it was sanctioned by the Institute of Electrical and Electronics Engineers (IEEE), which published specifications for it as Standard 802.11b. In 2003 Wi-Fi got a further boost from Intel, which developed circuits called “Centrino,” with embedded Wi-Fi capability. Laptop computers, by then replacing desktop PCs for many users, would thus have wireless access built in. By design, Wi-Fi had a limited range. Many coffee shops, restaurants, airports, and other public spaces installed the service, either free or for a fee.26

Although Wi-Fi was popular, efforts to install it throughout metropolitan areas faltered, as did efforts to develop a faster and more powerful successor to 802.11b. There were several reasons for that, among them the rise of cellular phone networks that allowed users access to e-mail and other limited Internet services over their cell phone frequencies.

The Smart Phone

The rise of cell phone text communication, led by the Canadian company Research in Motion and its phone, the BlackBerry, is another case of history’s stubborn refusal to follow a logical, linear path. The inventors of cellular telephone service knew they were advocating a revolutionary change in the way telephones connected to one another, but they did not foresee the impact their invention would have on computing. Mobile telephone service had been available for personal automobiles as far back as the 1950s, but with severe limitations. A phone call would use a single-frequency channel, of which few were allocated for such use, and the radios had to be powerful enough to cover the geographical area that the automobile might traverse. Just as packet switching replaced the dedicated circuits of traditional telephone connections, so too did cellular service break up a region into small areas, called cells, with the phones in each cell using frequencies that could be reused, without interference, by phones in other cells, since all of them operated at low power. The technique worked by transferring, or handing off, a call from one cell to another as the caller moved. This technique obviously required a complex database of subscribers, a way to measure the strength of each phone’s signal relative to nearby towers, and a switching method to hand over the call as a neighboring cell came into play.
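
The handoff logic can be sketched in a few lines of Python (a toy illustration under my own assumptions: the tower names, the signal units, and the hysteresis margin are invented, and real cellular systems are far more elaborate). The network tracks which tower is serving each phone, compares signal strengths as the phone moves, and hands the call over only when a neighboring tower becomes decisively stronger.

# Toy illustration of a cellular handoff decision; not a real protocol.
# Signal strengths are in arbitrary units. The margin keeps the call from
# "ping-ponging" between two towers of nearly equal strength.

HANDOFF_MARGIN = 5   # a neighbor must beat the serving tower by this much

def choose_tower(current_tower, measurements):
    """Return the tower that should carry the call after this measurement cycle."""
    best_tower = max(measurements, key=measurements.get)
    current_strength = measurements.get(current_tower, float("-inf"))
    if best_tower != current_tower and measurements[best_tower] >= current_strength + HANDOFF_MARGIN:
        return best_tower   # hand the call over to the stronger neighbor
    return current_tower    # otherwise stay put

# A phone moving away from tower A and toward tower B:
serving = "A"
for readings in [{"A": 60, "B": 40}, {"A": 50, "B": 52}, {"A": 35, "B": 58}]:
    serving = choose_tower(serving, readings)
    print(readings, "->", serving)   # stays on A, stays on A, then hands off to B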

The theory for such a service was developed at Bell Labs after World War II, but it was not practically implemented until years later. One of the pioneering installations came in 1969 on the rail corridor between Washington, D.C., and New York (now Amtrak’s Northeast Corridor), where passengers on board a train could make calls that were handed off from one tower to another as the train traveled down the track. Further details of this history are beyond the scope of this narrative, but it is widely agreed that the “first” call made on a handheld phone in the United States was by Martin Cooper, of Motorola, on April 3, 1973, to a scientist at Bell Labs.27 Motorola’s aggressive promotion of the service led to its dominance of the market for many years. The relationship between these stories and the evolution of computing is similar to that of the Morse telegraph and the Teletype to computing’s early history: continuous convergence of different base technologies. The Internet has been described as a convergence of the telegraph and the traditional mainframe computer. Beginning around 2000, there was a second convergence, this time of personal computing with the radio and the telephone.

The BlackBerry gave access to one’s corporate e-mail account and allowed the user to send and receive brief messages. It soon became addictive for executives. Another trigger for the convergence of computer and telephone came from a Silicon Valley company that had more modest aspirations. In 1996 Palm, Inc. introduced a personal organizer, soon dubbed a “personal digital assistant,” that replaced the paper address book, to-do list, calendar, and notebook that people carried in their pockets. The Palm Pilot was not the first of these devices, but thanks to a careful design by one of its founders, Jeff Hawkins, it had a simple, easy-to-understand user interface that made it instantly popular. The Palm’s user-interface designers, like Martin Cooper of cell phone fame, were influenced by the controls and especially the “Communicator” that Captain Kirk used in the original Star Trek television series.28 Captain Kirk’s Communicator had a flip-open case, a design that was popular among cell phones for a while (it has since fallen out of favor).

Not long after Palm established itself, Hawkins and two other employees from the company broke off and founded a competitor, Handspring, that offered similar devices. In 2002 Handspring offered a PDA with cell phone capabilities (or perhaps it was the other way around—a cell phone with PDA capabilities). Just as the Pilot was not the first digital organizer, the Handspring Treo was not the first smart phone. But it integrated the functions of phone and computer better than any other product, and it set off the next wave of convergence in computing. The convergence was triggered not only by advances in semiconductor and cell phone technology but also by a focus on the user’s interaction with the device—a theme that goes back to World War II. Before long, smart phones were introduced with Wi-Fi as well as cellular reception, global positioning system (GPS) navigation, cameras, video, even gyroscopes and accelerometers that used tiny versions of components found in missile guidance systems (see figure 6.1).

Figure 6.1

The Handspring Treo. The Treo inaugurated the class of devices known in the United States as the smart phone: a convergence of a host of digital technologies in a handheld portable device. Credit: Copyright © Hewlett-Packard Development Company, L.P. Reproduced with permission.

Handspring and Palm later merged but were unable to hold on to their lead. In 2007 Apple introduced its iPhone, setting off a frenzy similar to the dot-com bubble of the 1990s that continues to this day. Apple followed the iPhone a few years later with the iPad, a tablet computer that also operated on cell phone as well as Wi-Fi networks. Computer companies had been offering tablet computers for years, but the iPad has been far more successful. Zeno’s paradox prevents telling this story further—anything I could say about Apple and its product strategy will likely be obsolete by the time it appears in print. The convergence is real, however, and it is driven as much by Apple’s attention to design (following the lead of Palm and Handspring) as by advances in semiconductor electronics.

When discussing this convergence, one must acknowledge that the smart phone, like the World Wide Web before it, is not without its flaws. These portable devices operate on both cellular and Wi-Fi networks, which are not interchangeable. Thus the devices violate the basic goals of interoperability espoused by Vint Cerf, Robert Kahn, and Tim Berners-Lee, the pioneers of the Internet and the World Wide Web, who worked to create a system with open standards accessible to all. Smart phones offer wireless connections where Wi-Fi is unavailable. But that comes at a price: higher costs, a small display, no mouse, an awkward method of entering data, and (in the United States) a lock-in to a specific provider that takes steps to prevent migration to a competitor.

This evolution of the smart phone illustrates the significance of the microprocessor in the history of technology. An iPhone or iPad is a device built around a microprocessor. It accepts streams of bits as its input and sends out streams of bits as its output. Programmers determine the relationship between the two streams. Incoming bits include digitized radio signals, sounds, pixels from a camera, readings from a GPS receiver or accelerometer, text or equations typed on a touch screen—“anything” in the sense of a Universal Turing Machine. The output stream of bits can be voice, pictures, movies, sounds, text, mathematical symbols—again, anything.

Social Networking and Google

Smart phones addressed one deficiency of the Internet’s connectivity. The initial Web design had several other deficiencies, which were addressed by a variety of approaches between its debut in 1991 and the turn of the millennium. Netscape’s introduction of “cookies” and the Secure Sockets Layer were among the first. Tim Berners-Lee described how he hoped that it would be as easy to write to the Web as it was to read pages on it. Although the simplicity of HTML allowed a computer-savvy person to create a Web site, it was not as easy as surfing through the Web with a mouse. By the late 1990s, that imbalance was addressed by a specialized Web site called a “Web log,” soon shortened to the unfortunate term blog. Online postings appeared from the earliest days of local bulletin board systems (BBS). CompuServe offered something similar, which it compared to the freewheeling discussions on then-popular citizens band radio. Programs that automated or simplified this process for the Web began to appear around 1997, with blogger.com one of the most popular (purchased by Google in 2003). A typical blog consisted of a space where one could type in text, in reverse chronological order with the newest on top, and a column on one side that contained links to other Web sites of interest. The key breakthrough was that the blogger did not have to compile a program or otherwise drop down to HTML programming. The most successful blogs gained large followings and spanned the spectrum from celebrities, journalists, and pundits to ordinary folks who had something to say. A good blogger posted concise, well-written comments as frequently as possible, so that Web surfers would get in the habit of checking the blog every day. With the advent of social networking sites like Facebook, blogs lost some of their appeal, although they remain popular; the best of them are a threat to traditional print-based journalism.

Another deficiency of the Web was a direct consequence of its most endearing feature: the flatness of its architecture, that is, the way that a URL could find data on one’s own hard drive as easily as on a server located on another continent. With the world literally at one’s fingertips (or mouse click), how does one locate information? Unlike many of the proprietary closed networks that preceded it, the Web came with no indexing scheme. That gap was soon filled by Web sites called portals and by search engines, which traversed the flatness of the Web and provided Web surfers a guide to the information contained in it. These evolved to be financially supported by advertisements, and during the inflation of the dot-com bubble in the late 1990s, dozens of such sites were founded. Most failed during the stock market crash of 2000–2002. A few deserve mention, especially Yahoo! and Google, which became among the most visited sites on the Web.

Before the World Wide Web prevailed on the Internet, AOL had pioneered the notion of a network that guided its subscribers to content in a friendly way. AOL gradually merged with the Internet and dropped its subscription-only access, but its controlled access had followers, as well as detractors, who derided AOL as “the Internet with training wheels.” Yahoo! (the company name included the exclamation point) was started in 1994 by two Stanford University students, Jerry Yang and David Filo, who initially compiled a guide to the Web manually. They took a crucial step that year by funding the site with advertisements, harking back to the Prodigy service. As the Web grew, it became less and less practical to continue indexing it manually, yet automated indexing left much to be desired: it had little ability to judge whether the presence of a keyword meant that the Web site containing it actually held useful information pertaining to that word.

The sites of Yahoo! and its competitors evolved to be more of a guide to the Web than just a search function, becoming what the press called a portal to the Web. As AOL and Prodigy had done in the pre-Web days, a visitor to a portal could find news, e-mail, discussion groups, winning lottery numbers, sports scores, and endless other information all in one place. The ease with which one could set up such a portal, drawing on content that others created, led to competitors, many of which were given absurdly high valuations by Wall Street in the late 1990s. After 2000, Yahoo! survived, while most of the others disappeared. AOL merged with the media giant Time-Warner in 2000 (actually it bought Time-Warner using its inflated stock valuation), but the merger did not work out. AOL has survived, though as a much smaller entity.

The emphasis on creating these portals did not fully solve the problem of searching for information. The portals included a search bar, but search was not among their priorities. One reason was that if a user found a site using a portal’s search function, he or she would likely leave the portal and go to the sites found by the search, which meant this person was less likely to see the advertisements.29 Into that breach stepped a few pure search engines. One of the most interesting of these was AltaVista, formed in 1995 as a subsidiary of the Digital Equipment Corporation, in part to take advantage of the high processing speeds of its own “Alpha” microprocessor. For a few brief years, AltaVista was the search engine that the most savvy Internet surfers preferred. It had a clean user interface, an ability to deliver results quickly, and a focus on search rather than other things. Digital Equipment, always a hardware company, failed to see that AltaVista was an extremely valuable property in its own right, not just as a way to sell Alpha chips, and as a result AltaVista faded as competitors entered the field.30

Google, like Yahoo!, was founded by two Stanford University students, Larry Page and Sergey Brin. Stanford’s location in the heart of Silicon Valley has given it an edge over MIT and other universities as a place where ideas like these emerge, although there are many other factors as well, especially the relations Stanford has developed with venture capitalists located nearby. One factor seldom mentioned in the histories of Google is the presence on the Stanford faculty of Donald Knuth, whose book The Art of Computer Programming, Volume Three: Sorting and Searching, remains a classic work on the topic, decades after its publication in 1973.31 The task set before search engines is a difficult one and can never be entirely automated. Histories of Google do mention the mentorship of Terry Winograd, a professor who had done pioneering work in artificial intelligence at the beginning of his career, when he developed a program called SHRDLU that exhibited an understanding of natural-language commands.32 This is worth mentioning because it illustrates how far computing has evolved from its roots in World War II. Computers continue to serve as calculators, with a myriad of applications in science and engineering. Modern high-end supercomputers are the descendants of the ENIAC. Winograd and his colleagues working in artificial intelligence took early steps toward using a computer to process texts. Those steps faltered after initial modest success, but with Google, the efforts bore fruit.

Google surpassed the rival search engines because it consistently delivered better results when a user typed in a query. It ranked results not just by the number of times a term appeared on a Web page, for example, but also by how many other sites linked to that page. That was a Web version of a technique developed for scientific publishing by Eugene Garfield, in which the importance of a scientific paper was judged by how many other papers cited it in footnotes. Google’s algorithm was more complex than that, and major portions of it remain a trade secret. Another factor contributing to Google’s success was one we have encountered again and again: Google presented the user with a clean, simple search screen, devoid of pop-up ads, fancy graphics, and other clutter. Two other Web sites that consistently rank among the most visited, Wikipedia and Craigslist, also have a text-oriented design with few frills. That was a lesson that many Web sites have not learned; their creators apparently cannot resist the temptation to load a page with an angry fruit salad of too many colors, fonts, typefaces, and pop-up ads. These designs may seem a long way from the human factors work done during World War II on antiaircraft fire control devices, but they are its spiritual descendants.
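
A toy sketch of link-based ranking in the spirit of what is described here (this is the well-known PageRank idea in its simplest form, not Google's actual and largely secret algorithm; the four-page link graph is invented): a page's score depends on the scores of the pages that link to it, computed by repeatedly passing score along the links.

# Minimal link-based ranking in the spirit of PageRank; illustrative only.
# The link graph below is invented: each key lists the pages it links to.
links = {
    "home":   ["papers", "blog"],
    "papers": ["home"],
    "blog":   ["home", "papers"],
    "lonely": ["home"],            # a page nobody links to
}

def rank(links, damping=0.85, iterations=50):
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = score[page] / len(outgoing)     # a page splits its score among its links
            for target in outgoing:
                new[target] += damping * share      # and passes it on to the pages it cites
        score = new
    return score

for page, value in sorted(rank(links).items(), key=lambda item: -item[1]):
    print(f"{page:8s} {value:.3f}")   # pages with more (and better-ranked) inbound links score higher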

Facebook and Twitter

As of this writing, many press accounts of computing feel obligated to mention Facebook and Twitter in the same sentence. But the two have little in common other than being currently very popular. Twitter, founded in 2006, is easy to explain. A messaging program that is restricted to a few lines of text, it is the product of the smart phone revolution and a creative response to the phones’ small screens and keyboards. One can use Twitter on a laptop or desktop computer, but with a full-sized screen and keyboard available, blogs and other tools are more effective. A good percentage of the press coverage for Twitter is from journalists who are concerned that the program is going to adversely affect their careers—which may turn out to be true.

Facebook is another story, one with deep roots. Its meteoric rise was the subject of a Hollywood film, The Social Network (2010). One of the most interesting aspects of Facebook’s success was how quickly it eclipsed the rival program MySpace, which as recently as 2008 was considered the established leader in such programs.33 Facebook’s overpowering presence suggests that the traditional view of the Internet and Web (in this context “traditional” means the way it was in 2005) is giving way. Many users log on to Facebook in the morning and leave it in a window (or on a separate screen) all day. It offers a single site for sharing gossip, news, photos, and many of the other offerings found on the Web. It is to the Web as Walmart is to retailing: a single place where someone can do all his or her shopping, under one roof, to the detriment of traditional specialty stores, and even supermarkets and department stores that had already integrated a lot of shopping. What is more, Facebook is beyond the reach of Google’s search tentacles. One cannot locate, say, the enormous number of photographs stored on Facebook without joining and becoming an active member.

Before the Web transformed the Internet, AOL provided one-stop access to similar services, using its private network of servers. After 1995 AOL was compelled to open up its private enclave, and it lost its privileged position as the favored portal to networked information. And before AOL there was Geocities, a Web site that used a geographical metaphor to organize chat rooms, personal Web sites, and other topics. And before Geocities there were bulletin boards and Usenet, with its cadre of Unix-savvy users trading information. Although seldom stated explicitly, a driving force for many of these online communities was sex, as suggested by the obsession Facebook users have with updates to one another’s “status.” Historically this social force ranged from the simple expedient of using the network to find partners (of either sex) to date, socialize with, or marry, to seedier uses, including prostitution and pornography.34 In 1990, as Usenet was establishing categories for discussion about the Unix operating system, an “alt” (for alternate) category was created for topics not generally pertaining to computer programming or the network. Before long, “alt.sex” and “alt.drugs” appeared, prompting one user to state, “It was therefore artistically necessary to create alt.rock-n-roll, which I have also done.”35 The social forces driving AOL and the bulletin boards were the ancestors of the forces driving Facebook, Twitter, and similar programs in the twenty-first century. As with the invention of the personal computer itself, these forces drove networking from the bottom up, while privileged military and academic agencies drove networking from the top down. Today’s world of networked computing represents a collision of the two.