Chapter 2. Listening to Napster

Clay Shirky, The Accelerator Group

Premature definition is a danger for any movement. Once a definitive label is applied to a new phenomenon, it invariably begins shaping—and possibly distorting—people’s views. So it is with the present movement toward decentralized applications. After a year or so of attempting to describe the revolution in file sharing and related technologies, we have finally settled on peer-to-peer as a label for what’s happening.[2]

Somehow, though, this label hasn’t clarified things. Instead, it’s distracted us from the phenomena that first excited us. Taken literally, servers talking to one another are peer-to-peer. The game Doom is peer-to-peer. There are even people applying the label to email and telephones. Meanwhile, Napster, which jump-started the conversation, is not peer-to-peer in the strictest sense, because it uses a centralized server to store pointers and resolve addresses.

If we treat peer-to-peer as a literal definition of what’s happening, we end up with a phrase that describes Doom but not Napster and suggests that Alexander Graham Bell is a peer-to-peer engineer but Shawn Fanning is not. Eliminating Napster from the canon now that we have a definition we can apply literally is like saying, “Sure, it may work in practice, but it will never fly in theory.”

This literal approach to peer-to-peer is plainly not helping us understand what makes it important. Merely having computers act as peers on the Internet is hardly novel. From the early days of PDP-11s and VAXes to the Sun SPARCs and Windows 2000 systems of today, computers on the Internet have been peering with each other. So peer-to-peer architecture itself can’t be the explanation for the recent changes in Internet use.

What have changed are the nodes that make up these peer-to-peer systems—Internet-connected PCs, which formerly were relegated to being nothing but clients—and where these nodes are: at the edges of the Internet, cut off from the DNS (Domain Name System) because they have no fixed IP addresses.

Resource-centric addressing for unstable environments

Peer-to-peer is a class of applications that takes advantage of resources—storage, cycles, content, human presence—available at the edges of the Internet. Because accessing these decentralized resources means operating in an environment of unstable connectivity and unpredictable IP addresses, peer-to-peer nodes must operate outside the DNS and have significant or total autonomy from central servers.

That’s it. That’s what makes peer-to-peer distinctive.

Note that this isn’t what makes peer-to-peer important. It’s not the problem designers of peer-to-peer systems set out to solve, like aggregating CPU cycles, sharing files, or chatting. But it’s a problem they all had to solve to get where they wanted to go.

What makes Napster and Popular Power and Freenet and AIMster and Groove similar is that they are all leveraging previously unused resources, by tolerating and even working with variable connectivity. This lets them make new, powerful use of the hundreds of millions of devices that have been connected to the edges of the Internet in the last few years.

One could argue that the need for peer-to-peer designers to solve connectivity problems is little more than an accident of history. But improving the way computers connect to one another was the rationale behind the 1984 design of DNS, and before that the Internet Protocol (IP), and before that the Transmission Control Protocol (TCP), and before that the Net itself. The Internet is made of such frozen accidents.

So if you’re looking for a litmus test for peer-to-peer, this is it:

1. Does it treat variable connectivity and temporary network addresses as the norm?
2. Does it give the nodes at the edges of the network significant autonomy?

If the answer to both of those questions is yes, the application is peer-to-peer. If the answer to either question is no, it’s not peer-to-peer.

Another way to examine this distinction is to think about ownership. Instead of asking, “Can the nodes speak to one another?” ask, “Who owns the hardware that the service runs on?” The huge preponderance of the hardware that makes Yahoo! work is owned by Yahoo! and managed in Santa Clara. The huge preponderance of the hardware that makes Napster work is owned by Napster users and managed on tens of millions of individual desktops. Peer-to-peer is a way of decentralizing not just features, but costs and administration as well.

Up until 1994, the Internet had one basic model of connectivity. Machines were assumed to be always on, always connected, and assigned permanent IP addresses. DNS was designed for this environment, in which a change in IP address was assumed to be abnormal and rare, and could take days to propagate through the system.

With the invention of Mosaic, another model began to spread. To run a web browser, a PC needed to be connected to the Internet over a modem, with its own IP address. This created a second class of connectivity, because PCs entered and left the network cloud frequently and unpredictably.

Furthermore, because there were not enough IP addresses available to handle the sudden demand caused by Mosaic, ISPs began to assign IP addresses dynamically. They gave each PC a different, possibly masked, IP address with each new session. This instability prevented PCs from having DNS entries, and therefore prevented PC users from hosting any data or applications that accepted connections from the Net.

For a few years, treating PCs as dumb but expensive clients worked well. PCs had never been designed to be part of the fabric of the Internet, and in the early days of the Web, the toy hardware and operating systems of the average PC made it an adequate life-support system for a browser but good for little else.

Over time, though, as hardware and software improved, the unused resources that existed behind this veil of second-class connectivity started to look like something worth getting at. At a conservative estimate—assuming only 100 million PCs among the Net’s 300 million users, and only a 100 MHz chip and 100 MB drive on the average Net-connected PC—the world’s Net-connected PCs presently host an aggregate 10 billion megahertz of processing power and 10 thousand terabytes of storage.
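
The arithmetic behind that estimate is worth making explicit. The following is a minimal back-of-envelope sketch in Python, using only the assumptions stated above (100 million PCs, 100 MHz and 100 MB apiece); the figures are illustrative, not measurements.

    # Back-of-envelope check of the estimate above, using only the
    # assumptions stated in the text.
    pcs = 100_000_000          # assumed number of Net-connected PCs
    mhz_per_pc = 100           # assumed average processor speed, in MHz
    mb_per_pc = 100            # assumed average spare disk, in MB

    total_mhz = pcs * mhz_per_pc              # 10,000,000,000 MHz: "10 billion megahertz"
    total_tb = (pcs * mb_per_pc) / 1_000_000  # 10,000 TB: "10 thousand terabytes"

    print(f"{total_mhz:,} MHz of aggregate processing power")
    print(f"{total_tb:,.0f} TB of aggregate storage")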

The launch of ICQ, the first PC-based chat system, in 1996 marked the first time those intermittently connected PCs became directly addressable by average users. Faced with the challenge of establishing portable presence, ICQ bypassed DNS in favor of creating its own directory of protocol-specific addresses that could update IP addresses in real time, a trick followed by Groove, Napster, and NetMeeting as well. (Not all peer-to-peer systems use this trick. Gnutella and Freenet, for example, bypass DNS the old-fashioned way, by relying on numeric IP addresses. United Devices and SETI@home bypass it by giving the nodes scheduled times to contact fixed addresses, at which times they deliver their current IP addresses.)

A run of whois counts 23 million domain names, built up in the 16 years since DNS was introduced in 1984. Napster alone has created more than 23 million non-DNS addresses in 16 months, and when you add in all the non-DNS instant messaging addresses, the number of peer-to-peer addresses designed to reach dynamic IP addresses tops 200 million. Even if you assume that the average DNS host has 10 additional addresses of the form foo.host.com, the total number of peer-to-peer addresses now, after only 4 years, is of the same order of magnitude as the total number of DNS addresses, and is growing faster than the DNS universe today.

As new kinds of Net-connected devices like wireless PDAs and digital video recorders such as TiVo and Replay proliferate, they will doubtless become an important part of the Internet as well. But for now, PCs make up the enormous majority of these untapped resources. PCs are the dark matter of the Internet, and their underused resources are fueling peer-to-peer.

Napster is peer-to-peer because the addresses of Napster nodes bypass DNS, and because once the Napster server resolves the IP addresses of the PCs hosting a particular song, it shifts control of the file transfers to the nodes. Furthermore, the ability of the Napster nodes to host the songs without central intervention lets Napster users get access to several terabytes of storage and bandwidth at no additional cost.

However, Intel’s “server peer-to-peer” is not peer-to-peer, because servers have always been peers. Their fixed IP addresses and permanent connections present no new problems, and calling what they already do “peer-to-peer” presents no new solutions.

ICQ and Jabber are peer-to-peer, because they not only devolve connection management to the individual nodes after resolving the addresses, but they also violate the machine-centric worldview encoded in DNS. Your address has nothing to do with the DNS hierarchy, or even with a particular machine, except temporarily; your chat address travels with you. Furthermore, by mapping “presence”—whether you are at your computer at any given moment in time—chat turns the old idea of permanent connectivity and IP addresses on its head. Transient connectivity is not an annoying hurdle in the case of chat but an important contribution of the technology.

Email, which treats variable connectivity as the norm, nevertheless fails the peer-to-peer definition test because your address is machine-dependent. If you drop AOL in favor of another ISP, your AOL email address disappears as well, because it hangs off DNS. Interestingly, in the early days of the Internet, there was a suggestion to make the part of the email address before the @ globally unique, linking email to a person rather than to a person@machine. That would have been peer-to-peer in the current sense, but it was rejected in favor of a machine-centric view of the Internet.

Popular Power is peer-to-peer, because the distributed clients that contact the server need no fixed IP address and have a high degree of autonomy in performing and reporting their calculations. They can even be offline for long stretches while still doing work for the Popular Power network.

Dynamic DNS is not peer-to-peer, because it tries to retrofit PCs into traditional DNS.

And so on. This list of resources that current peer-to-peer systems take advantage of—storage, cycles, content, presence—is not necessarily complete. If there were some application that needed 30,000 separate video cards, or microphones, or speakers, a peer-to-peer system could be designed that used those resources as well.

It seems obvious but bears repeating: Definitions are useful only as tools for sharpening one’s perception of reality and improving one’s ability to predict the future. Whatever one thinks of Napster’s probable longevity, Napster is the killer app for this revolution.

If the Internet has taught technology watchers anything, it’s that predictions of the future success of a particular software method or paradigm are of tenuous accuracy at best. Consider the history of “multimedia.” Almost every computer trade magazine and technology analyst predicting the rise of multimedia in the early ’90s foresaw a top-down future built of professionally produced CD-ROMs and “walled garden” online services such as CompuServe and Delphi. And then the Web came along and let absolute amateurs build pages in HTML, a language that was laughably simple compared to the tools being developed for other multimedia services.

HTML’s simplicity, which let amateurs create content for little cost and little invested time, turned out to be HTML’s long suit. Between 1993 and 1995, HTML went from an unknown markup language to the preeminent tool for designing electronic interfaces, decisively displacing almost all challengers and upstaging CD-ROMs, as well as online services and a dozen expensive and abortive experiments with interactive TV—and it did this while having no coordinated authority, no central R&D effort, and no discernible financial incentive for the majority of its initial participants.

What caught the tech watchers in the industry by surprise was that HTML was made a success not by corporations but by users. The obvious limitations of the Web for professional designers blinded many to HTML’s ability to allow average users to create multimedia content.

HTML spread because it allowed ordinary users to build their own web pages, without requiring that they be software developers or even particularly savvy software users. All the confident predictions about the CD-ROM-driven multimedia future turned out to be meaningless in the face of user preference. This in turn led to network effects on adoption: once a certain number of users had adopted it, there were more people committed to making the Web better than there were people committed to making CD-ROM authoring easier for amateurs.

The lesson of HTML’s astonishing rise for anyone trying to make sense of the social aspects of technology is simple: follow the users. Understand the theory, study the engineering, but most importantly, follow the adoption rate. The cleanest theory and the best engineering in the world mean nothing if the users don’t use them, and understanding why some solution will never work in theory means nothing if users adopt it all the same.

In the present circumstance, the message that comes from paying attention to the users is simple: Listen to Napster.

Listen to what the rise of Napster is saying about peer-to-peer, because as important as Groove or Freenet or OpenCOLA may become, Napster is already a mainstream phenomenon. Napster has had over 40 million client downloads at the time of this writing. Its adoption rate has outstripped NCSA Mosaic, Hotmail, and even ICQ, the pioneer of P2P addressing. Because Napster is what the users are actually spending their time using, the lessons we can take from Napster are still our best guide to the kind of things that are becoming possible with the rise of peer-to-peer architecture.

The primary fault of much of the current thinking about peer-to-peer lies in an “if we build it, they will come” mentality, where interesting technological challenges of decentralizing applications are assumed to be the only criterion that a peer-to-peer system needs to address in order to succeed. The enthusiasm for peer-to-peer has led to a lot of incautious statements about the superiority of peer-to-peer for many, and possibly most, classes of networked applications.

In fact, peer-to-peer is distinctly bad for many classes of networked applications. Most search engines work best when they can search a central database rather than launch a meta-search of peers. Electronic marketplaces need to aggregate supply and demand in a single place at a single time in order to arrive at a single, transparent price. Any system that requires real-time group access or rapid searches through large sets of unique data will benefit from centralization in ways that will be difficult to duplicate in peer-to-peer systems.

The genius of Napster is that it understands and works within these limitations.

Napster mixes centralization and decentralization beautifully. As a search engine, it builds and maintains a master song list, adding and removing songs as individual users connect and disconnect their PCs. And because the search space for Napster—popular music—is well understood by all its users, and because there is massive redundancy in the millions of collections it indexes, the chances that any given popular song can be found are very high, even if the chances that any given user is online are low.

Like ants building an anthill, the contribution of any given individual to the system at any given moment is trivial, but the overlapping work of the group is remarkably powerful. By centralizing pointers and decentralizing content, Napster couples the strengths of a central database with the power of distributed storage. Napster has become the fastest-growing application in the Net’s history in large part because it isn’t pure peer-to-peer. Chapter 4 explores this theme further.
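
That hybrid of central pointers and decentralized content can be sketched in a few lines of Python. The sketch below is a hypothetical illustration of the pattern, not Napster’s actual protocol or data structures; the class, method, and song names are invented for the example.

    # A sketch of "centralize the pointers, decentralize the content."
    # Hypothetical names; not Napster's real protocol or wire format.

    class SongIndex:
        """The central server's only job: track which peers currently host which titles."""

        def __init__(self):
            self.hosts = {}  # song title -> set of (ip, port) peer addresses

        def register(self, peer_addr, songs):
            # Called when a user connects and shares the titles on their disk.
            for title in songs:
                self.hosts.setdefault(title, set()).add(peer_addr)

        def unregister(self, peer_addr):
            # Called when a user disconnects; their copies drop out of the index.
            for peers in self.hosts.values():
                peers.discard(peer_addr)

        def lookup(self, title):
            # Resolve a title to the peers holding it. The transfer itself happens
            # PC to PC; the MP3 never passes through this server.
            return sorted(self.hosts.get(title, set()))

    index = SongIndex()
    index.register(("203.0.113.7", 6699), ["Song A", "Song B"])
    index.register(("198.51.100.4", 6699), ["Song B"])
    print(index.lookup("Song B"))      # both peers currently sharing "Song B"
    index.unregister(("203.0.113.7", 6699))
    print(index.lookup("Song B"))      # only the peer that is still connected

The point of the sketch is the division of labor: the server holds nothing but pointers that appear and disappear as users connect and disconnect, while the content, storage, and bandwidth all live at the edges.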

Where’s the content?

Napster’s success in pursuing this strategy is difficult to overstate. At any given moment, Napster servers keep track of thousands of PCs holding millions of songs comprising several terabytes of data. This is a complete violation of the Web’s data model, “Content at the Center,” and Napster’s success suggests a rival model: “Content at the Edges.”

The content-at-the-center model has one significant flaw: most Internet content is created on the PCs at the edges, but for it to become universally accessible, it must be pushed to the center, to always-on, always-up web servers. As anyone who has ever spent time trying to upload material to a web site knows, the Web has made downloading trivially easy, but uploading is still needlessly hard. Napster dispenses with uploading and leaves the files on the PCs, merely brokering requests from one PC to another—the MP3 files do not have to travel through any central Napster server. Instead of trying to store these files in a central database, Napster took advantage of the largest pool of latent storage space in the world—the disks of the Napster users. And thus, Napster became the prime example of a new principle for Internet applications: Peer-to-peer services come into being by leveraging the untapped power of the millions of PCs that have been connected to the Internet in the last five years.

While some press reports call the current trend the “Return of the PC,” it’s more than that. In these new models, PCs aren’t just tools for personal use—they’re promiscuous computers, hosting data the rest of the world has access to, and sometimes even hosting calculations that are of no use to the PC’s owner at all, like Popular Power’s influenza virus simulations.

Furthermore, the PCs themselves are being disaggregated: Popular Power will take as much CPU time as it can get but needs practically no storage, while Gnutella needs vast amounts of disk space but almost no CPU time. And neither kind of business particularly needs the operating system—since the important connection is often with the network rather than the local user, Intel and Seagate matter more to the peer-to-peer companies than do Microsoft or Apple.

It’s too soon to understand how all these new services relate to one another, and the danger of the peer-to-peer label is that it may actually obscure the real engineering changes afoot. With improvements in hardware, connectivity, and sheer numbers still mounting rapidly, anyone who can figure out how to light up the Internet’s dark matter gains access to a large and growing pool of computing resources, even if some of the functions are centralized.

It’s also too soon to see who the major players will be, but don’t place any bets on people or companies that reflexively use the peer-to-peer label. Bet instead on the people figuring out how to leverage the underused PC hardware, because the actual engineering challenges in taking advantage of the underused resources at the edges of the Net matter more—and will create more value—than merely taking on the theoretical challenges of peer-to-peer architecture.

The early peer-to-peer designers, realizing that interesting services could be run off of PCs if only they had real addresses, simply ignored DNS and replaced the machine-centric model with a protocol-centric one. Protocol-centric addressing creates a parallel namespace for each piece of software. AIM and Napster usernames are mapped to temporary IP addresses not by the Net’s DNS servers, but by privately owned servers dedicated to each protocol: the AIM server matches AIM names to the users’ current IP addresses, and so on.

In Napster’s case, protocol-centric addressing turns Napster into merely a customized FTP for music files. The real action in new addressing schemes lies in software like AIM, where the address points to a person, not a machine. When you log into AIM, the address points to you, no matter what machine you’re sitting at, and no matter what IP address is presently assigned to that machine. This completely decouples what humans care about—Can I find my friends and talk with them online?—from how the machines go about it—Route packet A to IP address X.
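
The kind of directory that makes this possible is simple to picture. What follows is a minimal sketch in Python of a protocol-specific name server that maps a screen name to whatever IP address its owner happens to have right now; the names and details are hypothetical, not the actual AIM or Napster directory service.

    import time

    class PresenceDirectory:
        """A privately run, protocol-specific namespace: screen name -> current IP."""

        def __init__(self, timeout=300):
            self.timeout = timeout   # seconds of silence before a user counts as offline
            self.entries = {}        # screen name -> (current_ip, last_seen timestamp)

        def login(self, name, current_ip):
            # Called every time the client connects; the address follows the person,
            # not a machine and not a DNS entry.
            self.entries[name] = (current_ip, time.time())

        def resolve(self, name):
            # Return the user's current IP address if they appear to be online.
            record = self.entries.get(name)
            if record is None:
                return None
            ip, last_seen = record
            if time.time() - last_seen > self.timeout:
                return None          # presence has lapsed; treat the user as offline
            return ip

    directory = PresenceDirectory()
    directory.login("green_day_fan", "10.0.0.23")     # dial-up session number one
    print(directory.resolve("green_day_fan"))         # 10.0.0.23
    directory.login("green_day_fan", "10.0.7.118")    # same person, new session, new IP
    print(directory.resolve("green_day_fan"))         # 10.0.7.118

The same screen name resolves to a different IP address from one session to the next, which is exactly the trick DNS was never designed to perform at this speed.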

This is analogous to the change in telephony brought about by mobile phones. In the same way that a phone number is no longer tied to a particular physical location but is dynamically mapped to the location of the phone’s owner, an AIM address is mapped to you, not to a machine, no matter where you are.

This does not mean that DNS is going away, any more than landlines went away with the invention of mobile telephony. It does mean that DNS is no longer the only game in town. The rush is now on, with instant messaging protocols, single sign-on and wallet applications, and the explosion in peer-to-peer businesses, to create and manage protocol-centric addresses that can be instantly updated.

Nor is this change in the direction of easier peer-to-peer addressing entirely to the good. While it is always refreshing to see people innovate their way around a bottleneck, sometimes bottlenecks are valuable. While AIM and Napster came to their addressing schemes honestly, any number of people have noticed how valuable it is to own a namespace, and many business plans making the rounds are just me-too copies of Napster or AIM. Eventually, the already growing list of kinds of addresses—phone, fax, email, URL, AIM, ad nauseam—could explode into meaninglessness.

Protocol-centric namespaces will also force the browser into lesser importance, as users return to the days when they managed multiple pieces of Internet software. Or it will mean that addresses like aim://12345678 or napster://green_day_fan will have to be added to the browsers’ repertoire of recognized URLs. Expect also the rise of “meta-address” servers, which offer to manage a user’s addresses for all of these competing protocols, and even to translate from one kind of address to another. (These meta-address servers will, of course, need their own addresses as well.) Chapter 19 looks at some of the issues involved.

It’s not clear what is going to happen to Internet addressing, but it is clear that it’s going to get a lot more complicated before it gets simpler. Fortunately, both the underlying IP addressing system and the design of URLs can handle this explosion of new protocols and addresses. But that familiar DNS bit in the middle (which really put the dot in dot-com) will never recover the central position it has occupied for the last two decades, and that means that a critical piece of Internet infrastructure is now up for grabs.

Much has been made of the use of Napster for what the music industry would like to define as “piracy.” Even though the dictionary definition of piracy is quite broad, this is something of a misnomer, because pirates are ordinarily in business to sell what they copy. Not only do Napster users not profit from making copies available, but Napster works precisely because the copies are free. (Its recent business decision to charge a monthly fee for access doesn’t translate into profits for the putative “pirates” at the edges.)

What Napster does is more than just evade the law; it also upends the economics of the music industry. By extension, peer-to-peer systems are changing the economics of storing and transmitting intellectual property in general.

The resources Napster is brokering between users have one of two characteristics: they are either replicable or replenishable.

Replicable resources include the MP3 files themselves. “Taking” an MP3 from another user involves no loss (if I “take” an MP3 from you, it is not removed from your hard drive)—better yet, it actually adds resources to the Napster universe by allowing me to host an alternate copy. Even if I am a freeloader and don’t let anyone else copy the MP3 from me, my act of taking an MP3 has still not caused any net loss of MP3s.

Other important resources, such as bandwidth and CPU cycles (as in the case of systems like SETI@home), are not replicable, but they are replenishable. The resources can be neither depleted nor conserved. Bandwidth and CPU cycles expire if they are not used, but they are immediately replenished. Thus they cannot be conserved in the present and saved for the future, but they can’t be “used up” in any long-term sense either.

Because of these two economic characteristics, the exploitation of otherwise unused bandwidth to copy MP3s across the network means that additional copies of any given song can be made at almost zero marginal cost to the user. Copying employs resources—storage, cycles, bandwidth—that the users have already paid for but are not fully using.

Economists call these kinds of valuable side effects “positive externalities.” The canonical example of a positive externality is a shade tree. If you buy a tree large enough to shade your lawn, there is a good chance that for at least part of the day it will shade your neighbor’s lawn as well. This free shade for your neighbor is a positive externality, a benefit to her that costs you nothing more than what you were willing to spend to shade your own lawn anyway.

Napster’s signal economic genius is to coordinate such effects. Other than the central database of songs and user addresses, every resource within the Napster network is a positive externality. Furthermore, Napster coordinates these externalities in a way that encourages altruism. As long as Napster users are able to find the songs they want, they will continue to participate in the system, even if the people who download songs from them are not the same people they download songs from. And as long as even a small portion of the users accept this bargain, the system will grow, bringing in more users, who bring in more songs.

Thus Napster not only takes advantage of low marginal costs, it couldn’t work without them. Imagine how few people would use Napster if it cost them even a penny every time someone else copied a song from them. As with other digital resources that used to be priced per unit but became too cheap to meter, such as connect time or per-email charges, the economic logic of infinitely copyable resources or non-conservable and non-depletable resources eventually leads to “all you can eat” business models.

Thus the shift from analog to digital data, in the form of CDs and then MP3s, is turning the music industry into a smorgasbord. Many companies in the traditional music business are not going quietly, however, but are trying to prevent these “all you can eat” models from spreading. Because they can’t keep music entirely off the Internet, they are currently opting for the next best thing, which is trying to force digital data to behave like objects.

Within this economic inevitability, however, lies the industry’s salvation. Despite the rants of a few artists and techno-anarchists who believed that Napster users were willing to go to the ramparts for the cause, large-scale civil disobedience against things like Prohibition or the 55 MPH speed limit has usually been about relaxing restrictions, not repealing them.

Despite the fact that it is still possible to make gin in your bathtub, no one does it anymore, because after Prohibition ended high-quality gin became legally available at a price and with restrictions people could live with. Legal and commercial controls did not collapse, but were merely altered.

To take a more recent example, the civil disobedience against the 55 MPH speed limit did not mean that drivers were committed to having no speed limit whatsoever; they simply wanted a higher one.

So it will be with the music industry. The present civil disobedience is against a refusal by the music industry to adapt to Internet economics. But the refusal of users to countenance per-unit prices does not mean they will never pay for music at all, merely that the economic logic of digital data—its replicability and replenishability—must be respected. Once the industry adopts economic models that do, whether through advertising or sponsorship or subscription pricing, the civil disobedience will largely subside, and we will be on the way to a new speed limit.

In other words, the music industry as we know it is not finished. On the contrary, all of its functions other than the direct production of the CDs themselves will become more important in a world where Napster economics prevail. Music labels don’t just produce CDs; they find, bankroll, and publicize the musicians themselves. Once they accept that Napster has destroyed the bottleneck of distribution, there will be more music to produce and promote, not less.

With this change in addressing schemes and the renewed importance of the PC chassis, peer-to-peer is not merely erasing the distinction between client and server. It’s erasing the distinction between consumer and provider as well. You can see the threat to the established order in a recent legal action: a San Diego cable ISP, Cox@Home, ordered several hundred customers to stop running Napster not because they were violating copyright laws, but because Napster leads Cox subscribers to use too much of its cable network bandwidth.

Cox built its service on the current web architecture, where producers serve content from always-connected servers at the Internet’s center and consumers consume from intermittently connected client PCs at the edges. Napster, on the other hand, inaugurated a model where PCs are always on and always connected, where content is increasingly stored and served from the edges of the network, and where the distinction between client and server is erased. Cox v. Napster isn’t just a legal fight; it’s a fight between a vision of helpless, passive consumers and a vision where people at the network’s edges can both consume and produce.

The question of the day is, “Can Cox (or any media business) force its users to retain their second-class status as mere consumers of information?” To judge by Napster’s growth, the answer is “No.”

The split between consumers and providers of information has its roots in the Internet’s addressing scheme. Cox assumed that the model ushered in by the Web—in which users never have a fixed IP address, so they can consume data stored elsewhere but never provide anything from their own PCs—was a permanent feature of the landscape. This division wasn’t part of the Internet’s original architecture, and the proposed fix (the next generation of IP, called IPv6) has been coming Real Soon Now for a long time. In the meantime, services like Cox have been built with the expectation that this consumer/provider split would remain in effect for the foreseeable future.

How short the foreseeable future sometimes is. When Napster worked around the Domain Name System, it became trivially easy to host content on a home PC, destroying the asymmetry in which end users consume but cannot provide. If your computer is online, it can be reached even without a permanent IP address, and any material you decide to host on your PC can become globally accessible. Napster-style architecture erases the people-based distinction between provider and consumer just as surely as it erases the computer-based distinction between server and client.

There could not be worse news for any ISP that wants to limit upstream bandwidth on the expectation that the edges of the network host nothing but passive consumers. The limitations of cable ISPs (and of Asymmetric Digital Subscriber Line, or ADSL) become apparent only when their users actually want to do something useful with their upstream bandwidth. The technical design of the cable network, which hamstrings upstream speed (Cox’s upstream is less than a tenth of its downstream), just makes cable networks the canary in the coal mine.

Any media business that relies on a neat division between information consumer and provider will be affected by roving, peer-to-peer applications. Sites like GeoCities, which made their money providing fixed addresses for end user content, may find that users are perfectly content to use their PCs as that fixed address. Copyright holders who have assumed up until now that only a handful of relatively identifiable and central locations were capable of large-scale serving of material are suddenly going to find that the Net has sprung another million leaks.

Meanwhile, the rise of the end user as information provider will be good news for other businesses. DSL companies (using relatively symmetric technologies) will have a huge advantage in the race to provide fast upstream bandwidth; Apple may find that the ability to stream home movies over the Net from a PC at home drives adoption of Mac hardware and software; and of course companies that provide the Napster-style service of matching dynamic IP addresses with fixed names will have just the sort of sticky relationship with their users that venture capitalists slaver over.

Real technological revolutions are human revolutions as well. The architecture of the Internet has effected the largest transfer of power from organizations to individuals the world has ever seen, and it is only getting started. Napster’s destruction of the serving limitations on end users shows how temporary such bottlenecks can be. Power is gradually shifting to the individual for things like stock brokering and buying airline tickets. Media businesses that have assumed such shifts wouldn’t affect them are going to be taken by surprise when millions of passive consumers are replaced by millions of one-person media channels.

This is not to say that all content is going to the edges of the Net, or that every user is going to be an enthusiastic media outlet. But enough consumers will become providers as well to blur present distinctions between producer and consumer. This social shift will make the next generation of the Internet, currently being assembled, a place with greater space for individual contributions than people accustomed to the current split between client and server, and therefore provider and consumer, had ever imagined.



[2] Thanks to Business 2.0, where many of these ideas first appeared, and to Dan Gillmor of the San Jose Mercury News, for first pointing out the important relationship between P2P and the Domain Name System.