Chapter 20. Afterword

Andy Oram, O’Reilly & Associates, Inc.

Like many new ideas with substantial “disruptive” potential (that is, ideas whose impacts can fundamentally change the roles and relationships of people and institutions), peer-to-peer has been surrounded by a good amount of fear. In particular, it has been closely associated in the public mind with the legal difficulties faced by Napster over claims that the company engaged in copyright infringement. The association is ironic, because Napster depends heavily on a central server where users register information. It is precisely the existence of the central server that makes it technically possible for a court to shut down the service.

However, Napster does demonstrate important peer-to-peer aspects. Files are stored on users’ individual systems, and each download creates a peer-to-peer Internet connection between the source and destination systems. Furthermore, each system must furnish metadata about the title and artist of each song. The legal questions Napster raises naturally attach themselves to some of the other peer-to-peer technologies, notably Gnutella and Freenet.
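To make this hybrid architecture concrete, here is a minimal sketch in Python of a Napster-style design: a central index records which peers hold which songs, while the files themselves stay on, and move directly between, the peers. The names (CentralIndex, Peer, register, search) are illustrative inventions, not Napster’s actual protocol.

# A sketch of a Napster-style hybrid system (illustrative, not the
# real protocol): the index is central and vulnerable; the files are not.

class CentralIndex:
    """The server knows who has what; it never stores the files."""
    def __init__(self):
        self.catalog = {}  # (title, artist) -> set of peer addresses

    def register(self, peer_addr, songs):
        # Each peer furnishes metadata (title, artist) for its files.
        for title, artist in songs:
            self.catalog.setdefault((title, artist), set()).add(peer_addr)

    def search(self, title, artist):
        # Every search passes through the server; shut the server down
        # and the service stops, though no files were ever stored here.
        return self.catalog.get((title, artist), set())

class Peer:
    """Files live on users' machines; transfers run peer to peer."""
    def __init__(self, addr, songs):
        self.addr = addr
        self.songs = dict(songs)  # (title, artist) -> file contents

    def fetch(self, title, artist):
        return self.songs.get((title, artist))

index = CentralIndex()
alice = Peer("alice.example.net:6699", {("Song A", "Band X"): b"mp3 bytes"})
index.register(alice.addr, alice.songs)      # central registration
sources = index.search("Song A", "Band X")   # central lookup
data = alice.fetch("Song A", "Band X")       # direct peer transfer

A court order against the one CentralIndex is enough to take the whole service offline, even though it holds no music at all.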

The Napster case in itself may not be dangerous to other peer-to-peer technologies. Its particular business model, its dependence on the preexisting popularity of exchanging MP3 files that are unauthorized copies of copyrighted material, and the many precedents for the concepts invoked by both sides (fair use, vicarious and contributory copyright infringement, substantial non-infringing uses) make the case unique.

But there are several indications that large copyright holders wield their legal weapons too widely for the comfort of technological innovators. For instance, during the Napster case, the band Metallica conducted a search for Metallica MP3s and created a list of 335,000 Napster users, whom it forced Napster to temporarily ban from the system. This raises the possibility that a determined plaintiff could try to take legal action against all the individuals who form the entire community of a peer-to-peer system, such as Gnutella, Freenet, or Publius.

Users of those systems could then face the dilemma of being condemned for providing computer resources to a system that has social value, simply because one user of that system (perhaps a malicious user) provided material that raised the ire of a powerful commercial or political force. It would be interesting to see whether users would then try to invoke a kind of “ISP exemption,” where they claim they are simply providing communications channels and have no control over content.

This legal status for ISPs is pretty well established in some countries. In the United States, numerous courts have refused to hold ISPs liable for Internet content. Still, a section of the enormous Digital Millennium Copyright Act, passed by the U.S. Congress in 1998, requires sites hosting content to take it down at the request of a copyright holder. Canada also protects ISPs from liability.

The status of ISPs and hosting sites is much shakier in other countries. In Britain, an ISP was successfully sued over defamatory content posted by an outsider to a newsgroup. The German parliament has shrouded the issue in ambiguity, stating that ISPs are responsible for blocking illegal content when it would be “technically feasible” to do so. Of course, some countries such as China and Saudi Arabia monitor all ISP traffic and severely restrict it.

France exempts ISPs from liability for content, but requires them to remove access to illegal content when ordered to by a court and to maintain data that can identify content providers in case of a court request. The latter clause would seem to make a system like Freenet, Publius, or Free Haven automatically illegal. The November 2000 French decision forcing Yahoo! to block French users’ access to auctions of Nazi memorabilia set a precedent that peer-to-peer users cannot ignore. It has already been echoed by a ruling in Germany’s high court declaring that German laws apply to web sites outside the country. The trend will undoubtedly lead to a flood of specialized legal injunctions in other countries, each trying to control whether particular domain names and IP addresses can reach other domain names and IP addresses.

Further threats to technological development are represented by companies’ invocation of copyrights and trade secrets to punish people who crack controls on software content filters or video playback devices. The latter happened in the much-publicized DeCSS case, where the court went so far as to force web sites unrelated to the defendants to delete source code. In 1998, Congress acceded to the wishes of large content vendors and put clauses in the Digital Millennium Copyright Act that criminalize some forms of technological development, such as certain types of encryption cracking and reverse engineering.

It would be irresponsible of me to suggest that copyright is obsolete (after all, this book is under copyright, as are most O’Reilly publications), but it is perfectly reasonable to suggest that new movements in society and technology should make governments reexamine previous guidelines and compromises. Copyright is just such a compromise, one in which government tries to balance incentives for creative artists with benefits to the public.

Napster showed above all that there is now a new social context for music listening, as well as new technological possibilities. The courts, perhaps, cannot redefine fair use or other concepts invoked by both sides in the Napster case, but the U.S. Congress and the governing bodies of other countries can ask what balance is appropriate for this era.

Peer-to-peer, like all technologies, embodies certain assumptions about people and future directions for technology. It so happens that peer-to-peer is moving the compass of information use in a direction that directly contradicts the carefully mapped-out plans drawn by some large corporate and government players.

The question now posed is a choice between two views of how to use technology and information. The first view gives consumers and users the maximum amount of control over the application of technology and information. A single example will suffice to show how powerful this principle can be.

Despite Tim Berners-Lee’s hope that the World Wide Web would be a two-way (or even multiperson to multiperson) medium, early browsers were pretty much glorified file transfer programs with some minimal GUI elements for displaying text and graphics together. The addition of CGI and forms allowed users to talk back, but did not in itself change the notion of the Web as an information transfer service. What caused the Web to take on new roles was the crazy idea invented by some visionary folks to use the available web tools for selling things. An innovative use of existing technology resulted in an economic and social upheaval.

Putting tools in the hands of users has an impact on business models, though. People might no longer buy a technical manual from O’Reilly & Associates; they might download it from a peer instead—or more creatively, extract and combine pieces of it along with other material from many peers. And peer-to-peer, of course, is just a recent option that joins many other trends currently weakening copyright.

When a revenue stream that information providers have counted on for over 2000 years threatens to dry up, powerful reactions emerge. Copyright holders have joined with a wide range of other companies to introduce legal changes that revolve around a single (often unstated) notion: that the entity providing information or technology should control all uses of it. The manufacturer of a disc decides which devices can play it. A compiler of information decides how much a person can use at any one time, and for how long. The owner of a famous name controls where that name can appear.

Trying to plug serious holes in the traditional web of information control—copyrights, trade secrets, patents, trademarks—information owners are extending that control into areas from which they have previously been excluded. In their view, new ideas like selling over the Web would have to come from the company that provides the media or the service, not from people using the service.

So where do we look for the future uses of information and technology? The two answers to this question—users versus corporate owners—are likely to struggle for some time before either a winner or a workable compromise appears. But the thrust of peer-to-peer implicitly throws its weight behind the first answer: trust the users. The technological innovations of peer-to-peer assume that users have something to offer, and some peer-to-peer projects (notably Jabber in its community-building, and Gnutella in its search model) actually encourage or even provoke users to contribute something new and different.

Some people ask whether peer-to-peer will replace the client/server model entirely. Don’t worry, it emphatically will not. Client/server remains extremely useful for many purposes, particularly where one site is recognized as the authoritative source for information and wants to maintain some control over that information.

Client/server is also a much simpler model than peer-to-peer, and we should never abandon simplicity for complexity without a clear benefit. Client/server rarely presents administrative problems except where the amount of traffic exceeds the server’s capacity.

Peer-to-peer is useful where the goods you’re trying to get at lie at many endpoints; in other words, where the value of the information lies in the contributions of many users rather than the authority of one. Peer-to-peer systems can also solve bandwidth problems, when designed carefully. (Of course, they can also cause bandwidth problems, either because their design adds too much overhead or because people just want a lot of stuff without paying for the bandwidth that can accommodate it.)
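By contrast with the earlier Napster-style sketch, here is a simplified sketch, again in Python, of the fully decentralized approach, loosely modeled on Gnutella-style query flooding (the class and parameter names are illustrative, not the real Gnutella protocol). There is no central index; a query fans out from neighbor to neighbor, and a time-to-live counter is all that keeps the forwarding overhead in check.

# A sketch of decentralized search by query flooding (illustrative,
# not the real Gnutella protocol).

class Node:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []
        self.seen = set()  # query IDs already handled, to stop loops

    def query(self, query_id, keyword, ttl):
        # Answer locally, then flood to neighbors until the TTL runs out.
        if query_id in self.seen or ttl <= 0:
            return []
        self.seen.add(query_id)
        hits = [(self.name, f) for f in self.files if keyword in f]
        for neighbor in self.neighbors:
            # Every forwarded copy of the query costs bandwidth.
            hits.extend(neighbor.query(query_id, keyword, ttl - 1))
        return hits

a = Node("a", ["talk.ogg"])
b = Node("b", ["song.mp3"])
c = Node("c", ["song2.mp3"])
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
print(a.query(query_id=1, keyword="song", ttl=3))
# prints [('b', 'song.mp3'), ('c', 'song2.mp3')]

The results come from files scattered across b and c rather than from any single authority, and the cost is visible too: each query is duplicated at every hop, exactly the kind of overhead a careless design can let get out of hand.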

In short, peer-to-peer and client/server will coexist. Many systems will partake of both models. In fact, I have avoided using the phrase “peer-to-peer model” in this book because such a variety of systems exist and so few can be considered pure peer-to-peer. The ones that are completely decentralized—Gnutella, Freenet, and Free Haven—are extremely valuable for research purposes in addition to the direct goals they were designed to meet. Whether or not other systems move in their direction, the viability of the most decentralized systems will help us judge the viability of peer-to-peer technology as a whole.