5
Production of Information

Organisation of Productive Relations

There is an extensive body of literature critical of the current intellectual property regime. Many of these writers stress that intellectual property often falls short of one of its stated aims, to advance the progress of science and useful arts. Typically, this observation is a centrepiece in their case against strengthened intellectual property rights. However, the wider implications of this failure have not been given much attention. In this chapter we will attempt to relate the intellectual property regime and its shortcomings to a theoretical understanding of capitalism. We put forth the proposition that the market failures of intellectual property reflect the failure of the capitalist relation as an organising principle of labour. The capitalist relation consists of private property, market exchange, and wage labour. In the informational sector, these aspects of the capitalist relation take the form of intellectual property, markets in information, and individual authorship. All of these components are required for the reproduction of capital, and, as will be argued later on, each one takes a toll on the productivity of labour. Our reasoning is only one step short of the well-known forecast by Karl Marx: “Beyond a certain point, the development of the powers of production becomes a barrier for capital; hence the capital relation a barrier for the development of the productive powers of labour. When it has reached this point, capital, i.e. wage labour, enters into the same relation towards the development of social wealth and of the forces of production as the guild system, serfdom, slavery, and is necessarily stripped off as a fetter.” (Grundrisse, 749). Marx believed that living labour would at this point gain an edge over capital. A socialist relation of production would be discovered by the proletariat and would overcome its capitalist counterpart. He considered this to be a necessary precondition for transcending capitalism. If the proletariat lacked an economic model of its own it could not hold on to power. In a chilling anticipation of later events, Karl Marx warned that a political revolution without an economic revolution would merely result in a bloody coup.

While reviewing these claims, Michael Howard and John King stress the significance attached to efficiency by early socialists: “The materialist conception of history relates the feasibility of socialism to the question of efficiency, measured by the ability to operate the productive forces optimally. For Marx and Engels, socialist relations of production would be sustained only if they could on this criterion out-compete those of capitalism.”1 The reason for the grave importance attributed to efficiency was that if socialism fell behind capitalism in terms of productivity, individuals in the socialist society would have to make sacrifices, willingly or not, to keep the economy afloat. Such stern measures were not consistent with the freedoms that socialism promised. Michael Howard and John King carefully examine the attempts by Marxists to defend the claim above. Their conclusion is that no plausible evidence has been presented in support of such a scenario. With no indication that socialism could ever measure up to capitalism, the criterion laid down by Marx ends up pointing in the opposite direction, towards the longevity of the capitalist mode of production. Perhaps this bleak conclusion has contributed to these parts of Marx’s thinking being regarded as outdated by many contemporary socialists. The proposition here is that the idea should be re-examined in light of the highly successful FOSS development model. In the software sector, self-organised labour is beating capital at its own game of technological development. We have to be careful, though, not to construct a clear-cut opposition between proprietary software and capitalism on one side, and FOSS and anti-capitalism on the other. From the discussions in the first chapter, it should be clear that FOSS developers are deeply embedded in capitalist society, that individual capitalists make good use of the volunteer labour of the hacker community, and that FOSS applications have become serious competitors thanks to the backing of the computer industry. For the sake of clarity, FOSS development will be discussed as an ideal model, though in reality it functions as a hybrid. The hacker community is riven with contradictory potentialities that are constantly being fought out between rival factions and external forces.

We must be equally careful when contemplating the possibility that software technology can be reclaimed from capital. The first generation of Marxists was optimistic about the possibility that scientific discoveries could be isolated from capitalist relations. They assumed that once private ownership of machinery had been abolished, technology would come to serve all of humanity. Advancement of the forces of production was seen as a promise of liberating humans from the realm of necessity. An extreme example is Lenin’s well-known endorsement of Taylorism as a model for Soviet industry. Karl Marx’s position is harder to pin down. In some of his writings he welcomed the advancement of science; at other times he saw machinery as an instrument for disciplining workers. The latter theme was picked up by labour theoreticians in the 1970s. They argued that the growth of industry is inseparable from a deepened technical division of labour, and that the forces of production developed under capitalism are intimately tied up with the capitalist relation of production. On a more general note, disbelief is by now the common response to the modernist notion of historical progress. Herbert Marcuse is emblematic of a pessimistic, leftist position on technology. His reproach was not directed against any technology in particular but against technological rationality as such. In Marcuse’s view, the master-servant perspective is embedded in the instrumentality of the scientific method. It mirrors the domination of humans in the capitalist, patriarchal society.

These remarks ought to caution us against an overly optimistic assessment of current trends within the FOSS movement. In the second half of the chapter, it will be argued that the seizure of the means of production is no longer a philosopher’s stone that could dissolve capitalism once and for all. Quite the opposite: the dissemination of productive tools is consistent with a post-Fordist labour process that has been displaced onto the whole of society. Capital tries to make itself independent of unionised labour and, as a side-effect, the conditions are established for a FOSS production line relatively independent of capital. The significance of this fact, however, is overdetermined by other kinds of constraints in capitalist society. Though the liberation of the tools and skills for writing software code is an important step, it is not in itself a sufficient condition, as early socialists believed, for stripping off the capitalist fetter.

Market Relations and Scientific Labour

The historical record of intellectual property in hampering scientific research and technological development is a good place to start our inquiry. It is telling that the innovation iconic of the industrial age and capitalism, the steam engine, fell victim to patent disputes. James Watt’s refusal to license his innovation kept others from improving the design until the patent expired in 1800. The introduction of locomotives and steamboats was delayed because of it.2 Watt’s patent had a stifling impact on the Cornish mining district where the machine was used to pump water out of the mines. A period of rapid improvements followed shortly after the patent had expired. The engineers in the area shared their discoveries with each other in a publication while aspiring to outdo each other in achieving the best performance. Rarely did these engineers protect their discoveries with patents.3 Many key areas of the industrial revolution, such as mining, engineering, and chemistry, were advanced entirely or partially outside the patent system through processes of incremental and cumulative invention by anonymous workers and engineers.4 In the history of patent rights, by contrast, innovations have often been held back because of conflicting ownership claims and legal uncertainties.

The early development of radio technology is a case in point. Marconi’s Wireless Telegraph Company and AT&T ended up owning different components that were critical for radio transmissions. Military concerns during the First World War compelled the American government to demand that the two companies cooperate. For a brief period the patent stalemate was suspended, which resulted in a rapid development of radio technology. With the end of the war and of government emergency powers, research was once more bogged down in the old patent dispute.5 Patent stalemates of this sort are escalating because patents are systematically used in anti-competitive strategies. A colourful example is the patent filed by Romanoff Caviar Company on synthetic caviar. The artificial version of caviar would have sold for an estimated one-fourth of the price of natural caviar. Romanoff Caviar Company held the patent precisely in order to prevent cheap substitutes from entering the market. (Scherer) The list of examples making the same point could be extended to fill a whole book. Even so, ordinary patents look rational in comparison with the havoc that the patent system causes when it is extended to the area of information processes, i.e. software patents. Software development, like the pursuit of abstract knowledge in general, is particularly affected by patents since computer programming builds on many disparate sources of information. The writing of software is at heart a cumulative, and therefore a collective, process.6

Economists defending the patent system believe that, in spite of its known shortcomings, on average it contributes to the progress of science and technology. Often patents are described as a necessary evil for creating market incentives. The underlying assumption is that the market economy is the most efficient method for allocating resources. From such a perspective, the drawbacks of patent monopolies are seen as trade-offs for setting free the productivity of market forces. The thrust of this reasoning can easily be turned around. It is precisely because such an awkward and counter-productive system as intellectual property is needed for a market economy in information to function that we can estimate the full magnitude of the failure of the market relation itself. On closer inspection, the defence of property-based research turns out to be oxymoronic. It claims that by preventing the diffusion of scientific discoveries there will be more science and technology to diffuse. The paradox is reflected in neoclassical economic theory, which actually advises against pricing information. Goods with zero marginal costs, such as information, should be treated as public goods and not sold as commodities. James Boyle, a long-time critic of intellectual property law, succinctly formulated the contradiction of ‘informed markets in information’: “The analytical structure of microeconomics includes ‘perfect information’—meaning free, complete, instantaneous, and universally available—as one of the defining features of the perfect market. At the same time, both the perfect and the actual market structure of contemporary society depend on information being a commodity—that is to say being costly, partial, and deliberately restricted in its availability.” (Boyle, 35) Economists are aware of these inconsistencies in their theory. Their support for intellectual property falls back on the assertion that the cost of providing the first copy must be recovered by charging for all subsequent copies made. If the price of knowledge were zero, they argue, investors would be left without any return on their investments. Research is costly, so clearly investors are needed. But then again, we must retort, why is it that research is so costly?

According to a study by the National Research Council in America, the average cost of filing a U.S. patent is between $10,000 and $30,000. Most of the expense is fees for legal advice. But a patent is worthless unless the patentee also has a ‘war chest’ sufficient to defend his patent claim. Each party involved in a patent dispute can expect to spend between $500,000 and $4 million. The sum depends on the complexity and the stakes involved in the court case.7 The outlay for a patent portfolio mounts up until it equals the investment in an old-time ‘Fordist’ machine park. To some degree, thus, it is the patent system that creates the costs that patent rights are supposed to make up for. Or, in more general terms, research is expensive because information has been highly priced. With a zero legal price on knowledge, research and development activities could take place without large investments, thus without investors, which is to say, independently of capital and capitalists. Knowledge has to be kept costly and inaccessible to sustain market relations in the informational sector. Ideally, for information to be productively engaged, it ought to require resources on the scale of spent surplus value. Or, to phrase it differently, safely out of reach of wage earners. Science must be privatised for the very reasons foreseen by Karl Marx: “[…] It is, firstly, the analysis and application of mechanical and chemical laws, arising directly out of science, which enables the machine to perform the same labour as that previously performed by the worker […] Innovation then becomes a business, and the application of science to direct production itself becomes a prospect which determines and solicits it.” (Grundrisse, 704). Since the sphere of production is overtaken by science, capital must overtake the scientific process. Of course, universities have always been integrated with and contributed to the development of capital. What is happening now is that higher education and scientific research are passing from a state of formal subsumption to a state of real subsumption under capital. The composition of what we might call, for lack of better words, ‘scientific labour’, is reformed to better suit the needs of capital. The situation is analogous to how craft work was once transformed into factory work. The privatisation of scientific research goes beyond a strengthened patent law. Funding is shifted from the public to the corporate sector, the norm system within the scientific community is weakened, and economic incentives become more important as a motivating and disciplining factor. Given that science is advanced in a collaborative and cumulative process, stretching across institutions, national borders, and generations, the decline of the academic ethos has alarmed many scholars. Corporate backers often demand that discoveries be kept secret. Sharing of information, the lifeblood of academic research and learning, is obstructed as a result. At risk is the role of the university as a dissenting voice in society. Of no less concern is that the priorities of scientific research will be guided by short-term commercial interests. In addition to changing the direction of science, which might be welcomed by some, there are indications that the research becomes less robust, which no one can reasonably be in favour of. The demand for secrecy and the vested interests among corporate backers make scientists doubt the validity of their colleagues’ work. Instead of building on former data, they are inclined to duplicate the experiments and surveys on their own.8

The many chain effects of making knowledge expensive are demonstrated by the privatisation of the Landsat system during the Reagan administration. The Landsat program provides satellite images of the earth for commercial and academic use. As long as the satellites were managed by the public sector, images from Landsat were made available at the marginal cost of reproduction. When the operation was privatised, the price of Landsat images rose from $400 to $4,400 per image. Increased expenses did not merely result in a dramatic cut in research projects utilising the Landsat system. Privatisation tilted the power balance between well-funded research facilities and poorer universities, and it strengthened internal hierarchies. Individual scientists became more dependent on funding and hence on decisions made by university boards and heritage funds. Correspondingly, senior researchers fared better than less established, or maverick, researchers and students. In addition to steering research into commercial avenues and favouring projects by established scientists, the reliability of the scientific results was called into question. The shortage of satellite images hindered researchers from taking long series of images over an extended period of time and discouraged them from double-checking data.9

The ineffectiveness of enclosing information can also be argued by pointing to the fact that capital itself, from within, is developing enclaves free from property claims. Patent pools and collective rights organisations are set up to reduce the transaction costs of intellectual property. Members aggregate their patents or copyrights in a common pool and enjoy the freedom to draw from it without asking for permission.10 At first sight it might appear as if the partial suspension of property rights goes against the grain of the intellectual property regime. Such an assessment rests on the misunderstanding that patents and copyrights are simply about enclosing information. The intellectual property regime works by oscillating between expanded privatisation of knowledge and the disbandment of private rights within ‘gated commons’. Commons are needed for setting free the productivity of labour power; gates are required to uphold capitalist relations. In fact, capitalism often advances by incorporating elements that are, on one level, antithetical to its own logic. An early parallel to patent pools can be found in the joint-stock system, which Karl Marx considered to be an abolition of the capitalist mode of production on the basis of the capitalist system itself. On the one hand, the existence of patent pools confirms the inadequacy of intellectual property based research; on the other hand, it demonstrates the flexibility of capital to adapt. We might therefore doubt whether the loss in productivity caused by the intellectual property regime makes any difference. Capitalism has in any case never worked optimally, even when measured by its own narrow benchmark. Despite the notion of capitalism as a hothouse for the development of the forces of production, new technologies have often been suppressed by corporations, with or without patents, to entrench resource dependency and protect market shares.11

The argument here is that the frictions caused by market relations in higher education, scientific research, and product development weigh more heavily the more central these activities become in the perpetual innovation economy. This reasoning could have ended up in a new crisis theory of capitalism, had we not learned by now that capital feeds on crisis, even its own. Capital is not about to tumble into terminal crisis because of falling profitability and sharpening contradictions. But capital is constrained to reinvent itself. The response of capital is to ‘turn the friction into the machine’, or, to phrase it differently, to make a positive, productive model out of anti-production.12 This somewhat abstract claim can be illustrated with a reference to an optical illusion familiar from motion pictures. The illusion occurs in black-and-white movies when a horse wagon is set in motion. At a certain point the speed of the cart and the frame rate of the film fall into step. The spokes of the cart’s wheels now appear to run in the opposite direction to the movement of the cart. Though the spokes revolve backwards, the wheels carry the cart forward. This mirage gives an accurate description of capitalist growth through anti-production. A concrete example of the claim is the research on the terminator gene. American seed corporations craved a fix to protect their genetically modified crops from the infinite reproducibility of seeds. The U.S. Department of Agriculture collaborated with a subsidiary of Monsanto in developing a ‘technology protection system’.13 Preventing crops from growing was productive for capital since it strengthened market incentives for seed companies, generated lucrative patents, created jobs, and made Monsanto’s shareholders wealthier. The equivalent of the terminator gene on the Internet is Digital Rights Management technology. Judged from the standpoint of use value, terminating the self-reproductiveness of seeds and binaries hampers productivity, but as far as the valorisation of capital is concerned, it is a boon.

The statement above is at variance with a postulate in scientific Marxism, namely that whatever is productive for capital is at the same time productive in general. Activities are considered productive for capital if they generate surplus value. According to this notion, production of surplus value is the current historical form of organising activities that are productive for the human species in general. The chief advantage of capital over the proletariat lies in the fact that this mode of organisation is, for the time being, superior in advancing the forces of production. This assumption does not square with post-modern, late capitalism, where circulating capital has overtaken productive capital. The terminator gene is hardly a neutral addition to the cumulative build-up of the forces of production. It is rather an example of how science and technology are developed to entrench the capitalist relations of production. Those things that could reasonably be called productive in general (infinite reproducibility, zero-priced public goods, free access to knowledge, sustainability of resources, life) are in conflict with the creation of surplus value. At the same time, however, it is plausible that a productive relation that does not suppress these energies has an edge over capitalist relations of production. The success of the FOSS development model can be interpreted against this backdrop.

The Pirate as a Worker

The wage relation appears to be the sole form of organising labour. On closer inspection, however, it becomes evident that wage labour coexists with a range of different labour relations. Though marginalised and largely invisible, the reproductive work taking place in the family, friendship circles, the voluntary sector, and so on, is something the market society depends upon. These economies are needed as a complement to the waged economy, solving problems that the market, for one reason or another, fails to address. The FOSS development community could be added to this list, but with one crucial addition: it does not merely complement the market but rivals it. As we have seen, a number of FOSS applications compete with equivalent products developed in the corporate sector. Hence, it is at least conceivable that the FOSS development model could challenge the wage relation as the dominant principle for organising labour.

The first chapter discussed how the strengths of FOSS applications are accounted for by voices within the hacker movement. Three major advantages were identified: free software development is not hampered by conflicting property claims, hackers have a higher motivation to do a good job than hired programmers, and more people can contribute to a project when the source code is freely accessible. These advantages all refer back to the inferiority of capitalist relations of production in organising labour in the informational sector. The discussion in the first chapter will here be complemented by looking outside the computer underground and the technicians directly involved in writing free code. As was argued in chapter two, the labour process does not end with the product passing from the producer to the consumer. Consumers work on information products when they learn about them and when they adapt their surroundings to the requirements of the product. The usefulness and value of an information product rely on this cognitive and emotional investment by audiences. The claim is clarified by thinking of how software standards are established. Each and every user of a software application contributes to the standardisation of that particular computer program. This highly distributed labour process stretches the organisational limits of the firm. Too many people have to be involved in setting a standard for them all to fit on a payroll. Corporations enlarge the labour pool beyond the in-house staff by involving their customers in the development process. But the price mechanism acts yet again as a limitation on the productivity of labour. Elementary economic theory tells us that a positive price on information reduces the number of buyers. That becomes a real constraint when the same people are the main developers of the service. It is for this reason that companies are experimenting with alternative business models that circumvent the direct point of sale. There are many ways to parcel out and fence in a knowledge commons. Perhaps only commercial uses are charged for, or revenue comes from advertising, or additional pay-per-use services are annexed to the free offer, or the company tries out a combination of FOSS licenses. A major drawback with all of these options is that the company gives up control over the consumer market when it suspends its right to exclude non-paying users.

There is a way for corporations to have it both ways, i.e. to stay in control while expanding the pool of developers beyond the limits of the price mechanism. Corporations can take advantage of non-paying, illicit uses of their service. This helps to explain how the software, music, and film industries sometimes benefit from pirate sharing.14 The economist Oz Shy comes up with some interesting results from his study of the software sector. His point of departure is that illicit, non-paying software users, or so-called ‘freeriders’, expand the total pool of users of a particular software, with the outcome that the utility of the program is enhanced for paying software users. The increased utility of the computer program manifests itself in a number of ways. The computer program is more likely to be interoperable with other applications, a larger group of employees will be familiar with the graphical interface, it gives an assurance of path dependency so that skills invested in the software are less likely to become obsolete in the near future, and the size of the potential consumer market is expanded. These are some concrete gains for the network (the software application) when it includes additional nodes (paying and non-paying users). Excluding a node bears a corresponding penalty upon the use value of the network. And yet, exclusion is the sine qua non of property rights. Oz Shy’s conclusion is counter-intuitive: a software company can increase profits by permitting unauthorised uses of its product. The reason is that paying customers are willing to pay a higher price for the product if it is widely used. For sure, failure to enforce copyright will result in a large number of paying customers turning into non-paying users. But the loss is compensated for by major companies and government agencies that do not have the option of using illicit copies of the software.15 Shy’s reasoning is confirmed in a study by Stan Liebowitz on the illicit copying of academic journals. Liebowitz starts with the assumption that publishers are not necessarily harmed by illicit photocopying of articles. Indirect revenue can be appropriated by charging a higher price for library subscriptions. University libraries are willing to pay more due to increased circulation of the journal among non-paying readers. Since the money that individual subscribers can spend is marginal compared to what a university can pay, a reader might contribute more to the financial standing of the publisher by reading the journal than by paying for his own copy. Hence, Liebowitz avows that publishers can, under the right circumstances, be better off not charging individual subscribers.16
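
The logic of Shy’s argument can be conveyed with a toy calculation. The sketch below is only a rough numerical illustration under made-up assumptions; the figures, the linear willingness-to-pay function, and the fixed number of ‘captive’ buyers are all hypothetical and not taken from Shy’s formal model. The point is simply that if the willingness to pay of customers who cannot use illicit copies grows with the total installed base, then tolerating freeriders can raise revenue.

    # Toy numerical illustration of the network-effect argument. All figures
    # and the linear willingness-to-pay function are invented for the sake of
    # the example; Shy's own model is formal and more general.

    def willingness_to_pay(base_value, network_bonus, total_users):
        # Captive buyers value the program more the larger the installed base.
        return base_value + network_bonus * total_users

    def revenue(captive_buyers, total_users):
        price = willingness_to_pay(base_value=100, network_bonus=0.001,
                                   total_users=total_users)
        return captive_buyers * price

    # Strict enforcement: only the 50,000 captive buyers (firms, agencies)
    # use the program, so the network stays small.
    strict = revenue(captive_buyers=50_000, total_users=50_000)

    # Tolerated copying: a million non-paying users join the network and the
    # captive buyers pay a higher price for a more widely used program.
    tolerant = revenue(captive_buyers=50_000, total_users=1_050_000)

    print(f"revenue under strict enforcement: {strict:,.0f}")
    print(f"revenue with tolerated copying:   {tolerant:,.0f}")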

The ‘pirate’ now looks more like a franchised vendor than a criminal. In fact, non-paying users are better thought of as unpaid developers of the network. Leading on from this statement, intellectual property can be described as a ‘labour contract of the outlaw’. This is no marginal occurrence since, under current copyright statutes, millions of computer users are being outlawed. We can safely assume that most of them will never be tried for pirate sharing; they are, after all, providing companies with free beer. However, by the same token, it is predictable that a handful will be prosecuted from time to time, if only so that the work of audiences stays within the discourse of illegality. This amounts to the same thing as a state of exception, though limited to a small segment of the population. The uncertain legal status of non-paying/non-paid developers gives firms some leverage over a development process which has escaped direct supervision at the workplace, does not respond to the corporate command chain, and is not stratified by a technical division of labour. The need to influence this labour becomes more pressing when users self-organise their activity in communities.

Development Communities at Work

The work of audiences and users ranges from extremely disparate, haphazard user collectives with no horizontal communication, the users of Windows for example, to tight groups of developers with mailing lists, conferences, and a shared sense of purpose. The Debian GNU/Linux user community could be an example of the latter. Work on FOSS projects demands commitment, advanced skills, and collaboration from its participants. Their sustained efforts give rise to what might be called a ‘community-for-itself’. With a norm system, a common identity, and a political profile, the FOSS development community gains some degree of independence vis-à-vis external forces, companies and governments in particular. This independence is demonstrated when the interests of the hacker movement and capital diverge, as in the design of filesharing applications. Filesharing has mostly been debated from the standpoint of the alleged losses of the media industry. It is not pirate sharing that makes peer-to-peer networks subversive, though, but the peer-to-peer labour relations of which this technology is an example. The application would never have seen the light of day had software development been confined to the social division of labour, i.e. to professional researchers working in corporate laboratories or government institutions. It is this loss of control that is destabilising to the status quo. The intellectual property regime does not merely address the flow of information, but, in doing so, influences the terms under which users can develop (filesharing) technologies. In other words, as much as intellectual property law aims to prevent unauthorised sharing of information, it seeks to regulate the productive energy of peer labour communities.

Peer-to-peer became a concept with the Napster case in 1999–2001. The inventor of Napster was a young student, Shawn Fanning. He dubbed his creation Napster because it was the nickname he used when addressing other hackers. The idea behind Napster was to enable people to access music stored on the computers of other users. Thereby the application opened up a vast pool of music to everyone involved. Napster was not a pure peer-to-peer system. It minimised the required storage space by having end users store music files on their own hard drives instead of on a central server, but the search mechanism was centralised. The central index of available music files permitted Shawn Fanning to start a business venture around the service and, on the downside, allowed the Recording Industry Association of America (RIAA) to sue him. From the outset, Shawn Fanning and his associates aimed at drawing the greatest possible number of users into the Napster system. The audience would be their bargaining chip when negotiating the price for the service with the RIAA at a later stage. In the heyday of the New Economy, this was a good enough business proposal to attract venture capital. Even one of the media giants and a prominent member of the RIAA, the German company Bertelsmann, invested in Napster. And the audience size was impressive. At its peak Napster had more than 70 million registered users. Almost every single one of them swapped copyrighted files in violation of the law. The ‘David and Goliath’ court case against Napster helped to raise sympathy and publicity and to attract even more users to the service. For a while, and for some, Shawn Fanning appeared as a hero taking on the media giants on behalf of music fans and exploited musicians. In truth, the court case was a test of strength to settle the price for the brand, the audience, and the technology of Napster.17 It could have paid off, had Napster not been undercut in the same way as it had itself emerged. As soon as the intent of the company became known to hackers they started making clones of the program. OpenNap provided the same service but was licensed under the General Public License, which guaranteed that the system would stay free. Several more initiatives were taken to sidestep Napster’s control over the audience. To generate revenue and thus become an attractive partner for record companies, Napster had either to convert into an enclosed subscription service or to sell advertising space. Neither step could be taken, since Napster’s engineers knew full well that any restrictive measure would kill off the user base as fast as it had been built up. Sure enough, when Napster was forced by court order to shut down its service until it had developed a feature that enabled subscription fees, users quickly switched to other peer-to-peer filesharing systems. The audience, Napster’s trump card, evaporated at a moment’s notice.
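
The hybrid character of the design can be conveyed in a short sketch. The snippet below is a deliberately simplified, hypothetical model rather than Napster’s actual protocol: the files remain on the peers’ own disks, but every search passes through one central catalogue, which is also the single point of technical and legal control.

    # Simplified, hypothetical sketch of a Napster-style hybrid architecture.
    # Files stay on the peers' machines; only the index is centralised, and
    # with it the point of control that made the service easy to sue.

    class CentralIndex:
        def __init__(self):
            self.catalogue = {}                    # song title -> peer addresses

        def register(self, peer_address, titles):
            # Each peer announces which files it stores on its own hard drive.
            for title in titles:
                self.catalogue.setdefault(title, []).append(peer_address)

        def search(self, title):
            # Every lookup goes through the central server ...
            return self.catalogue.get(title, [])

    index = CentralIndex()
    index.register("peer-a.example.net", ["song one", "song two"])
    index.register("peer-b.example.net", ["song two"])

    # ... while the file transfer itself takes place peer to peer.
    print(index.search("song two"))                # both peers are listed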

One challenger inspired by Napster was Gnutella. Behind it stood Justin Frankel, the inventor of Winamp, an application used for listening to music, and a friend of Shawn Fanning. Gnutella took a decisive step towards a pure peer-to-peer system. It decentralised both the storing and the indexing of files. When a user of Gnutella wants to listen to a particular song, she sends out a request to adjacent Gnutella nodes. If the file is not found on these machines, the request is passed on to the next circle of nodes. The request radiates outwards until the file in question is found. The process is slower than when working through a central server, but the design was a considered response to the RIAA’s lawsuit against Napster. The fact that the Napster team had the option but chose not to develop a monitoring feature within the central indexing system became a liability in the court case. With Gnutella, it was impossible for the authors to monitor the activity of the users even if they had wanted to.18 The making of Gnutella is worth expanding on since it comes with an interesting twist. Justin Frankel had started a firm based on the Winamp application. He sold the firm, Nullsoft, to America Online, but kept working at the firm. Gnutella was devised by him and other employees at Nullsoft. Technically speaking, America Online owned Gnutella. At the time when Nullsoft made Gnutella available on the Internet in 2000, America Online was about to merge with Time Warner, one of the largest players involved in the lawsuit against Napster. Nullsoft’s employees were promptly told to take Gnutella down, which they reluctantly did. Instantly, hackers began to reverse-engineer Gnutella and improve on it, most probably with some covert assistance from the employees at Nullsoft. Three years later Justin Frankel did the same thing all over again. For a few hours the Nullsoft server hosted WASTE, a third-generation peer-to-peer filesharing program. WASTE was designed to thwart the RIAA’s new strategy of suing individual filesharers. In WASTE the connections are established between a small circle of people who trust each other from the start, and the communication, in most cases consisting of illegally copied files, is heavily encrypted. Law enforcement authorities have a very hard time finding out about the infringements taking place in the private network. In the short time that WASTE was made available by Nullsoft, the code spread like wildfire in the FOSS community. The application was thus put out of reach of AOL Time Warner. After that, the plug was finally pulled on Nullsoft.
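
The flooding search described above can be sketched in the same schematic manner. The model below is hypothetical; the real Gnutella protocol adds time-to-live headers, message identifiers, and network transport, but the principle is the same: there is no central index anywhere, only neighbours passing the request on to further neighbours.

    # Schematic sketch of a Gnutella-style flooding search. The network is
    # modelled as an in-memory graph of nodes; a ttl counter limits how far
    # a request radiates outwards, as in the real protocol.

    class Node:
        def __init__(self, name, files):
            self.name = name
            self.files = set(files)
            self.neighbours = []                   # adjacent nodes, no central index

        def search(self, title, ttl=4, seen=None):
            seen = seen if seen is not None else set()
            if self.name in seen or ttl == 0:
                return []
            seen.add(self.name)
            hits = [self.name] if title in self.files else []
            for neighbour in self.neighbours:
                # Pass the request on to the next circle of nodes.
                hits += neighbour.search(title, ttl - 1, seen)
            return hits

    a, b, c = Node("a", ["song one"]), Node("b", []), Node("c", ["song two"])
    a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]

    print(a.search("song two"))                    # ['c'] -- found two hops away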

The story of OpenNap, Gnutella, and WASTE gives a flavour of what can happen when the means to write algorithms are dispersed among the proletariat. FOSS licenses work in a way similar to the architecture of Gnutella. By decentralising the running of the technology, Gnutella’s authors gave up control over their creation. Thus no legal or economic pressure could be applied to them in order to influence how the technology was used, as had previously been done to Shawn Fanning and Napster. In the same way, FOSS licenses place the development of software applications out of reach of any single individual, group, or company. FOSS licenses set loose the productive energies of an anonymous, ambulant crowd, and, in doing so, they offset the concentration of power and control that follows from individual authorship. We could almost go as far as to say that the politics of the hacker movement is the sum of this mode of disorganising power.

Not only does free source code undermine the intellectual property rights of content providers. It is also destabilising to other sorts of authorities. This was clearly demonstrated when Netscape decided to release the source code of their web browser under an open license. Up until then, Netscape’s web browser had been firmly controlled by the company and only in-house programmers could access the code. Ownership over the code had led to a number of design choices. Among them was the absence of advanced cryptographic features within the browser. Robert Young tells of the consequences of Netscape’s decision to set the code free: “In a move that surprised everyone, including Netscape engineers who had carefully removed cryptographic code from the software, less than a month after the source code was released, an Anglo-Australian group of software engineers known as the Mozilla Crypto Group, did what the U.S. government told Netscape it could not do. The group added full-strength encryption to an international version of Netscape’s Web browser, and made available Linux and Windows versions of this browser.” (Young, 98) Though Netscape didn’t intend it, the influence which the U.S. government could exercise over the web browser was cancelled out when the source code was made public.

The same principle of peer-to-peer production has found outlets outside the FOSS development community. The most politically aware experiments with user-created content are found in the movement for citizen media. Grassroots news reporting became a concept with the birth of the Independent Media Centre. It was a brainchild of the WTO demonstrations in Seattle in 1999. Indymedia consists of regional centres from which activist-reporters can reach a global audience with local news. It is concerned with documenting demonstrations and political events and is meant as a corrective to the biased reporting in mainstream media. The slogan “don’t blame the media, become the media” captures the philosophy of Indymedia’s activists. Such ambitions have a long-standing tradition within the left. Indymedia differs from political fanzines and pirate radio broadcasts in its global reach and real-time reporting.19 On the other hand, the political profile of Indymedia imposes a constraint and a lopsidedness on the readers who contribute to the project. Arguably, the blogosphere has been more successful in ‘becoming media’ than Indymedia. Bloggers are stitched together in the loosest possible sense, relying on Internet search engines instead of an editor for sorting out noise from information. This loose way of organising their activity has contributed to the rapid growth of the blogosphere. Once a critical mass of contributors has been built up, grassroots reporting is at an advantage over traditional news reporting. Eben Moglen, a prominent member of the Free Software Foundation, identified this mechanism when he noted that the broadcasting networks, with their over-paid celebrities and expensive equipment, are about the only organisations that cannot afford to be everywhere in the world at the same time. With a digital camera ready at hand and an Internet connection close by, the anarchistic mode of news reporting turns any passer-by into a potential journalist for a moment, just as the FOSS model turns every computer user into a potential bug reporter. Lack of capital and abundance of living labour are here made into competitive assets.20 The apolitical outlook of most bloggers does not diminish the importance of a distributed organisation of news reporting. That was amply shown during America’s second invasion of Iraq. In the first invasion, under the senior Bush administration, the control over journalists was so tight that it sent Jean Baudrillard pondering whether the war had really happened. The control over news media was even stricter the second time around. Even though journalists were embedded with the invasion forces, the junior Bush administration could not prevent damaging information from leaking out. Soldiers-becoming-journalists published home videos disclosing the abuses that they themselves had committed. Coverage of the war came from places where no professional journalist would be let within sight. This suggests the political ramifications when the means of news production spread beyond the journalistic profession.

More areas open up to peer production as the principle of peer-to-peer is applied not only to reduce the costs of variable capital (i.e. living labour), but also to cheapen constant capital (infrastructure, machinery, etc.). That could be the deeper implication of the SETI@home project. SETI@home is a favourite example in hacker literature, mingling high technology with a fascination for science fiction. SETI stands for Search for Extraterrestrial Intelligence. SETI searches for intelligent alien life by scanning for radio signals from outer space. The huge task of analysing the data received is distributed to volunteers who lend spare capacity on their personal computers. For a succession of years the project has outperformed state-of-the-art supercomputers at a fraction of the cost.21 The SETI@home project is not as dramatic as the controversies surrounding filesharing networks, nor does it have the zeal of grassroots journalism. Nonetheless, peer-to-peer computing could lower the threshold for the public to engage in various forms of computer-aided tasks. A qualified guess is that hackers will have an easier time in the near future when running simulations of hardware devices. That would take them a bit closer towards the goal of building a free computer machine. Cash-strapped laboratories in the Third World might find distributed computing practical for side-stepping pharmaceutical companies and developing generic drugs. And fans could make films with the same graphical sophistication and featuring the same computer-rendered stars as in Hollywood productions, leaving the movie studios without any edge over the amateurs. In short, peer-to-peer computing lessens the need for constant capital and lowers the requirements for the public to enter various productive activities.
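
The pattern of volunteer computing behind such projects can be conveyed with a small sketch. The snippet below is illustrative only and does not reproduce SETI@home’s actual client or server software: a coordinator splits the recorded signal into work units, a pool of workers standing in for the volunteers’ machines analyses them, and the coordinator merges the partial results.

    # Illustrative sketch of the volunteer-computing pattern: split the data
    # into work units, farm them out, merge the results. The local process
    # pool stands in for the volunteers' machines; the analysis function is
    # a made-up placeholder for the real signal processing.

    from concurrent.futures import ProcessPoolExecutor

    def analyse(work_unit):
        # Placeholder analysis: report the strongest value in the chunk.
        return max(work_unit)

    def split_into_work_units(signal, size):
        return [signal[i:i + size] for i in range(0, len(signal), size)]

    if __name__ == "__main__":
        signal = [0.1, 0.4, 0.2, 3.7, 0.3, 0.5, 0.2, 0.1]   # hypothetical data
        units = split_into_work_units(signal, size=4)

        with ProcessPoolExecutor() as volunteers:
            partial_results = list(volunteers.map(analyse, units))

        print(max(partial_results))                # the coordinator merges results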

When amateur collectives move on from producing fan fiction to producing news, facts, source code, etc., it calls into question the credibility of that material. Instilling trustworthiness in a publication is part of the labour process of readers. It can be as important to the success of a journal as the work of the writers. The stories uncovered by citizen reporters are of little consequence unless they are perceived as reliable by the public. The same goes for software code. The performance of FOSS applications is inseparable from the confidence that computer users have in these solutions. While pondering the future of amateur production, Mark Poster recalls that the credibility of individual authorship is a cultural invention. He notes that in the seventeenth century publishers fought an uphill battle to establish a market in books and newspapers, since readers were suspicious of claims made in print. People trusted information from persons they knew and met face-to-face. It required an educational feat by publishers and newspaper editors to change social norms so that trust was placed in the gatekeepers.22 Nowadays, a source of information is credible if it has been approved by a publisher, a talk-show host, a software company, or a certificate issued by an educational institution. In this model of expertise, a large amount of knowledge about a subject matter must converge in the single author. The accuracy of individualised sources of information relies on the past record of, and future repercussions to, the expert. In other words, accuracy of information is guaranteed by the labour market in experts, or, to be precise, by the employers of experts. We do not need to invoke Michel Foucault or Ivan Illich to recognise the power relations behind authorising texts in this way.

Mechanisms for authorising texts are seemingly absent in anonymous, collective authorship. Some hints about where to look for other sources of credibility are given by Wikipedia, by far the most well-known case of a peer labour project outside the FOSS scene. Wikipedia is an Internet encyclopaedia edited by its readers. It began with a vision by Jimmy Wales and Larry Sanger to create Nupedia, a freely accessible encyclopaedia on the Internet. They set out with a traditional approach, employing editors and demanding educational qualifications from writers. The project had only gathered a few hundred articles when it ran out of funding. The articles were published on a separate website named Wikipedia, and, since Jimmy Wales and Larry Sanger had abandoned their aspirations for credibility, they invited visitors to edit the texts. Volunteers joined and the content grew exponentially. In a few years the size of the English-language version of Wikipedia exceeded that of Encyclopaedia Britannica, and it continues to grow all the time. Wikipedia is also expanding in terms of the number of languages in which it is represented. Contrary to expectations, much of the text is of fairly high quality. The journal Nature compared entries from the websites of Wikipedia and Encyclopaedia Britannica across a broad range of scientific disciplines. Statistically speaking, an article in Wikipedia contains four factual errors, omissions, or misleading statements, while an article in Britannica contains three mistakes of the same gravity.23 Admittedly, vandalism and biased accounts are more of a problem in entries covering the social sciences and controversial topics. In addition, the risk of libel is a major concern in the openly edited encyclopaedia. A survey by IBM in 2002 discovered that, on the one hand, most high-profile articles had been attacked at some point, and, on the other hand, that vandalised articles were on average restored within five minutes.24 Presumably, facts gone obsolete are corrected at the same speed. This observation could prove decisive to our search for sources of credibility outside the mechanisms of individual authorship. In an environment that changes so fast that no individual can stay up-to-date with her field of expertise, collectively edited texts are likely to gain in credibility over individual authorship. Arguably, this could be taken as a late confirmation of Peter Kropotkin’s assertion of the superiority of anarchism: “The rate of scientific progress would have been tenfold; and if the individual would not have the same claims on posterity’s gratitude as he has now, the unknown mass would have done the work with more speed and with more prospect for ulterior advance than the individual could do in his lifetime.”25

The example of Wikipedia repeats a theme known from FOSS development. Security, stability, and relevance of data depend on the number and heterogeneity of the co-authors attracted to a project. Reliable data can be collected by a group of amateurs, each with a patchy awareness of the subject field, since the group makes up for lapses in the knowledge of individuals. It is the size and the diversity of the user base that authorise collectively edited texts. Those two factors mandate openness as a principle. Commons allow a maximum of developers, users, and audiences of various degrees of involvement to contribute to a development project. Conversely, as is suggested by proprietary software, secrecy and the monopolisation of knowledge fail to provide security and stability. Intellectual property rights prevent feedback cycles between successive stages of use. The development flow is ruptured by costs, uncertainty, litigation, and design incompatibilities, and the production process is slowed down at a time when speed, not scale, is king. In a GPL production line, unshackled from individual claims on posterity’s gratitude, information costs are near zero and the design stays open. The stages in which information metamorphoses from means of production into use value and back again flow more cheaply, more easily, and faster. The end result is not just use values at cut-rate prices; the products are technically more up-to-date. This is, in a nutshell, the economic rationality behind voluntarily entered, peer-to-peer labour relations organised in a commons/community.

Appropriation of Tools and Skills

Access to tools is at the heart of the Marxist critique of capitalism. The proletariat was created when it was deprived of the means of production. The enclosure movement was a decisive episode for establishing this condition. To the first generation of Marxists, reclaiming the means of production necessitated a seizure of the factories and land held by the bourgeoisie. Their revolution could hardly be anything but violent. A takeover of this kind looks highly improbable today, but, then again, perhaps such a step is not called for any longer. User-centred development models suggest that the proletariat is already in possession of the means of production, at least in a restricted sense and limited to some sectors of the economy.

The instruments of labour break down into tools and skills. As far as tools are concerned, these have effectively been put out of reach of living labour for hundreds of years by the large-scale organisation of industrial capitalism. In the flexible accumulation regime, however, both the labour force and the machinery are rapidly being downsized. An illustration of this is the computer, which over a period of thirty years has gone from mainframes to palm devices, and from being a major investment barely affordable to elite institutions to a mass-market consumer product. It is as consumer goods that the means of production are trickling down to the masses, spreading in wider circles with the expansion of markets and with every fresh wave of same-but-different items. Productive tools (computers, communication networks, software algorithms, and information content) are available in such quantities that they become a common standard instead of being a competitive edge against other proprietors (capitalists) and a barrier to non-possessors (labourers). Once the infrastructure is in place and common to all, additional input must come in the form of more brains/people. Evidence of such a trend has been debated by management writers, industrial sociologists, and Marxists since the early 1980s. In the FOSS industry this anomaly is the rule. Glyn Moody attests to it in his study of the FOSS development model. Businesses based on free and open licenses rely more heavily on the skills and motivation of their staff than firms selling proprietary software: “Because the ‘product’ is open source, and freely available, businesses must necessarily be based around a different kind of scarcity: the skills of the people who write and service that software.” (Moody, 248). Glyn Moody’s observation implies that labour power is multiplied faster by the number of people pooling their capacity in a given project than by improving the equipment. This feature is probably consistent with the situation in most pre-capitalist societies. Up until the breakthrough of the industrial revolution, the product of human labour probably increased much more in return to the worker’s skill than to the perfection of tools.26

The lessons from the computer underground bring into relief a debate on capitalism and deskilling that raged in the 1970s and 1980s. The controversy took place against the backdrop of the post-industrial vision that capitalism had advanced beyond class conflicts and monotonous work assignments. Harry Braverman targeted one of its assumptions, that the skills of workers had automatically been upgraded when blue-collar jobs were replaced by white-collar jobs. He insisted that the logic of capital is to deskill the workforce, irrespective of whether it is employed in a factory or in an office: “By far the most important in modern production is the breakdown of complex processes into simple tasks that are performed by workers whose knowledge is virtually nil, whose so-called training is brief, and who may thereby be treated as interchangeable parts.”27 Braverman’s contribution to the debate was very influential. In hindsight, however, the rise of new professions, in computer programming for instance, seems to have proven his critics right. They replied that though the deskilling of work is present in mature industries, this trend is counterbalanced by the establishment of new job positions with higher qualifications elsewhere in the economy. One of them, Stephen Wood, reproached Braverman for idealising the nineteenth-century craft worker. Idealisation was ill-advised, not least since the artisans had constituted a minority of the working class. Wood pointed to the spread of literacy to suggest that skills have also increased in modern society.28 His comment is intriguing since it brings our attention to a subtlety that was lost in the heated exchange. It is not deskilling per se that is the object of capital, but making workers exchangeable. When tasks and qualifications are standardised, labour will be in cheap supply and politically weak. From this point of view, it does not really matter if skills level out at a lower or a higher equilibrium. Universal literacy is an example of the latter. One of its consequences was that labour power became more abstract and more interchangeable. Literacy is in this regard quite analogous to present-day campaigns for ‘computer literacy’. These reflections on the Braverman debate give us perspective on the current, much-talked-about empowerment of consumers and computer users. The displacement of organised labour from strongholds within the capitalist production apparatus, through a combination of deskilling and reskilling, has prepared the ground for computer-aided, user-centred innovation schemes.

As was anticipated in the debate in the 1970s, computerisation has spearheaded these tendencies. The reason is that the computer, unlike ordinary ‘dumb’ machinery, is universal in its applications. This feature of computers has not come about by chance, as can be seen from the introduction of computer programming in industry in the 1950s and 1960s. Managers invested heavily in numerical control machines in the hope of becoming independent of all-round skilled labourers. Special-purpose machinery had failed to replace these workers, since initiative still had to be exercised on the shop floor to integrate the separate stages of specialised production. Another drawback of single-purpose machinery was that it locked production into a single, high-volume production line, which created other vulnerabilities for capital in the face of workers’ resistance. In contrast, general-purpose machinery simulated the versatility of a human being, and was thus better suited to replace her. In the words of David Noble: “Essentially, this was a problem of programmable automation, of temporarily transforming a universal machine into a special purpose machine through the use of variable ‘programs,’ sets of instructions stored in a permanent medium and used to control the machine. With programmable automation, a change in product required only a switch in programs rather than reliance upon machinists to retool or readjust the configuration of the machine itself […]” (Noble, 1984, 81–82). This universality of computers is directly related to the overall specialisation and lock-in of human knowledge in the capitalist labour process. Software mediation allows the single skill of using a computer program (for example Photoshop) to translate into other skills (operating the machine language of the computer, the crafts of printmaking and typesetting). Computer literacy thus lessens some of the inertia of human training. It gives an edge to firms and individuals in flexible labour markets ruled by the imperative of ‘life-long learning’. And, of course, it undermines the position of skilled and organised labour. A case often referred to in labour theory is the union struggle in the printing industry during the mid twentieth century. Typographers had traditionally held a strong position based on their knowledge monopoly over the trade. Computerisation of the labour process was decisive in breaking their strength.29 No doubt, the importance of software algorithms in the so-called new economy owes much to this expediency. Programmable automation, i.e. computers, has accelerated the logic of automation to a breaking point, both in its despotism and in its emancipatory potential. Previously, human knowledge was objectified in cogs and wheels; now it is objectified in binaries. The need for living labour is sharply reduced by the fact that electronic texts can be reproduced infinitely, what we know as the ‘information exceptionalism’ dictum. However, as was argued in chapter two, digitalisation comes with a catch. The electronic text cannot alter itself in a novel and meaningful way. The game changes so that living labour must be deployed to produce a perpetual stream of novelty, meaning, affects, and context. To combine abstract cues in novel ways (innovation) or to integrate them with lived experience (to construct meaning) requires the cognitive and analytical efforts of living labour, based on holistic understanding and personal engagement. Labour theoreticians and management consultants concur that Taylorism tends to lay waste to these capabilities.30 What is more, these qualifications are generic. Anyone born human is able to conceptualise, communicate, write, perform, and so on, which makes the technical division of labour harder to sustain.
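
The kernel of Noble’s observation, that a change in product requires only a switch in programs, can be made concrete with a toy sketch. The fragment below, written in Python, is not drawn from Noble or any other source discussed here; the machine, its repertoire of operations, and the two ‘programs’ are invented purely for illustration. A single generic interpreter behaves like two different special-purpose machines depending only on which list of instructions it is fed.

    # A toy sketch, invented for illustration: a universal machine becomes a
    # special-purpose machine only for as long as it runs a given program.
    # Changing the product means swapping programs, not retooling the machine.

    def run(state, program):
        """Execute a list of (operation, argument) instructions on a generic machine."""
        for operation, argument in program:
            state = OPERATIONS[operation](state, argument)
        return state

    # The machine's fixed repertoire of elementary operations.
    OPERATIONS = {
        "move":  lambda s, dist: {**s, "position": s["position"] + dist},
        "drill": lambda s, size: {**s, "holes": s["holes"] + [(s["position"], size)]},
    }

    # Two different products from the same machine; only the program differs.
    bracket_program = [("move", 10), ("drill", 3), ("move", 25), ("drill", 3)]
    plate_program = [("move", 5), ("drill", 8)]

    blank = {"position": 0, "holes": []}
    print(run(dict(blank), bracket_program))  # {'position': 35, 'holes': [(10, 3), (35, 3)]}
    print(run(dict(blank), plate_program))    # {'position': 5, 'holes': [(5, 8)]}

The only point of the sketch is that the ‘tooling’ has migrated into the program: the adjustment of the machine that once demanded a skilled machinist is reduced to choosing which set of instructions to load.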

Contrary to popular belief, Harry Braverman was not oblivious to these possibilities, though he had no illusions that the emancipatory potential could be realised under capitalist work relations. He noted approvingly a tendency in the general development of machinery: “The re-unified process in which the execution of all the steps is built into the working mechanism of a single machine would seem now to render it suitable for a collective of associated producers, none of whom need spend all of their lives at any single function and all of whom can participate in the engineering, design, improvement, repair and operation of these ever more productive machines.” (Braverman, 320). His divination is quite similar to what we are witnessing in the computer underground today. Knowledge monopolies are flattened so that larger sections of the proletariat can engage in any (and several) productive activities. Because Photoshop is replacing traditional forms of typesetting and printmaking, crafts that took many years to master and that required major investments in printmaking facilities, a broader public can produce posters and pamphlets that are instantly applicable to their local struggles. The dissemination of productive skills and tools makes it much harder to control the productive use of these capabilities than was the case when the means were concentrated in the hands of a few, though organised and relatively powerful, employees. What is true of graphic design applies equally to the writing of software code. An indication of this is the difficulty the state and capital have in suppressing the free development of filesharing and encryption technologies. We would be mistaken, however, to jump to the conclusion that these productive forces are mushrooming independently of capital. The perspective of another of Harry Braverman’s critics, Andrew Friedman, might be more applicable when we move on to look at how capital reintegrates user-centred development communities into the capitalist valorisation process. Friedman stressed that in addition to overt coercion and control of workers, managers win over employees by building consensus and giving them leeway.31 The approach of ‘responsible autonomy’, as he called it, becomes the keynote when firms set out to manage communities of volunteer developers. Needless to say, carrots and sticks are not polar opposites but are part of one and the same strategy. Inside firms, cooperation between employees and managers takes place under the unspoken threat of downsizing and outsourcing. Such tacit pressures are voided when the other party is a volunteer development community. Instead, firms have to fall back on copyright law, patent suits, and various acts on computer decency and cyber-terrorism. Law enforcement authorities are the necessary companion to the soft approach of the corporate allies of the FOSS movement.

Inside the Software Machine

We have worked our way towards the conclusion that, if the critical inputs in post-Fordist production are the aesthetic and cognitive processes of workers and audiences, then the proletariat is in possession of the means of production. But doubt lingers as to whether the proletariat can be said to be in charge of the instruments of labour, even if those instruments are their own brains. The ambiguity can be reformulated as a question and set in the context of the FOSS community. Is free software a tool reclaimed from capital, or is it a cog integrated into a larger software-capitalist machine? Harry Braverman followed Karl Marx in differentiating between tools, which extend the powers of the human body, and machinery, which turns workers into human appendages. The distinction was straightforward in the industrial conflicts which he studied, where machines consisted of mechanical parts. The components of the software machine, in contrast, are made out of signs. In important respects the software machine is identical to human language. In this case, it is the unconscious that acts as a human appendage to the machine, rather than the fingers, muscles and eyeballs of the worker. In computer science there is even a word for the human component: the ‘wetware’. The task of formulating an emancipatory project is complicated by the fact that the species being can barely be told apart from the matrix into which he is integrated and subordinated.32 Liberation from this machine cannot be had by claiming legal ownership over it, as in ‘expropriating the expropriators’, nor is it clear how it could ever be smashed, as is advocated by self-described neo-Luddites. Hackers hold out the possibility of a third way. Since the software/wetware machine is omnipresent, there is no external point from which to confront it. Struggle is carried out from inside the enemy host and must therefore be subversive rather than confrontational in character.

To carry this thought any further, we need to examine in more detail what constitutes the machine as opposed to its human limbs. The problematic demands a philosophical line of attack. Humberto Maturana and Francisco Varela have presented a distinction between two forms of organisation, that of the living system and that of the man-made system. The criterion for an organisation to count as a living system is as follows: “[…] they transform matter into themselves in a manner such that the product of their operation is their own organisation.”33 Maturana and Varela call these systems autopoietic because they are autonomous within their unity, not in an absolute sense, but in a number of important respects. They produce a plenitude of qualities all referring back to their own regeneration (for example, a plant that produces seeds, strong smell, pure colour, and so on). This condition fundamentally sets them apart from non-living, man-made systems: “Other machines, henceforth called allopoietic machines, have as the product of their functioning something different from themselves (as in the car for example).” (Maturana, 80) The product of allopoietic systems is their output, which is the sole rationale for their existence. A car incapable of motion would not qualify as a car. The operation of an allopoietic system is defined, and its output decided over, by an observer, an alien power, not by the organisation itself. In addition, the lack of autonomy of allopoietic systems is established by the fact that they have inputs. That is, they depend on an alien power outside their own reach for acquiring and digesting the supplies required by their central operations. For a car to function as a car, access to fuel is mission-critical.

Maturana and Varela acknowledge that autopoietic systems can be integrated as components in a larger allopoietic system. Plants grown in industrial agriculture are a case in point. Such crops are bred to produce not according to their own plenitude of needs, but according to a one-dimensional requirement determined by an alien power. All the plants in a field are streamlined to duplicate the plant with the highest output, so that diversity is replaced with monoculture. By extension, regulation over a plant’s inputs and outputs translates into control over the farmer tending the plant. Men themselves are grouped into allopoietic ‘social machines’. The first example that comes to mind is the factory. Subsumed under capital, living labour has outputs (surplus value) and inputs (commodified needs). The historian of technology Lewis Mumford dated this wretchedness back to the Egypt of the Pharaohs. The orchestration of slaves and free men on a grand scale, crowned by the erection of the pyramids, exemplified to him a gigantic, allopoietic system. He labelled it the megamachine. Here the concept of the machine is extended beyond a narrow focus on any single device, giving due credit to the social machine which organises and confines the operation and scope of the technical machine.

Gilles Deleuze and Felix Guattari have gone the furthest in broadening our understanding of what defines a machine, to the point of denying any substantial difference between the living organism and the technical apparatus. They argue that there are always intersecting flows at some sublevel which blur the notion of a definable, molar object.34 The wasp and the orchid are their favourite example. The fact that the orchid cannot reproduce itself without mediation by the wasp does not disqualify the flower as a living organism. The symbiosis between the flower and the insect forms an assemblage. Thus the plant and the animal together qualify as a machine, in the terminology of Deleuze and Guattari. If their reasoning is accepted, the distinction between allopoietic and autopoietic systems cannot rest on the claim that one of them is self-reliant within its unity towards an Outside. But if the definition is modified just slightly, the two systems can still be differentiated by the presence or absence of mutual reciprocities in the flows. The central question becomes whether a dependency is asymmetrical to the point where one party has the power to define one-sidedly both the relationship and the internal composition of the other party. We might say that an allopoietic system is a system that has fallen prey to ‘real subsumption’. In plain language, the distinction between a living system and a man-made system boils down to power relations.

Thus we are brought back to the question of how emancipation is possible inside the software/wetware machine, or, to be more specific, how the proletariat can take charge of the instruments of labour that are already in its possession. The appropriation of the means of production has proved to be an insufficient condition for achieving freedom. The reason is that the productive tools are framed within a social machine of unfreedom. The post-Fordist labour process has always-already anticipated the independent worker capable of administering his own labour power. He is held in place by a network of economic, social and ideological constraints that extends far beyond the work situation. Maintaining these conditions is the object of the workfare state. Planned insecurity persuades the worker to willingly take the form of the commodity. Even though the instruments of labour are now at her disposal, her newfound freedom is nonetheless put to the service of production for exchange. Leading on from the discussion above, emancipation would require a reversal of the process referred to previously: allopoietic systems would have to be regrouped into autopoietic systems. Gilles Deleuze and Felix Guattari would have phrased the same thing as the forming of a nomadic war machine. Such airy proclamations tend to create more confusion than clarity. We are left wondering what an autopoietic system would be if it is not a living organism in the common-sense meaning of the term. Drawing on Maturana and Varela’s description, it could mean an organisation in which the product of its operation is an end in itself. A concrete example of such an organisational form, to be discussed in the next chapter, is the reputation-based gift/library economy that stratifies FOSS development projects. The desire and logistics of the market economy are here reprogrammed into an economy of play and excess. Inside these constellations the conditions are provided for employing productive tools and skills for non-instrumental, convivial ends.