1
A Background of the Hacker Movement

The History of Internet

One could argue that cyberspace emerged in 1876 with the telephone. The Internet, as we presently know it, is commonly thought of as the merger of telephony and computers. Leading on from the Internet’s heritage in telephony, Bruce Sterling light-heartedly proclaims that the first hackers were the boys employed as switchboard operators by the telephone companies. The boys played pranks while connecting customers and they were soon replaced with more reliable, female personnel.1 This historical anecdote is in accordance with the portrayal of hacking as it comes across in mainstream media. Hacking is regularly reduced to an apolitical stunt of male, juvenile mischievousness, and, ultimately, it is framed as a control issue. In order to emphasise the political dimension of hacking, it is apt to outline a different ‘mythical past’ of hackers. This story too begins with the invention of the telephone. Graham Bell was not only a prominent inventor but also a forerunner in exercising his patent rights. The business model which his family built up around the patent was no less prophetic. Telephones were leased rather than sold to customers and the monopoly service was provided through a network of franchised subsidiaries. All in all, Graham Bell established one of the most controversial and longstanding monopolies in American twentieth-century corporate history. When the communication infrastructure was built, the Bell Telephone Company concentrated on catering to urban dwellers while rural areas fell by the wayside. The telephone had its biggest impact on life in the countryside but it was not profitable for companies to connect distant farmhouses. Long before Bell’s patent had expired, farmers began to construct their own telephone lines, sometimes using fence wire to pass the signal from one farm to the next. The movement spread rapidly in rural areas. The first telephone census, made in 1902, counted more than 6,000 small farmer lines and cooperatives. Over the years, the farmer lines were incorporated into the national dialling system.2 The most direct parallel to those farmers today is the community activists establishing gratis, wireless Internet access in their neighbourhoods. The farmers and hackers both demonstrate the ingeniousness of living labour in routing around constraints and appropriating tools (even when it takes fence wire) for its own purposes. This interpretation is rather consistent with the original meaning of the term hacking. The word was first used by computer scientists in the 1950s to express approval of an ingenious and playful solution to a technical problem. These privileged few enjoyed a great amount of autonomy to do research and ‘hack’ while having access to very expensive equipment. After the end of the cold war, when computer equipment got cheaper while researchers lost some of their former autonomy, the joy of playing with computers was picked up by groups outside the institutions, by people calling themselves hackers. Though this book is mainly about the latter group, the story begins in the science laboratory. Readers who are familiar with the background of the Internet can skip ahead to the next heading.

If any time and place could be pinpointed as the springboard for the merger of computing and telephony, later to become the Internet, John Naughton suggests that it was the thriving experimental milieu at Massachusetts Institute of Technology (MIT) before and around the two World Wars.3 When Vannevar Bush completed the first Differential Analyser in 1928, it was a massive assembly of gears and pressure cylinders. The machine was used for advanced equations in engineering projects and for calculating ballistic trajectories for the military. To build such a computer represented a huge investment affordable only to the biggest institutions. Despite the immense costs, the computer could only perform a limited set of operations and each calculation had to be hardwired into the machine. To give a significantly different instruction, or to correct a bug, meant physically replacing hardware components. The cost efficiency of computing resources would be vastly improved if computers were made more flexible. This required an architecture where the physical components were given an open-ended function so that more instructions could be provided in software code. Norbert Wiener, the founder of cybernetics, sketched out such a digital computer and his ideas were implemented towards the end of the Second World War. MIT scientists hoped for a deepened symbiosis between man and machine. By shortening the feedback loops between the computer and the user, they envisioned a computer that would function as a complement to the human brain. The computer could take care of intricate and monotonous calculations and leave the humans free to engage in innovative and associative exploration. This dream was held back by the computer design of the time. Batch-run computers were provided with a set of instructions which had to be prepared in advance. The computer processed the instructions in one chunk without allowing for any human interruptions. If something went wrong, the researcher had no choice but to rewrite the program and start all over from square one.

A solution to this awkwardness was found in an alternative design, time-sharing computers. The selling point of time-sharing computers was that several users could share the capacity of a single computer. It saved a very expensive resource, computer calculation time. Later on, the principle of time-sharing was extended beyond the confines of the computer box. It was extended from a number of users in one place sharing a single computer to many users in a wide area pooling and sharing their combined computer resources. This idea occurred to Bob Taylor, who presided over the Advanced Research Projects Agency (ARPA). The organisation had been set up in the aftermath of the launch of Sputnik. It was part of American policy to catch up with the Soviet Union in the race for technological supremacy. Bob Taylor realised that ARPA was in possession of a cacophony of computer terminals unable to exchange information with each other and that internal communication would be eased if these computers were linked together. His ambitious plan was stalled by the fact that the terminals had not been manufactured with the intent to speak to one another. Furthermore, the complexity of the system would grow exponentially with every new computer added to the cluster. To overcome these two problems, of incompatibility and complexity, ARPA researchers placed nodes in-between the terminals. The nodes consisted of small computers that served as network administrators, receiving and sending data, checking for errors and verifying that messages arrived at prescribed destinations. The nodes bridged the incompatibilities of end-user terminals in a decentralised fashion. By dispersing the intelligence to the edges of the network, rather than collecting information on the whole system in a server and guiding every intricate detail of the network from this centre, the problem of complexity was somewhat reduced. This end-to-end solution still remains basic to the architecture of the Internet.

The common notion that the Internet originates in the Pentagon is partly misleading. It is correct, though, that a theory of a networked mode of communication had been devised prior to ARPA’s undertaking, in an organisation affiliated with the Pentagon. The individual behind this feat was Paul Baran and his employer was Research ANd Development (RAND).4 Nuclear holocaust was the policy area of RAND’s strategists. A major concern of theirs was the game-theoretical benefits of a nuclear first strike. A first strike, or an accident for that matter, could sever the connections running between the headquarters and the missile silos. The mere possibility of such an outcome created uncertainties and jeopardised the MAD (Mutual Assured Destruction) doctrine. A resilient communication system was therefore crucial to guarantee retaliation capacity. Vulnerability was located in the single line which carried the message ‘fire’, or ‘hold fire’. Hence, the model envisioned by Paul Baran distanced itself as far as possible from a centralised communication infrastructure. In a network all the nodes are linked with their neighbours and there are several possible routes connecting any two destinations. Baran had the plan sketched out in 1962 but he ran into opposition from the phone company. AT&T was entrenched in analogue telecommunication technology and refused to build the infrastructure that Baran’s system required. In analogue communication systems the sound waves are faithfully reproduced in a single stream running through the phone line. In a digital communication system, in contrast, the signal is translated into ones and zeroes and sent as a number of packets. Once the data arrives at the destination, the packets are reassembled and put together so that they appear to the receiver as a continuous stream of sound. Baran’s idea demanded a digital communication system where the signal was divided up into packets and each packet could individually find the best route to travel. If a channel was blocked, the packet could take a different route. Because of the resistance from AT&T, Paul Baran’s plans were left in a drawer and he did not learn about the work in ARPA until much later.5
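
A toy sketch may help to fix the principle at work here: a message is chopped into numbered packets, the packets may travel different routes and arrive out of order, and the receiver reassembles the original stream from the sequence numbers. The code below is purely illustrative and not a rendering of Baran’s actual design; all names are invented for the example.

```python
import random

def send_as_packets(message: str, size: int = 4):
    """Split a message into numbered packets, as a packet-switched network would."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def receive(packets):
    """Reassemble the original message, whatever order the packets arrived in."""
    return "".join(payload for _, payload in sorted(packets))

packets = send_as_packets("fire / hold fire")
random.shuffle(packets)  # simulate packets taking different routes and arriving out of order
assert receive(packets) == "fire / hold fire"
```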

Towards the end of the 1960s, ARPA built the first computer-to-computer connection and named it ARPANET. It linked together a small selection of universities and military bases. For a long time it remained an exclusive system confined to the top academic and military echelons. Over the years, however, other networks began to crop up in the US and elsewhere. The Télétel service in France is the most well-known example, though less successful trials were made in England and Germany too. It was implemented by the French telephone company in 1982 after many years of testing. The terminals, known as Minitel, were handed out for free with the intention that they would replace the need for printed white page directories. Instead the users quickly found out how to communicate with each other through their Minitels. Most of the traffic was driven by conversations between users and by erotic message boards, the so-called ‘messageries roses’.6 The Internet, the network of networks, took shape as these diverging net-clusters were joined together. To cope with a growing diversity of standards, Robert Kahn and Vint Cerf designed a system of gateways in the mid-1970s. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) link together and carry the traffic over the many networks of the Internet.

The increased flexibility of computer hardware has allowed important advances in the utilisation of computers to be made solely on the level of software code. This in turn implies lower costs to innovate and thus less dependency on government or business support. UNIX is a landmark in the history of software development but it is also archetypical in that it partially emerged to the side of institutions.7 The two enthusiasts responsible for UNIX, Ken Thompson and Dennis Ritchie, had been working on an operating system for Bell Laboratories, a subsidiary of AT&T, for some time. They had become disheartened and started their own, small-scale experiment to build an operating system. The hobby project was taken up in part to the side of, in part under the wings of, the American telephone company. UNIX rapidly grew in popularity and became so widely used by AT&T staff that the company eventually endorsed it. Moreover, it also became known among users outside the phone company. An anti-trust settlement in 1956 against AT&T was of utmost significance for the success of UNIX. As part of the settlement the phone company agreed not to enter the computer business. AT&T was thus barred from selling UNIX or charging a higher tariff for computer transmissions running on its phone lines. Consequently, UNIX could be freely distributed and became widely popular in universities and in the private sector. John Naughton’s explanation for the success of the operating system is instructive: “The main reason was that it was the only powerful operating system which could run on the kinds of inexpensive minicomputers university departments could afford. Because the source code was included, and the AT&T licence included the right to alter the source and share changes with other licensees, academics could tamper with it at will, tailoring it to the peculiar requirements of their sites.”8 It is logical that UNIX was designed to run on relatively inexpensive computers, since for the most part it was developed on such computers by users with limited access to large-scale facilities. The same pattern is repeated once again when AT&T’s original UNIX program metamorphoses into versions of BSD UNIX, and later inspires GNU/Linux. This time around the code was written on computers that were just-about affordable to individuals. Using personal computers to write software must have felt like an impediment at the time. And yet the accessibility of small computers was the key factor for the eventual success of operating systems like BSD UNIX and GNU/Linux. The point should be stressed since it highlights two important lessons. First, the success of this technology often stands in an inverse relationship to the size of fixed capital (i.e. machinery and facilities) that is invested in it. Second, as a consequence, much computer technology has been advanced by enthusiasts who were, at least partially, independent of institutions and corporations. Users joined forces in a collaborative effort to improve UNIX, fix bugs, make extensions, and share the results with each other. The environment of sharing and mutual support was spurred on in the early 1980s, thanks to the invention of a protocol for UNIX computers to share files with each other through the phone line. It facilitated community building and fostered values that foreshadowed later developments. With the option to connect computers over the telephone infrastructure, a cheaper and more accessible communication channel than ARPANET had been created.
The stage was set for hackers to enter.

The History of the Computer Underground

It is one of history’s ironies that the roots of the Internet can be traced to two sources, U.S. Cold War institutions and the anti-war movement. The hacker community grew out of American universities in the 1960s. Bruce Sterling attributes the potent ideological hotbed of the computer underground to a side effect of the Vietnam War. Many youngsters then chose to enter college studies to avoid being sent into battle. Disposition for civil disobedience was reinforced by the communicating vessels between university drop-outs, peace activists, and hippies. The radicalism of students mixed with the academic kudos of researchers.9 In the following decade, the mixture of hippie lifestyle and technological know-how was adopted by so-called Phone Phreaks, a subculture specialised in tapping phone lines and high-tech petty theft. Political self-awareness within the movement was propagated in the pioneer newsletter, Youth International Party Line. It was edited by Abbie Hoffman, who started it in 1971. He saw the liberation of the means of communication as the first step towards a mass revolt. Two years later the newsletter was superseded by the Technological American Party. The new publication jettisoned most of the political ambitions of its predecessor and concentrated on circulating technical know-how. The forking of the fanzine epitomises two polarities within the movement, still in force today. On one side are activists motivated by ideology and on the other side are ‘techies’ who find satisfaction in mastering technology. Some techies have come to look unkindly upon the efforts by activists to politicise the movement. Techies tend to perceive ‘hacktivists’ as latecomers and outsiders claiming the hobby for their own purposes. The truth of the matter is that the subculture has always been deeply rooted in both traditions. Indeed, the hacker movement was more or less forked out of the New Left.10 De-politicisation came later, mirroring general trends in society. In the aftermath of the clashes of 1968, the line of thought within the hippie and environmentalist movements changed. Rather than engaging in head-on confrontations with the system and the police, hopes were placed in the building of an alternative system. The leading thought was to develop small-is-beautiful, bottom-up, and decentralised technology. The personal computer fits into this picture. A central figure in advocating such an approach, with a foothold in both the environmental movement and the embryonic hacker movement, was Stewart Brand, publisher of the Whole Earth Catalog. Another key name in the philosophy of ‘appropriate technology’ was the industrial designer Victor Papanek. They denounced mass production in the same breath as they provided blueprints for Do-It-Yourself technologies. The guiding idea was that a ‘better mousetrap’ would win out against faulty industrial products on the merits of its technical qualities. Hackers show no less confidence in the superiority of Free/Open Source Software (FOSS) code and are assured of their victory over flawed, proprietary code. The historian of technology Langdon Winner was more sceptical when he wrote a few years after the Reagan administration had quenched the high spirits of ‘appropriate technology’ campaigners.11 The ease with which the government purged the programs for appropriate technology is a sobering lesson in the raw power of the state.
Winner complained that the thrust of the hippie and environmental movement had quickly been deflected inwards, into consumption of bohemian lifestyles and mysticism. His pessimistic account of the events is understandable but must be tempered by the fact that he was unaware of the sprouting activity of phone phreaks and hackers at the time he wrote. The ideals of the appropriate technology movement jumped ship and thrived long after the zenith of the hippies and the environmentalists. This could be a precious reminder in a possible future when the hacker movement has faded and its heirs have not yet announced themselves. But it is also possible that the hacker movement proves itself to be more resilient than the New Left. A principal difference, though not the only one, is the motivational force behind hacking. The advocates of appropriate technology were led to experiment with Do-It-Yourself techniques as a consequence of their politics. Hackers, on the other hand, write code primarily for the sake of it, and politics flows from this playfulness.

Steven Levy writes about the hardware hackers gathering at the Homebrew Computer Club in the mid-1970s. His retrospective gives an account of the two, partially coinciding, partially inconsistent, sentiments expressed by the people involved. They were drawn together by the excitement of tinkering with electronics. Even so, the pleasure they experienced from hacking was tied up with a political vision and messianic hopes. By constructing a cheap and available computer able to ‘run on the kitchen table’, they set out to liberate computing from elite universities and from corporate and military headquarters. But persons with overtly political motives found themselves out of place. The initiator of the Homebrew Computer Club, Fred Moore, eventually dropped out, expressing disappointment with the lack of political awareness among club members. Reflecting on his departure, the activist and long-time moderator of the Homebrew Computer Club, Lee Felsenstein, suggested that Fred Moore got his politics wrong. The politics of the Homebrew Computer Club was the “propaganda of the deed” rather than “gestures of protest”.12 Indeed, what the hardware hackers accomplished from playing with electronic junk is impressive. The microprocessor had recently been invented by Intel and the company expected the item to be used in things like traffic light controllers. Hardware hackers thought differently. They combined Intel’s microprocessor with spare parts and built small computers. Ed Roberts’s Altair marked a watershed in 1975. The Altair was not the first hacker computer but it was the first computer built for small-scale sale that enjoyed some commercial success. Roberts’s market consisted exclusively of other hackers and radio amateurs. The purchaser had to assemble the parts painstakingly by himself. If the customer endured, he was rewarded with a completely useless gadget.13 But within the cooperative milieu of the Homebrew Computer Club, improvements were rapidly made and many more prototypes followed. One model was the Apple. It departed from the earlier designs in that it was somewhat user-friendly and had functions beyond just being a computer. The Apple was a decisive step towards creating a consumer market larger than the cabal of hardware hackers. As demand for computers picked up, venture capitalists began to pay attention to the home computer market. The establishment of a proper industry for small computers was crowned by IBM’s decision in 1981 to launch the Personal Computer (PC). It is true that the burgeoning economic opportunity in home computers led to a decline of idealism among the members of the Homebrew Computer Club. We are, however, equally justified to say that community norms disintegrated in response to the fulfilment of the original aims of the club. Hardware hackers succeeded in democratising computer resources. Up till that point, decision-making over computers had been concentrated in the hands of a few privileged, white-coated engineers who were in charge of mainframe computers. Workers detested these mechanical monsters and hackers hated them for much the same reason: mainframe computers were the embodiment of ‘office despotism’ in the 1960s and 1970s. By channelling their play-drive into computer building, hardware hackers forced the industry to embrace their dream of decentralised computing.

Techies are therefore right in insisting on the centrality of play over ideological zealousness. It is play (desire) which sets the hacker movement apart from the ‘gestures of protest’ of more traditional political organisations. The problem with an apolitical standpoint is rather that it does not stay clear of politics. When class consciousness has been evacuated, the void is colonised by right-wing, commonsensical ideology. Editorials in later issues of the Technological American Party expressed a strong libertarian conviction, a tradition which has been upheld by Wired Magazine. To get the blend of politics and pleasure right is a tightrope to walk. History indicates, however, that radicals can trust the outside world to intervene. The pure joy derived from comprehending and building systems is political in itself in a class society where power relations are mediated through black-box designs. At its heart, hacking is a gut reaction against capital’s strategy of Taylorism. Even libertarian hackers inevitably partake in challenging capital’s monopoly over technological development. That is not to say that political awareness is irrelevant. The point is rather that play triggers repercussions, and repercussions against play guarantee that class consciousness is passed on from generation to generation. This dynamic can be illustrated with the standpoint that information should be freely shared. The left, the right, and the supposedly apolitical, all rally behind this belief. Their demand boils down to something more than an opinion. As a matter of self-preservation, hackers cannot help but work towards free information flows. Free access to software tools is a prerequisite for the existence of a hacker community. While the norm of sharing was a given in the academic setting of the 1950s and the 1960s, it is out of step with the growing market value of information today. It is unavoidable that this conviction will set the hacker movement on a collision course with the establishment.

Two enclosures of software code at the beginning of the 1980s heralded the soaring economic and political stakes in information. It is telling that IBM, currently a close ally of the Open Source movement, was the chief proponent of enclosing software behind copyright law.14 In the early 1980s, Japan’s Ministry of International Trade and Industry (MITI) promoted a sui generis intellectual property law on software. The suggested law provided 15 years of protection and included compulsory licensing of software. A similar draft surfaced in the World Intellectual Property Organisation (WIPO) at the time. IBM was alarmed by these proposals. Assisted by U.S. trade officials and supported by governments in Europe, IBM managed to have software placed under the stricter terms of copyright law.15 IBM’s new-found romance with the Free and Open Source Software movement, which is sure to be a short-lived one, is a consequence of its strategy in the 1980s backfiring. Microsoft got the upper hand over IBM with the introduction of strong software licenses. At about the same time, in 1982, AT&T was released from the anti-trust ruling which had prevented the company from entering the computer business. The company soon began to enforce ownership rights over UNIX. By then the operating system had been extended and rewritten many times over by students, scientists and enthusiasts collaborating across institutions and corporations. The attempt by AT&T to privatise UNIX qualifies as one of the most notorious enclosures saluting the dawn of the ‘information age’. It had a lasting impact on the collective mindset of the programmers’ community and fuelled their scepticism towards big corporations and the intellectual property regime. AT&T’s bid to own UNIX demonstrated that copyright can be used to rob authors of their work, the very opposite of the ideological justification for the law. Hackers thus realised that the collective authorship of software developers has to be shielded from the legal powers instituted in a single party by copyright law.

The Birth of the General Public License

The politics of the hacker movement gravitate around the issue of public access to source code. Source code provides a list of instructions that can conveniently be read and modified by an engineer. Software released under Free and Open Source Software (FOSS) licenses is required to be published together with the source code. In proprietary software the source code is hidden away as unintelligible binary code. Binary code is the list of instructions, represented as lines of ones and zeroes, as it is read and executed by the computer.16 In addition to the technical obstacles, copyright law forbids users from reading the code of proprietary software.
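
The distinction can be made concrete with a small, hedged example. The human-readable source below is what a FOSS license obliges the distributor to publish; the machine-oriented instructions, shown here with Python’s dis module as a rough stand-in for the compiled binary code that proprietary vendors ship, convey far less to a human reader. The function and its names are invented purely for illustration.

```python
import dis

def royalty(copies_sold: int, price: float) -> float:
    """Human-readable source code: easy to inspect and to modify."""
    return 0.1 * copies_sold * price

# The machine-oriented form: opcodes rather than readable logic.
# (Python bytecode is only a stand-in here for the binary code shipped
# with proprietary software, but the contrast is the same.)
dis.dis(royalty)
```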

The most explicit vision of how public access to source code relates to social change is voiced by the Free Software Foundation. It was initiated by Richard Stallman as a response to the commercialisation of his own field of work.17 The foundation has set itself the task of liberating computers from proprietary software code. To realise the dream of a computer run entirely on free code, the Free Software Foundation has since the mid-1980s produced non-proprietary software applications. The name of the software published by the foundation, ‘GNU’, is a recursive acronym that stands for ‘GNU’s Not Unix’. In addition to publishing GNU applications, the Free Software Foundation is the maintainer of the most widely used license in the computer underground, the General Public License (GPL).

The need for legal protection of shared work has been learned the hard way by hackers. Richard Stallman made the discovery when he was working with GNU Emacs, an application for editing source code. An Emacs for UNIX had previously been written by another programmer, James Gosling. Initially, Gosling distributed his source code free of charge and without restrictions. Richard Stallman incorporated bits of James Gosling’s work into GNU Emacs. Later, James Gosling changed his mind and sold his copyright to UniPress. The company went after Richard Stallman and told him not to use the source code that had now become theirs. That experience contributed to the creation of the General Public License. The nickname for the GPL is telling: ‘Copyleft—All Rights Reversed’. In Richard Stallman’s own words: “Copyleft uses copyright law, but flips it over to serve the opposite purpose: instead of a means of privatising software, it becomes a means of keeping software free.”18

Copyright automatically falls to the creator of a literary work. The author has the right to specify the terms under which her creation may be used. The copyright holder is nominally entitled to have her request enforced in court. The General Public License makes use of this flexibility in the law. It lists a number of conditions that protect the freedom of users. As long as these conditions are respected, anyone may access the program without asking for permission.19 If a user violates the GPL agreement, the freedoms granted by the license are voided and normal copyright law kicks in. Paradoxically, it is copyright law that puts teeth in the free license. The General Public License is a free license in four regards. The user has the right to run a program for any purpose. He is allowed to study how a program works. It is up to him to distribute the program as he sees fit. And the user is free to alter the program and publish the modified version. It is a common misunderstanding that the General Public License forbids commercialism. On the contrary, it guarantees the freedom to use a program for any purpose, including commercial uses. In practice, however, the option to sell a copy is constrained by the same freedom of everyone else to give copies away for free. The GPL is not as innocent as many of its advocates make it out to be. As will be argued from a more theoretical angle in later chapters, the GPL directly intervenes in how private property works. Free licenses protect the collective efforts of an anonymous mass of developers from individual property grabs. Under the GPL, the creator inverts the individualising force of copyright by renouncing his individual rights and having them returned to him as a collective right. He enjoys the collective right not to be excluded from a shared body of work. Private property, on the other hand, is nothing but the right of a single party to exclude all others. A fringe within the hacker movement, whose rejection of copyright law is absolute, objects to FOSS licenses for making concessions to the law. In general, those hackers who define themselves as free software developers tend to be more ambivalent towards copyright law. While politically minded filesharers and activists wish to abolish copyright altogether, such a move could actually be harmful to FOSS development. Nothing would then prevent companies from incorporating free source code into closed software lineages. Hackers could not do the same to companies since closed software is delivered as binary code. As a result, FOSS development would fall behind in the race for the technological edge. Radical critics thus end up in the company of those pro-business advocates who campaign for license schemes that facilitate the private appropriation of software commons. One provision in the GPL states that the license must be passed on to derivative works. In order to include a line of GPL code in a software application, the entirety of that program must be licensed under the GPL. Adversaries have labelled this characteristic as ‘viral’. This so-called viral feature is designed to prevent the opposite, i.e. that strings of GPL code are subsumed under proprietary licenses.20
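
In practice, a developer applies the GPL simply by accompanying the program with the license text and placing a short notice at the top of each source file. The sketch below paraphrases the customary notice; the authoritative wording is the one distributed with the license itself, and the file name and author here are placeholders.

```python
# hello.py -- placeholder example of a source file placed under the GPL.
#
# Copyright (C) 2007  A. Developer
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 2 of the License, or (at your
# option) any later version. It is distributed WITHOUT ANY WARRANTY; see
# the GNU General Public License for details.

print("Hello, copyleft world")
```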

At this point it is justified to ask: why would a company bother to abide by the terms of copyleft? And would a court care to enforce the Do-It-Yourself license? Ira Heffan tries to answer the question of whether the GPL would stand up in court by comparing copyleft with shrinkwrap licenses. Shrinkwrap licenses were introduced by companies who sought a convenient way to define user rights of retail customers. The name ‘shrinkwrap license’ owes to the shrink-wrap plastic that surrounded boxes with computer disks. The terms of the license were visible through the plastic and the customer demonstrated her approval of the terms by breaking the seal. By extension, it is considered to be an equivalent ‘demonstration of consent’ when a user decides to open a software application after the terms have been displayed on the screen, a so-called ‘clickwrap license’. Shrinkwrap license agreements claim protection under both contract law and copyright law. The restrictions on use that are specified in shrinkwrap licenses are no different in principle from those made in the GPL. Since shrinkwrap licenses have been acknowledged in courts, Heffan reasons, the GPL is protected on the same merits.21 The resilience of the GPL is indicated by the fact that the license is mostly respected despite legal uncertainties. Indeed, the legal deterrent is secondary to other considerations in support of the GPL. Companies tempted to abuse the free license know that such a move would dishearten key employees. Of greater significance, companies recognise that it is often in their long-term interest to keep FOSS development free. It is preferable to managers that the code stays in an information commons rather than to risk that the software is monopolised by a competitor. Furthermore, the move to enclose a software development project is also a stoppage. In other words, it is to cut oneself off from a development flow that runs fastest in the open. In the long run, a firm could suffer more from losing out on the continuous improvements made by the community of developers, than could be gained in the short run from breaking the GPL agreement. It is in this way that Linus Torvalds, the originator of the Linux kernel, explains widespread compliance with the GPL: “Somebody might [ignore the GPL] for awhile, but it is the people who actually honour the copyright, who feed back their changes to the kernel and have it improved, who are going to leg up. They’ll be part of the process of upgrading the kernel. By contrast, people who don’t honour the GPL will not be able to take advantage of the upgrades, and their customers will leave them.” (Torvalds, 96–97) On the other hand, companies risk little and can maximise their immediate gains from secretly including GPL source code in their commercial development projects. Two major difficulties arise when attempting to uphold the GPL license. First, the violation must be detected. This can be tricky if the GPL code is sunk into a larger chunk of copyrighted, binary code. Second, the entitlement to assign and then enforce copyleft resides not with the Free Software Foundation but with the original author of the violated code, which can be a complication. Fear of an unfavourable court ruling has made the Free Software Foundation cautious in its confrontations with offenders. Negotiations and out-of-court settlements have usually followed when a firm was caught cheating.
Even if negotiations end successfully, with the firm eventually releasing the source code and pledging to respect the GPL in the future, several months will have passed and the commercial value of the software might already have been exploited. Thus, firms have an incentive to hide their violations and, when discovered, to delay settlement until the software has gone out of date. A free software development team in Germany, working on a subsidiary development project for GNU/Linux, decided to pursue violations all the way to court. On April 14th, 2004, a Munich district court granted the team a preliminary injunction against Sitecom Germany GmbH. Sitecom was banned from shipping their product unless the company agreed to comply with all obligations made in the GPL.22

Though the GPL seems to hold water when tried in the juridical system, new legal inventions are affecting the application of copyright. The expansion of patent rights to cover information processes has caused a stir in the free software movement. Previously, software has been protected as an artistic work under copyright law. In the US and Japan, patents on software have existed for a long time but it is now becoming a common practice among companies to enforce them. In the EU a struggle has been ongoing for years over the introduction of European software patents. Software patents pose a threat to the GPL because companies can follow copyright law and abide by the terms specified in the free license, while restricting access to the source code through patent law. While submitting to the letter of the GPL, they abuse its spirit. Much the same can be done through Digital Rights Management technology. If a free software application is locked up behind a hardware design, having access to the source code and the manuals won’t do users any good. The Free Software Foundation hopes to battle these developments in a third version of the GPL by adding conditions against software patents and Digital Rights Management technology. The updated version was released in 2007. However, the decision to adopt the changes suggested by the Free Software Foundation rests with stakeholders in the community. Many of them have objected that the new GPL is too restrictive and that the license will lose relevance because of this. At the time of writing these issues are still under discussion.

The History of GNU/Linux

In order to realise the dream of a computer run entirely on free software, the Free Software Foundation has produced a great deal of GNU software tools over the years. However, one crucial part was still missing: the kernel. The kernel is the heart of an operating system and works like a bridge between the software and the hardware on a computer. Linus Torvalds filled the gap when he initiated the Linux project.23 In 1991 Torvalds was studying computer science at Helsinki University, Finland. He got inspiration from another operating system, Minix, which had been written by Andrew Tanenbaum as an educational device. Where UNIX was made to be run on state-of-the-art machines owned by university departments, Minix was designed for personal computers, expensive but affordable to (middle class, western) individuals. Minix was shipped with its source code but had a restrictive license that limited the options to tamper with the program. Linus Torvalds studied the design of Minix and constructed his own kernel from scratch. Linux and Minix briefly competed for the hearts and minds of a small community of developers. The eventual triumph of Linux over Minix is explained by Linus Torvalds in part by technical merits. However, he also admits to some technical weaknesses of Linux compared to the competitor. (Torvalds) In the end, the success of Linux over Minix owes not to technical but to social factors. The restrictive license of Minix prevented users from improving the software. Andrew Tanenbaum’s main concern was to keep Minix accessible and easy for students to learn from. Extending the program with more features would, from this perspective, just complicate matters. Linus Torvalds, in contrast, had made the decision early on to license his work under the GPL. Hence, everyone could rest assured that the Linux kernel would stay open for users.24 The breakthrough for GNU/Linux came a few years down the road, again boiling down not to technical superiority, but to social relations of property and licensing.
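
What ‘bridging software and hardware’ means in practice can be glimpsed in a few lines: an ordinary program never touches the disk hardware itself, it asks the kernel to do so through system calls. The sketch below, which assumes a Unix-like system and uses Python’s thin os wrappers around those calls, is only an illustration of the principle; the file path is arbitrary.

```python
import os

# Each of these calls crosses from the program into the kernel, which in
# turn drives the actual disk hardware on the program's behalf.
fd = os.open("/tmp/hello.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"written via kernel system calls\n")               # write(2)
os.close(fd)                                                      # close(2)
```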

When the telephone company AT&T began to exercise ownership rights over the UNIX operating system, researchers at the University of California, Berkeley, were enraged. They had contributed as much to the development of UNIX as the employees at AT&T had. Abandoning their long-time project was not an option. Instead of starting up anew from square one, like Richard Stallman and Linus Torvalds, they painstakingly removed the lines of UNIX code which had originated in AT&T. New lines were written to replace the old ones claimed by the telephone company. The result was named Network Release 1 and later Network Release 2. Berkeley sold the product while allowing the purchaser to do whatever he pleased with the software. Over the years the project forked into three versions: NetBSD, FreeBSD and OpenBSD. Many experienced co-developers were working on these releases at the time when Linus Torvalds, single-handedly, started up his garage project. From the outset, the scales were weighted heavily in favour of the BSD UNIX projects. The promising future of BSD UNIX came to a halt with a single blow. AT&T took notice of a company marketing a version of Network Release 2. AT&T sued Berkeley for infringement, since the university had licensed the product to the company. The case was brought to trial in 1992. When the court case was settled eighteen months later, AT&T had to give up its claims over BSD UNIX. But the damage had already been done. Programmers shied away from BSD UNIX during the trial since they feared that their work might end up with AT&T. Suddenly, the Linux development team was flooded with programmers from across the Atlantic.

A lesson from the Linux tale is that the success of one forked project over another does not owe exclusively to its technical features. The history of technology is full of examples of inferior products surpassing more advanced competitors on the market. The schoolbook example is the battle between Betamax and VHS over the industrial standard in video cassette recording. Though the technological excellence of Betamax is widely recognised, VHS won out since the producer, JVC, could muscle the most support from content providers (Hollywood) and instil confidence among retailers and consumers. In short, it was the strength of one capital over another in forging strategic alliances with other capitalists and investing in marketing which proved decisive for the outcome. The performance of the device itself was only of secondary importance. The novelty with free and open source software development is that the social factors determining the success of a fork are reversed. GNU/Linux won out over Minix and BSD UNIX not because it was backed by the highest concentration of capital, but, to the contrary, because under the GPL it had the purest absence of private property relations. This is true also when FOSS comes up against proprietary software. GNU/Linux is but one out of many successful FOSS development projects.

The Success of the Free and Open Source Software Movement

A measure of the strength of the FOSS development model is given by the extent to which free software outdoes proprietary software in the marketplace. In this regard GNU/Linux has only been moderately successful. Although popular in academic settings and among corporate clients, very few ordinary users have switched to the free operating system. It is hard to reach end users since they value familiarity with the graphical interface more highly than the technical performance of the program. Those FOSS applications that target administrative and specialised functions have therefore enjoyed the greatest diffusion. Apache is a software program for running web servers. As of January 2006 it held 70% of the market. The largest commercial competitor, Microsoft, had only 20% of the market. Other competitors were all but eradicated. A major boost for Apache came when IBM gave up its own in-house development project and endorsed the free server program instead.25 Berkeley Internet Name Domain (BIND) is another software application that has become a standard in its niche. It translates domain names into IP addresses. No less remarkable is the success of Sendmail. Though most ordinary computer users have never encountered it directly, Sendmail has for many years been the most commonly used program for managing e-mail traffic. The success story of projects developed in the spirit of the FOSS model could also extend to the World Wide Web (www). Technically speaking, the www is not a software application but a protocol for websites and hyperlinks that makes it more convenient to navigate the Internet. The idea of the www first occurred to Tim Berners-Lee when he was an employee at Conseil Européen pour la Recherche Nucléaire (CERN), a research centre for particle physics near Geneva. In the early 1990s the www was up against a rival system called Gopher. That system faced a swift end after the University of Minnesota, from where it had originated, announced its intent to charge a license fee for it. Even though the ownership claim was only partial and it was never fully implemented, the threat was enough to scare away users and developers. To be ‘Gopherised’ became a term describing the process when a software development project hits an evolutionary dead-end due to attempts by one party to enclose it. After the Gopher incident, CERN declared that the institute would refrain from any claims of ownership over the www in the future.26
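
What BIND does on the server side can be glimpsed from the client side with a few lines: resolving a human-readable domain name into the IP addresses that the network actually routes on. The snippet below is only an illustration; the domain is an arbitrary example and the call requires a working network connection with a resolver (which, on the server end, is very often an instance of BIND).

```python
import socket

# Ask the system resolver, and ultimately a name server, to translate a
# domain name into the IP addresses it maps to.
infos = socket.getaddrinfo("example.com", None)
addresses = sorted({info[4][0] for info in infos})
print(addresses)
```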

Many attempts have been made to explain why the development model of hackers works so well, both by the people directly involved in it and more recently by social scientists. One of the most vocal insiders theorising about the hacker movement is Eric Raymond. In an influential article, The Cathedral and the Bazaar, he compares two opposing styles of software development. He contrasts the Cathedral model of conventional, centralised development with the Bazaar model of accessible, open development. The recurring example of a software application built as a cathedral is Microsoft’s Windows. However, FOSS projects that are written by a tightly knit group of developers who rarely accept contributions from outsiders also qualify as cathedrals. According to Raymond, Linux was the first large-scale project that demonstrated the efficiency of the opposite approach, the Bazaar model. In this model, anyone with Internet access and programming skills can partake in the development process. Thus, a zero-budget, bazaar FOSS project often involves more working hours from skilled programmers than the biggest corporation can possibly afford.27 The large number of beta-testers and co-developers is a major advantage because it critically shortens the time it takes to identify and fix bugs in the program. To fully utilise the feedback cycle from users, bazaar developments are released frequently, in extreme cases with one new version every day, and improvements are made continuously. In contrast, upgrades of cathedral-style software must undergo a long period of testing to ensure that all bugs are removed before the program can be shipped to the market. In the long run bazaar-styled FOSS projects will triumph, Raymond attests, and puts it in a sentence reminiscent of old-school historical materialism:28 “[…] because the commercial world cannot win an evolutionary arms race with open-source communities that can put orders of magnitude more skilled time into a problem.”29 Without doubt, Eric Raymond is a partisan observer, but his claims are echoed by his nemesis. In an internal memo, Microsoft assessed the threat to the company posed by the FOSS movement. The text leaked into the hands of Eric Raymond and was posted on the Internet on Halloween 1998. The text has since been known as the Halloween document.30 Just like Eric Raymond, the authors of the Halloween document pay tribute to the bigger battery of free labour that can be deployed in a FOSS development project and its ability to harvest the collective intelligence of users.

Linus Torvalds has offered his own explanation of the GNU/Linux phenomenon. The competitive edge of free software over proprietary software owes to the higher motivation of its authors. Speaking at a Linux User Group meeting in San Francisco, he stated: “Those other operating systems aren’t bad because of [technical detail] A or technical detail B. Those systems are bad because the people don’t care.”31 Linus Torvalds avoids reflecting on how the lack of motivation among hired programmers hangs together with the labour relations under which they work.32 It is not the individuals working as programmers that are the weak link in the proprietary development model. The weakness consists in the fact that, when they write software applications for a consumer market, production for use is subjugated to production for exchange. To a hired programmer, the code he is writing is a means to get a pay check at the end of the month. Any shortcut when getting to the end of the month will do. For a hacker, on the other hand, writing code is an end in itself. He will always pay full attention to his endeavour, or else he will be doing something else. It is hard for companies to compete with that kind of commitment.

Yet another take on the matter is offered by Robert Young, chairman of the free software company Red Hat. According to him the success of free software can be explained by the absence of warring intellectual property claims. In property-based research, discoveries are kept secret and inaccessible to others. This creates an overall tendency for proprietary software to break up into separate strains and it prevents sequential development. In free software development the pressure is reversed. If one distributor of GNU/Linux adopts an innovation that becomes popular, the other vendors will immediately adopt it too. Everyone has equal access to the source code and is permitted to use it. Innovation is speeded up since people can build on the discoveries of others. By removing intellectual property barriers there is an overall convergence towards a common standard.33 Robert Young’s case against proprietary software gains weight due to the growing importance of standards in the computer market. The two economists Carl Shapiro and Hal Varian point to the computer market as a showcase of what they call a ‘network industry’.34 In network industries, single products function as parts of a larger system made up of many other products. The components tend to be produced by several competing manufacturers. Interoperability becomes as important to the customer as the price and quality of the individual product. Users desire compatibility, not excludability. In fast moving high-tech markets, customers and suppliers are wary of the risk of investing money and know-how in a product that might soon be obsolete. Historically, the size of capital has been the best insurance that a company’s product will stay in service for a long time. But irrespective of the strength of the firm, bankruptcy, hostile acquisition, or a change of corporate policy is always a possibility. Software applications with rights holders are in jeopardy of being turned into evolutionary dead-ends. Customers will then be unable to get hold of updated versions that are compatible with the latest utilities. Users of GPL software, in contrast, are guaranteed the freedom to adapt the code for as long as they need it. Hence, the absence of capital, instead of the size of capital, provides the best insurance that a product will stay relevant to users in the future.

The three accounts given above to explain the success of the FOSS development model differ only on a superficial level. They have in common that they testify to the inadequacy of capitalist relations in organising labour in the information sector. The productivity of free software development stands in an inverted relation to the jungle of property claims that impede proprietary software development, as Robert Young bears witness to. The playfulness of hackers proves to be more productive than the estranged wage relation that programmers are caught up in, as is suggested by Linus Torvalds. And the possibility to involve users as co-developers in free software development projects, a factor stressed by Eric Raymond, is obstructed by the commodity form which separates consumers from producers as decisively as buyers are separated from sellers. In conclusion, the achievements of hackers cannot be told as the history of any single event—an ingenious form of organising people, the utilisation of a novel technology, or a bunch of larger-than-life individuals. Neither is it sufficient to combine these factors into an explanation. This phenomenon must be analysed in relation to the totality of capitalist relations. The FOSS movement is unique only because, in exploiting the failures of the capitalist system, it has demonstrated a prototype for struggle that is generic. In the following chapters, a closer engagement with Marxist theory will be made to substantiate this claim. It will be argued that self-organised labour can outrun firms in all sectors where the concentration of fixed capital (i.e. large-scale machinery) and the division of labour (specialised knowledge) are not an insurmountable threshold.

Power Relations Inside and Outside the Hacker Movement

The image of FOSS development presented so far in the argument, as a single, monolithic model for writing software code, needs to be modified. Each project differs from the next in the way that decisions are made, work is delegated, and credits are given. Neither can FOSS coding be clearly separated from the corporate sector. All of the major projects are hybrids that muddle along as half enterprises, half community efforts. A survey discovered that 41% of all FOSS projects were initiated and managed by corporations. A majority of the projects turned out to be driven by individuals, and only 6% were organised in the loose collaborative networks of the kind usually associated with FOSS development.35 The numbers do not necessarily tell the whole story about where most hackers are engaged and at what level of involvement. Projects initiated by firms often aim at customising code made available by the FOSS community. The most well-known and influential FOSS projects, GNU/Linux, Apache etc., are organised in open, collaborative networks.

It bears stressing, though, that network organisation does not imply an absence of power relations. Hierarchies are based on reputation, charisma, contacts, shrewdness, and demonstrated technical skill. These values are embedded in a shared norm system that, on the one hand, holds the community together, and, on the other hand, stratifies internal relations and raises barriers to outsiders. All sizeable development projects depend on a core group of chieftains and/or a charismatic leader for taking final decisions. The top 10% of the most productive developers of FOSS projects contributes 72% of the code, with further lopsidedness in the top tier.36 Arguably, what matters is not that there are no asymmetries in power relations and performances. Demanding such purity would paralyse any effort to organise in this messy world. What matters is that power in these communities is not fixed in economic, legal, or architectural dependencies. Common to all FOSS licenses is that they guarantee everyone the option to fork a project. Inertia against forking rests on the commitment and size of the user base. It is the number of devotees which determines the relevance and the future prospects of a fork. The ease with which people can walk away from a project is therefore a significant constraint on how power can be exercised. If a leader is perceived as abusing his position, his basis of power can vaporise very quickly. It is on the basis of how the subculture is internally organised with respect to power, rather than a freedom from power per se, that the egalitarian claim can be made.

Similarly, qualifications must be made in respect to how the hacker movement relates to the surrounding world. The self-image of hackers markedly differs from their track record. In A Hacker’s Manifesto, a pamphlet that circulated on bulletin boards in the 1980s and that has become something of a founding charter of the computer underground, it was declared that hackers: ‘Exist without skin colour, without nationality, without religious bias’37 A quick glance is all it takes to confirm that the social base of the hacker movement is heavily skewed towards middle-class males living in the West. The demography has its roots in those days when only a privileged few could access computers during their college years. The monetary restraints have since been considerably eased. Prices of computer equipment are no longer a barrier to entry since five-year-old computers with zero market value work perfectly fine for the purpose of writing free code. The main cost is the leisure time and the peace of mind which it takes to engage in frivolous computing. Spare time is, however, one of the few resources which the unemployed among the western working class have in abundance. Geopolitically speaking, the dominance of the USA and Europe will be history once China, South America, and India commit to free software.

But the monetary aspect is not a catch-all explanation, as is shown by the extreme gender imbalance of the hacker movement. According to one policy document, only 1.5% of FOSS community members are female.38 The statistic is puzzling, not least since there are no economic incentives for male hackers to keep women out, as there would be in the labour market. Indeed, another survey found that 66% of the men agreed that the FOSS community as a whole would benefit from more female participants. In spite of this, a majority was of the opinion that it was up to the women to make the effort.39 The voluntaristic and meritocratic ethos in the subculture makes male hackers, and indeed some female hackers too, impervious to structural explanations of the gender bias. Admittedly, those structures go far beyond the scope of the FOSS community. Because of the division of domestic labour, women on average have less time to devote to improving their computer skills. The gendering of technology as masculine keeps girls from ever getting in contact with free software, or they do so much later in their lives than the boys, again resulting in less training.40 These are major drawbacks in a community where demonstration of technical skill is deemed to be of paramount importance. Even with the same level of knowledge, female hackers attest that they have a harder time gaining recognition from their male peers. Often they end up doing tasks with lower status, such as documentation and writing manuals, while men drift towards more prestigious and technologically more challenging assignments. It is not surprising, then, that out of the small number of female recruits, many quickly drop out because of a general lack of encouragement.41 The absence of public institutions within the community means that these structures cannot easily be counterbalanced with positive discrimination and targets for equal opportunities. Many hackers prefer to keep it that way as a matter of self-determination of the community vis-à-vis the outside. Government initiatives of that sort, which can be expected to follow from a more official role for FOSS applications, will not be welcomed by everyone.

Though the union between hacker priorities and feminist politics is far from harmonious, the two groups have things in common. The portrayal of the early feminists and the media image of the hacker are unnervingly alike. Scaremongering against hackers as criminals is only outdone by the stereotype of the male, geek misfit in popular culture. The representation of the geek is similar to the stigmatisation of educated women in the nineteenth century, who were then described as ugly, un-feminine, and unfit for marriage. Now as then, people are discouraged from seeking knowledge that would have increased their autonomy. In order for women to defend their positions in a computerised society, skills in programming are essential. It is with awareness of this fact that a number of women’s groups, such as Haeksen, LinuxChix, and Debian-Women, have been started. They support each other and female newbies who are about to join the hacker movement. Additionally, they try to change the attitudes among male hackers. In theory, at least, the graphic interface could be a leveller in respect to gender, race, and appearance, something hinted at in the Hacker Manifesto. Similar thoughts are echoed in cyber-feminism. This brand of feminism expects that everyone will end up as a cyborg in a society that relies increasingly on technology. When the human-machine distinction breaks down, they hope that essentialist separations between man/woman will crumble too.42

The inclusion of women in the hacker community is not an act of charity by male hackers but a fateful question. The emancipatory potential of hacking exists precisely in that it crosses the line of who can access technology. While legal obstacles to entry have been reduced thanks to free and open licenses, the know-how required of FOSS users continues to be a barrier. The difficulty of engaging ordinary computer users in free software is hotly debated and its grave importance is recognised within the hacker community. In addition to legal, technological and monetary constraints, however, community norms are another barrier that prevents diversification and growth of the base of users and developers. If the goal of making non-proprietary software a standard on desktop computers is ever to be realised, the other half of the population must be let on board. Techies within the hacker movement have to come to terms with the fact that the implications of FOSS development run deeper than narrowly defined cyber-politics. Hacking affects labour relations, the standing of developing countries, and gender issues. These realities of the mundane will weigh heavier upon the hacker community the more integrated FOSS development gets in the global economy and the world of business.

Business Models Based on Free Software

From a liberal perspective, FOSS development is understood as simply another business model that better approximates the free market. The economist Joseph Schumpeter’s idea of ‘creative destruction’ is often invoked at this point. According to him, the creative destruction of capitalism continuously leads to old monopolies being undercut by better technology and smarter entrepreneurs. The appeal of this narrative is twofold. Firstly, legislators, judges and the general public are more receptive to the arguments of FOSS advocates if the challenge to intellectual property rights is framed within a liberal discourse. Secondly, there is a reassurance to hackers in the belief that information technology and free market forces will inevitably defeat the Enemy: artificial monopolies and intellectual property. The liberal interpretation of how free software development and free markets relate to each other needs to be complemented with critical theory.

Corporate involvement in FOSS development is not a conclusive refutation of the proposition that hacking is subversive and possibly anti-capitalist. Readers familiar with Marxism know that individual capitalists sometimes respond to contradictions in the capitalist system in such a manner that the economic system is pushed further into deepened contradictions and decline. Putting it more pointedly, communism becomes plausible when the choices which are rational to an individual capitalist are, on an aggregated level, disastrous to capital as a collective class. Corporate backing of the FOSS movement might be such a case. That is the belief of Darl McBride, chief executive of the software company SCO. In the computer underground, SCO is infamous for litigating against distributors of GNU/Linux. The company claims that vendors of GNU/Linux have appropriated software code owned by SCO. In a letter to the U.S. Congress, Darl McBride outlined the dire consequences if the government sided with FOSS developers, and the fallacy of other executives in doing so: “Despite this, we are determined to see these legal cases through to the end because we are firm in our belief that the unchecked spread of Open Source software, under the GPL, is a much more serious threat to our capitalist system than U.S. corporations realize.”43 Of course, Darl McBride is keen to portray the special interest of SCO as the common interest of all capitalists. Many hackers would protest that the FOSS model threatens monopolies (especially Microsoft’s) but not markets. The corporate backing of FOSS seems to confirm their argument. They are probably correct, at least for as long as the issue is narrowed down to the provision of software services. From this horizon, the collective capitalist class can do without Darl McBride and a few more unfortunate individual capitalists. Reformist critics of copyright are eager to point out that capitalism can benefit from a commons in software, since it frees up circulation of capital in other sectors. Toll-free roads are a standard example of a public good that does not threaten private property but, quite to the contrary, facilitates the automobile and petroleum industries. The comparison might not be entirely justified in this case. Much more is at stake than the sale of software services. In the second chapter it will be argued that the commodification of information is at the heart of the so-called information age. Key to the privatisation of information is capital’s control over code architecture and over electronic, global communications. If the constituent power to write software remains out in the wild, capital faces an uphill battle when enclosing information commons. The sharing of music and films on peer-to-peer networks is only the beginning.

To discern the complex symbiosis between capital and community, a closer look at the motivations and the business models is required. We must not get stuck in a black-and-white dramaturgy of profiteering villains exploiting unaware idealists. The hacker subculture has a pragmatic attitude in this regard. Even the leftwing camp endorses business models as long as these are based upon free licenses. Commercialisation is only possible because it is carried by a strong current within the hacker movement. Most hackers believe that if corporations get a vested interest in FOSS development, they will help to diffuse FOSS to the public. Without doubt, corporations played a significant role in elevating GNU/Linux from the shadows of a student project to the echelons of a competing industrial standard. More questionable, however, is the underlying assumption that Open Source is emancipating no matter how (and for whom) it is put to use. Coupled with the agenda of the pro-business camp are individual hopes to make a living out of hacking, ultimately in order to escape other kinds of dull employment. It is not the decisions of a few corporate managers that are the motor behind commercialisation. Instead, commercialisation is driven by individual ambitions among hackers. Many strive to professionalise the hobby so that they can work full-time on writing free code. But focusing on the ambitions of individual hackers is yet again to lose sight of the bigger picture. Their hopes are rational within the irrational world in which they are forced to make a living. The hacker movement is commercialised not primarily by pull from individual capitalists but by push from generalised conditions of plight in a market economy. Capitalist relations are the ‘culprit’ in this drama, if we are to designate one, not any individual capitalist or a band of ‘disloyal’ hackers.

Free software, Richard Stallman never tires of pointing out, means free liberties as in ‘free speech’. It does not mean free prices as in ‘free beer’. Against this viewpoint it can be objected that the distinction between a public/political and a private/economic sphere is not as clear-cut in post-modern capitalism as Stallman makes it out to be. In any case, the GPL permits commerce in so far as copies of a GNU program can be sold, even if this opportunity is limited by the fact that the GPL also allows sold copies to be copied and given away for free. The great surprise is that in this impossible space, where the same object is simultaneously available for free and for a price, some people volunteer to pay. Richard Stallman supported himself initially by selling tapes with copies of GNU Emacs. This niche has since been populated by a small but prospering flora of firms committed to the ideals of FOSS development. The first company to base its operation on the GPL was Cygnus, founded in 1989. A slogan by the company sums up the contradictory logic behind their business model: “We make free software affordable”. Cygnus expanded steadily for many years without making much fuss. It employed more than a hundred software engineers when it merged with another company based on free software, Red Hat, in 1999. Red Hat is the major symbol of the union between the FOSS movement and commercialism. Changes in policy by Red Hat, and by other commercial distributors of GNU/Linux, have cast doubt on their commitment to ‘the Cause’. Concentration of capital (the merger with Cygnus, euphorically hailed as the establishment of a ‘free software powerhouse’) has been followed up with a narrowing of the service to high-paying, commercial customers.44 It is tempting to brush over the past years of experimentation with free software business models. We could call it a rudimentary phase waiting to be decomposed into a mature form more consistent with capitalist relations (and, as it happens, Marxist doctrine). That would be a bit too convenient for us. It is undeniable that both Cygnus and Red Hat were raised in the free software community and have been loyal to the ideals of free software for many years, while at the same time being highly profitable. For the fiscal year of 2007, Red Hat reported total revenue of $400.6 million.45 And many small garage firms survive on this peculiar business model. Taken together, it is enough of an anomaly to call for a closer investigation and possibly a revision of Marxist theory. Red Hat earns revenue from selling its own branded packages of GNU/Linux bundled with customer support. Even though variants of GNU/Linux are easy to access for free on the Internet, and even getting a Red Hat version for free is quite possible and entirely legitimate, the company manages to charge a price for its product. The chairman of Red Hat, Robert Young, explains this phenomenon with reference to branding. Additionally, it might be cheaper for a company to pay Red Hat for technical support than to hire a programmer of its own. Though Red Hat and a few other ‘bumblebees’ do fly, it is a mistake to jump from these marginal cases to the conclusion that they embody the business model of the future. This irrational but non-coercive source of income, Young adds in a critical comment, generates only a fraction of the profit compared to proprietary software. Corporations established in a proprietary software model would never volunteer to decommission copyright protection.46

Robert Young’s last comment hints at Red Hat’s role as a niche provider. To identify how ‘free software’-based companies earn their profit, it is not sufficient to focus on the operations of the individual firm. The hidden riddle is in the conditions surrounding free software firms. To carry the analysis any further, we need to plunge into a theoretical discussion a little pre-emptively. According to Marxist theory, labour is the source of ‘surplus value’ (in short, profits). The amount of surplus value that can be accumulated depends on the number of labourers who are set in motion by capital. It is possible, however, for the individual capitalist to acquire more surplus value than corresponds to the labourers he employs. Sometimes the capitalist manages to position his venture so favourably that the surplus value of labourers hired by competitors flows into his pockets instead. The textbook scenario is when a capitalist invents a superior way of producing goods. The expense of producing an item then falls below the social average, i.e. the average cost which competitors pay when they produce the item. The units are produced at different cost levels but since they are identical, all items sell in the same market for the same price. Hence, the most cost-efficient capitalist (producing the unit at the lowest cost) earns his efficiency gain as a bonus from the other capitalists. This boon is known as ‘surplus profit’. The advantage is only temporary since other capitalists will try to catch up with the inventor. When the majority have adopted the superior way of doing things, the average production cost will even out at the new equilibrium. The surplus profit vanishes for the individual capitalist. It is not efficiency gains in ‘absolute terms’ that provide the sought-for benchmark. It is efficiency gains vis-à-vis other comparable producers. The point is that surplus profits exist by definition as a deviation from the norm.
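The mechanism can be condensed into a small worked example. The numbers are hypothetical and serve only to illustrate the arithmetic: write c̄ for the social average unit cost, c_i for the innovator’s unit cost, and p for the market price set by the social average.

\[
\begin{aligned}
&\bar{c} = 10, \qquad p = 12, \qquad \text{ordinary profit per unit} = p - \bar{c} = 2,\\
&c_i = 7 \;\Rightarrow\; \text{innovator's profit per unit} = p - c_i = 5,\\
&\text{surplus profit per unit} = (p - c_i) - (p - \bar{c}) = \bar{c} - c_i = 3.
\end{aligned}
\]

As soon as competitors adopt the cheaper method, the social average falls towards the innovator’s cost, the gap closes, and the surplus profit disappears, exactly as described above.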

The existence of the FOSS business model can be understood as a variation on this theme. Companies like Red Hat and Cygnus hire labourers to customise software code, to provide support services, and to brand their products. These activities generate a modest amount of surplus value. The input of waged labour is marginal in comparison to the vast amount of volunteer labour that has been involved in writing the software application.47 Gratis labour is not, though, automatically devoid of value. It has value if it duplicates waged labour performed somewhere else in the economy. In other words, the worth of the non-waged labour of FOSS developers stands in relation to the waged labour of in-house programmers. Both are working towards equivalent code solutions. For as long as the social average cost of solving a computer problem is determined by waged labour and intellectual property relations, volunteer labour (hackers) and copyleft licenses cut costs below this social average. Here surplus profits do not derive from the reduction of staff by means of technological innovation. They are created when work migrates from paid labourers to unpaid users thanks to organisational innovation. It is an open question whether the copyright-dependent fraction of the capitalist class (Microsoft, Hollywood, record studios) can follow suit and close the gap in production costs without disbanding themselves in the process. Microsoft’s ‘shared source’ policy, where selected customers are given restricted access to Microsoft’s source code, could be seen as an attempt to close the distance between proprietary software and FOSS. However, the priority to stay in control will probably spoil their efforts. Quite possibly, these companies are unable to appropriate the FOSS model and still sustain high profitability.

If that statement is correct, Red Hat’s surplus profit business model will prosper for a long time in the margins of society, leeching off the differential in the cost of production. Taken to its logical end-point, this reasoning leads to the conclusion that shareholders of GPL-based firms like Red Hat are not freeriding on development communities. Through the so-called ‘equalisation of social surplus value’, as it is known by Marxists, Red Hat’s shareholders are exploiting the programmers employed by Microsoft. The second conclusion, more pressing to our inquiry, is that FOSS enterprises will never supersede and replace the proprietary business model. That belief is commonly held by libertarian hackers who imagine that, as long as entrepreneurs are left to work their deeds, intellectual property monopolies will eventually crumble under the superior rationality of the free market. They fail to see that Red Hat can only be profitable in relation to the inflated social average production cost set by Microsoft. Consequently, even those firms dedicated to the GPL cannot afford to do away with the intellectual property regime altogether. Individual capitalists might have different opinions on how to optimally configure the intellectual property regime. But the demand for an absolute abolition of intellectual property rights is incompatible with capitalism. As we have now demonstrated, that statement is not falsified by the existence of non-proprietary software business models.

Speaking from the standpoint of Marxist theory, all firms, irrespective of their policy on free versus proprietary licenses, are based on the exploitation of living labour. Saying that does not mean that executives cannot act with the best of intentions and even do some good from time to time. It is by no means automatic that entrepreneurs within the movement will push for commercialisation at every given opportunity. The involvement of competitors can even create checks in defence of the information commons. This paradox attests to the resilience of the GPL. The race for a graphic interface for GNU/Linux proves the point. Having a graphical desktop interface for GNU/Linux, i.e. using a mouse instead of typing commands at the keyboard, was a crucial step for the free operating system to become competitive. One attempt to create such an interface was an application called KDE. Though the code was licensed under the General Public License, it depended on a proprietary graphical library named Qt. Without the library to draw on, the program becomes rather pointless. Qt was owned by Troll Tech, a Norwegian company, and in most circumstances they charged a developer’s license fee. Consequently, KDE did not meet the conditions of a free license. Though one branch of GNU/Linux users decided to overlook the impurities and move on, others were alarmed by the danger of leaving one company with legal claims over critical parts of the free operating system. A team of developers launched a project called GNOME that would compete with KDE while using a completely free graphical library. Another group of developers chose a second path to circumvent Troll Tech’s property claim. They sat down to create a Qt clone under the project name ‘Harmony’. The technological inferiority of GNOME at the outset did not prove an insurmountable hindrance to the success of the non-proprietary fork. Strong community norms abiding by the ideals of free software proved sufficient to compensate for Troll Tech’s first-mover advantage. Eventually, as GNOME and Harmony gathered steam, Troll Tech was forced to renounce its hold over Qt. As will be argued extensively in later chapters, this suggests a high threshold within labouring communities against the crystallisation of private property relations. The second important observation to make is the strategic decision by Red Hat. At an early stage the company decided to throw its weight behind the GNOME project and stand up for an information commons. The firm took some financial risks by not shipping its Red Hat version of GNU/Linux with the most advanced Troll Tech features. It made more sense for Red Hat to stick to a free license where it could race on a level field with other firms, rather than give up some legal rights to a competitor. (Moody, 252).

To denounce all involvement by firms as a matter of principle can therefore be misleading. Activists must not forget that the pragmatic attitude of hackers towards commercial involvement partly explains their stunning successes. Garage firms are initiated and run by people in the subculture. They share the same values and depend on the support of the community, to the point where the two are at times indistinguishable. Undeniably, software start-ups have helped extend the political influence of the hacker movement, especially when campaigning against copyright legislation and software patents. And then again, a bridge runs in two directions. In the end, the most prominent role of garage firms will probably have been as bridgeheads for major corporations to enter the movement.

The Open Source Initiative

The invitation to the movers-and-shakers was sent in 1998 with the staging of the Open Source Initiative. If any single company could be said to have been responsible for setting off the avalanche, it has to be Netscape Communications. It is telling that the company began as a direct assault on a publicly funded project to create a common standard in web browsers. In the early days the most widely used browser for navigating the World Wide Web was Mosaic. It had been developed at the University of Illinois. A veteran of the software industry, Jim Clark, watched the browser grow in popularity and realised its commercial potential. Clark recruited a handful of programmers from the team that had been working on Mosaic at the university, most noteworthy among them Marc Andreessen. They created an improved clone of the original browser and released it for free under the same name. Their infringement on the intellectual property rights of the university was never resolved. The only demand eventually imposed on Clark and Andreessen was that they must not call themselves ‘Mosaic’ any more. Instead they took the name Netscape.48 Ironically, many hackers would later hail Netscape as one of the good guys in the fight against proprietary software. In 1995, Microsoft recognised the importance of the Internet and began to push its own web browser, Internet Explorer.49 A year later, Netscape was in difficulties. Its share of the browser market was declining fast and a drastic change in policies was called for. The company decided to publish the source code of its browser. In January 1998, Netscape made its announcement to a baffled team of journalists and an excited audience of programmers. Netscape had consulted many ‘superstar hackers’ when drawing up an appropriate license. The General Public License was out of the question since it treats all users equally in terms of legal rights. Netscape had to balance the need for control with the urge to involve as many participants as possible in its development project. The solution was to split the software code and the license into two projects running in parallel. The two licenses were the Netscape Public License (NPL) and the Mozilla Public License (MPL). While the NPL kept some privileges for Netscape and third parties, the MPL was fashioned for community development. Mozilla50 was run as a parallel development project backed by the company. The intention was that innovations made in Mozilla’s code would be fed back into Netscape. The company hoped to gain an edge over Internet Explorer by riding on the free labour of the computer underground. Despite the hype, Mozilla failed to attract a critical mass of free developers outside Netscape’s own corporate team. Commercial and communal coding do not mix well. In his seminal study of the Free and Open Source Software movement, Glyn Moody attests: “Netscape’s rise and ultimate fall is, in part, a monument to the failure of the commercial coding model—and a pointer to fundamental weaknesses in other companies that employ it.” (Moody, 203). Mozilla could not save Netscape in the browser war. In the aftermath of the company’s decline, Mozilla developers have staged a comeback with the Firefox browser. Maybe the challenge to Microsoft’s Internet Explorer will be of a more noxious breed this time.

Despite its eventual downfall, Netscape had staked out the path when it published the source code of its key brand product. The road taken by Netscape’s initiative caused a split in the computer underground. In April 1998, all the chieftains of the hacker subculture minus the politically most outspoken of them, Richard Stallman, met up at the Freeware Summit in Palo Alto to discuss the future direction of the movement. They wished to encourage big businesses to get involved in the computer underground. A crucial element in this strategy was to choose a label that sounded less threatening to the status quo than the term ‘free software’. Free software, as the Free Software Foundation never fails to point out, concerns first and foremost the question of freedom. Freeing up technology is a means to deepen democracy. Such notions are just not helpful when corporations are to be courted. The label decided upon at the meeting was Open Source. The focus of Open Source ‘revolutionaries’ is technological superiority while social concerns are tactfully left to the side.51 The term Free and Open Source Software used here is a compromise worked out after much debate within the community. In addition to launching a new brand name for the movement, Open Source differed in a crucial respect from the GPL license. Like the GPL, Open Source requires that licensed software can be freely redistributed, it guarantees that the source code is kept transparent, and it ensures the user the right to create modified versions of the original software without first notifying the author. All of these clauses are necessary to unlock the creative potential of volunteer labour collectives. Open Source does not, however, demand that the open license is attached to derivatives of the original code. By removing the ‘viral’ feature of the GPL, Open Source provides firms with a back-door for appropriating code. Software licensed under Open Source can be ‘ripped, mixed, and burned’ and released under copyright. That is how the Mozilla Public License was intended to work for Netscape. This is what repeatedly happens with software licensed under the terms of the Berkeley Software Distribution (BSD) license. In Marxist terms, Open Source licenses can be described as an organisational principle for systematising ‘primitive accumulation’, by which is meant theft, of the social labour taking place in development communities and in the commons.

The opportunity was recognised amazingly quickly in corporate boardrooms. In the weeks following the launch of the Open Source Initiative, IBM announced that it would switch to Apache. Its rationale for hitching onto the Apache project merits a closer look. It testifies that the size of the user base can carry more clout as a strategic asset than concentration of capital and the expertise of the personnel. In 1998 the software market for web servers was jointly held by Microsoft, Netscape, and Apache. IBM had tried to enter the market but came to realise the strong tide of network externalities working against newcomers. It made sense for IBM to drop its in-house project and jump straight onto the very large user base held by Apache. The fact that IBM abandoned its own undertaking in favour of a non-proprietary project had a strong psychological impact on the business community. IBM has since made considerable investments in GNU/Linux and paraded its high-profile commitment to the FOSS movement. Other multinational corporations jumped on the bandwagon following the Freeware Summit. Oracle and Informix, two giants providing software for business applications and databases, declared that they would release products that supported GNU/Linux. And hardware vendors, most notably Compaq, Dell, and Hewlett Packard, followed suit. Another important backer of the FOSS movement is Intel. In addition to porting commercial products to GNU/Linux and offering the free operating system to their customers, many companies are paying employees to write free and open source code. Sun Microsystems, for instance, bought a word processor from a German company and released it to the FOSS community. OpenOffice, the name of the program, is challenging Microsoft’s Word.

The willingness of hardware manufacturers, vendors and software independents to line up behind FOSS developers must be understood against the backdrop of Microsoft’s grip over the market. Because corporations have little to gain or lose in the face of Microsoft’s de facto monopoly, they can free-ride on non-commercial projects and hope to enlarge secondary profits, by cutting costs for software development, by distributing software to promote sales of hardware, by selling support services, or through advertising. But since these profits tend to be inferior to Microsoft’s, this is a peripheral strategy. The preferred option for a firm is monopoly profits. We can therefore stipulate that corporate engagement in FOSS presupposes an existing monopoly. Though the backing of FOSS developers is the second-best option for a corporation, there are some solid economic reasons for bypassing the proprietary software development model. Microsoft’s restricted capacity to upgrade its software imposes time-lags on downstream businesses. Because of the long process that is required in proprietary development models for debugging and releasing software, Intel’s shipments of new processors have repeatedly been stalled.52 To reap the full benefit from advances in semiconductors, and hence to persuade consumers to buy the latest technology, software applications need to be optimised for the hardware. The gravity of this concern might be better appreciated when we consider the speed at which fixed capital is devalued. Martin Kenney reports how stations for assembling computer chips, previously outsourced to sweatshops in Asia, are moving back closer to the consumer market in the United States. In the two to three weeks it takes to ship a central processing unit (CPU) across the ocean, the product loses 5–10 percent of its value.53 Open Source is attractive to fractions of the capitalist class because it resolves expensive and dangerous time-lags in development cycles. Based on a public standard such as GNU/Linux, Intel is free to optimise the software for its hardware sales without having to wait for Microsoft’s ‘cathedral builders’. The restrictions imposed by intellectual property are impeding the circulation of capital and, consequently, limited commons in information look increasingly attractive to companies as a means to boost market sales.

Our understanding of the issue is clouded by the crisp line between FOSS and proprietary licenses routinely drawn in the intellectual property debate. When critics make ‘open’ versus ‘closed’ into the central cleavage, mirrored in the moral distinction between good, innovative firms and bad, protectionist companies, more important divisions go unnoticed. In particular, we fail to see the extent to which both violations of and alternatives to intellectual property law have always already been appropriated by the intellectual property regime. Intellectual property does not work by being simply ‘closed’. A fisherman catches no fish if his fishing net stays closed all the time. From this perspective, corporate experimentation with free and open licenses makes perfect sense as a complement to copyright. Corporations are not unfamiliar with supporting public goods provided that they have control over the situation. A direct parallel to computer giants backing the Open Source Initiative is the sudden change of heart among biotech corporations towards private ownership of genetic discoveries. Pfizer, Merck & Co and other pharmaceutical and chemical industries were the chief architects behind extending patentability to life-forms in the US. Later they were alarmed when small start-ups and universities rushed to file patents on genetic information and act as patent trolls against the corporations. In 1992, the Pharmaceutical Manufacturers Association advised against government ownership of gene sequences, and the Industrial Biotechnology Association insisted that the U.S. government put gene sequences in the public domain.54 When the Human Genome Project was launched it came up against an off-shoot of rampant greed. A rival project was run by venture capitalists and two researchers, Craig Venter and William Haseltine. They applied a method allowing them to rapidly generate sequences of human genes. The drawback with their method was that the data they collected was too fragmented and random to be of much scientific use. Instead, their research was aimed at hijacking legal and financial control over the genome database. The peril of such an outcome was so terrifying that Merck & Co invested substantially in public research to race Craig Venter and William Haseltine and to ensure that the data would stay in the public domain.55 The costs for Merck & Co were relatively small. In fact, it was cheaper for the company to initiate a public research project and create a free-for-all database than it would have been either to set up its own private database or to pay for access to another company’s private database. Merck & Co gambled that it had the market position to make a net profit even from discoveries made available to its competitors.56 Major computer firms calculate in the same way: due to their sheer size, they can get the better of an information commons. Their rallying around an industrial standard open for competition is not dissimilar to the American ‘open door’ policy in the colonial annexation of China by Western powers. FOSS licenses establish a standard to work from which maximises the pool of consumers, skilled workers, and business partners. This advantage will gain in weight the further capital gets in restructuring its operations into a network of subcontractors and freelancers, and the more global this network grows.

The ultimate prize for companies involved in the hacker movement is to engage a pool of gratis labour at one end of the balance sheet and to sell the output at the other end with no discount. It is a ‘have-your-cake-and-eat-it’ business model. If a company sets out to make money in this way it cannot advertise its intent, since most developers refuse to contribute to projects under such terms. Often it is pulled off as a one-off violation of the terms of a free license. Despite occasional baddies, most companies have come to the conclusion that they have more to gain in the long run by playing by the book. While this fact is a relief to FOSS project leaders, the same remark looks entirely different from a Marxist vantage point. To a Marxist it suggests that a more systematic way of exploiting labour has been found. Our critique must therefore not stop at the companies that violate the free licenses, or else we will fail to detect subtler forms of exploitation that sidestep the direct point of sale. For example, if the FOSS community is being engaged to reduce the administrative and overhead costs of a firm while the firm maintains a constant level of earnings from its proprietary services and hardware sales (income in part deriving from purchases by hackers), then it could be argued that exploitation has been intensified. Not only do corporations thereby cut labour costs in individual programming projects. They also impose an overall downward pressure on the wages and working conditions of in-house computer programmers. Cutting to the chase, this is the main reason for corporate enthusiasm for the Open Source Initiative. The labour process of programmers is being outsourced and opensourced. The threat will eventually be felt by people in the computer underground too, since many of them rely on the privileged status and high salary of programmers to fund their passion for hacking. Hackers usually respond to this objection by saying that free software code does not compete with proprietary code since they occupy different market niches. That might be the case for now; however, if companies have it their way, they will certainly try to steer volunteers in such a direction. The outcome depends on whether FOSS development communities can fight off attempts to channel their labour power into avenues that undermine the position of in-house programmers. Only then will hackers be more of a threat to capital than to organised labour.

To fully appreciate the significance of the Open Source Initiative it must be set in context with other user-centred business models. Once we start looking, it becomes evident that a great many corporations base their strategies on putting users to work. For instance, search engines and databases are constructed so that users automatically add information to the database as an unavoidable side-effect of visiting the site. Though the input from each user is insignificant, the numbers add up and, as is shown by Google and Yahoo, the financial worth can be gigantic. Gathering more intricate electronic texts requires a greater degree of participation from visitors. This creates a trade-off between the number of users and the effort asked of them. A possible solution is to bundle a service or, in the best-case scenario, a community, with the gathering of information. Gracenote is a good example of how volunteer labour can be utilised in this way. The company owns the database CDDB which provides information on music titles. The database was built up by encouraging ordinary users to type in details of one or two of their own favourite albums. In this fashion, Gracenote eventually ended up with the world’s largest database on music. Paying a staff to do the same job might have been quicker and better coordinated, but also prohibitively costly.57 By allowing the service to stay free for end-users, the company is ensured a continuous upkeep of the latest information on music (the contribution pattern is sketched at the end of this paragraph). The continuity of volunteer labour is crucial in a rapidly changing landscape of music fads. Revenue comes from charging for commercial uses. The business model of Gracenote demonstrates how the pooling of volunteer labour can be combined with a partial closure by discriminating between different uses of a service. The version of the Creative Commons license where the artist reserves rights over commercial uses of her creation works in exactly the same way. Ironically, the open-ended invitation to a pool of unpaid contributors, paired with a fee for commercial uses, is often wrapped up in an ideological ‘let the businesses pay, it serves them right’ mentality. The foresightedness of the General Public License is underlined by the fact that it does not permit discrimination against any kind of use, not even commercial use. It radically annuls exclusion as a concept and becomes all the more threatening to the world of commerce by not excluding it. The FOSS development model is a two-edged sword, even for hardware manufacturers and software vendors. Proprietary software applications such as Windows continuously demand more computing power. Thus the customers keep upgrading both software and hardware devices on a regular basis. Free software slashes this need in half and makes ten-year-old computers viable again. In the long run, hardware manufacturers and software vendors risk shrinking the size of their own market, unless they can inflate it by other means, for instance through the fashion and prestige of having the latest upgrade. Secondly, Microsoft cooperates closely with ‘content providers’, i.e. the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA), to design software in support of Digital Restrictions Management (DRM) technology.58 It will be hard for content providers to enforce intellectual property rights on the Internet if a free computer architecture becomes the standard.
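The contribution pattern behind a service like CDDB can be sketched in a few lines of Python. The sketch is purely illustrative: the function names, the data structure, and the billing step are invented for the example and do not describe Gracenote’s actual systems or terms.

# Illustrative sketch only; all names and the billing step are hypothetical.
metadata_store = {}  # disc_id -> track listing, accumulated entirely from user submissions

def submit(disc_id, track_listing):
    """Called by an end-user's player when it meets an unknown disc.
    Each submission is tiny, but millions of them add up to the database."""
    metadata_store.setdefault(disc_id, track_listing)

def lookup(disc_id, commercial=False):
    """Gratis for end-users; commercial uses of the pooled data are billed."""
    if commercial:
        bill_commercial_client()  # the revenue side of the model
    return metadata_store.get(disc_id)

def bill_commercial_client():
    # Stand-in for the paid licence that commercial integrators sign.
    print("licence fee invoiced")

# One user contributes an album listing; afterwards anyone can look it up for free.
submit("disc-0001", ["Track 1", "Track 2"])
print(lookup("disc-0001"))

The asymmetry is the whole point: the write path is open and unpaid while one class of reads is monetised, which is precisely the partial closure that the GPL, by refusing to discriminate between uses, rules out.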

Firms Confront Foss Development

Hardware manufacturers are striking alliances with consumer pressure groups in a bid to outdo the influence of content providers. There is an obvious risk that the movement for alternative licenses, if activists sail too close to one branch of the industry, is reduced to a pawn in the game between warring fractions of capital. Campaigners for an ‘information commons’ should keep in mind that the whole of the collective capitalist class depends on the intellectual property regime. The clash between the Japanese hardware manufacturers Sony and Matsushita and the American record studios over the introduction of digital audio tape (DAT) in the late 1980s is instructive. The record studios were unhappy about the lack of restrictions on private copying of digital audio tapes. They demanded technological fixes that would restrict duplication of tapes. Their demand was backed up with the threat not to give Sony and Matsushita access to their music catalogues. Faced with the prospect of shipping a technology with no content running on it, the manufacturers were forced to back down. As a direct consequence of the conflict, Sony acquired CBS Records in 1988 and Matsushita purchased MCA, with its record division, in 1991, bringing content suppliers into their corporate portfolios.59 The lesson is that both intra-industry pressure and acquisitions to stay independent of such pressure amount to the same thing: a convergence of interest towards the protection of intellectual property. In fact, manufacturers of hardware are just as dependent on intellectual property law for their high profitability as software producers and content providers. The inflated price of high-tech consumer goods is upheld by patents, trademark law, sui generis directives on circuit boards, and the protection of trade secrets. Essentially the rift within the capitalist class centres on the distribution of profit and rent between different sectors and industries. From a Marxist outlook it is obvious that the exploitation of labour can never be abolished in these rows, since exploitation is the source of the profits over which capitalists are haggling.

Likewise, the interaction between capital and the capitalist state in relation to FOSS developers is a complex web of rivalries and mutual dependencies. Conflicts of interest arise between authorities at national and local level as well as between states in different regions of the world. For example, strong intellectual property protection works favourably for the US foreign trade balance. The US administration is therefore very receptive to the demands of movie and record studios.60 The same fact, however, is the primary obstacle when American businesses confront governments abroad. Initiatives in Third World countries and by local municipalities in Europe to mandate the use of free software in public administration and schools, in part to save public money previously spent on proprietary licenses, have alarmed Microsoft and U.S. authorities. Sometimes their heavy-handed approach backfires. In India, Venezuela and Brazil, among other places, the governments have publicly endorsed the use of free software. But not even the different arms of the U.S. state have a unified front against the FOSS movement. A report prepared for the U.S. Department of Defense ended up advocating extended use of Open Source since it was considered to improve national security.61 From a quick glance at court orders and announcements by government officials, the capitalist state might appear to be more supportive of GNU/Linux than of Windows. However, if it comes to a stand-off between the two, hackers donate no campaign money, they have no influence over employment figures, and they do not steer global capital flows. While different branches of the state advance contradictory policies, the fundamental bias is in the existence of the State as such. It is true that Microsoft has been hassled by anti-trust investigations for over a decade, first in America and then in the European Union.62 However, the monopoly which the governments are prosecuting is also upheld by the same state powers enforcing the patents and the licenses claimed by Microsoft. At the same time as the company was fined by the European Union for unfair competition, the EU Commission pushed hard, in part on Microsoft’s behalf, for the introduction of software patents in Europe. Software patents can only strengthen Microsoft’s stranglehold over the market. In this light, the fine which Microsoft was asked to pay looks more like a bribe to pass favourable legislation.

The weakest link of the FOSS movement is its relationship with commercial, educational and institutional allies. By questioning the legality of FOSS licenses, its adversaries hope to scare away supporters of free software. The SCO/Caldera lawsuit against IBM, Red Hat, and other businesses investing in FOSS utilities is a case in point.63 Through a succession of acquisitions over the years, some property rights over UNIX have ended up with SCO/Caldera. The company has claimed that parts of the UNIX source code are incorporated in GNU/Linux and have been distributed by IBM and others. After several years of suits and countersuits the U.S. court system has come down against most of SCO’s claims. However, the litigation was as much about public relations as about law enforcement. The worst-case scenario is that small businesses and municipalities are discouraged from investing in FOSS because of the perceived legal and technical uncertainties of such applications. To counter these fears, the Open Source Development Labs quickly established a legal fund to shield GNU/Linux users from litigation risks. Intel and IBM contributed to the fund while Novell and Hewlett-Packard offered their GNU/Linux customers indemnification from the SCO/Caldera lawsuits.64 One outcome of the SCO/Caldera debacle holds irrespective of the fortunes in court. FOSS developers end up seeking protection under the wings of their corporate allies. Litigation furthers an interest that both the plaintiff and the defendants have in common, namely that capital stays relevant to FOSS development. The greatest danger to the community comes from within. IBM, while fighting the lawsuit of SCO/Caldera and parading its support for Apache and GNU/Linux, has also lobbied aggressively in favour of the introduction of software patents in Europe. The company holds one of the largest patent portfolios in the world. IBM has even been awarded an information process patent on an Open Source-like model for developing software.65 In other words, IBM is the owner of the concept of this mode of development. It is small comfort that IBM has pledged not to go after FOSS developers with its patents. By creating a legal power and withholding it, IBM ensures that it stays a partner worth talking to.

Hacking and Class Struggle

The skirmishes between the hacker movement and corporations and governments have deeper roots than is shown by the confrontations over treacherous code, hostile legislation, and public smear campaigns. More fundamental is that the norms and aspirations motivating people to be hackers are at odds with at least some aspects of capitalism. The central claim of this book is that the hacker movement is part of a much broader undercurrent revolting against the boredom of commodified labour and needs satisfaction. These sentiments, however, can be made to cut in two ways. In business literature, managers are often advised to encourage a ‘hacker spirit’ among their employees.66 Dennis Hayes gives a good account of how such a hacker spirit among engineers in Silicon Valley induces them to work harder without asking for anything in return. While he acknowledges the autonomy that software engineers enjoy, he doubts that any serious political agenda can arise from it. “Capital and modern technology apparently have seduced the computer builder with rare privilege: a genuine excitement that transcends the divide between work and leisure that has ruptured most industrialized civilizations. […] When computer-building becomes an essential creative and emotional outlet, any politics larger than those governing access to work and tools seem distant concerns”.67 Dennis Hayes’ doubts are very justified, though his observations are limited to in-house programmers. The demand for ‘access to tools’ becomes political dynamite once it is articulated outside the wage relation, i.e. by people who are denied access to the tools. When the ‘hacker spirit’ sticks among workers with no foothold in the creative business, the spirit warps into a ‘refusal of work’. The ranks of these people by far outnumber those of the professionals in the media and information sector. And even among the lucky few who enjoy stimulating jobs, many will in due time find themselves deprived of their privileges. Programmers are being thrown into the lower tier of the labour market as the computer industry matures. Occupations that until recently were felt to be gratifying, such as writing software code, are becoming as routinised as any other field of activity that has fallen under the spell of exchange value. The shift of control over coding practices from programmers to managers is a major topic in the computer industry. This debate is also reflected in the hacker community in its concern about superficial knowledge of programming languages among volunteer contributors.68

Ironically, the deployment of computer technology has been decisive in degrading work elsewhere in the economy. Its impact on work was highlighted by the sociologist Richard Sennett when he examined the changes that have taken place in a bakery in New York over a period of twenty-five years.69 In the 1970s, baking was an endeavour coupled with physical effort and toil in a hot and sometimes hazardous milieu. On the upside, baking required artisanship and rewarded the baker with some satisfaction. In the modern work environment, computerised ovens oversee the process of baking. It is clean, user-friendly, comfortably tempered, and, by any objective measurement, more ‘civilised’. But the employees are left with no clue about how to bake bread. They only know how to push a few buttons and to call a technician when the bakery machinery breaks down. In the bakery, the source code is the dough, the spices, and the baking recipe, skills which the old bakers had mastered and excelled their peers in. In modern, computerised baking, the source code has been hidden away from the bakers. The growth of the software sector, which is providing exciting new jobs for computer programmers, rests in no small part on the usefulness of software as a means of deskilling the workforce in other sectors. This connection is laid bare when we consider the role of the first computer engineers employed by industry. These programmers worked in the same companies and side by side with the blue-collar workers who were subjected to computerisation. David Noble has documented how the embryo of computer software: templates, punched cards, recordable tapes, and numerical control (N/C), was deployed in heavy industry exactly for the purpose of intensifying the techniques of Taylorism. “By making possible the separation of conception from execution, of programming from machine operation, N/C appeared to allow for the complete removal of decision-making and judgement from the shop floor. Such ‘mental’ parts of the production process could now be monopolized by managers, engineers, and programmers, and concentrated in the office”.70 Crucial to this strategy was to keep the workers ‘in the dark’ about the source code. In the same breath as N/C technology was designed to lock workers out, workers were held in contempt for being too simple-minded for programming tasks. Nonetheless, supervisors attested that workers learned on their own to read the program language backwards. It was useful for them to know the program in order to anticipate the next move by the machine, and to foretell malfunctions and possible accidents. Workers were not meant to have this knowledge, though. The routine was that upon discovering a bug, the worker had to report it to an engineer. It was a cumbersome and frustrating procedure for both the worker and the programmer.71 Instead of following the correct procedures, workers often showed ingenuity in fixing bugs by themselves. Such initiatives by workers were beneficial to the bottom line of the firm. In order to take full advantage of the N/C technology, it had to be opened up to allow feedback loops from the workers back into the work process. But managers had embraced the technology for exactly the opposite purpose. The machinery had been devised to regulate the performance of workers and to force a higher work pace upon them. With insight into how the machinery and the software functioned, workers also knew how to use the technology to their own advantage.
They could now alter the instructions of the machinery and reduce its speed. This practice spread spontaneously yet rapidly in factory districts and was occasionally discovered and documented by supervisors. Managers fought back by trying to make the clockwork of the machinery impregnable and incomprehensible. The antagonism between capital and labour was contested at the level of code, and ‘access to tools’ was the name of the game.

The dream of managers to build away workers’ discontent through black-box technologies has continuously been frustrated by hacking. Computerisation has not eradicated workers’ resistance but displaced it, from the execution stage to the conception stage. When more and more people are assigned to conceptualise rather than execute work processes, capital must economise this labour force too. The same tight regime is imposed on engineers and programmers as has previously been, with their help, forced upon blue-collar workers.72 At this point, however, Taylorism runs into its own limits. There is no easy way to deprive ‘knowledge workers’ of knowledge and still have them working. One unexpected outcome of the mechanisation of the office is that the opportunities for hacking and sabotage abound. A high-profile case of employee hacking occurred in 1996 when Timothy Lloyd discovered that he was going to be fired from Omega Engineering. He wrote six lines of code that erased the design and production programs of the company and allegedly resulted in $12 million in damage. According to a survey in 1998 conducted jointly by the Computer Security Institute and the FBI, the average cost of successful computer attacks by outsider hackers was $56,000 while the average cost of malicious acts by insiders was $2.7 million.73 A culture of spontaneous sabotage among employees accounts for most of the computer downtime in offices. The fact that these attacks are charged with labour discontent almost always goes unreported. Managers are anxious not to inspire other employees to the same deed. With these reflections in the back of our minds, Andrew Ross insists that the perspective on hacking must be broadened. The media image of hackers as apolitical, juvenile pranksters belittles the issues at stake: “While only a small number of computer users would categorize themselves as ‘hackers,’ there are defensible reasons for extending the restricted definition of hacking down and across the caste hierarchy of systems analysts, designers, programmers, and operators to include all high-tech workers—no matter how inexpert—who can interrupt, upset, and redirect the smooth flow of structured communications that dictates their position in the social networks of exchange and determines the pace of their work schedules.”74 Employees crashing the computer systems of their employers gives a clear indication that hacking can be an act of labour resistance. How does this observation reflect upon hacking done by students, the unemployed, and spare-timers, in other words, hacking unrelated to the workplace? After all, both the self-image and the stereotype of the hacker portray someone positioned outside and against the profession.

First, it must be acknowledged that the site of production is fuzzy in networked capitalism. The production process has gradually shifted from the factory and the office to work-at-home schemes and commissioned freelancers.75 Computerisation is closely related to this larger picture of a restructured labour market. The availability of personal computers, mobile phones, and Internet connections has been pivotal in making telecommuting and work-at-home schemes practical. Furthermore, to the arsenal of freelancers, leased workforces, and subcontractors, firms can now add FOSS developers. As Red Hat’s and IBM’s engagement in GNU/Linux demonstrates, communities have become important sources of surplus value for capital. Hence, development communities confront capital just as waged labourers do. In order not to foreclose the many places where capital extracts surplus value and where there is a potential for labour conflict, we need to take into account a much broader terrain than the direct workplace. The factory has spread outwards to the whole of society. The concept of the ‘social factory’ was first suggested by Mario Tronti in La fabbrica e la società back in 1962: “At the highest level of capitalist development social relations become moments of the relations of production, and the whole society becomes an articulation of production. In short, all of society lives as a function of the factory and the factory extends its exclusive domination over all of society.”76 The concept of the ‘social factory’ offers a promising avenue for analysing contemporary capitalism. The whole of society has been subjected to capitalist discipline and capitalist relations of production. For instance, everyday leisure activities like watching television or using a search engine on the Internet are turned into advertising revenue. Antonio Negri and Michael Hardt draw some far-reaching conclusions from this fact. The proletariat, defined as those who are within capital and sustain capital, is present everywhere. Not only wage labourers, but equally housewives, the unemployed, and students, qualify as belonging to the proletariat.77 This all-inclusive categorisation calls for a different description of how and why the working class opposes capital. Traditionally, Marxist theory has pinned down the workers’ struggle to a contest over surplus labour. By surplus labour is meant the amount of working hours that people are forced to toil in excess of what they require for their own subsistence. Surplus labour is accumulated and becomes capital. Class struggle can thus be anchored directly in key concepts of Karl Marx’s critique of political economy. The collision between capital and labour is head-on, since employers desire longer and more intense working hours while employees want the very opposite. The definition is very elegant, perhaps a bit too elegant for its own good. During the twentieth century, under the influence of trade unions, the tug-of-war over surplus labour led to a narrow focus on pay levels and working hours, at the expense of other aspects of the class struggle. In the social factory, where the workday extends beyond the direct employment situation and even leisure activities have been put to work, the struggle over surplus labour cannot be read off the stroke of the office clock. Antonio Negri and Michael Hardt make another drastic suggestion at this point.
With inspiration from Michel Foucault, they argue that life is being nurtured and administered by capital as an intrinsic part of the value-adding process. The resistance of the proletariat is therefore a bio-political struggle, at once economic, political, and cultural, and centred on conflicting forms of life. They conclude their reinterpretation by stating that one way for the proletariat to fight back is to invent new public spaces and communities incompatible with the value form. (Empire 56)

These remarks assist us in our attempt at understanding the hacker movement in terms of labour struggle. The conflict over surplus labour that characterises the antagonism between labour and capital at the workplace has little explanatory power in the computer underground. Hackers volunteer to write software applications. They are more likely to be happy about spending an extra hour in front of the computer than to try to sneak a shortcut. As far as money is concerned, many hackers couldn’t care less if a corporation profits from a project that they have contributed to. From the perspective of a trade unionist, amateurs labouring for free are nothing short of alarming. The unsuspecting hacker is ripe for exploitation, and, what’s more, while working away he is weakening the bargaining position of employed programmers too. What hackers do care about, mainly free access to information, seems peripheral in comparison to social, labour, and environmental concerns. The glaring disregard for labour issues in the hacker movement has convinced Alan Liu to write off cyber-politics as subcultural ‘bad attitude’. He charges that the demands for free information are individualistic, consumerist, and entrepreneurial.78 Alan Liu is mistaken because he portrays information in the same way as ‘content providers’ do, as merely a consumer product. From this perspective, the hacker’s wish to have information for free appears as just another angry customer demanding more value for his money.

If we acknowledge that information is also a means of production, it becomes clear that the demand for free information is the same thing as a demand for ‘access to tools’. With free licenses, the tools to write software code are made accessible to everyone; thus they are free as in free from knowledge monopolies, white-collar professionals, and corporate hierarchies. Hacking undermines the technical division of labour that is pivotal to Taylorism. Furthermore, the failure of hackers to mention labour issues is consistent with the fact that their politics is the politics of ‘zero work’. At first it might sound odd, but the statement above is consistent with the extreme motivation and discipline of many hackers when they develop software code. The radicalism of the FOSS development model springs exactly from the distance it places between ‘doing’ and the wage relation. Hackers are contributing to radical social change because they prevent the labour market from being the sole determinant of the allocation of programming resources in society. As a consequence, the economic rationality and instrumentality of technological development cannot be taken for granted anymore, at least not in the computer sector. The model for developing technology invented by hackers is guided by the most non-instrumental of human activities: the play-drive.79 Software code is not the end-purpose of hacking but rather an excess flowing from the playful form of life that hackers are choosing for themselves. Hackers may or may not be conscious of, and motivated by, the wider political implications of promoting access to computer tools. Linus Torvalds, for instance, has repeatedly proven his political innocence in rows with the Free Software Foundation. Nonetheless, he made the key decision to license the Linux kernel under a free license. The demand for free information is not grounded in ideological convictions as much as in the fact that the public space that hackers draw from can be sustained only if software technology stays open and accessible. It is the form of life of hackers that commands resistance. Their commitment to sustaining the FOSS community is in conflict with at least some priorities of capital, though, admittedly, it also plays into the hands of capital in other respects. Would it not be fair to object that, with corporations making millions of dollars out of FOSS applications, the liberating potential of hacking has been lost? In that case we must also say that the struggle of waged employees is non-existent since corporations make millions of dollars out of them. The fact that the hacker movement has partially been recuperated by capital does not falsify hacking as a radical praxis, unless we badly want to think so. The hacker movement stands in continuity with more than two hundred years of labour struggle.