4

Critique of the Political Economy of the Commons

The dissemination of the paradigm of the commons in the 1990s, as we have just seen, was closely linked to the rise of resistance movements challenging neoliberalism in Latin America, the United States, and Europe. But the commons paradigm also came to prominence during this period in an academic literature produced by researchers – principally Americans – who were determined to formulate a new theory of political economy. The issue at stake in this research was not trivial: it sought to overcome the otherwise constitutive distinction in legal and economic theory between private and public goods, private and public ownership, and between the market and the state.

There is no question that this orthodox opposition is still alive and well in contemporary protest movements. In this respect, denunciation of widespread commodification often leads to a reflexive defense of national or public services and a call for expanded state interventionism. Whatever their merits, such demands are counter-productive at best insofar as they refuse to challenge an opposition that codes the market as the rule and the state as the exception. Conceptualizing the state as a mechanism for resisting the invasion of the market thereby doubly justifies this division of labor between the market and the state, since it concedes a “proper” sphere of activity to each entity. Since at least the 1950s, standard economic theory has fully accepted the legitimacy of public or government production on the pretext that certain goods are inherently suited to private ownership while others are naturally suited to state management. In this respect, standard political economy is merely obeying the basic principles of political philosophy, which, at least since Hobbes, have assigned the state a dual function: to protect private property and to furnish the public goods that society’s egoistic atoms are incapable of furnishing on their own. Adam Smith himself adopted this same framework. In both political philosophy and classical economics, the “market” and the “state” are viewed as necessary and sufficient poles for ensuring a properly run society. The aim of the political economy of the commons, however, was precisely to depart from this consensual opposition between the state and the market, both practically and theoretically.

This theory of the commons, whose best-known representative is Elinor Ostrom (Ostrom was awarded the Nobel Prize in Economics in 2009),1 focuses on the practical and institutional conditions for managing common resources. This school of thought thus distanced itself (albeit incompletely) from standard economics insofar as the latter assigns the production of goods to different apparatuses (i.e., the state or the market) based on the intrinsic nature of the goods themselves. By revealing the institutional dimension of resource management practices, Ostrom’s work led to a theoretical breakthrough whose importance cannot be overestimated. That said, Ostrom’s ability to reveal the importance of institutional analysis was nonetheless limited by her dependence on the dominant naturalistic framework of standard economics, wherein particular resources are viewed as more (or less) suited to collective management based on their intrinsic properties. The “common,” within this strain of institutional political economy, is thus a qualifier that applies to resources that are naturally “common”: because of their intrinsic character, such resources are more rationally managed through collective action than by either the market or the state.2 By adopting this economic framework, the commons becomes inscribed in a typology of goods based on technical criteria: there are goods that are “common” by their nature and consequently conducive to collective management, just as other resources lend themselves to public or private management according to their intrinsic particularities. After studying numerous cases of collectively managed natural resources at a micro-social level, Ostrom and her team used these technical criteria to develop a research program on today’s emergent digital “knowledge commons” on a much larger scale.

The consequences of adopting this naturalistic framework are not insignificant and they tend in opposite directions. If the common is a natural property of certain goods, this justifies the creation of a special economy designed to meet the special needs of these “common goods” in between – or outside of – the vast economies of market-produced goods and state-produced goods. In this respect, then, there is nothing revolutionary about this thesis at all; in fact, it is rather conservative. But, in a very different sense, if this political economy of the common succeeds in showing, or in making us believe, that our most essential economic and social goods are naturally “common,” then this school of thought would seem to have solved the million-dollar question vis-à-vis our exit from capitalism. Indeed, much contemporary critical theory takes precisely this view. André Gorz’s chapter titled “The Exit from Capitalism has Already Begun” from his book Ecologica (2010) – in which Gorz explains how “the sphere of what is free is extending irresistibly” within the “knowledge economy” – is a perfect example of this interpretation:

Information technology and the Internet are undermining the reign of commodities at its foundation. Everything translatable into digital language and reproducible or communicable at no cost tends irresistibly to become a common good, if not indeed a universal common good, when it is accessible to – and useable by – everyone … This is a revolution that is undermining capitalism at its base.3

As we will argue, however, adhering to this naturalistic typology of “goods” cannot help us identify the specific characteristics of the common – indeed, the special nature of the commons can only be revealed by critiquing this naturalistic framework. And yet, we must stress here that our critique of economic naturalism is only possible because of the theoretical opening produced by Ostrom’s research on the governance of the commons. Indeed, when all the implications of this new problematic are properly unfolded, it collapses the naturalistic rationale of standard economics in two senses: by undermining the supposed natural egoism of the human agent and by undermining the classification of goods according to their intrinsic nature.

“Private Goods” and “Public Goods”

Standard economic discourse so permeates both public debate and theoretical discussion that its categories are still the dominant terms used to express and articulate our understanding of the “common” today. Political economy, much like legal theory, tends to think in terms of “goods” (even if the definition of a “good” differs between the two disciplines). In the legal sphere, goods are defined as objects that are appropriable. In standard economics, on the other hand, an economic good is defined by its ability to satisfy some need, by its mode of consumption, and by the method of its production (market or otherwise). Within this framework, then, the commons is considered a property of goods themselves rather than a property of institutions. As we argue, however, the “commons” are not reducible to “common goods.”

We must first begin by understanding more precisely what is meant by the expression “common good.” The term is often conflated with near-synonyms like the “collective good” or the “public good.” But what distinguishes a public – or collective – good from a private good? According to standard economic doctrine, most goods must be produced by private firms operating in competitive markets because of the intrinsic technological and economic properties of the goods themselves.4 There are, however, a handful of goods whose specific characteristics make them “naturally” suited for production by the state or other social organizations (churches, unions, parties, associations, etc.) that are capable of exercising discipline over their members, in one way or another, to ensure the production of non-market goods. In short, the reason public goods are not produced by the market is that the needs they satisfy are not suited to a market logic of production based on voluntary individual payment.

Private goods, on the other hand, are defined in standard economics as “excludable” and “rivalrous.”5 A good is said to be “excludable” when its owner or producer is able to deny use of the good to any person who refuses to buy it at the price the owner demands. A good is said to be “rivalrous” when its purchase or use by one individual diminishes the quantity of the good that can be consumed by others. A “pure” public good, then, is a good that is both non-excludable and non-rivalrous. A good is said to be non-excludable when its owner cannot restrict its use on the basis of payment; and a good is said to be non-rivalrous when it can be consumed or used by large numbers of people without additional production costs because the consumption of the good does not diminish the overall store available to others. Classical examples of such goods include street lighting, fresh air, fireworks, the light from lighthouses, or national defense. The fact that there are certain needs that can only be satisfied by public goods accordingly justifies state intervention in the economy, as described in the classic works of economics produced by Richard Musgrave and Paul Samuelson in the 1950s.6 According to Musgrave, one of the functions of the state is to ensure the optimal allocation of economic resources. This means the state must subsidize or directly produce those goods that cannot be produced by the market as a result of their intrinsic properties. Public goods are thus defined negatively as goods that cannot be spontaneously produced by a market in which private interests are satisfied by an act of voluntary purchase. Public goods, in other words, are “market failures.” It is because certain goods are somehow defective or deficient in the context of standard market norms that they must be produced by the state or non-profit institutions (insofar as they are deemed sufficiently necessary for economic efficiency or the well-being of the population).
If no one can be excluded from the consumption of a good, and if the consumption of a good by one individual does not diminish its availability for others, it means the good is both non-excludable and non-rivalrous. Accordingly, then, the production and financing of such goods can only be realized on the basis of a certain compulsion – a moral compulsion in the case of a church or a non-profit association, or a political compulsion in the case of the state. Of course, the mere fact of a market failure does not, on its own, oblige the government or any other local authority to intervene in the economy, but in some cases the intrinsic properties of certain goods may justify such an intervention.

Any justification for political intervention in the economy is always closely linked to the so-called “free-rider” problem. A free rider is a calculating individual who willingly leaves the burden of payment for a good (from which he profits) to others on the basis of the good’s non-excludability. A free rider can also be someone who refuses to assume the costs of his own activities (as in the case of a polluter). It is important to observe, at this juncture, that the entire logic governing this intrinsic or naturalistic distinction between goods rests upon a very elementary utilitarian foundation: namely, the premise that the “rational” individual is egoistic, calculating, maximizing, and singularly motivated by personal self-interest. In other words, the whole model is underpinned by the postulate of economic man. This hegemonic figure can only enjoy goods selfishly and is incapable of producing or consuming in common with others. In The Logic of Collective Action (1965), Mancur Olson showed that an individual of this kind has no motive to help collectively produce a good that others might consume without paying for it: it is better to let others bear the costs in one way or another, since the benefit of the collective outcome is the same for those who do not pay as for those who do.7 Compulsory taxation, or some other form of political or moral compulsion, is therefore the only means of providing collective goods. And this is precisely why, in a great many cases, “public goods” are goods provided by the state. It is also true, as Olson pointed out, that regular interaction between individuals within groups has a tendency to reduce instances of free riding.
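Olson’s free-rider logic can be restated as a toy payoff calculation. The sketch below is purely illustrative – the function and the figures are our own assumptions, not Olson’s – but it shows why, for a non-excludable good, refusing to pay weakly dominates paying:

```python
# Toy illustration of Olson's free-rider problem (hypothetical numbers).
# A non-excludable good is worth `benefit` to each member if it is produced;
# a contributor pays `share_of_cost`. Because no one can be excluded,
# a member enjoys the benefit whether or not they paid.

def payoff(contributes: bool, good_produced: bool,
           benefit: float = 10.0, share_of_cost: float = 4.0) -> float:
    """Net payoff to one member under these assumed figures."""
    gain = benefit if good_produced else 0.0
    cost = share_of_cost if contributes else 0.0
    return gain - cost

# If enough others pay, free riding yields strictly more...
assert payoff(contributes=False, good_produced=True) > \
       payoff(contributes=True, good_produced=True)
# ...and if the good is not produced, free riding still loses nothing:
assert payoff(contributes=False, good_produced=False) >= \
       payoff(contributes=True, good_produced=False)
# Not paying is therefore the dominant strategy, and voluntary
# provision fails -- hence taxation or some other form of compulsion.
```

On these assumptions, only compulsion can align individual and collective rationality, which is precisely the conclusion the paragraph above draws from Olson.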

Since the 1950s, the theory has been refined to show how the classic examples of public goods actually constituted a very specific category of “pure public goods” that had to be integrated within a more general theory of “externalities” – i.e., beneficial or detrimental communal effects produced by private actions. The market, for example, may produce too few positive externalities on its own – such as the benefits of education when it is entirely paid for by users – and, inversely, the market may produce too many negative externalities – such as atmospheric pollution without any public regulations governing greenhouse gas emissions. What is at issue here is not the way in which these goods are consumed, but rather the general consequences for society as a whole. But whether it is because of the particularities of public goods or in order to correct for externalities, public intervention in the economy is always justified by market failures. And market failure, in turn, is always a function of the intrinsic characteristics of goods, whether it concerns the manner of their consumption or the general social effects involved in their production.

A public good is therefore defined negatively within the framework of a theory in which the market is the default mechanism for allocating resources. While our aim here is not to exhaustively examine the characteristics sufficient for justifying public intervention, we should nevertheless stress the fragility of the boundaries established between the public and the private by the theoreticians of the public economy in the 1950s. Indeed, neoliberal theorists have very cunningly preyed on this fragility in order to argue that even if certain goods are of a special nature, they do not necessarily have to be produced by the state. The European Union’s doctrine, to take one example, no longer uses the vocabulary of “public goods” or “public services,” but rather opts for terms like “service of general interest,” because this terminology leaves space open for private companies to produce previously public goods or services (albeit within regulatory frameworks still determined by public authorities). And we have seen in the previous chapter how the theory of property rights provides further intellectual legitimacy to such policies aimed at greater privatization and commodification.

The orthodox economist thus reasons on the basis of a double postulate concerning the intrinsic nature of goods and the behavior of economic man, and this rationale justifies the shared distribution and production of goods and services between the state and the market. But the reality is that the production of goods does not conform to a reductive economic rationale that excludes politics or ethics. The reason why a given good or service is produced or provided by the state or the market is never solely due to the natural properties of the given good or service, but is rather a more complex outcome of a variety of political, cultural, social, and historical factors that we must not ignore (as is often the case in standard economic doctrine).8

The Discovery of “Common Goods”

Despite the rigor of the economic theory mentioned above, numerous economists have since discovered that one cannot exhaustively classify all economic goods through such a reductive framework, which delegates the distribution and production of goods between the market and the state on the basis of technical considerations – such as excludability or rivalry – or by distinguishing between purely public and private goods. It was from the recognition of this model’s failure to exhaustively account for all goods that Elinor Ostrom’s research on common resource management was born.

For instance, if we combine (as was done during the 1970s) the two properties of economic goods – excludability and rivalry – we in fact distinguish between four, not two, types of goods. In addition to “purely private goods” (excludable and rivalrous) – such as donuts purchased at the supermarket – and “purely public goods” (non-excludable and non-rivalrous) – such as public lighting, national defense, or the light emitted from lighthouses – there are also two “hybrid” or “mixed” types of goods. The first are what are called “club goods,” which are both excludable and non-rivalrous – such as tolled bridges or highways, or artistic and sporting performances. While there is a cost of access in each of these cases, the consumption of these goods by one individual does not diminish their availability to others. The second are goods referred to as “mixed goods” – which we will refer to as “common goods” – that are non-excludable and rivalrous, such as fishing grounds, open pastures, or irrigation systems. These goods are “mixed” because it is difficult to restrict access to them, except by establishing rules of usage. It is these mixed goods that Ostrom refers to in her work as “common-pool resources” (CPRs). While these goods can be individually exploited, there is a risk that the overall quantity of the resource will be diminished or even exhausted if everyone tries to maximize their own personal utility. Mixed goods, or CPRs, can also be provided by state authorities, as in the case of national parks. For Ostrom, however, the use of these goods does not necessarily presuppose a rigid choice between private property and public ownership. On the contrary, these goods can be effectively sustained by forms of collective management, as is demonstrated by the empirical studies carried out by Ostrom and her team in Switzerland, Japan, Spain, and the Philippines.
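The fourfold typology sketched in this paragraph amounts to a simple two-by-two lookup. The following fragment condenses it (the labels follow the text; the function itself is merely our illustrative shorthand):

```python
# The two criteria -- excludability and rivalry -- yield four types of goods.
# Examples in the comments are those given in the chapter.

def classify(excludable: bool, rivalrous: bool) -> str:
    return {
        (True,  True):  "private good",          # a donut from the supermarket
        (True,  False): "club good",             # a tolled bridge or highway
        (False, False): "pure public good",      # a lighthouse beam, national defense
        (False, True):  "common-pool resource",  # a fishing ground, an open pasture
    }[(excludable, rivalrous)]

# Ostrom's "common-pool resources" occupy the non-excludable/rivalrous cell:
assert classify(excludable=False, rivalrous=True) == "common-pool resource"
```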
What was most novel about Ostrom’s approach was that she focused on the institutionalization of these resources, and on this basis she worked toward a systematic theory of self-organizing and self-governing collective action.9 And while her approach was indeed novel in relation to standard economic theory, it was based on extensive historical inquiry into centuries- and even millennia-old communal practices that have long been the object of systematic denigration in the field of modern economics.

The “Tragedy of the Commons” Debate

In order to understand the true stakes of Ostrom’s political economy of the commons, we must situate her work within the context of the larger debate that developed around Garrett Hardin’s famous article “The Tragedy of the Commons.”10 In his 1968 article, Hardin believed he could show how communal land, even before the enclosure movement, was destroyed by overexploitation at the hands of sheep-breeders driven solely by self-interest.

Hardin’s rationale is based on the premise of economic man’s rational behavior, and on the inability or unwillingness of economic man to consider the larger effects of the unrestrained exploitation of a common resource. Hardin’s driving concern in his famous essay is the issue of overpopulation, which the Malthusian Hardin viewed as the most distressing problem facing humanity as a whole. Hardin’s objective was thus to critique the idea that a population will reach an optimal size under conditions in which everyone decides on matters of procreation based solely on self-interested motives. For Hardin, humanity will surely be led to ruin if families are permitted the freedom to have as many children as they want: given the finite nature of the world, freedom in matters of procreation is simply impossible. Hardin thus returns to Malthus as a means of critiquing Smith: for Hardin, any hope in the population’s capacity to spontaneously self-regulate, as if by a kind of “invisible” demographic hand, is folly. It is in this respect that Hardin (likely without realizing it) actually revives a long-standing historical debate about communism. The idea that everything in the natural world belongs to everyone, that the world is a banquet open to all, was one of the central communist themes in Étienne-Gabriel Morelly’s The Code of Nature: “the world is a table sufficiently stocked for all the guests. Sometimes the dishes belong to everyone, because everyone is hungry, while sometimes they belong to only a few, because others are satisfied. No one is the absolute master, and no one has a right to pretend to be.”11 Morelly’s gesture is a refusal, on behalf of the priority of satisfying our most basic and primary needs, of the notion of scarcity when it comes to the most basic goods. He therefore rejects the claim that there is not a place at the table for everyone on earth.
This is of course a position that is diametrically opposed to Malthus’s view, which Hardin cites in his article:

A man who is born into a world already possessed, if he cannot get subsistence from his parents on whom he has a just demand, and if the society does not want his labour, has no claim of right to the smallest portion of food, and, in fact, has no business to be where he is. At nature’s mighty feast there is no vacant cover for him. She tells him to be gone, and will quickly execute her own orders.12

There is no such thing as a “free lunch.” This is the fundamental idea behind what Hardin calls the “tragedy of the commons.”

In order to illustrate his thesis, Hardin imagines the exploitation of an open pasture. All the “rational” herders are self-interestedly compelled to increase the number of livestock they place on the pasture without limit, and this invariably leads to the overexploitation of the communal lands. In effect, then, if every herder directly pursues his own personal utility by augmenting the number of livestock he places on the pasture, the “disutility” wrought by the overexploitation is suffered by all. This is why Hardin opts for the term “tragedy,” which he uses according to the strict definition provided by the philosopher Alfred North Whitehead13: tragedy, for Whitehead, is an irreversible and inevitable process, and this is precisely the nature of a logic that compels each herder to maximize his own profit without worrying about the cost of his behavior, which is borne collectively:

Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.14
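The arithmetic that “locks in” each herder can be sketched with hypothetical numbers (the magnitudes below are our own illustrative assumptions, not Hardin’s figures): the deciding herder pockets the whole gain from an extra animal but bears only a fraction of the overgrazing loss.

```python
# Hardin's herder calculus, sketched with illustrative numbers.
# Adding one animal yields its owner a private gain; the cost of
# overgrazing is shared among all N herders. Past the pasture's carrying
# capacity, we assume each extra animal destroys more value than it adds.

N = 10  # herders sharing the pasture (arbitrary)

def net_gain_to_owner(private_gain: float = 1.0, total_loss: float = 2.0) -> float:
    """What the deciding herder nets from one more animal:
    the whole gain, minus only his 1/N share of the loss."""
    return private_gain - total_loss / N

def net_change_for_group(private_gain: float = 1.0, total_loss: float = 2.0) -> float:
    """What the group as a whole nets from that same animal."""
    return private_gain - total_loss

# The individual calculus always says "add the animal"...
assert net_gain_to_owner() > 0
# ...even though each addition leaves the group poorer:
assert net_change_for_group() < 0
# Every herder reasons identically, and all "rush toward ruin."
```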

Hardin’s assertion, as we can see, is not as simple-minded as his critics often suggest. What he is questioning is not the existence of the commons as such, but rather the “freedom” to exploit the commons in an unlimited fashion.15 The fundamental lesson Hardin wants to convey in his parable of the “rational” herders is thus the exact opposite of the lesson Bernard Mandeville conveys in his “Fable of the Bees”: by allowing each individual to maximize their individual utility without limit, humanity does not achieve “the greatest happiness for the greatest number” – to borrow Hardin’s dearly held Benthamite formula – but rather courts collective ruin.

But what kind of “commons” is Hardin talking about in his article? One of the defining features of Hardin’s famous article is in fact the manner in which he indiscriminately collapses every form of common resource within a single generic category. Are overexploited pastures the same as oceans depleted of fish or whales? Are they the same as polluted air and water? And are oceans, rivers, and urban air pollution the same as social programs and national parks? In a certain number of these cases, it is obvious that limited access, an operating quota, taxation, or some other forms of control would be sufficient to avoid the destruction of most of the natural resources Hardin mentions. It has been amply shown, for instance, that unhindered access to groundwater in India during the “Green Revolution” accelerated water depletion. Indian states actively encouraged their wealthiest farmers to tap into wells without limit and at the expense of other uses of water, such as irrigation, canals, or reservoirs. One group of experts has estimated that half the country’s water needs will no longer be met within twenty years.16 The probable consequences of this overexploitation of water thus correspond fairly well to Hardin’s warning about the “ruin of the common.” On the other hand, it is worth noting that Hardin’s conclusion does not sufficiently account for the extent to which this destructive process was also due to the total absence of public regulation, along with the fact that unfettered access for well drillers – i.e., the richest farmers who sell water to the poorest farmers as well as to intermediaries who truck water to urban centers – was carefully planned. The behavior of these calculating and maximizing egoists, in other words, did not arise spontaneously: it was a product of social planning and, in this specific case, arose directly from a very water-intensive form of capitalist agricultural development.

The fact remains, however, that Hardin’s examples of the “commons” are really a confused amalgam of very heterogeneous cases. He confuses what Roman law called res communis – that which belongs to no one and is unappropriable, such as the sea or the air – and res nullius – that which has no owner but can be appropriated, such as fish caught in the sea.17 But above all, he confuses these categories with the unrestricted exploitation of goods taken from a limited stock. Hardin’s argument thus rests on a sophism in which he introduces economically “rational” behavior – i.e., behavior driven entirely by the logic of self-interest – into a normative context that precisely excludes such behavior, since virtually every instance of common resource management in history has been based on rules that were precisely designed to prevent such cases of overexploitation. In other words, Hardin’s fable completely ignores the existence of a “moral economy” – to borrow E.P. Thompson’s phrase – that informs customary-use rules for common resources. This is a major historical error, insofar as it allows Hardin to conclude that the destructive consequences wrought by free access to common resources can only be avoided by individual privatization, or nationalization/centralization of the common resource. In other words, Hardin’s analysis leaves no room for a third term between the market and the state.

Despite these problems, an abundant neoliberal literature has since spawned from Hardin’s famous analysis. This literature uses Hardin’s thesis to extol the advantages of private property while emphasizing the inefficiencies of any form of public or collective management. Public services or social insurance systems fail, so this literature argues, because they all fall prey to “free riders” who enjoy all the benefits while paying none of the costs. If Hardin’s thesis was principally interpreted in neoliberal terms, it was because it resonated with powerful currents within dominant economic theory that were renewing arguments in favor of private property and that were violently opposed to all forms of communal or state property (which the neoliberals accused of transferring costs onto the community and risking resource depletion). The theory of property rights accordingly emphasizes the idea that only private ownership is able to “internalize externalities” – whether positive or negative – while all other forms of property fail both to weigh the negative externalities imposed upon others and to allow those who justly accrued wealth through meritocratic effort to enjoy the full benefits of that wealth.18

The Institution as the Heart of the Commons

The principal contribution of the new political economy of the commons was precisely its ability to sort out the widespread confusion that is clearly manifest in Hardin’s article between completely free access, on the one hand, and collective organization and management on the other. In other words, its core insight is to have recognized that the commons were originally based on self-organized, collective regulation. Elinor Ostrom thus showed (though often without sufficient explication) that these natural commons should not be defined as physical “things,” pre-existing the practical uses made of them. Nor, for Ostrom, should these commons be defined as natural spaces that are merely overlaid with additional rules. These communal resources should rather be understood as a product of social relations between individuals who exploit certain communal resources based on rules of use, sharing, or cooperation. To borrow Yochai Benkler’s very apt phrase, these communal resources are “institutional spaces.”19 This revised understanding is undoubtedly at the root of the crucial terminological shift from “common goods” to the “commons.”20 The major contribution of this new political economy of the commons resides in this insistence on the necessity of diverse practical rules for enabling the production and reproduction of common resources. As David Bollier writes, “the commons paradigm does not look primarily to a system of property, contracts, and markets, but to social norms and rules, and to legal mechanisms that enable people to share ownership and control resources.”21 This emphasis on the collective establishment of rules of practical action, which is what Ostrom conceives as an institution, introduces a governmental conception of the commons in which the latter is conceived as an institutional system of cooperative incentives.

For Ostrom, an institution is “simply the set of rules actually used (the working rules or rules-in-use) by a set of individuals to organize repetitive activities that produce outcomes affecting those individuals and potentially affecting others.”22 As Ostrom continues, “working rules may or may not closely resemble formal laws that are expressed in national legislation, administrative regulations, and court decisions … working rules are those actually used, monitored, and enforced when individuals make choices about the actions they will take in operational settings or when they make collective choices.”23 A “successful” institution is one that is capable of adapting to changing circumstances and is able to regulate internal conflicts. For Ostrom, then, there is a close link between the sustainability of a common, the adaptability of its internal regulations, and the “institutional diversity” that reflects the ability of members to adapt to diverse conditions of production. Communal resources may be exploited by groups of varying sizes, but in order to endure, these groups must obey a system of collective rules concerning their productive operations, group boundaries, and the procedures by which rules are elaborated and modified. The latter, which constitute “constitutional rules,” are the institutional parameters within which “operational rules” are elaborated and discussed. As Olivier Weinstein aptly writes, “the hierarchical system of rules that regulates a common and its governance thus appears to be a genuine political system.” In short, the commons are institutions that allow for communal management according to several levels of rules established by the “appropriators” themselves.
Benjamin Coriat, for instance, defines “common lands” according to the following resonant formula: “collective resources managed by means of a governance structure that ensures a distribution of rights between members participating in the common (‘commoners’) in order to facilitate the orderly exploitation of the resource while ensuring its long term sustainability.”24
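This layered picture – a shared resource, its commoners, and operational rules nested within constitutional rules governing who may change them – can be sketched as a toy data structure. The field names and the example are our own illustrative shorthand, not Ostrom’s or Coriat’s vocabulary:

```python
from dataclasses import dataclass, field

# Toy model of a commons-as-institution (illustrative only): a resource,
# its "appropriators," and two nested levels of rules-in-use.

@dataclass
class Commons:
    resource: str                                  # e.g. "alpine pasture"
    commoners: list[str]                           # the appropriators themselves
    operational_rules: list[str] = field(default_factory=list)    # day-to-day use
    constitutional_rules: list[str] = field(default_factory=list) # how rules change

    def amend(self, proposer: str, new_rule: str) -> bool:
        """Only a recognized commoner may change the operational rules:
        the constitutional level governs who may modify the operational level."""
        if proposer not in self.commoners:
            return False
        self.operational_rules.append(new_rule)
        return True

pasture = Commons(
    resource="alpine pasture",
    commoners=["A", "B", "C"],
    constitutional_rules=["only commoners may propose rule changes"],
)
assert pasture.amend("A", "no more than 5 cows per household")   # accepted
assert not pasture.amend("outsider", "unlimited grazing")        # rejected
```

The point of the sketch is simply that the rules, not the pasture, constitute the common: the same resource with no `commoners` and no rule levels would be mere open access.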

There is thus no need to privatize the commons by imposing exclusive property rights upon it, as the neoliberal institutionalists would have it; nor is there any need to appeal to a Leviathan in order to control individuals and force them to obey the sovereign, who is the sole source of information. There are many collective ways of agreeing upon and creating cooperative rules that are reducible neither to the market nor to state control. And this has been empirically demonstrated in the numerous cases wherein groups have completely bypassed statist constraints or private ownership in order to avoid the well-known “tragedy of the commons.”

The “Analytical Framework” of the Commons

Ostrom outlined the framework for the institutional analysis of commons management in most detail in her 1990 book Governing the Commons.

While she examines the management of communal resources in very different settings in this book, the cases are similar insofar as they all feature a small group of people who establish collective rules for the use of some form of communal property. Managing the production of communal natural resources – such as pastures, fisheries, forests, irrigation systems, etc. – functions according to a certain number of institutional principles foregrounded in Ostrom’s theory. Ostrom’s approach doesn’t, therefore, try to find a single model at work in every situation. Instead, it posits an analytical framework composed of variables that interact and condition the codification of various forms of activity. In other words, there is no one single or proper way to manage a commons that is transposable everywhere. And while it is true that Ostrom’s theory places great emphasis on the dynamic diversity of the institutions she studies, the main thrust of her analysis is to identify a number of fundamental issues that need to be resolved if any given system of collective exploitation hopes to be sustainable. For instance, the common must always have clearly definable limits in order to be able to identify the specific community concerned with the common in question; institutional rules must be well adapted to local needs and conditions and must be consistent with community objectives; the individuals governed by these rules must regularly participate in the committees charged with modifying them; participants’ rights to determine and modify these rules must be recognized by external authorities; a member-oriented self-monitoring apparatus must be established, along with a graduated system of sanctions; community members must have access to an inexpensive system of conflict resolution; and, finally, the positions for overseeing the various regulatory functions must be adequately distributed amongst the group.

This list of principles for managing the common might seem somewhat underwhelming at first glance. Nonetheless, these requirements reveal an essential aspect of the common that standard economic theory systematically ignores: namely, the close connection between the norm of reciprocity, democratic management, and active participation in resource production. This is because a common does not bring consumers together within the context of a market, nor does it rely on an administration unconnected to the actual production process. Rather, a commons involves “co-producers” who work together by giving themselves collective rules. In this sense, then, the paradigm of the commons not only challenges the economy of private goods but also its complementary opposite, the economy of public goods (as we have seen above), by analyzing forms of activity and production, outside the market and the state, that are shaped by productive communities that political economy has been radically unable to account for (up to this point). Ostrom’s approach has also been verified through empirical examination of various “knowledge commons,” whereby production corresponds to very specific social and political conditions. Within these commons, the economic production of resources is inseparable from civic engagement: production is closely linked to norms of reciprocity and presupposes both relations between equals and democratic frameworks for establishing and implementing rules. It is worth noting, in this respect, how this political economy of the commons is not entirely dissimilar to an earlier tradition of socialism, which also made cooperation the antidote to the capitalist logic of competition.

But above all, the theory of the commons is all about the constructed character of the commons. There is nothing in the theory that suggests – as some libertarians are tempted to believe when it comes to the expansion of the Internet – that a common can operate without established rules, or that it could be viewed as a natural object whereby “free access” is synonymous with absolute legal and economic “laissez-faire.” Spontaneism will simply not do: reciprocity is not an innate biological fact, and democracy is not an eternal datum of human existence. Instead, the common must be thought of as the construction of regulatory frameworks and democratic institutions that organize reciprocity and thereby avoid both Hardin’s “free rider” problem and the tendency of market societies to produce passive consumer citizens.25 In a certain sense, then, the theory of the commons is perfectly contemporaneous with neoliberalism, insofar as the latter theorizes, supports, and promotes the commodification and the construction of markets through the development of property rights, contractual forms, and constructed modes of competition. In other words, the theory of the commons is based on a similar theoretical constructivism, but it turns in the opposite direction: it encourages, at a practical level, the establishment of regulatory frameworks for enabling collective action.

This regulatory system of the commons is a collective invention that is transmissible while, at the same time, being susceptible to change according to circumstances and constraints; it is also a series of incentives that orients individual behavior. In this strict sense, Ostrom is herself inscribed within the paradigm of neoliberal governmentality insofar as her work is all about guiding individual behavior through institutional incentives and disincentives:

Institutions shape the patterns of human interactions and the results that individuals achieve [through incentives]. Incentives are the positive and negative changes in outcomes that individuals perceive as likely to result from particular actions taken within a set of working rules.26

Ostrom’s approach should therefore be viewed as a form of dissident neoinstitutionalism. Its methodology is based on an analytical framework that similarly understands economic phenomena as arising from the institutions that guide behavior and “shape human interaction,” according to North’s definition.27

But Ostrom’s work, of course, does not extol the virtues of property rights in the same manner as North and the other neoinstitutionalist theorists. On the contrary, Ostrom’s work is all about showing how a set of rules can encourage individuals to renounce opportunistic behavior and adopt cooperative conduct. Or even better, Ostrom’s work in fact shows that it is often the rules themselves that produce “perverse incentives” that compel individuals to act opportunistically and “ruin the commons.” For Ostrom, accordingly, the dilemma of the commons is best interpreted as a variant of the famous “prisoner’s dilemma.” The prisoner’s dilemma tells us that in a situation in which individual decisions are interdependent, but where these same individuals cannot communicate or deliberate on the basis of a common plan, non-cooperative strategies tend to prevail. But of course this outcome is largely derived from the structure of the situation as imposed by game theorists themselves. The reality is that whenever individuals are able to assemble, talk amongst themselves, and make collective decisions, cooperative strategies become possible and agreement – one that is not imposed from the outside – is often the result. Of course this is not always the case, as Ostrom points out: it may be the case that individuals interested in protecting a common resource find they are unable to do so because “the participants may simply have no capacity to communicate with one another, no way to develop trust, and no sense that they share a common future.”28 And while it is not an insurmountable obstacle, Ostrom also adds that such agreements are often undermined by relations of force and structures of domination operating within groups: “alternatively, powerful individuals who stand to gain from the current situation, while others lose, may block efforts by the less powerful to change the rules of the game.”29

Generally speaking, Ostrom appropriates a number of the central tenets of neoliberal doctrine, from Hayek and Public Choice Theory, and inflects them in a new way. Two aspects of these doctrines are especially operative in her analyses: first, she adopts the idea that rational individuals are best suited to creating institutions that promote interaction by diminishing uncertainty30; and second, she adopts the thesis of “adaptive efficacy,” found in both North and Hayek, that tells us that the only institutions able to survive are those best able to adapt to changing internal and external conditions.

Ostrom’s hypothesis concerning individual rationality as a basic premise of ad hoc solutions is, however, balanced by an insistent reference to a social reality that conditions the governance of the common. Unlike North, who tries to unify all the social sciences by rendering rational, individual behavior the sole behavioral concern of institutional constraints, Ostrom argues that social conditions favoring or hindering the establishment of practical rules must always be taken into account. For the ability to collectively develop rules of use is itself dependent on a community-specific system of norms and on the possibility of establishing communicative exchanges between individuals. From this perspective, then, Ostrom’s institutionalism veers away from mainstream methodological individualism by borrowing sociology’s model of the socialized individual and cognitive psychology’s theory of experiential learning.31 Ostrom’s work focuses on a social art she calls “crafting”: the term refers to the skilled work of an artist or craftsman, and in this respect differs greatly from the mere application of a system of rules imposed from above or outside by experts or scholars. The “artisanal” character of crafting, as Ostrom explains, stems from the fact that every commons has its own unique mode of government and, therefore, “crafting institutions … requires skill in understanding how rules, combined with particular physical, economic, and cultural environments, produce incentives and outcomes.”32 When it comes to the commons, there is simply no “one best way.” The creation of institutions presupposes an extended process of imagination, negotiation, experimentation, and correction of the rules whose practical effect on behavior changes over time:

Rules governing the supply and use of any particular physical system must be devised, tried, modified, and tried again, and considerable time and resources will be invested in learning more about how various institutional rules affect participants’ behavior. Thus, the choice of institutions is not a ‘one-shot’ decision in a known environment but rather an ongoing investment in an uncertain environment.33

Using the same methodological tools as the rational choice theorists, Ostrom theorizes institutions in terms of “social capital.” For Ostrom, social capital is just as important as physical capital when it comes to the creation and maintenance of the commons.34 Yet despite Ostrom’s use of standard economic vocabulary, the process of institutional “crafting” she describes is, in fact, profoundly sociological and political. The cooperative incentives that constitute the institution mobilize all the knowledge belonging to the social group in charge of the common, and often presuppose external political conditions that permit and encourage self-governance of the commons. As Ostrom accordingly remarked concerning the creation of an irrigation system:

The crafting of irrigation institutions is an ongoing process that must directly involve the users and suppliers of irrigation water throughout the design process. Instead of designing a single blueprint for water-user organizations to be adopted on all irrigation systems within a jurisdiction, officials need to enhance the capability of suppliers and users to design their own institutions. Involving suppliers and users directly will help ensure that development institutions are well matched to the particular physical, economic, and cultural environment of each system.35

The Limits of the Institutional Analysis of the Commons

Elinor Ostrom’s analysis undeniably breaks with many of the dominant presuppositions of neoclassical economics: her work shows how the commons requires voluntary engagement, dense social ties, and clear, durable norms of reciprocity. On the other hand, however, Ostrom’s theory of the commons was never meant to be a general principle for re-organizing society as a whole. Her theory is rather a pragmatic appeal that advocates a plurality of forms of activity, property rights, and economic regulations. For Ostrom, there is no one single correct way to organize production, since the conditions of production always differ so greatly. While the construction of the commons is necessary in particular situations and for specific goods, it in no way questions the rationality of the market or the state as such. Her pluralism is linked to her analytical preference for individual rationality, which is the underlying method for selecting the best solutions in heterogeneous situations. According to Ostrom, rational and egoistic individuals may create markets, they may call for state intervention, and they may construct a commons; it simply depends on the demands of different situations. Does she not understand how institutions shape subjectivities and that, as Olivier Weinstein points out, “what is most interesting about the common is not its productive efficiency but its capacity for creating new ways of life and new subjectivities”?36 The fact is, Elinor Ostrom is not an anti-capitalist or an anti-statist: she’s a liberal.
Given her preference for institutional diversity, she believes individuals should have the freedom to invent, on their own (and without governmental constraint), contractual arrangements that they find most beneficial.37 In reality, then, her theory is essentially a critique of the theory of exclusive property rights as well as a critique of the state’s authority when it comes to imposing solutions “from above.” From this perspective, then, her analysis constitutes a kind of academic protest against the systematic devaluation of social cooperation both by apologists for property rights as well as by the “socialistic” justification for centralized state intervention. That said, her implicit argument that an archipelago of commons might survive within the icy waters of the market and the state as a result of the superior rationality of the commons (as inferred from a number of specific cases) suggests she may have underestimated the gravity of the larger economic and political context in which these commons are forced to exist. What reason is there to think such local institutional arrangements, and their organizational forms, would not be heavily constrained by the imperatives of capital? Reading Ostrom, we are forced to assume that the individuals populating these commons will not, somehow, also be immersed in the global economy, will not suffer its effects, and will not import the logic of capital into their communities or workplaces.

The reality is that Ostrom’s concepts limit her analysis. Her concepts are drawn from the standard corpus of economics and game theory, and they struggle to account for real relations of power and exploitation. Because it obfuscates relations of power within each “community,” her approach is conspicuously unable to grapple with the great historical conditions that led to the destruction of innumerable traditional commons. By failing to deal with relations of power within groups themselves, Ostrom’s theory is also unable to consider the effects of systemic domination on behavior. And by restricting her analyses to local arrangements alone, her theory is unable to examine hierarchical relations that may exist amongst different forms of production and different types of social relations. Ostrom’s approach simply says nothing about the social system as a whole,38 despite the fact that institutions are always “embedded” in society, and its stakeholders are always a product of history.39

Ostrom’s guiding premise of institutional diversity forbids her from considering the political possibility of constituting the common as a generalizable alternative rationality.40 It a fortiori ignores the question of how other institutions might develop according to the principle of the common. In short, there’s no inquiry at all about how we might pass from the commons to the common. Ostrom doesn’t propose any generalizable principle of organization, and she deflects criticism on this point by falling back on a “polycentric” analysis of economic reality. From this perspective, then, Ostrom’s work falls far short of the ambition of, say, regulation theory, which tries to understand how “the founding institutions structuring contemporary society” evolved.41

The limits of this new institutional economy of the commons ultimately lie in the fact that it has not completely freed itself from the fundamental hypotheses underpinning the theory of private and public goods.42 It never adequately divorces itself from the postulate that the form and the framework in which goods are produced ultimately depend on the intrinsic qualities of the goods themselves. From this point of view, the critique this economic theory of the commons mounts against Hardin’s thesis remains, unfortunately, problematic. While natural resources of limited stock may be compatible with the various institutions described in its empirical studies, many other goods are still “naturally” assumed to be more efficiently produced by the market or the state. Of course, were economists to take historical reality into account in a serious way, they would immediately see that the enclosure movement did not arise from the landowners’ sudden realization that land is “naturally” an exclusive and rivalrous good – rather, they would see that the enclosure movement was in fact the product of changing social relations in the English countryside (as Ellen Meiksins Wood’s remarkable work has recently shown).43 Ostrom’s theory of the commons is thus, at best, a refinement of the theory of public goods developed in the 1950s. It extends and reproduces the same limitations that plague every naturalism bent on classifying goods according to their intrinsic properties.

And lastly, even if her analysis does indeed culminate in a new conception of the individual as socially committed to the management of the commons, Ostrom never jettisons her premise about the rational actor who always acts on the basis of a cost-benefit analysis. Her work thus reproduces the figure of the calculating individual who chooses the institution of commons in order to arrive at a strictly private advantage. The fact that collective norms always exert pressure on individual choices and norms, not to mention the importance of such norms for conditioning situational variables – especially in terms of a given country’s economic and political structure – merely underscores the insufficiency of this basic neoclassical postulate.44 Thus, despite all her empirical realism, Ostrom’s work falls prey to the “error that consists in presenting the theoretical view of practice as the practical relation to practice,” as Pierre Bourdieu aptly put it.45 To suppose that the decision to adopt forms of collective management stems from the calculation of rational individuals is to forget that the common is no more decreed from the outside than it is the aggregate result of isolated individual decisions. The common is rather a social process with a logic all its own.

From One Common to Another

The language of the commons is not only “descriptive and performative,” as David Bollier emphasizes, but it is above all extremely inclusive. That is to say that it provides a singular frame of reference for theorizing modes of activity, protest movements, and other social relations that, at first glance, appear to have very little to do with one another. As early as the mid-1990s, it was becoming apparent to Elinor Ostrom and the various researchers associated with her approach that the spread of digital technologies, the rapid extension of the Internet, and the concomitant growth of online communities and other forms of networked exchanges were opening up a new domain in which the framework for analyzing the dilemmas of collective action could be applied. But, at the same time, it was necessary to grasp the singularity of these “new commons,” and the ways they differed from the “natural commons” (water, forests, fish, fauna and flora, etc.) that had been the previous object of Ostrom’s analyses.46 What, then, are the implications of extending this political economic concept of the “commons” into “cognitive,” “digital,” or “informational” activities that have little to do with the management of natural resources? Are the naturalistic limitations of her analyses overcome once our attention is shifted to goods with very different characteristics?

In the first place, we must emphasize that the conception of knowledge we are dealing with here is situated within the specific context of economic theory, and in this particular domain, the meaning of the term “knowledge” is exceptionally loose. It is defined as a resource that can be produced and shared, regardless of its purpose, relevance, or use. “Knowledge,” in economics, designates ideas or theories just as much as it designates information or data in any form whatsoever. For instance, Charlotte Hess and Elinor Ostrom consider “all intelligible ideas, information, and data” as forms of knowledge, and they even extend this definition to forms of intellectual and artistic creation.47 Economics, even in its most standard version, has long viewed cognitive and informational resources as strategic factors when it comes to competitiveness and growth, which is why it developed the now largely trivialized notion of the “knowledge economy.” To claim knowledge as a common, in this context, thus exceeds mere academic concern. The claim is a “global” counter-strategy that, in the works of some authors, means the “knowledge economy” must be re-founded on a completely new basis and, as envisioned by André Gorz, as a new non-capitalist society.

Sharing-based practices, such as disseminating scientific or artistic works, creating free software, constructing collaborative encyclopedias, etc., are all contemporary examples of the “new knowledge commons,” which are based on values of social commitment and reciprocity. This type of commons, which has become an object of growing interest in the United States, has specific features that distinguish it from so-called “natural” commons. The first distinguishing feature is that knowledge commons are not necessarily confined to small communities, as was often the case when dealing with the collective exploitation of common natural resources. Open virtual communities, vast networks of international researchers, and all manner of extended and prolific universes populate the blogosphere, and so it is clear we are no longer dealing with the same phenomenon, or the same types of problems. In this respect, Ostrom’s analysis is not all that dissimilar from Mancur Olson’s, for whom the problem of “free riding” could be neutralized by restricting the size of the group. Size, for Ostrom, is likewise one of the decisive criteria for predicting whether the co-production of communal rules is likely to take hold or not. Indeed, Ostrom argued that the non-cooperative models described by Hardin, Olson, or the prisoner’s dilemma were “useful for predicting behavior in large-scale CPRs in which no one communicates, everyone acts independently, no attention is paid to the effects of one’s actions, and the costs of trying to change the structure of the situation are high.”48 This is the core theoretical issue when it comes to studying practices of cooperation that are far more extensive: it compels us to ask whether a change in scale forces us to modify our analysis of the “commons.” In a text dedicated to the “future of the commons,” David Harvey emphasizes how Ostrom’s typical case studies were never larger than one hundred co-owners.
For Harvey, the restricted size of the communities managing common property promotes a rather idyllic image of the commons that obscures all hierarchical dimensions through its focus on direct interpersonal relations. In Harvey’s view, the nature of the problems and their concomitant solutions necessarily changes once we move from one scale to another, and this especially applies in the transition from the local to the global: “lessons gained from the collective organization of small-scale solidarity economies along common-property lines cannot translate into global solutions without resorting to nested hierarchical forms of decision making.”49 Perhaps we are right, then, to be skeptical about the ability of a category like the “commons” to be able to account for very different phenomena – from the family management of the refrigerator, to a municipal library, to scientific knowledge, or to the planet’s atmosphere – under the pretext that these are all cases of “shared resources.” On the one hand, Harvey certainly has good reason to critique the radical left’s phobia of hierarchy – if not its phobia of organization as such – as well as its problematic idealization of “horizontalism.” This phobia has led to political impasses and demoralizing failures as a result of the left’s inability to create durable organizational models that are adequate to the object of the left’s demands and the size of its movements (Occupy Wall Street, Indignados, etc.). On the other hand, however, Harvey misunderstands the extent to which Ostrom and her collaborators are interested in practices and communal norms governing much larger groups. This is precisely the aim of Hess and Ostrom’s research on the “new knowledge commons,” whereby the two theorists attempt to come up with an analytical framework capable of transposing the characteristics of natural commons onto these new objects.50

Is Knowledge Naturally Common?

According to Ostrom and her collaborators, creating use rules is fundamental to governing every kind of commons. But these rules can vary from one type of common to another. “Traditional commons,” to use Ostrom’s terminology, are primarily threatened by the kind of human overexploitation that results from competition and the accumulation of private wealth. Collective use rules are therefore designed to achieve an equitable and collectively optimal system of sharing that does not exhaust but rather renews resources like water, fish, and pastures. The same, however, is not true for knowledge. In fact, we could even say that the problematic is inverted when we pass from natural commons to knowledge commons. In the case of knowledge, use rules must be designed to prevent the artificial depletion of the resource as a result of property rights, patents, entry barriers, etc. – in other words, all those devices now known, thanks to the work of James Boyle, as “new enclosures.” Whereas natural resources are scarce resources – i.e., both non-exclusive and rivalrous – a knowledge commons involves non-rivalrous goods: their use by one individual does not merely fail to diminish their utility for others, but in many cases actually augments it. To put it in Hess and Ostrom’s terms, the difference between a knowledge commons and natural resource commons is that whereas the latter is “subtractive” (i.e., rivalrous), the former is not. Once produced, new technologies allow knowledge to be disseminated at a marginal cost that is exceptionally low, sometimes zero.
And the more that useful knowledge is shared, the more knowledge is subsequently produced on a network or in a community of knowledge, and the more said knowledge increases in value.51 This property of knowledge – which is nicely articulated by the French expression “plus on est de fous, plus on rit” (“the more the merrier”) – leads to a form of the commons that is a veritable “cornucopia.” In other words, knowledge is not “subtractive” (as are natural commons) but is rather aggregative or cumulative: not only does knowledge not lose value when it is consumed, but it actually acquires additional value and, above all, allows more knowledge to be produced. Knowledge is thus an essentially productive good, insofar as its consumption not only fails to diminish the knowledge of others but, by favoring the production of new knowledge, produces a general augmentation. In this respect, one often encounters the well-known remark by Thomas Jefferson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.52

It is not difficult to imagine why so many movements and authors are interested in this “natural” characteristic of knowledge. For it would seem, according to this view, that this dynamic is, in itself, capable of subverting the very foundation of the market economy. But again, we must be careful not to adopt an overly naturalistic position (as is clearly manifest in Jefferson’s remarks). Far too often, analysis of the knowledge commons rests upon the supposed intrinsic properties of knowledge as a purely public good – i.e., its natural productivity and its cumulative character. The economic categorization of knowledge is undoubtedly seductive: it casts any barrier erected within the domain of knowledge as an unnatural intervention. Indeed, it inverts the prior liberal naturalism by interpreting all enclosures of knowledge as so many anti-natural, and therefore anti-productive, constructions. The reality, however, is that neither the enclosures nor the knowledge commons themselves are “natural.” They are governed by legal systems, regulatory frameworks, and institutions that sometimes favor the restriction of knowledge to the point of reserving exclusive access for a select few, while at other times they may favor the freest and widest dissemination possible. It is not the “nature” of knowledge that makes it so productive: it is the juridical rules and social norms that determine the scope of its extension and its corresponding fecundity. As Nancy Kranich has shown in her research on the strategies and methods used by private publishers to profit from American libraries and universities, there is nothing in the nature of knowledge itself that naturally makes it a shared resource.53 Digital technologies alone are not enough to make knowledge “naturally” open and available. This is evident, according to Kranich, in the development of commercial or governmental enclosures that limit or even prohibit access with increasing frequency.
This limitation or prohibition on access is, of course, nothing new. This form of control always crops up whenever knowledge is reserved for elite castes or favored classes. It takes specific institutions such as schools, public libraries, scientific institutes, and universities to disseminate knowledge and stimulate research. But it is precisely these institutions that have been weakened or transformed by the spread of the logic of intellectual property. By codifying this legal category and extending it to an ever-increasing range of immaterial goods – as we saw in the previous chapter – lawyers and legislators have enabled the proliferation of increasingly long-lived monopolies dedicated to protecting economic rents that are ostensibly designed to encourage innovation. Yet exclusive property rights have undoubtedly yielded largely detrimental effects when it comes to creativity and the dissemination of works and ideas, while, by contrast, most digital commons have demonstrated a marked ability to encourage creativity and innovation. What is more, the digital commons allow large numbers of contributors to form more open, innovative, and productive online communities. Yet such collaborative productivity is only possible through the invention of technological and social rules, as well as through legal apparatuses that favor cooperative practices and convey an ethic of sharing. As Benjamin Coriat emphasizes, an informational commons is only ever the result of “foundational and constituent acts.”54

According to Hess and Ostrom, the fundamental problem of knowledge can be reduced to the problem of digital capture. If, according to these authors, knowledge is a purely public good – i.e., if it is both non-exclusive and non-rivalrous – there are nonetheless technical means of transforming knowledge into a good that is the exclusive property of a title holder who can refuse to transfer ownership except through payment. And it is precisely because knowledge can be captured by new technologies that knowledge commons can be assimilated into the same framework as a natural commons, and why knowledge “must be managed, controlled, and protected in order to guarantee its sustainability and preservation.”55 It is in this precise sense that we ought to speak of a “knowledge commons” or “informational commons.” To be clear on this point: it is because knowledge has ceased to be a purely public good that it must be regarded as a common good, which is to say that it must be placed within the same analytical framework as the “natural commons.” The productivity of knowledge must be secured by rules similar to those that ensure the sustainability of natural resource stocks. Failure to establish such social rules would lead to what Michael Heller refers to, in the context of property rights in biomedical research, as the “tragedy of the anti-commons.”56 The purpose of such rules is to prevent the depletion of creativity and innovation that would result from the imposition of property rights and commercialization. For Hess and Ostrom, knowledge commons are much more vulnerable than is often thought. The centralization of information on private or public websites raises fears that informational resources might disappear altogether. But, perhaps above all, the creation of enclosures by both the market and the state (as Boyle and Bollier have emphasized) can restrict access to information and cause informational flows to dry up.
As Hess and Ostrom thus argue, “the challenge is how to blend systems of rules and norms related to this new commons to guarantee general access to the knowledge that empowers humans while ensuring recognition and support for those who create knowledge in its various forms.”57 Knowledge commons are not, therefore, common by virtue of the “non-subtractive” nature of the resource, but because of the apparatus that protects the production of knowledge from enclosure and commercialization. In other words, the emphasis is once again on the social rules that govern the resource, rather than the technical aspects of the resource itself. New technologies can create very different types of objects when it comes to knowledge. They can produce contradictory objects, and sometimes they even do so within a single institution. American universities, for instance, systematically apply for informational patents while, at the same time, developing open-access online courses.

This is the lesson that Hess and Ostrom draw – again according to their very naturalistic perspective – from their analysis of the knowledge commons framework. It is diametrically opposed to the spontaneous ideology of hackers, for whom freedom within digital networks presumably necessitates the absence of rules. If important domains of knowledge have already been restricted or threatened by market elements, it should also be noted, according to the advice of Ostrom and her collaborators, that these domains can be re-organized as legal commons in order to protect knowledge from the logic of property or from the direct control of capital and commercial exploitation. Indeed, the researchers that developed the notion of a “knowledge commons” initially modeled these commons on self-organized academic communities, who took advantage of the opportunities for collective work that these new information technologies promoted by developing a set of rules that governed the operation of these communities.

The “Constitutional Basis” of Knowledge Commons58

Intellectual innovation and productivity depend on strong rules and norms that ensure the free circulation and growth of knowledge through the sharing of research and results. The validity of this assertion can be immediately verified by considering several “canonical” examples of a knowledge commons, beginning with the Internet. We know that the “network of networks” was born out of a series of initiatives and discussions largely conducted amongst a small community of researchers who established a system of egalitarian and reciprocal exchanges. The primitive Internet, which was academic and non-commercial, was established using public funds within the context of public research, and its purpose was explicitly outlined in a document written by the creators of the Arpanet – the Network Working Group – during the late 1960s and early 1970s: “we hope to promote exchange and discussion to the detriment of authoritarian proposals.” Twenty years later, one of the founding members of the group again wrote, “the result was to create a community of network researchers who believed strongly that collaboration is more powerful than competition.”59 As Patrice Flichy put it, it was not the technology that allowed academic research to become networked, but quite the opposite: it was the decision to work cooperatively through the open exchange of information that, in an unintended manner, disclosed the potentialities of networks. This cooperative model, which merely responded to the demands of its designers, has informed practices and shaped techniques that have subsequently spread far beyond scientific and research circles.

With the growing success of peer-to-peer networks, it is evident how much the academic community has significantly shaped, up to the present day, cooperative and exchange-based practices. If academic institutions were indeed the midwives of the network revolution, it was because of both a cooperative ethic amongst scholars and a series of rules, explicit or implicit, that prohibited private appropriation of the results of common research. Thus, while the Internet may appear to be a recent development, it is in fact the offspring of a rather old tradition of “open science.” In the 1940s, American sociologist Robert King Merton enumerated a certain number of conditions that defined the “scientific ethos,” and that lead, in his view, to the production of knowledge. Amongst such ethical components – which include universalism, disinterestedness, moral integrity, and organized skepticism – Merton listed “communism,” admittedly a strange term to flow from the pen of a sociologist who was not particularly sympathetic to the political regime of the same name. What Merton meant by this term had to do with the idea that science presupposes an organization of relations between scientists that truly renders knowledge a kind of global and communal heritage, which is incrementally enriched by all scientists as their work advances, and from which they can draw to continue their research. “Communism,” in other words, was an essential aspect of open science, for Merton. And it could only develop if researchers remained independent from political and economic powers, and were satisfied with symbolic remuneration and regular career advancement.60 For Merton, then, science was incompatible with the norms of capitalism, as he asserted in the most emphatic way. Accordingly, the financing for scientific work could in no way be dependent on the anticipation of results, as if it were a kind of private investment.
Rather, science should be supported by society as a whole or, failing this, funded by philanthropic means devoid of any contractual obligations.

The centrality of this scientific ethos in the rise of collaborative networks also played a major role in the creation of free databases for researchers in the 1980s. Faced with rising prices for scientific publications, American academics created digital systems designed to disseminate research and create journal portals and databases that implemented the principles of “open science” on a vast scale, in contravention of the commercial practices of publishers and “university entrepreneurs” who tried to monopolize the results of scientific work through patents. And it was this same spirit that permeated the free software movement. By permitting free access to source code, the free software movement operates through the creation of “rules of freedom” that allow each participant to study how a given program works in order to improve its design and redistribute new copies.61 This movement was an ethical revolt against the imposition of proprietary software (as opposed to free systems), and it prompted a computer scientist at the Massachusetts Institute of Technology (MIT) artificial intelligence laboratory, Richard Stallman,62 to create a new regime of legal protection for “open” common property based on open source code.63 In 1985, Stallman created the Free Software Foundation, whose explicit goal, according to Philippe Aigrain, was to “build a set of software tools to meet the needs of general computing and to ensure these tools remained available under a common property regime.”64

After years of effort, the free software movement currently mobilizes hundreds of thousands of developers and involves users numbering in the tens of millions. The GNU/Linux operating system, which is used in most websites today, is the direct result of this collaboration, along with other very successful projects such as Firefox, Apache, or Debian. As Aigrain again emphasizes, Stallman’s major contribution to these successes was the protection of the GNU system through a GPL (General Public License).65 The GNU GPL was released in 1989 and created a genuine commons by defining rights and duties for users.66 The community of users and producers is protected by a legal regime of communal intellectual property titled “copyleft” – a term created by Don Hopkins, a friend of Stallman’s, to emphasize its opposition to traditional copyright. It is a general concept that defines a series of principles applicable to software distribution licenses, and it is designed to protect the rights of user-producer communities, rather than individual authors. The central idea of copyleft is to give anyone permission to run a program, as well as to access, distribute, and modify the source code.67 Anyone can draw from the results accumulated by the community, including for commercial uses, in order to make new contributions (but without appropriating developmental results, which must remain common).

In short, copyleft inverts the logic of copyright in order to serve the opposite end to the one copyright was intended to meet: as opposed to restricting the use of software, copyleft is a means of allowing it to be “free” – which is to say not exclusively appropriable – “for the benefit of the entire community.” Copyleft actively excludes exclusion, which differs from the simple abandonment of software into the public domain, insofar as it imposes explicit rules on users to ensure free access to any subsequent modifications. Copyleft is not, therefore, a negation of property, but a paradoxical use of the rights of creators over their creations: the user is free to use the software as he wishes, except when it comes to its mode of distribution. This is intended to ensure the continued enrichment of the commons.68 As Pierre-André Mangolte points out, “the copyleft clause creates an egalitarian and durable institutional framework that is conducive to the prolonged development of free software and the establishment of cooperative forms of production.”69 The range of licenses offered by the creative commons movement has since made it possible to produce commons in multiple domains, by disseminating systems of rules created by computer experts and lawyers that allow cultural, scientific, artistic, and intellectual works to be made available to anyone through these special licenses. These licenses create a particular form of copyright that, for example, obliges disclosure of the origin or authorship of a work while prohibiting its commercial use. And it is worth noting how much the free software movement has drawn its strength from the close ties it forged between computer practitioners and legal theorists who have become increasingly interested in new technologies, such as James Boyle, Eben Moglen, Lawrence Lessig, and Yochai Benkler.
This alliance between computer scientists and legal theorists only confirms the assertion that a knowledge commons presupposes a legal and normative architecture that is resistant to commercial strategies and the logic of property.

The use of free software has, of course, been vigorously opposed by the great tech oligopolies who attempt to impose their particular standards on software – with the notable exception of IBM, which, starting in 1999, partnered with the movement in a commercial context. Stallman and many others involved in the movement prioritize freedom of use and creation, as is evident in the very expression “free software.” In their view, if property is legitimate in matters of material goods, it is not so when it comes to intellectual creations, which pose specific and very diverse problems (depending on the specific case at hand). It is always counter-productive to extend the category of property to immaterial goods, such as software, because the latter are not “objects” but collective processes of creation that are by definition unfinished. As Mikhaïl Xifaras emphasizes, it is in the name of a “particularly altruistic and civic utilitarianism” that Stallman has fought against the imposition of copyright onto software. And the social offshoots of the free software movement are also very interesting. Digital co-production communities can now be found in all shapes and sizes. Wikipedia, whose rapid and continuous development since 2001 has made it the most visible of these new collaborative resources, contains millions of articles written by hundreds of thousands of authors, and is visited by hundreds of millions of users. Wikipedia’s founding principle is of course well known: it is a “freely distributed encyclopedia that everyone can improve,” according to the website’s slogan. We can clearly see the principle of digital cooperation in the example of Wikipedia: the latter presupposes the establishment of rules to facilitate the free dissemination of content, as well as rules to stimulate article writing and the establishment of an organized mechanism for collectively monitoring any modifications.
We might also cite, as another example, the rapid expansion of open educational resources, which is fuelled by institutions or teachers who are contributors to a communal education apparatus through online courses, lectures, exercises, or educational games that are freely shared under a “creative commons” license.

In addition to these classic and perhaps overcited examples, we should also include some more recent developments, such as the transformation of knowledge commons into “common manufacturing.” Since the mid-2000s, the “maker” movement has transferred the principles of digital cooperation into the world of material production. It combines desktop technologies that enable digital fabrication (especially 3D printing) with the online collaboration of members of the maker community.70 Certain authors like to imagine a profound re-composition of the conception and production of material goods that aligns with the production of digital services, and which would no longer require the large-scale mobilization of fixed capital. This would, indeed, be a new individualized form of production – it would be free of patents and entail a “free” mode of product design. Not only could the rules of the free software movement govern the vast field of the “Internet of Things,” but they would also apply to traditional products through the digital re-making of these objects and the collaborative “mixing” of the two domains. But beyond these few examples, which fuel imagination and prophecy alike, we should not overlook all those online practices of sharing data, information, or works, the birth of participatory media, and all the smaller-scale cooperative and collaborative practices that cluster around particular interests. These smaller efforts have been especially aided by digital communications technologies and the incredible growth of social networks based on the shared content of their users (even if this content is captured by a market logic).

Through its profusion, and the multiplicity and diversity of its contributors, the Internet is undoubtedly a collection of resources that are produced and shared by users who give and receive in a relation of reciprocity. But is it, as Mikhaïl Xifaras asks, a liberated and emancipated space that will deliver us from private property and move us from cognitive capitalism to “informational communism”? What is most characteristic of these movements, and of the free software movement in particular, is their ability to create a range of legal processes in order to avoid the negative effects of the exclusivity of “intellectual property,” which tends to impose itself as a common right over the entire field of immaterial production. Free and open-source licenses also form the “constitutional basis” of the knowledge commons in all its various forms, from l’Art libre up to open access for academics and researchers, and they constitute the background condition for the creation of collective wealth by a community of users.71 Yet the question remains as to whether the resistance posed by the supporters of free and open-source software to the extension of intellectual property is a generalizable model for challenging the domination of the logic of property, not only in the digital and informational domain, but in all sectors of production, and thereby serving as a model for a new type of society.

A New General Ethic?

The “hacker ethic,” as elaborated by its theorists, is one of the best examples of the practical extension of the ethos of open science to software creation, and of protest against the new enclosures.72 For numerous commentators who often tend to extrapolate observable trends from this movement and apply them to the larger society, the ethic and aesthetics of hackers are currently in the process of altering the economy and our society as such. A “hacker” – a term that designates both a programming enthusiast and a computer-savvy practitioner – should not be viewed as a kind of “lone wolf” who is merely out for their own self-interest. Neither are hackers simply “geeks” who are obsessed with computer technology.73 According to the Jargon File, “the term ‘hacker’ … tends to connote membership in the global community defined by the net … It also implies that the person described is seen to subscribe to some version of the hacker ethic.”74 The hacker ethic, as described in a number of books, has several dimensions. It is based on a certain ethic of enjoyment, a commitment to freedom, and a relationship to the community geared toward “generalized giving.” According to the work of American anthropologist Gabriella Coleman, the ethic does not come from the influence of a guru, but was rather a gradual construction that emerged as collaborative practices on the net were extended, and as these practices began to encounter very real obstacles created by the logic of property. At the same time, however, this ethic was not cut from whole cloth. It inherited many of its values from the counter-culture of the 1960s, and this helped transform cyberspace from an essentially technical tool into a social and political project.75 More often than not, it was older Californian hippies who fed the imaginations of these “virtual communities” created on the basis of encounters in which anyone is free to partake from his or her own personal computer.
These communities are thus no longer merely based on academic interests, as was the case in the early days of the Internet, but are now largely formed on the basis of common interests such as music and literature, for example.

According to philosopher Pekka Himanen, the hacker ethic is a new work ethic that shifts the quest for efficiency and profit to a search for passions and for solidarity.76 Indeed, some theorists have gone so far as to describe the hacker as an “anti-Homo oeconomicus.”77 Unlike the alienated industrial worker, the hacker is an ordinary artist who is very different from the inspired and solitary genius of European romanticism. The hacker inscribes his practice within a collective aesthetic dynamic, and the hacker’s somewhat idealized worldview is reminiscent of the discourse of the artistic avant-garde of the twentieth century.78 If we follow Himanen, this new work ethic is in the process of supplanting the puritan morality of sacrifice and renunciation, and is gradually spreading through the entire economy and giving rise to a new general spirit capable of forging a new type of economic system.79

Gabriella Coleman summarizes the hacker aesthetic as follows: “Hackers … tend to value playfulness, pranking, and cleverness, and will frequently perform their wit through source code, humor, or both: humorous code.”80 This was certainly true of Richard Stallman, who often emphasized the dimension of shared joy in the practice of hacking.81 Creative play is the principal motivation of the hacker. The hacker must demonstrate self-deprecation, mischief, and skill in order to be recognized in his or her community. This dimension lends the activity of the hacker characteristics that are diametrically opposed to the forced work that predominates in capitalist society. The activity of hacking – which is founded on passion rather than constraint – breaks the boundary between work and leisure and renders the latter no longer a space of passivity and isolation but a site of collective action.

If humor – which is always linked to “performances” in the aesthetic sense of the term – is one of the principal characteristics of these communities, they are also unified by their radical liberal beliefs: freedom of expression, freedom of association, and the freedom to access information and culture are fundamental and inviolable principles of the hacker community. In other words, the hacker movement resists neoliberalism and the ideology of intellectual property by returning to the principle of freedom of individual expression that lies at the foundations of liberalism itself. The hacker slogan “code is speech” means software developers must enjoy the same freedoms as any other citizen. And wherever private appropriation prevents this expression, speech is not free. As Himanen puts it, “freedom of expression and privacy have been important hacker ideals.”82 It is in this sense that we should view the hacker spirit as a resurgence of American movements that favor the protection and extension of fundamental civil liberties, liberties that are threatened or violated by state surveillance or Internet oligopolies. The creation of the Electronic Frontier Foundation (EFF) by Mitch Kapor and John Perry Barlow in 1990 was one of the key moments in the definition of cyber rights and the defense of cyberspace’s independence. The movement became increasingly politicized as state power (whether totalitarian or “liberal”) became increasingly interested in controlling the Internet. The close alliance between large Internet companies and the state has made cyberspace a space of intense surveillance where there are potentially no limits to how much these entities can intrude into our personal data or exchanges.83

The hacker ethic therefore constitutes an actualization of the ideas of the most traditional form of moral and political liberalism, rather than a prefiguration of informational communism. It especially inherited certain libertarian sensibilities from the counter-culture of the 1960s and 1970s. From this perspective, true creativity is only possible in the least regulated and least hierarchical context possible. This sentiment is clearly evident in the examples furnished by Eric Raymond in his classic text of hacker literature in which he opposes the creativity of the cooperative “bazaar” to the “cathedral” of the classical computer companies. As is very common in hacker literature, horizontalism, equality, and the greatest possible individual freedom are touted as the primary assets of collaborative networks and are regularly opposed to the obsession with prescription and control by businesses and administrations.84 Undoubtedly things are not so simple, and quite often the coordination and selection of contributions to projects is directed by an informal hierarchy, or even a “benevolent dictator,” according to the term used to designate leaders of cooperative projects, such as Linus Torvalds. While these idealized conceptions of the hacker community and its practices should be taken with caution,85 we must also ask ourselves whether this ethic, which is nourished by a literature that simultaneously describes and constructs it, does not in fact reflect the existence of a set of diffuse norms of mutual help and solidarity that strongly inhibits the opportunistic behavior of “free riding.” In this sense, then, the hacker ethic plays a role that is at least partially similar to the collective norms established in the institutions and rules of natural commons. In any case, and however accurate these various commentaries on the hacker ethic may be, the practices developed in the fields of free software, participatory media, collaborative sites, data sharing, etc.
have demonstrated that social, civic, and ethical factors play a major role in the intellectual and aesthetic creativity required for the digital production of certain goods and services. And, above all, they provide us with practical evidence that refutes the dominant neoliberal view that the only effective incentive in the domain of knowledge, as guaranteed by the logic of property, is monetary reward.

“Freedom” and the “Common”

Freedom must be understood, according to Richard Stallman’s now-famous phrase, not in the sense of “free beer” but in the sense of “free speech.” Many authors in the field, such as American legal theorist Lawrence Lessig, describe freedom – defined as the absence of regulation by the market or the state – as the ultimate value of the free software movement and, more generally, of the larger movement in favor of a creative Internet viewed as an “innovation commons.”86 According to Lessig, free resources “are those available for the taking.”87 Lessig – and Stallman, for that matter – thinks the struggle is no longer between the market and the state, but between the exclusive ownership of information and knowledge and free access to these resources. The Internet is the best example of such a “free resource,” which is to say a resource “held in common.” Does this mean that the knowledge commons, in contrast to the natural commons, exists in a universe that need not be governed by any norms or laws? Of course, surfing from one site to another by clicking on what you want is not really the same as co-producing knowledge and information. Many authors, in this respect, stray quite far from Ostrom’s lesson when they describe open access in cyberspace as a new “terra nullius” that can be accessed by anyone for any use.88

It is worth wondering whether the insistence within the free software movement on freedom of use, freedom of diffusion, and freedom of modification, which is in turn supposed to give rise to a free culture, does not tend to overshadow the principal feature of the movement – namely the constitution of communities of co-users and co-producers. For the libriste movement, in this respect, is not merely a reiteration of the utopian idea of free circulation of information and the generalized transparency made possible by computer technologies. To make sure the fruits of collaboration between hundreds or even millions of Internet users cannot be exclusively appropriated by any one member of the community, but can, on the contrary, be used and modified by all, requires a specific form of “freedom” and “openness” that is created by constructing institutional rules.

In short, a knowledge commons always presupposes rules, and these rules are determined according to the collective tasks being performed, the required competencies, and the size of the community. The selection of members, the coordination of contributions, the raising of necessary funds, the preservation of archives, all of these tasks demand real work if a commons is to be durable and productive. We should add, furthermore, that the rules about the “openness” of any given community may change from one cooperative project to another. For instance, while Wikipedia allows large-scale open collaboration, there are other projects in which developers are more strictly selected according to their competencies and their adherence to the philosophy of free software. This is what stands out in Sébastien Broca’s description of the development of the Debian operating system, which is very similar in its architecture to the natural commons analyzed by Ostrom and her colleagues – for instance, the distinction drawn in the Debian system between constitutional rules and operational procedures, or the establishment of a conflict resolution body (the “Technical Committee”).89

What is undoubtedly the most misleading interpretation of Internet “freedom” is the tendency to overlook the series of rules incorporated by the technological system as such, which is of course a system that facilitates or hinders certain modes of exchange or communal work. As Lessig put it, cyberspace’s “code is its law.”90 The code, or architecture, of cyberspace includes all the principles and instructions contained in the infrastructural hardware and software that constitute the structure of the web. During the early decades of the Internet, these infrastructures were not centrally controlled and consequently were able to be developed according to the applications and contents of any given contributor. According to Lessig, “the system is built – constituted – to remain open to whatever innovation comes along.”91 As Lessig put it in his first book, published in 1999, there is nothing “natural” in cyberspace: everything is a choice, everything has been constructed according to a certain logic favoring the free flow of information.92 That said, the code, which is of course the system’s true regulator, is always susceptible to change.93 Among the Internet’s most important architectural elements are the TCP/IP protocols, which allow data to be exchanged without their contents being reported or disclosed on the network. This initial elementary architecture made it impossible for a powerful agent – such as the state or large telecommunications corporations – to regulate the interactions between Internet users. The protocol was based on the end-to-end principle, according to which the development of data exchanges – and the entire Internet in general – can be accomplished without central interference. The other important defining principle of the Internet was “net neutrality,” which is a principle that concerns the circulation of “packets” of data within networks.
To say the Internet is built on a principle of net neutrality means all these “packets” are treated in a strictly equal fashion, regardless of their content.94 This essential condition is, today, greatly threatened by the growth and power of Internet oligopolies. By concentrating and allying themselves, these oligopolies could radically transform cyberspace through a market logic that accumulates data on Internet users and uses this data in order to maximize advertising profits.
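The principle of packet equality described above can be sketched in a few lines of code. This is a deliberately minimal toy model, not a description of any real router’s implementation; the function names and packet fields are our own illustrative choices:

```python
from collections import deque

# A toy "neutral" router: packets are forwarded strictly in arrival
# order (first in, first out), regardless of sender or content.
def forward_neutral(packets):
    queue = deque(packets)
    return [queue.popleft() for _ in range(len(queue))]

# A toy "non-neutral" router: packets from paying senders jump the
# queue, which requires the intermediary to inspect and rank traffic.
def forward_prioritized(packets, paying_senders):
    paid = [p for p in packets if p["sender"] in paying_senders]
    rest = [p for p in packets if p["sender"] not in paying_senders]
    return paid + rest

packets = [
    {"sender": "small_blog", "payload": "post"},
    {"sender": "video_giant", "payload": "stream"},
    {"sender": "activist_site", "payload": "petition"},
]

# Under neutrality, order of arrival is order of service.
assert forward_neutral(packets) == packets

# Once traffic can be ranked by sender, equality disappears.
assert forward_prioritized(packets, {"video_giant"})[0]["sender"] == "video_giant"
```

The point of the contrast is architectural: in the first function the router needs to know nothing about a packet beyond its position in the queue, whereas the second requires exactly the kind of inspection and discrimination that the oligopolistic scenario described above makes profitable.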

By the late 1990s, Lessig had begun to see how governments and large corporations were transforming the Internet’s code in order to better manage cyberspace according to their increasingly overlapping interests. The identification of the Internet user, the collection of personal information, and generalized espionage by businesses and intelligence agencies were overwriting the “primitive code” and altering the nature of cyberspace. To fight for freedom on the Internet is to defend personal liberty against the double menace of corporate influence and state surveillance. But it is also a fight to defend a shared space that is governed neither by the logic of the market nor by state censorship.

The Illusion of “Technological Communism”

A number of commentators, usually on the basis of partial observations and a very general understanding of knowledge, have begun to see in the expansion of the Internet the incarnation of a generalized informational communism. In a text that skillfully parodies the Communist Manifesto, Eben Moglen, a professor of law at Columbia University, argues that the class struggle has been displaced onto the terrain of knowledge and now pits the “creators,” who are allied with the workers, against the property-owning class. One of the more interesting aspects of this text is its reproduction of the Marxian tension between a historical law that necessarily leads to an emancipated society and an analysis of the antagonistic relations between classes that does not, in itself, allow one to designate a victor before the struggle comes to completion. As Moglen puts it:

the advance of digital society, whose involuntary promoter is the bourgeoisie, replaces the isolation of the creators, due to competition, by their revolutionary combination, due to association … The network itself, freed of the control of broadcasters and other bandwidth owners, becomes the locus of a new system of distribution, based on association among peers without hierarchical control, which replaces the coercive system of distribution for all music, video, and other soft goods.95

Internet communities are thus viewed as the prefiguration of a new mode of social and political organization based on generalized cooperation enabled by networked computers.

Yet this whole notion is, in our view, overly optimistic. Whenever one reads an author who advances this thesis, one should always ask the following: does the advent of communist society depend on a social movement that has equipped itself with the instruments of struggle and created new institutions that correspond to the principles of a society founded on cooperation? Or do the transformation of capitalism and the passage to the reticular communist society rest entirely on the new forms of value created in the knowledge economy? It is not uncommon to find both logics, usually in a confused state, in works by the same author. There is no question, for instance, that Hardt and Negri’s description of the future society as an extension of the free software movement, wherein “open source” becomes the matrix of this coming society, is an attempt to avoid the more problematic versions of technological determinism:

We might also understand the decision-making capacity of the multitude in analogy with the collaborative development of computer software and the innovations of the open source movement … When the source is open so that anyone can see it, more of its bugs are fixed, and better programs are produced.96

For while there is still significant risk here of trying to solve political problems with technological solutions, it is nevertheless conceivable that the future society, as Hardt and Negri envision it, will indeed emerge through a political struggle that tries to impose “freedom” and “openness” onto closed systems.97

For other authors, however – and sometimes for Hardt and Negri as well, at different moments – the cooperative model will smoothly triumph by virtue of the immanent logic of capital itself, a logic that operates on the basis of networks and creates value by capturing the free cooperation of the hive mind as it produces collective knowledge.98 According to this thesis, contemporary firms strive to capture the positive externalities generated by social communication and cognitive cooperation, which have recently become the principal source of economic value. By taking the exploitation of collaborative work in the context of free software as a kind of economic prototype, this “cognitive capitalism” begins to spread as firms become increasingly aware of the economic importance of online, virtual communities. This new “mode of production,” based on interconnected intelligence in the network, then begins to create the conditions for capitalism’s overcoming.99 André Gorz, at times, falls prey to this technological determinism when, for example, he argues, “the computer emerges here as the universal and universally accessible tool through which all forms of knowledge and all activities can theoretically be pooled.”100 What we find here is utopian thinking – often quite old in its form – whose recurrent theme consists in extrapolating the effects of certain systems of organization or technical infrastructures into full-blown models of social organization. Whether it was the industrial system for Saint-Simon, or the development of cybernetics a century and a half later for Norbert Wiener, in each instance economic and technological forms serve as points of support for futuristic projections of complete social re-organization.

The “Knowledge Commons” from Capital’s Perspective

If one consults contemporary managerial discourse, one realizes that connectionist capitalism (le capitalisme connexionniste) has had its eyes on the common for over a decade and a half, and that capitalism’s grip over the economy is in fact enhanced by its use of new technologies and by the commercial necessity to assemble, communicate, and invent in common. Not only is this literature in total discord with the thesis of the ineluctable emergence of the democratic commons from today’s cognitive capitalism; it also testifies to the private sector’s willingness to stop at nothing to construct, both internally and externally, managerial and commercial quasi-commons. There is, by now, an abundant business literature that describes, in detail, the efforts being made today to re-think the corporate business model along network lines, and to identify and develop the mechanisms for creating an “ersatz” common in the direct interests of capitalist accumulation.101 Since cooperation within the firm does not happen on its own, argues Olivier Zara, it must be the primary task of the new managers to create the conditions for it. Collective intelligence and the management of this knowledge are now viewed as the two fundamental resources that determine business performance.102 The management of collective intelligence “favors a new art of collective work based on sharing, intellectual assistance, and co-creation,” argues Zara benignly.103 It is all a matter of substituting “command and control” with “connect and collaborate.”104

The management of cooperation, which is an exemplary manifestation of what Luc Boltanski and Eve Chiapello call the “new spirit of capitalism,”105 obviously runs up against the ultimate goal of any business – profitability – and the strictly individualistic incentives that determine career advancement and make higher pay the sole purpose of one’s employment.106 Accordingly, Zara argues that intrafirm cooperation is never spontaneous: it requires incentives, tools, specific modes of organization, an entire “art” the manager must master with increasing refinement. Of course, it goes without saying that this intra-business organization of cooperation has nothing whatsoever to do with democracy: “businesses are not democracies (some exceptions aside) and for their survival and sustainable development, it is preferable for businesses to remain undemocratic.”107 Instead, this new mode of capitalist governance creates “in-house” commons that mobilize the ideas and knowledge of collaborators, but without ever suggesting that employees might participate in collective decision-making. The novelty of this transformation is therefore more apparent on the marketing side of business, which tries to create communities of consumers that, depending on their volume and the density of their interconnections, may play a very important role in the financial value of the business.

Marketing professionals have long been in the habit of collecting personal information to make customer profiles and construct consumer databases in order to either target advertising to individuals directly or to sell or lease this data to other businesses. Yet these marketing practices are becoming increasingly important for organizing the “constitutional basis” of the firm as such. Firms are no longer only concerned with attracting a market made up of passive consumers who do not know each other and do not trade amongst themselves. Today, firms must build “consumer capital” – i.e., a community of consumers who are invited to enter into the universe of brands themselves, who participate in the development of products, and who even innovate as co-producers. Far from the naïve vision of capital as a mere parasite feeding upon open networked connections, what we are witnessing now are increasingly sophisticated marketing strategies designed to organize the free cooperation of consumers. The goal is to retain these customers and to endow their consumption with collective meaning, but also to increasingly exploit the information they provide about themselves and the employees with whom they interact – or, even better, to elicit creativity from a group of people of varying skills and, ultimately, to profit from their largely voluntary work. This is the basis of crowdsourcing, for instance, which relies on the free and spontaneous collaboration of users to elicit opinions about products and to propose new or improved sales techniques or after-sales services. With the rise of the Internet, “ordinary people,” argues Jeff Howe, “who use their free time to create content, resolve problems, and even contribute to institutional R&D [research and development] have become a new reservoir of labor, cheap labor.”108

If we take the new management gurus at face value, then the key to managerial success now lies in the ability to construct a commercial commons. According to John Hagel and Arthur Armstrong, firms must now rely on the organization of “virtual for-profit communities,” which have become the true vectors of profit in the new economy. Virtual communities are no longer simply spaces of freedom and sharing as imagined by the heirs of the counter-culture. They are now new commercial forms. A winning business strategy now consists not only in informing customers about products on websites, but in creating communications between customers on the basis of common interests related to the products being sold, or in providing a service that allows customers to sell to each other through a for-profit intermediary. The construction of such communities by corporations is increasingly necessary due to the law of increasing returns that says, “the more you sell, the more you sell” – a principle that ensured the success of both Microsoft and Facebook.

In emerging markets such as these, it is crucial to be the first to fully benefit from an extremely lucrative monopolistic dynamic; Twitter is a good example of the profitability of organizing such social networks.109 Examples like this show how much the community of consumers determines the value of a business – or, better, how much the virtual community allows companies to reduce production costs through the use of free labor. The capitalization of the community is a perfect example of the manner in which the logic of the network has been instrumentalized by marketing. The larger the network, the greater the financial value for the firm. According to Hagel and Armstrong, the name of the game is all about organizing the virtual commercial community in order to “capture value from members.”110
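The network logic invoked here is often summarized in the industry by Metcalfe’s law, a heuristic we introduce for illustration (it is not cited by Hagel and Armstrong): the value of a network is taken to be proportional to the number of possible pairwise connections among its members, so value grows roughly quadratically while the membership itself grows linearly:

```python
# Metcalfe-style heuristic: a network of n members contains
# n * (n - 1) / 2 possible pairwise connections.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the membership roughly quadruples the connection count,
# which is why being first to reach scale matters so much.
assert pairwise_connections(1_000) == 499_500
assert pairwise_connections(2_000) == 1_999_000
```

Whatever the exact exponent (the heuristic is contested among economists), the asymmetry it captures is the one at stake in the text: each new member adds value for every existing member, which drives winner-take-all dynamics and rewards the firm that encloses the community first.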

And, of course, consumers do not merely supply corporate websites and other forums with their opinions, advice, or information – they also participate in the R&D of products. Under networked conditions, consumption becomes the production or co-production of goods. Consumer-users are transformed into voluntary co-producers of innovation, according to Eric von Hippel111: innovation is increasingly created by composite communities of people with differential competencies, and this diversity is an important source of creative fecundity for firms. According to some authors, the online collective project model will, at least partially, substitute for the classic business model when it comes to the development of a product, because it enables substantial cost reductions in R&D by using the voluntary labor furnished by a community formed on the basis of a passion or interest. In the age of “wikinomics,” a neologism coined by Don Tapscott and Anthony D. Williams, consumers have become “‘prosumers’ by co-creating goods and services rather than simply consuming the end product.”112 This opens up a new era in which billions of people voluntarily participate in the production of wealth that is then appropriated by firms. As Tapscott and Williams rather naïvely assert, we “can now actively participate in innovation, wealth creation, and social development in ways we once only dreamed of.”113

Firms are now faced with a situation in which they must “harness the new collaboration or perish,”114 and this means radically modifying their organizational structure. Capital’s extension over the field of cooperative organization is aimed at making productive use of the time and motivations that exceed the normal parameters of the salaried labor force. Employees’ free time or the activities of retirees or students now become productive voluntary time, and consumption and leisure are increasingly integrated into the production of goods. Yann Moulier-Boutang is thus mistaken when he argues, “entrepreneurial intelligence consists in converting the wealth already present in virtual digital space into economic value.”115 All indications suggest that “entrepreneurial intelligence” is now a matter of constituting the free cooperation of consumers and thereby producing collective knowledge that will be directly incorporated into the production cycle at minimal cost. The common is thus already a managerial category that compounds the classical exploitation of workers with the unprecedented exploitation of consumer-users.116

One cannot but wonder whether the businesses associated with these online communities, which “are already discovering the true dividends of collective capability and genius,” will be able to continue profiting from this free workforce and developing this model of exploiting free collaborative work.117 The proponents of capitalism’s digital revolution seem convinced that powerful motivations are yet to be exploited. For the consumer becomes a cooperator not by economic constraint, as is the case with the employee, but by seduction, a taste for sharing, a valorization of skills, the recognition one receives from others, the passion invested in a voluntary activity, etc. Of course, integration within a community of consumers is not in itself new. It was already at work within the logic of “branding,” which makes each consumer wear a corporate insignia or logo and thereby become a voluntary marketer. It is because the commodity is a signifying mark for the consumer that a symbolic community becomes desirable and functions as a means of identification. But marketing is now in pursuit of something more than mere unremunerated participation in the sales department: it wants unpaid labor in production itself, a free and voluntary labor force in the service of the firm. It is a matter of putting consumers to work: making them co-producers of goods and co-producers of their own subjection. And, to momentarily draw on the psychological categories of human resource management, this is only possible by taking advantage of all the dissatisfactions associated with compelled labor (i.e., wage labor) and by overcoming monetary or “extrinsic” motivations in order to better exploit non-monetary or “intrinsic” motivations and workers’ aspiration for collective work. From the skewed perspective of capital, then, the commercial commons of digital capitalism is leading us toward a more democratic future:

We are becoming an economy unto ourselves – a vast global network of specialized producers that swap and exchange services for entertainment, sustenance, and learning. A new economic democracy is emerging in which we all have a lead role.118

As we can see, then, waiting for the spontaneous cooperation of individuals in computer networks, or for the “production of knowledge by knowledge” that flows from the cognitive dynamic itself, is an illusion.119 Today, it is the firm that builds the commercial quasi-common by constructing an interactive framework for creating profit: businesses are “monetizing” the ecosystems they have designed and made available for their creative clients. In other words, it is primarily capital that is in the business of producing the “knowledge commons.”

In summary, then, it is hard not to notice similarities between the advocates of informational anarcho-communism and those extolling the merits of digital capitalism. We find the same arguments in each discourse, and the same technological illusions propped up into absolute truths. But if there is one aspect of the new, networked firm that is continually emphasized by its advocates, it is the constructed character of the commercial quasi-commons. Is this merely a case of vice paying homage to virtue? Whichever way one looks at it, it should be clear that the transition from cognitive capitalism to informational communism will not happen naturally or spontaneously. Knowledge is no more naturally rare than it is naturally abundant. Its production, circulation, and use depend entirely on the institutions that order and shape its related practices. Contrary to the claims of a certain techno-spontaneism, “network effects” do not naturally arise as a result of the simple act of connecting computers together: they are generated by a system of rules (including technological rules) that favors sharing, discussion, collective creation, passion, enjoyment of the game, etc. Developing the potential of digital technologies is not synonymous with rapt fascination or naïveté. Authors like Stallman and Lessig have sufficiently shown that certain characteristics of any given technological architecture may favor the constitution of communities, whereas other characteristics might very well destroy these communities.120

In his attempt to develop a new “political economy of the information network,” Yochai Benkler shows the fragility of web structures in the face of the mechanisms now used by large corporations – often in alliance with state security and police agencies – to control today’s information and communications environment. While the dissemination of the personal computer and the achievement of increasingly dense interconnections between Internet users may pose a material obstacle to the concentration of the means of communication and the production of information, there is no guarantee that this technical and economic condition, which is essential to the constitution of a new public space, will be sufficient in itself to ensure the future of informational democracy and a culture of communal production. If, as Benkler maintains, the Internet has indeed given rise to a new informational environment that is much more suitable to the vitality of political democracy, or even the collective creation of a “more critical and self-reflexive culture,”121 this is undoubtedly due to the “political ownership” of the technology, which permits direct contact between Internet users, the pooling of resources, and cooperation in the production of information.122 But like Stallman and Lessig, Benkler also points out that Internet technology alone does not determine a particular social and political form; at most, it facilitates such a form and makes it possible. However, “different patterns of adoption” can serve very different strategies and induce very different social relations.123 The question of common knowledge must therefore be addressed in terms of a “battle” that spans the whole field of these new technologies. In other words, we must categorically reject those prophecies that forecast the inexorable arrival of a free society as a result of nothing more than the dissemination of digital technologies.124

To avoid these kinds of errors, it is crucial to draw on the major lessons of Elinor Ostrom’s political economy of the common. The analysis she developed with her collaborators at Indiana University broke with the naturalism of orthodox economics while, at the same time, giving theoretical expression to the actuality and efficacy of collective practices, most of which were very old. Her approach emphasized the “institutional dimension” of the emergence and management of the commons, and this led her to the conclusion that it is not so much the intrinsic quality of a given good that determines the “nature” of the common; rather, it is the organized system of management, instituting an activity and its object, that creates a common. While one may get the feeling from the literature on the commons from the 1980s that only “natural resources” can be governed as a common, subsequent developments concerning the production, dissemination, and maintenance of knowledge and information have shown, in the view of Ostrom and her team, that the institutional character of a given commons depends, of course, on considerations of economic efficiency (and hence on an adequate relationship between the nature of the resource and the rules governing its production), but that it does not rest on economic efficiency alone: it is also based on “normative choices.” It is through the recognition of the normative character of the commons and its associated institutions that Ostrom’s work offers a two-fold critique of reigning economic orthodoxy.

Ostrom thus responds to the reigning economic dogma with a double argument. First, she shows that an institutional system for communal organization may be much better suited to the “sustainability” of resources – or to the production of knowledge – than the market or the state. But more than this, Ostrom decisively shifts the question of the commons into the terrain of collective action and its political conditions. This displacement in favor of rules of governance made it possible to put so-called natural commons and knowledge commons on the same plane, and it allowed her to broaden her analysis, especially near the end of her career, in order to deal with important questions concerning the environment and democracy at a global level.

In this sense, then, the emergence of the “commons paradigm” owes much to Ostrom’s work. By showing how threats to the environment or to our ability to freely share intellectual resources are all linked to systems of rules – whether explicit or implicit, formal or informal, actual or potential – that can destroy a commons or prevent its development, Ostrom’s work made it possible to fully appreciate the dangers of economic behavior guided by the logic of appropriation, particularly in terms of the latter’s tendency to irredeemably deplete natural resources. On the other hand, her work also emphasizes the risk of intellectual and cultural underproduction wherever knowledge is privatized, and the way privatization threatens creativity and communication by restricting intellectual co-production and hindering the use of our public inheritance. In both cases, her analysis encourages an examination of the rules that make it possible to counter these dangers. On this point, Ostrom cannot have failed to see the potential political impact of her work (though she remained extremely cautious about the practical applications that might be drawn from her studies). Ostrom allowed us to see how Hardin’s dilemma of the commons was about more than the use of local resources in small communities. Her work showed us that many of the political, social, ecological, and military problems facing nation-states and the world as a whole conform to the terrible logic of the “prisoner’s dilemma,” which locks us into a rigidly individualistic rationality rendering us incapable of arriving at collective solutions. “Much of the world is dependent on resources that are subject to the possibility of a tragedy of the commons”125 – though, of course, Ostrom showed that the tragedy of the commons is really just a tragedy of impossible cooperation whenever individuals are imprisoned by their own self-interest.
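The “prisoner’s dilemma” logic invoked here can be made concrete with the textbook payoff matrix (the numbers are the standard illustrative values from game theory, not Ostrom’s own):

```python
# Canonical prisoner's dilemma payoffs: each player chooses
# "cooperate" or "defect"; payoffs are (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    # The move that maximizes the row player's payoff against
    # a fixed move by the opponent.
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the best response whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both worse off than
# mutual cooperation (3, 3): Hardin's "tragedy" in miniature.
```

This is precisely the trap Ostrom’s fieldwork called into question: individually rational players locked into mutual defection, unless institutions and rules change the game they are playing.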
In any case, by shifting the discourse from production to institutions, Ostrom initiated a profound critique of economic naturalism – but she did not complete it. She fundamentally transformed the common from a natural phenomenon into an activity-based principle and institution, and thereby uncovered a new logic that, in turn, calls for a new theoretical approach.