The strangest thing about this remarkable return of ‘humankind’ into history is that the Anthropocene provides the clearest demonstration that, from an environmental point of view, humanity as a whole does not exist.
Christophe Bonneuil and Jean-Baptiste Fressoz1
In an analysis of Google's business model in Wired on 23 June 2008, Chris Anderson showed that the services provided by this company – which are based on what Frédéric Kaplan has called linguistic capitalism2 – operate without any reference whatsoever to a theory of language.3
Continuing with a form of reasoning similar to that which he applies to the epidemiology of Google, Anderson comes to the conclusion that what is referred to today as ‘big data’,4 consisting of gigabytes of data that can be analysed in real time via high-performance computing, no longer has any need for either theory or theorists – as if data ‘scientists’, specialists in the application of mathematics to very large databases through the use of algorithms, could replace those theoreticians that scientists always are in principle, regardless of the scientific field or discipline with which they happen to be concerned.
Four months later, on 23 October 2008, Alan Greenspan appeared before a Congressional hearing to explain the reasons for the many financial catastrophes unleashed after the subprime crisis of August 2007. Challenged for having failed to anticipate or prevent the systemic crisis, he defended himself by arguing that the scale of the crisis was due to the misuse of financial mathematics and automated calculation systems to assess risk, mechanisms established by digital trading in its various forms (from subprime to high-frequency trading): ‘It was the failure to properly price such risky assets that precipitated the crisis. In recent decades, a vast risk management and pricing system has evolved, combining the best insights of mathematicians and finance experts supported by major advances in computer and communications technology.’5 Greenspan also stressed that such approaches had been legitimated through the Nobel Prize for economics6 – his intention being to assert that, if there is blame to be apportioned, it ought not to fall only upon the president of the US Federal Reserve: the whole apparatus of computerized formalization and automated decision-making undertaken by financial robots was involved, as well as the occult economic ‘theory’ that gave it legitimacy.
If until August 2007 this had managed to function (this paradigm having ‘held sway for decades’), if computerized formalization and automated decision-making had been imposed in fact, this ‘whole intellectual edifice, however, collapsed [that summer] because the data inputted into the risk management models generally covered only the past two decades, a period of euphoria’.7 I would add to Greenspan's statement that the ideologues of this ‘rational risk management’ were undoubtedly not unaware of the limitations of their data sets. But they assumed that ‘historic periods of stress’ had occurred only because these financial instruments did not exist during these periods, or because competition was not yet ‘perfect and undistorted’. Such was the concealed theory operating behind these robots, robots that supposedly ‘objectify’ reality and do so according to ‘market rationality’.
Not long after the publication of Chris Anderson's article on Google, Kevin Kelly objected that, behind every automated understanding of a set of facts, there lies a hidden theory, whether there is awareness of it or not, and, in the latter case, it is a theory awaiting formulation.8 What this means for us, if not for Kelly himself, is that behind and beyond every fact, there is a law.
Science is what goes beyond the facts by making a claim for an exception to this law: it posits that there can always be an exception (and this is what ‘exciper’ means in law: to plead for, or to claim, an exception to the law) to the majority of facts, even to the vast majority of facts, that is, to virtually all of them, an exception that invalidates them in law (that invalidates their apparent coherence). This is what, in the following chapters, we will call, alongside Yves Bonnefoy and Maurice Blanchot, the improbable – and this is also the question of the black swan, which Nassim Nicholas Taleb has posed in a way that remains closer to the epistemology of statistics, probability and categorization.9
The ideology of perfect and undistorted competition was and remains today the discourse of neoliberalism, and this includes the discourse of Alan Greenspan, who concluded his 2008 Congressional testimony by expressing himself in such terms: ‘Had instead the models been fitted more appropriately to historic periods of stress, capital requirements [for funds held in financial institutions] would have been much higher and the financial world would be in far better shape today, in my judgment.’ But what this comment obscures is the fact that ‘with ifs, one could bottle Paris’.10 For had these capital requirements ‘been much higher’, the model would simply never have developed. For this model developed precisely in order to paper over the systemic insolvency of consumer capitalism (that is, of ‘growth’), a form of capitalism afflicted for over thirty years by the drastic reduction in the purchasing power of workers, as demanded by the conservative revolution – and by financialization, in which the latter fundamentally consists, and which made it possible for countries to become structurally indebted, and hence subjected to an unprecedented form of blackmail that indeed resembles a racket (and which we can therefore refer to as mafia capitalism).11
The application of this model based on the ‘financial industry’ and its automated computer technologies was intended both to capture without redistribution the capital gains generated by productivity and to conceal, through computer-assisted financial fraudulence operating on a worldwide scale, the fact that the conservative revolution had broken the ‘virtuous circle’ of the Fordist-Keynesian ‘compromise’.12
With the conservative revolution, then, capitalism becomes purely computational (if not indeed ‘purely mafiaesque’). Max Weber showed in 1905 that, on the one hand, capitalism was originally related to a form of incalculability the symbol of which was Christ as the cornerstone of the Protestant ethic, the latter constituting the spirit of capitalism.13 But he showed, on the other hand, that the transformative dynamics of the society established by this ‘spirit’ consisted in a secularization and rationalization that irresistibly thwarts it – what might be called the aporia of capitalism.14
We shall see that as contemporary capitalism becomes purely computational, concretized in the so-called ‘data economy’, this aporia is exacerbated, this contradiction is ‘realized’, and in this way it succeeds in accomplishing that becoming without future referred to by Nietzsche as nihilism – of which Anderson's blustering assertions and Greenspan's muddled explanations are symptoms (in the sense given to this term by Paolo Vignola).15
Anderson's storytelling belongs to a new ideology the goal of which is to hide (from itself) the fact that, with total automatization, a new explosion of generalized insolvency, far worse than that of 2008, is in preparation: the next ten years will, according to numerous studies, predictions and ‘economic assessments’, be dominated by automation.
On 13 March 2014 Bill Gates declared in Washington that with the spread of software substitution, that is, as logical and algorithmic robots come increasingly to control physical robots – from ‘smart cities’ to Amazon, via Mercedes factories, the subway and trucks that deliver to supermarkets from which cashiers and freight handlers are disappearing, if not customers – employment will drastically diminish over the next twenty years, to the point of becoming the exception rather than the rule.
This thesis, which in the last few years has been explored in depth, has recently come to the attention of European newspapers, firstly in Belgium in Le Soir, which in July 2014 warned of the risk of the loss of half of all the jobs in the country ‘within one or two decades’, then in France. It was taken up again by Journal du dimanche in October 2014, in an article based on a study the newspaper commissioned from the firm Roland Berger. This warned of the destruction by 2025 of three million jobs, equally affecting the middle classes, management, the liberal professions and the manual trades. Note that the loss of three million jobs represents an increase in unemployment of about eleven points – an unemployment level of 24%, in addition to the rise of ‘part-time’ or ‘casual’ under-employment.
Ten years from now, and regardless of how it is counted, French unemployment is likely to rise to somewhere between 24% and 30% (the Roland Berger scenario being relatively optimistic compared to the forecasts of the Brussels-based think tank Bruegel, as we shall see). Furthermore, each one of these studies predicts the eventual demise of the Fordist-Keynesian model, which had hitherto organized the redistribution of the productivity gains obtained through Taylorist automation in the form of purchasing power acquired through wages.
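The arithmetic behind these figures can be reconstructed, though the underlying quantities are not given in the text. The sketch below assumes a French labour force of roughly 27.5 million and a broad baseline unemployment rate of around 13%; both values are stipulated here only to make the stated eleven-point and 24% figures mutually consistent:

```latex
% Jobs lost J is given in the text; the labour force L is an assumption:
%   J = 3{,}000{,}000, \qquad L \approx 27{,}500{,}000
\Delta u \;\approx\; \frac{3{,}000{,}000}{27{,}500{,}000} \;\approx\; 0.11
\quad\text{(about eleven points)}
% Added to an assumed broad baseline rate of roughly 13\%:
u_{2025} \;\approx\; 13\% + 11\ \text{points} \;\approx\; 24\%
```

Any smaller labour-force figure, or the inclusion of under-employment, pushes the result toward the upper end of the 24–30% range cited below.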
What is portended here, then, is an immense transformation. Despite this, the report submitted by Jean Pisani-Ferry to the French president in the summer of 2014 as part of a ‘government seminar’ had not one word to say about these literally overwhelming prospects – which represent an upheaval for any macroeconomics to come.
Pisani-Ferry's France Stratégie report, France Ten Years From Now?, does, of course, discuss employment, but in a wheedling tone more or less amounting to the injunction, ‘Let us set modest, realistic goals: in terms of employment, let's aim to be in the top third of similar countries.’16 And it goes on and on in these tepid terms for two hundred pages, never deigning to mention the prospect of a drastic reduction in employment, on the contrary asserting:
[T]he goal must be full employment. As far as we can see today, this is the normal way in which the economy functions. Any other social condition becomes pathological and involves an unsustainable waste of skills and talents. There is no reason to give up on reaching this, given that for a long time we experienced a situation of very low unemployment and that some of our neighbours have today returned to such a situation.17
According to Pisani-Ferry, Commissioner General of France Stratégie, then, the goal of full employment should be reaffirmed, but in a ‘credible’ way – but the reasoning behind this argument turns out to be extraordinarily tenuous:
To set this target today for 2025 would not be deemed credible by the French public, which has suffered decades of persistent mass unemployment. A goal that is perceived, rightly or wrongly, as being too high can have a demotivating effect. It is better, as the Chinese proverb says, to cross the river by feeling the stones. Furthermore, the problem with setting goals in absolute terms lies in not taking into account the global and European economic situation. Reasoning in relative terms avoids this pitfall. In this spirit, we can aspire to return sustainably to the top third of European countries in terms of employment.18
The claims of France Ten Years From Now? are contradicted by Bruegel, the Brussels-based policy research institute that had been headed by Pisani-Ferry himself until his May 2013 appointment as Commissioner General of France Stratégie. Bruegel argues, through Jeremy Bowles and by taking note of the figures provided by Benedikt Frey and Michael Osborne,19 that Belgium could see 50% of its jobs disappear, England 43%, Italy and Poland 56% – and all this, according to Le Soir, ‘within one or two decades’.
At the time he submitted his report (in June 2014), Pisani-Ferry could not have been unaware of these forecasts made by the very institute he helped found in 2005. How could he permit himself such dissimulation? The reality is that, like Greenspan, he has internalized a calamitous situation that he continues to misunderstand on the basis of a deeply flawed analysis, thereby preventing France from taking stock of a highly dangerous situation: ‘[C]ashiers, nannies, supervisors, even teachers […], by 2025 a third of jobs could be filled by machines, robots or software endowed with artificial intelligence and capable of learning by themselves. And of replacing us. This is a vision of the future prophesied by Peter Sondergaard, senior vice president and global head of research at Gartner.’20 We shall see that this ‘vision’ is shared by dozens of analysts around the world – including the firm Roland Berger, which released a study arguing that ‘by 2025, 20% of tasks will be automated. And more than three million workers may find themselves giving up their jobs to machines. An endless list of sectors is involved: agriculture, hospitality, government, the military and the police.’21 To conceal such prospects is a serious mistake, as noted by an associate of Roland Berger, Hakim El Karoui:
‘The tax system is not set up to collect part of the wealth generated (by the digital), and the redistribution effect is therefore very limited.’
Warning against the risk of social explosion, [El Karoui] calls for ‘anticipating, describing, telling the truth […], to create a shock in public opinion now’. Otherwise, distrust of the elites will increase, with serious political consequences.22
To anticipate, describe, alert, but also to propose: such are the goals of this book, which envisages a completely different way of ‘redistributing the wealth generated by the digital’, to put it in Hakim El Karoui's terms. Is a different future possible, a new beginning, in the process of complete and generalized automatization to which global digital reticulation is leading?
We must pose this as the question of the passage from the Anthropocene, which at the end of the eighteenth century established the conditions of generalized proletarianization (something that Adam Smith himself already understood), to the exit from this period, a period in which anthropization has become a ‘geological factor’.23 We will call this exit the Neganthropocene. The escape from the Anthropocene constitutes the global horizon of the theses advanced here. These theses posit as first principle that the time saved by automatization must be invested in new capacities for dis-automatization, that is, for the production of negentropy.
Analysts have been predicting the end of wage labour for decades, from Norbert Wiener in the United States to Georges Friedmann in France, after John Maynard Keynes himself foreshadowed its imminent disappearance. Marx, too, explored this hypothesis in depth in a famous portion of the Grundrisse on automation, known as the ‘fragment on machines’.
This possibility will come to fruition over the next decade. What should we do over the course of the next ten years in order to make the best of this immense transformation?
Bill Gates has himself warned of this decline in employment, and his recommendation consists in reducing wages and eliminating various related taxes and charges. But lowering yet again the wages of those who still have jobs can only increase the global insolvency of the capitalist system. The true challenge lies elsewhere: the time liberated by the end of work must be put at the service of an automated culture, but one capable of producing new value and of reinventing work.24 Such a culture of dis-automatization, made possible by automatization, is what can and must produce negentropic value – and this in turn requires what I have previously referred to as the otium of the people.25
Automation, in the way it has been implemented since Taylorism, has given rise to an immense amount of entropy, on such a scale that today, throughout the entire world, humanity fundamentally doubts its future – and young people especially so. Humanity's doubt about its future, and this confrontation with unprecedented levels of youth worklessness, are occurring at the very moment when the Anthropocene, which began with industrialization, has become ‘conscious of itself’:
Succeeding the Holocene, a period of 11,500 years marked by a rare climatic stability […] a period of blossoming agricultures, cities and civilizations, the swing into the Anthropocene represents a new age of the Earth. As Paul Crutzen and Will Steffen have emphasized, under the sway of human action, ‘Earth is currently operating in a no-analogue state.’26
That the Anthropocene has become ‘conscious of itself’27 means that human beings have become more or less conscious of belonging to the Anthropocene era, in the sense that they feel ‘responsible’28 – something that became visible in the 1970s. After the Second World War and the resultant acceleration of the Anthropocene, there was a growing ‘common consciousness’ of being a geological factor and the collective cause of massive and accelerated entropization via mass anthropization. This occurred even before the formulation (in 2000) of the concept of the Anthropocene itself – a fact that Bonneuil and Fressoz highlight by referring to a speech delivered by Jimmy Carter in 1979: ‘Human identity is no longer defined by what one does, but by what one owns. But we've discovered that owning things and consuming things does not satisfy our longing for meaning. We've learned that piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.’29 It is striking that an American president here declares the end of the American way of life. Bonneuil and Fressoz recall that this runs counter to the discourse that would subsequently appear with Ronald Reagan: ‘If Carter's defeat by Ronald Reagan in 1980, who called for a restoration of US hegemony and the deregulation of polluting activities, shows the limits of this appeal, his speech does illustrate the influence […] that criticism of the consumer society had acquired in the public sphere.’30 In recent years, and especially since 2008, this ‘self-consciousness’ of the Anthropocene has exposed the systemically and massively toxic character of contemporary organology31 (in addition to its insolvency), in the sense that Ars Industrialis and the Institut de recherche et d'innovation (IRI) give to this term within the perspective of general organology.32
There is now a general awareness of this pharmacological toxicity, in the sense that factors hitherto considered progressive seem to have inverted their sign and are instead sending humanity on a course of generalized regression. With this in mind, the Anthropocene, whose history coincides with that of capitalism, presents itself as a process that begins with organological industrialization (including in those countries thought of as ‘anti-capitalist’), that is, with the industrial revolution – which must accordingly be understood as an organological revolution.
The Anthropocene era is that of industrial capitalism, an era in which calculation prevails over every other criterion of decision-making, and where algorithmic and mechanical becoming is concretized and materialized as logical automation and automatism, thereby constituting the advent of nihilism, as computational society becomes a society that is automated and remotely controlled.
The confusion and disarray into which we are thrown in this stage – a stage that we call ‘reflexive’ because there is a supposedly ‘raised consciousness’ of the Anthropocene – is a historical outcome in relation to which new causal and quasi-causal factors can now be identified that have not hitherto been analysed. This is why Bonneuil and Fressoz rightly deplore ‘geocratic’ approaches that short-circuit political analyses of that history which begins to unfold with what they call the Anthropocene event.33
To Bonneuil and Fressoz's historical and political perspective, however, we must add that, as a result of this event, what philosophy had denied in a structural way for centuries has now become clear, namely, that the artefact is the mainspring of hominization, its condition and its destiny. It is no longer possible for anyone to ignore this reality: what Valéry, Husserl and Freud posited between the two world wars as a new age of humanity, that is, as its pharmacological consciousness and unconsciousness of the ‘world of spirit’,34 has become a common, scrambled and miserable consciousness and unconsciousness. Such is ill-being [mal-être] in the contemporary Anthropocene.35
What follows from this is an urgent need to redefine the noetic fact in totality – that is, in every field of knowledge (of how to live, do and conceptualize) – and to do so by integrating the perspectives of André Leroi-Gourhan and Georges Canguilhem, who were the first to posit the artificialization of life as the starting point of hominization.36 This imperative presents itself as a situation of extreme urgency crucial to politics, economics and ecology. And it thereby raises a question of practical organology, that is, of inventive productions.
We argue that this question and these productions necessarily involve (and we will show why) a complete reinvention of the world wide web – the Anthropocene having since 1993 entered into a new epoch with the advent of the web, an epoch that is as significant for us today as were railways at the beginning of the Anthropocene.
We must think the Anthropocene with Nietzsche, as the geological era that consists in the devaluation of all values: it is in the Anthropocene, and as its vital issue, that the task of all noetic knowledge becomes the transvaluation of values. And this occurs at the moment when the noetic soul is confronted, through its own, organological putting-itself-in-question, with the completion of nihilism, which amounts to the very ordeal of our age – in an Anthropocene concretized as the age of planetarizing capitalism.
It is with Nietzsche that, after the Anthropocene event, we must think the advent of the Neganthropocene, and it must be thought as the transvaluation of becoming into future. And this in turn means reading Nietzsche with Marx, given that, in 1857, the latter reflects upon the new status of knowledge in capitalism and the future of work, in the section of the Grundrisse on automation known as the ‘fragment on machines’, in which he also discusses the question of the general intellect.
Reading Marx and Nietzsche together in the service of a new critique of political economy, where the economy has become a cosmic factor on a local scale (a dimension of the cosmos) and therefore an ecology, must lead to a process of transvaluation, such that both economic values and those moral devaluations that result when nihilism is set loose as consumerism are ‘transvaluated’ by a new value of all values, that is, by negentropy – or negative entropy,37 or anti-entropy.38
Emerging from thermodynamics about thirty years after the advent of industrial technology and the beginning of the organological revolution lying at the origin of the Anthropocene, both with the grammatization of gesture by the first industrial automation and with the steam engine,39 the theory of entropy succeeds in redefining the question of value, if it is true that the entropy/negentropy relation is the vital question par excellence. It is according to such perspectives that we must think, organologically and pharmacologically, both what we are referring to as the entropocene and what we are referring to as neganthropology.
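Since the entropy/negentropy pair carries the argument from this point on, it may help to recall the standard formal definitions, which are not the text's own but the textbook ones of Boltzmann and of Schrödinger and Brillouin:

```latex
% Boltzmann's statistical entropy, where W counts the accessible
% microstates of a system and k_B is Boltzmann's constant:
S = k_B \ln W
% Brillouin's negentropy: the distance of a system from its state
% of maximal disorder (Schrödinger's 'negative entropy'):
J = S_{\max} - S
```

On this definition, the negentropic is whatever locally holds a system away from maximal disorder, which is the sense in which the text speaks of life, and of technics, as negentropic.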
The kosmos is conceived at the dawn of philosophy as identity and equilibrium. Through this opposition posited in principle between an equilibrium of ontological origin and the disequilibrium of corruptible beings, technics, which in fact constitutes the organological condition, is relegated to the sublunary as the world of contingency and of ‘what can be otherwise than it is’ (to endekhomenon allōs ekhein), and thereby finds itself as such excluded from thought.
The Anthropocene, however, makes such a position untenable, and consequently constitutes an epistemic crisis of unprecedented magnitude: the advent of the thermodynamic machine, which reveals the human world as being one of fundamental disruption,40 inscribes processuality, the irreversibility of becoming and the instability of equilibrium in which all this consists, at the heart of physics itself. All principles of thought as well as action are thereby overturned.
The thermodynamic machine, which posits in physics the new, specific problem of the dissipation of energy, is also an industrial technical object that fundamentally disrupts social organizations, thereby radically altering ‘the understanding that being there has of its being’41 and establishing the era of what is referred to as ‘technoscience’. As it consists essentially in combustion, this technical object, of which the centrifugal ‘flyball’ governor will be a key element at the heart of the conception of cybernetics, introduces the question of fire and of its pharmacology both on the plane of astrophysics (which replaces cosmology) and on the plane of human ecology.
The question of fire – that is, of combustion – is thereby inscribed, from the standpoint of physics but also from the perspective of anthropological ecology, at the heart of a renewed thought of the cosmos as cosmos (and beyond ‘rational cosmology’ as it was conceived by Kant42): the Anthropocene epoch can appear as such only starting from the moment when the question of the cosmos itself becomes the question of combustion in thermodynamics and in astrophysics – and, via the steam engine, in relation to that eminent pharmakon that is domestic fire as the artifice par excellence brought to mortals by Prometheus, and watched over by Hestia.43
As a question of physics, the techno-logical conquest of fire44 puts anthropogenesis – that is, organogenesis that is not just organic but organological – at the heart of what Whitehead called concrescence, and does so as the local technicization of the cosmos. This local technicization is relative, but it leads to conceiving of the cosmos in its totality on the basis of this position and on the basis of this local opening of the question of fire as the pharmakon of which we must take care – where the question of energy (and of energeia) that fire (which is also light) harbours, posed on the basis of the organological and epistemological revolution of thermodynamics as reconsidered by Schrödinger, constitutes the matrix of the thought of life as well as information, and does so as the play of entropy and negentropy.
Establishing the question of entropy and negentropy among human beings as the crucial problem of the everyday life of human beings and of life in general, and, finally, of the universe in totality for every form of life, technics constitutes the matrix of all thought of oikos, of habitat and of its law. Is it not striking from such a standpoint that at the very moment when Schrödinger was delivering the lectures in Dublin that would form the basis of What is Life?, Canguilhem was asserting that the noetic soul is a technical form of life that requires new conditions of fidelity in order to overcome the shocks of infidelity caused by what we ourselves call the doubly epokhal redoubling?45
What Canguilhem described as the infidelity of the technical milieu46 is what is encountered as an epokhal technological shock by the organological and pharmacological beings that we are qua noetic individuals – that is, as intellectual and spiritual individuals. This shock and this infidelity derive fundamentally from what Simondon called the phase shifting of the individual. This de-phasing of the individual in relation to itself is the dynamic principle of individuation.
We have developed the concept of the ‘doubly epokhal redoubling’ in order to try to describe how a shock begins by destroying established circuits of transindividuation,47 themselves emerging from a prior shock, and then gives rise to the generation of new circuits of transindividuation, which constitute new forms of knowledge arising from the previous shock. A techno-logical epokhē is what breaks with constituted automatisms, with automatisms that have been socialized and are capable of producing their own dis-automatization through appropriated knowledge: the suspension of socialized automatisms (which feeds stupidity in its many and varied forms) occurs when new, asocial automatisms are set up. A second moment of shock (the second redoubling) then produces new capacities for dis-automatization, that is, for negentropy to foster new social organizations.
Knowledge always proceeds from such a double shock – whereas stupidity always proceeds from automaticity. Recall here that Canguilhem posits in principle the more-than-biological meaning of epistēmē: knowledge of life is a specific form of life conceived not only as biology, but also as knowledge of the milieus, systems and processes of individuation, and where knowledge is the condition and the future of life exposed to return shocks from its vital technical productions (organogenetic productions, which it secretes in order to compensate for its default of origin).
Knowledge [connaissance] is what is constituted as the therapeutic knowledges [savoirs] partaking in the pharmaka in which consist the artificial organs thus secreted. It is immediately social, and it is always more or less transindividuated in social organizations. Knowledge of pharmaka is also knowledge through pharmaka: it is constituted in a thoroughly organological way, but it is also wholly and originally internalized – failing which it is not knowledge, but information. This is why it does not become diluted in ‘cognition’: hence cognitive science, which is one such form, is incapable of thinking knowledge (that is, of thinking itself).
We must relate the organo-logical function of knowledge such as we understand it on the basis of Canguilhem, and as necessitated by the technical form of life, to what Simondon called the knowledge of individuation: to know individuation is to individuate, that is, it is to already no longer know because it is to de-phase.
Knowledge [connaissance], as the knowledge [savoir] that conditions both the psychic and collective individuation of knowing, ‘always comes too late’, as Hegel said, which means that it is not self-sufficient: it presumes life-knowledge [savoir-vivre, knowledge of how to live] and work-knowledge [savoir-faire, knowledge of how to do] that always exceed it and that are themselves always exceeded by technical individuation, which generates the technological shocks that constitute epochs of knowledge.
The socialization of knowledge increases the complexity of societies, societies that individuate and as such participate in what Whitehead called the concrescence of the cosmos, itself conceived as a cosmic process that generates processes of individuation wherein entropic and negentropic tendencies play out differently each time.
In the Anthropocene epoch, from which it is a matter of escaping as quickly as possible, the questions of life and negentropy arising with Darwin and Schrödinger must be redefined from the organological perspective defended here, according to which: (1) natural selection makes way for artificial selection; and (2) the passage from the organic to the organological displaces the play of entropy and negentropy.48
Technics is an accentuation of negentropy. It is an agent of increased differentiation: it is ‘the pursuit of life by means other than life’.49 But it is, equally, an acceleration of entropy, not just because it is always in some way a process of the combustion and dissipation of energy, but also because industrial standardization seems to be leading the contemporary Anthropocene to the possibility of a destruction of life as the burgeoning and proliferation of differences – as the biodiversity, sociodiversity (‘cultural diversity’) and psychodiversity of singularities generated by default as psychic individuations and collective individuations.
The destruction of social diversity results from short-circuits of the processes of transindividuation imposed by industrial standardization. We shall see in the conclusion of this work that anthropology understood as entropology is the problem that Lévi-Strauss succeeds in recognizing but not in thinking – he fails to pose this as the question of neganthropology, that is, as the question of a new epoch of knowledge embodying the task of entering into the Neganthropocene. This is what leads him to abandon the political dimension implied in any anthropology.
The Anthropocene is a singular organological epoch inasmuch as it engendered the organological question itself. It is in this way retroactively constituted through its own recognition, where the question this period poses is how to make an exit from its own toxicity in order to enter the curative and care-ful – and in this sense economizing – epoch of the Neganthropocene. What this means in practical terms is that in the Neganthropocene, and on the economic plane, the accumulation of value must exclusively involve those investments that we shall call neganthropic.
We call neganthropic that human activity which is explicitly and imperatively governed – via processes of transindividuation that it implements, and which result from a criteriology established by retentional systems – by negentropic criteria. The neganthropization of the world breaks with the care-less and negligent anthropization of its entropic effects – that is, with the essential characteristics of the Anthropocene. Such a rupture requires the overcoming of the Lévi-Straussian conception of anthropology by a neganthropology that remains entirely to be elaborated.
The question of the Anthropocene, which bears within it its own overcoming, and bears the structure of a promise, is emerging at the very moment when, on the other hand, we are witnessing the establishment of that complete and general automatization made possible by the industry of reticulated digital traces, even though the latter seems to make this promise untenable. To hold fast, that is, to hold good, to this promise means beginning precisely from those neganthropic possibilities opened up by automation itself: it means thinking this industry of reticulation as a new epoch of work, and as the end of the epoch of ‘employment’, given that the latter is ultimately and permanently compromised by complete and general automatization. And it means thinking this industry as the ‘transvaluation’ of value, whereby ‘labour time ceases and must cease to be its measure, and hence exchange value [must cease to be the measure] of use value’,50 and whereby the value of value becomes neganthropy. Only in this way can and must the passage from the Anthropocene to the Neganthropocene be accomplished.
Since 1993, a new global technical system has been put in place. It is based on digital tertiary retention and it constitutes the infrastructure of an automatic society to come. We are told that the data economy, which seems to be concretizing itself as the economic dynamic generated by this infrastructure, is the inevitable destiny of this society.
We shall show, however, that the ‘destiny’ of this society of hyper-control (chapter 1) is not a destination: it leads nowhere other than to nihilism, that is, to the negation of knowledge itself (chapter 2). And we will see, first with Jonathan Crary (chapter 3), then with Thomas Berns and Antoinette Rouvroy (chapters 4 and 5), why this automatic society to come will be able to constitute a future – that is, a destiny of which the negentropic destination is the Neganthropocene – only on the condition of overcoming this ‘data economy’, which is in reality the diseconomy of a ‘dis-society’51 (chapter 6).
The current system, founded on the industrial exploitation of modelled and digitalized traces, has precipitated the entropic catastrophe that is the Anthropocene qua destiny that leads nowhere. As 24/7 capitalism and algorithmic governmentality, it hegemonically serves a hyper-entropic functioning that accelerates the rhythm of the consumerist destruction of the world while installing a structural and unsustainable insolvency, based on a generalized stupefaction and a functional stupidity that destroys the neganthropological capacities that knowledge contains: unlike mere competence, which does not know what it does, knowledge is a cosmic factor that is inherently negentropic.
We intend in this work to show that the reticulated digital infrastructure that supports the data economy, put in place in 1993 with the world wide web and constituting the most recent epoch of the Anthropocene, can and must be inverted into a neganthropic infrastructure founded on hermeneutic digital technology in the service of dis-automatization. That is, it should be based on collective investment of the productivity gains derived from automatization in a culture of knowing how to do, live and think, insofar as this knowledge is essentially neganthropic and as such produces new value, which alone is capable of establishing a new era bearing a new solvency that we call the Neganthropocene (chapters 7 and 8).
The current infrastructure is rapidly evolving towards a society of hyper-control founded on mobile devices such as the ‘smartphone’, domestic devices such as ‘web-connected television’, habitats such as the ‘smart house’ and ‘smart city’, and transport devices such as the ‘connected car’.
Michael Price recently showed that the connected television is a tool for the automated surveillance of individuals:
I just bought a new TV. […] I am now the owner of a new ‘smart’ TV […]. The only problem is that I'm now afraid to use it. […] The amount of data this thing collects is staggering. It logs where, when, how and for how long you use the TV. It sets tracking cookies and beacons designed to detect ‘when you have viewed particular content or a particular email message.’ It records ‘the apps you use, the websites you visit, and how you interact with content.’ It ignores ‘do-not-track’ requests as a considered matter of policy. It also has a built-in camera – with facial recognition. The purpose is to provide ‘gesture control’ for the TV and enable you to log in to a personalized account using your face.52
What will occur with the connected clothing that is now appearing on the market?53
In addition, Jérémie Zimmermann highlighted in an interview in Philosophie magazine that the smartphone has led to a real change in the hardware of the digital infrastructure, since the operations of this handheld device, unlike either the desktop or laptop computer, are no longer accessible to the owner:
The PCs that became available to the broad public in the 1980s were completely understandable and programmable by their users. This is no longer the case with the new mobile computers, which are designed so as to prevent the user from accessing some of the functions and options. The major problem is the so-called baseband chip that is found at the heart of the device. All communications with the outside – telephone conversations, SMS, email, data – pass through this chip. More and more, these baseband chips are fused with the interior of the microprocessor; they are integrated with the main chip of the mobile computer. Now, none of the specifications for any of these chips are available, so we know nothing about them and cannot control them. Conversely, it is potentially possible for the manufacturer or the operator to have access, via these chips, to your computer.54
For his part, the physicist Stephen Hawking, in a co-authored article appearing in the Independent, stated that ‘AI may transform our economy to bring both great wealth and great dislocation.’55 The authors observe that, while we undoubtedly tend to believe that, ‘facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome’, we are wrong to do so. And they invite us to measure what is at stake by considering one question: ‘If a superior alien civilisation sent us a message saying, “We'll arrive in a few decades,” would we just reply, “OK, call us when you get here – we'll leave the lights on”? Probably not – but this is more or less what is happening with AI.’ They point out that the stakes are too high not to be given priority and urgency at the core of research: ‘Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes.’
Referring to the work of Tim O'Reilly, Evgeny Morozov talks about ‘smartification’ based on ‘algorithmic regulation’ that amounts to a new type of governance founded on cybernetics, which is above all the science of government, as Morozov recalls.56 I have myself tried to show, provisionally, that in a way this constitutes the horizon of Plato's Republic.57
Morozov quotes O'Reilly: ‘You know the way that advertising turned out to be the native business model for the internet? […] I think that insurance is going to be the native business model for the internet of things.’58 Morozov's central idea is that the way we currently organize the collection, exploitation and reproduction of what we are here calling digital tertiary retention59 rests on the structural elimination of conflicts, disagreements and controversies: ‘[A]lgorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.’
We shall see how Thomas Berns and Antoinette Rouvroy have from a similar standpoint analysed what they themselves call, in reference to Foucault, algorithmic governmentality – wherein the insurance business and a new conception of medicine based on a transhumanist program both have the goal of ‘hacking’ (that is, ‘reprogramming’) not only the state, but the human body.60 Google, which along with NASA supports the Singularity University, has invested heavily in ‘medical’ digital technologies based on the application of high-performance computing to genetic and also epigenetic data – and with an explicitly eugenic goal.61
Morozov points out that net activists, who have become aware of the toxicity of ‘their thing’, are nevertheless manipulated and recuperated by ‘algorithmic regulation’ through non-profit organizations that intend to ‘reprogramme the state’: ‘[T]he algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems. Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics.’62 Morozov calls for the elaboration of a new politics of technology – one that would serve left-wing politics: ‘While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left.’63 We fully share this analysis: the goal of this work, across its two volumes on the neganthropic future of work and of knowledge as the conditions of entry into the Neganthropocene, is to contribute to establishing the conditions of such a politics. This is also a matter of redesigning the digital architecture, and in particular the architecture of the web, with the goal of creating a digital hermeneutics that gives to controversies and conflicts of interpretation their negentropic value, and that constitutes on this basis an economy of work and knowledge founded on intermittence, for which the model must be the French system designed to support those occasional workers in the entertainments industry called ‘intermittents du spectacle’.