Brucker finds it ridiculous for the philosopher to assert that a prince will never govern well unless he participates in the ideas. But we would do better to pursue this thought further, and (at those points where the excellent man leaves us without help) to shed light on it through new endeavors, rather than setting it aside as useless under the very wretched and harmful pretext of its impracticability.
Immanuel Kant1
Since the industrial revolution, the diversification of technical objects has continued to intensify. In the twentieth century, the war between technological innovations became the basic principle of ‘economic evolution’,2 in the form of ‘creative destruction’.3 And this has not ceased to affect social organizations and to create more or less disruptive states of shock, in which generally new economic actors seize ‘opportunities’. The positions attained by the social actors who had emerged from the preceding state of shock are thereby destroyed, and a state of fact ‘in advance’ of law – so to speak – comes to be established. In this way the technical system ‘disadjusts’ the social systems.
The state of shock provoked in 1993 by the creation of the web, spreading reticular writing among the solvent inhabitants of the earth, has immeasurably amplified the ‘shock doctrine’ that in 1979 asserted that there is no alternative to the destruction of public power, that is, to the unlimited expression of computational logic as induced and required by the total market.
With the constitution of automatized artificial crowds and the globalization of the industry of traces, and through the complete commodification of existence and everyday life, the dis-integration of the retentions and protentions of psychic and collective individuals has destroyed noetic faculties – and, with them, the theories that emerge from them, those theories whose end is pronounced by Chris Anderson.
Overcoming this state of fact through the constitution of a state of law of digital tertiary retention presupposes the reconsideration of the difference of fact and law itself.4 This is so inasmuch as this difference, as a specific regime of psychic and collective individuation, is the common origin of the state of law, philosophy and science insofar as they (philosophy at its origin merging with science) distinguish, originally and in principle, the theoretical and the empirical. And this distinction coalesces with the constitution of a positive law, that is, an experience of alētheia as the fruit of disputation, and no longer as oracular or divine utterance.
What must be reconsidered with respect to the juridical, philosophical and epistemological tradition is that the differentiation of law and fact is conditioned by tertiary retention – and it is theorized, precisely, starting from the shock (the challenge, the putting in question) resulting from the appearance of literal (lettered) tertiary retention, which opened up a crisis from which arose critique.
Maiandrios can put kratos en mesoi – put power in the centre5 – only because the knowledge constituting collective individuation as the creation of circuits of transindividuation has been literalized, and because this retention in letters of the common past corrodes it while assimilating the previous forms of the process of individuation, which therefore become archaic and feed into the mythologies of the origin. This becoming, which passes through the geometrization of space,6 results in the formation of what Gilbert Murray and E. R. Dodds called the ‘inherited conglomerate’.7
This inherited conglomerate both forms the preindividual funds of the political psychic and collective individuation process, emerging from the new retentional condition of which Hesiod and Thales were contemporaries, and enters into structural conflict with the new epistēmē. This is what Dodds described as a Greek Aufklärung.8
At its political origin, and as this origin, law (nomos), stemming fundamentally from a transindividuation in letters, such that it is alphabetically tertiarized, literalizes and spatializes the epistēmē, forming the rules (as the explicit circuits of transindividuation) on the basis of which conflict (polemos) can become the dynamic principle of a society within which History appears as such (as Geschichte and as Historie) – starting from the agora.9
This is why the skholeion is the primordial political institution: citizens are those who read and write these literal traces – which thus form new kinds of associated milieus, which, as we shall see, also constitute regimes of parities10 establishing regimes of truth.11 And this is why today, and as the condition of any new constitution of knowledge, the retentional condition of all knowledge must be thematized and elucidated, and particularly that of the apodictic forms of knowledge that nourish the Western experience of rationality.
It is in this sense that we should interpret Bachelard's phenomenotechnics as analysed by François Chomarat: ‘Writing would be the first phenomenotechnics, since it involves passing from a description to a production, by “inverting the axis of empirical knowledge”.’12 The theoretical critique of the state of fact that Anderson claims is insurmountable presupposes a critique of digital tertiary retention. This critique would itself be founded on a pharmacological critique of the history of knowledge in its totality, that is, founded on an epistemology that takes full account, through a general organology that is the condition of this pharmacology, of the role of tertiary retention in general in the genesis of work-knowledge and of theoretical knowledge, as in that of the law governing life-knowledge in political – and as such civil-ized [policée] – society. This new critique amounts to the programme of what we call digital studies – taking up here the perspective defended by Franck Cormerais and Jacques Gilbert in the journal Études digitales, which they have founded with Éditions Garnier.13
The desirable in general is what consists, and this consistence, in the psychic and collective individuation that constitutes the polis, presents itself as:
Consistence is experienced in desire: in the desire for any object in which this desire invests by diverting the drives, on the basis of which this object is expected, and presents itself by projecting itself towards another plane that does not exist, but consists. This consistence of the object is its infinitization. The drives amount to arche-retentions that are also arche-protentions, that is, primordial sensorimotor schemas that Simondon described as ‘driving tendencies’ [tendances motrices], founded on ‘a priori images’,16 and that the infinitization of their non-existent objects diverts, effecting singular – that is, negentropic – bifurcations.
Since the objects of such consistences are also those of theories, the latter presuppose the learning and apprenticeship of affective and erotic idealization. It is for this reason that Diotima relates all knowledge to the experience of desire.
Consequently, when the libidinal economy lies in ruins, so too does theory. Now that the dis-integration of individuals is liquidating desire itself in all its forms, as well as the infinite variety of idealizations in which all knowledge consists (knowledge of how to live, do and conceptualize), the theoretical enterprise itself seems to have reached its end. This liquidation of desire unbinds the drives, which cease to be the dynamic source of the retentions (as that which is contained and retained) and protentions of ideality, becoming on the contrary the source of so many evils emerging from Pandora's box.17
Technological automatisms, however, can in no way contain these evils (in the sense of controlling them – even though they do contain them in the sense that they harbour them): these evils are expressions of the drives that are themselves biopsychic automatisms. Far from controlling them, technological automatisms can on the contrary only exacerbate them – because they intensify their unbinding by dis-integrating psychic and collective retentions and protentions. This is why hyper-control in reality leads to a total loss of control (well beyond the disaffection that I earlier described in Uncontrollable Societies of Disaffected Individuals18), that is, to an immeasurable catastrophe that is an entropic cataclysm.
Avoiding this catastrophe by bringing about the inaugurating katastrophē of a new différance: such is the challenge of a redefinition of the difference of law and fact in the digital epoch, and in the age of libertarian ideology, whose best representative is undoubtedly Chris Anderson.
Differentiating law from fact means engaging in a political struggle, but also an economic and epistemic struggle. And if it is true that the law is what conserves the consistence of the just, just as science conserves that of the true and art that of the beautiful and the sublime,20 the conception of a new state of law, which would also be a new age of theory as what distinguishes states of fact and states of law, is the cornerstone of any reconstitution of an economy of the drives and a rearming of desire – that is, of economy in all its forms.
Theory and law are notions inseparable from the processes of idealization wherein objects of desire take form and, with them, consistences. The passage from the proletarianization of the mind by automatons as state of fact to a new stage of noetization rediscovering what Kant called the subjective principle of differentiation,21 and through that the difference of law and fact, or in other words the passage to automatic society as the constitution of a new age of mind and spirit (in Paul Valéry's sense), is the passage from a toxic stage in which the digital pharmakon destroys the social systems that engendered it, and, along with them, destroys the processes of idealization they make possible, to a therapeutic stage in which this pharmakon becomes the tertiary retention of new circuits of transindividuation.
This passage from the current toxic state of fact to a curative state of law requires both:
‘Why must we rediscover the difference of law and fact?’, some may ask. ‘Has this difference not been surpassed? Has not law itself become outmoded – just as people say, for example, that religious beliefs are outmoded?’
Besides the fact that this last suggestion in fact remains far from clear,23 and in any case does not belong to the same order of questions, given that the difference of law and fact conditions apodictic judgement, and therefore demonstrative judgement, I posit as starting principles for all the theses presented here (as in all my work):
No matter what happens. Inshallah.
The theoretical and practical capacity to make the difference between fact and law constitutes what Kant called reason.26 This is why law is ruined by the proletarianization of the life of the mind and spirit as a whole – which can lead only to barbarism.
There is no doubt that this is now clearly taking place, as drones combine with and utilize ‘big data’ in the service of the automatic detection of ‘suspects’, that is, people whose behaviour corresponds, according to statistically calculated correlations, to that of potential terrorists, whom these new automatic weapons, drones, can then eliminate, perpetrating a state violence that, adhering to no law of war, diffuses the lawlessness of a deterritorialized and blind automatic police. As Grégoire Chamayou observes:
The modern art of tracking is based on an intensive use of new technologies, combining aerial video surveillance, the interception of signals, and cartographic tracking. […] In this model the enemy individual is no longer seen as a link in a hierarchical chain of command: he is a knot or ‘node’ inserted into a number of social networks. […] [T]he idea is that by successfully targeting its key ‘nodes’, an enemy network can be disorganized to the point of being practically wiped out. […] This claim to predictive calculation is the foundation of the policy of prophylactic elimination, for which the hunter-killer drones are the main instruments. […] In the logic of this security, based on the preventive elimination of dangerous individuals, ‘warfare’ takes the form of vast campaigns of extra-judiciary executions.27
In relation to the current use of drones in Pakistan, Somalia and Yemen, Chamayou quotes Mary Ellen O’Connell: ‘ “International law does not recognize the right to kill with battlefield weapons outside an actual armed conflict. The so-called ‘global war on terror’ is not an armed conflict.” These strikes therefore constitute grave violations of the laws of war.’28
The thesis of the end of theory and the obsolescence of science proclaimed by Anderson is the counterpart of this liquidation of law by these new forms of automatic weapon.29 This liquidation of law is in turn directly tied to the elimination of sacrifice – without which there can be no warrior, and through which the conquering soldier receives his greater share [solde]: he is in principle glorious. His life, steeped in the test of mortal combat, is also, and, according to Hegel, firstly, one that knows consistence (this is the issue in the first moment of the master/slave dialectic in the Phenomenology of Spirit).30
Knowledge too, however, presupposes the capacity for self-sacrifice, no longer just the death that covers one in glory but an intermittent noetic sacrifice and a wrenching away that confers, like death, what the Greeks called kleos (of which the contemporary wish to be ‘known’, to have a reputation, which social networks claim to provide for their contributors, is the disintegrated version – a disintegration of the transindividuation networks themselves to the benefit of the ‘reputation’ of advertisers).
The experience of knowledge is the experience of what, in this knowledge, teaches, through being learned, the history of the ‘deaths’ and ‘rebirths’ of knowledge. Knowledge is par excellence what, opening knowing to what is not yet known, ‘kills off’ what had hitherto been considered knowledge – yet in this, and with the death of what had hitherto been considered knowledge, knowledge finds itself reborn, in the course of what was described by Socrates (in Meno) as an anamnesis,31 and by Husserl (in ‘The Origin of Geometry’) as a reactivation,32 which is the individuation of knowledge through the individuation of the knower. We shall see in Automatic Society, Volume 2: The Future of Knowledge that what we are here metaphorically calling the ‘death’ and ‘rebirth’ of knowledge refers to the acquisition of knowledge as automatization and to its rebirth as dis-automatization.
As for full automatization, made possible by digital tertiary retention through the integration of all automatisms and by short-circuiting all therapeutic possibilities for dis-automatization, it disintegrates, in the field of battle as in the epistemic field, this dual sacrificial experience of the struggle for freedom and the struggle against what is in itself, and not for itself, as Hegel said – as what prevents, as ‘well known’, the expression of what remains to come for the self and as its for itself (these are Hegel's terms).
Full automatization effects this disintegration, at once and in the same movement, for both warriors and scientists.
According to Anderson, this disintegration foreshadows the death of the scientist by proclaiming the end of science in a world where the ‘scholar’, who has been proletarianized just as has Alan Greenspan, no longer fights for anything or knows anything. He or she no longer fights for any consistence, or against any inconsistence: he or she no longer knows anything consistent.33 Such is the price of total nihilism, of nihilistic totalization, of the disintegration in which consists the accomplished nihilism that is totally computational capitalism, in which there is no longer anything worth anything – since everything has become calculable.
It is not necessary to know anything, writes Anderson, to be able to work with applied mathematics in an age where the petabyte becomes the unit of measurement of mass storage,34 and it is this uselessness of knowledge, whatever it may be, that, according to him, constitutes Google's power, and its fundamental ‘philosophy’:
Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required. That's why Google can translate languages without actually ‘knowing’ them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.35
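To make palpable what such a ‘philosophy’ amounts to in practice, here is a minimal sketch – hypothetical, and in no way Google's actual algorithm – of a ranking produced solely from the statistics of incoming links, with no semantic or causal analysis of the pages themselves (the pages and links below are invented for illustration):

```python
# A minimal, hypothetical sketch of ranking "by the statistics of incoming links"
# alone - no analysis of what the pages say is involved.
# This illustrates the principle Anderson describes, not Google's actual system.

from collections import Counter

# Assumed toy data: (source_page, target_page) pairs observed on the web.
links = [
    ("a.example", "c.example"),
    ("b.example", "c.example"),
    ("a.example", "b.example"),
    ("d.example", "c.example"),
    ("c.example", "b.example"),
]

# Count how many pages link to each target.
inbound = Counter(target for _source, target in links)

# "Better" here simply means "more linked-to": the ranking is a fact of the
# dataset, with no model of what the pages are about.
for page, count in inbound.most_common():
    print(page, count)
# c.example 3
# b.example 2
# ...
```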
In the epoch where digital tertiary retention is the medium supporting the use of applied mathematics to treat ‘big data’ in high-performance computing, ‘[f]orget taxonomy, ontology, and psychology’. And not only that, science now has little room for hypotheses, models or experiments: ‘[F]aced with massive data, this approach to science – hypothesize, model, test – is becoming obsolete.’
The result:
This final thesis is Anderson's – who today runs a start-up that manufactures drones and promotes a ‘new industrial revolution’ founded on ‘fab labs’ and ‘makers’.36
The automated ‘knowledge’ celebrated by Anderson no longer needs to be thought. In the epoch of the algorithmic implementation of applied mathematics in computerized machines, there is no longer any need to think: thinking is concretized in the form of algorithmic automatons that control data-capture systems and hence make it obsolete. As automatons, these algorithms no longer require it in order to function – as if thinking had been proletarianized by itself. Later, we will read the Grundrisse and return to the question of the general intellect, and the question of work, from the perspective of tertiary retention in order to clarify this possibility.37
A similar proletarianization is at work for a soldier who no longer needs to fight, who is therefore no longer a soldier, and who becomes the controller of those automated systems of remotely controlled murder that are military drones. This is also in some ways what Paul Valéry, Edmund Husserl and Sigmund Freud each foresaw in the development of knowledge after the First World War.38
Soldiers who operate drones disappear into the weapon that replaces them, just as workers disappear into the machines that turn them into proletarians. The machine becomes the technical individual, which hitherto workers had been qua tool-bearers, individuating themselves via practices that embody the knowledge derived from the practice of these tools, tools that are themselves thereby also individuated: ‘Humankind centralized technical individuality within itself, in an age when only tools existed.’ For Simondon, far from being a pure disintegration, the advent of the machine that replaced the worker qua tooled individual is an achievement that brings about a new age of individuation, in that ‘the machine takes the place of humans because the human had been performing a function that became the machine's – to bear tools.’39
Simondon does not reason pharmacologically. He affirms in principle and immediately the positive necessity of machinic becoming. This shortcut is for us a weakness, but we should also and firstly understand its force. If, for a threefold psychic, collective and technical individuation, the machine fulfils a need, it is because it is the bearer of a new possibility of individuation.
Does this becoming occur in the same way for the soldier and the scientist as for the worker? In the epoch of generalized automatization, the fully automatic weapon is totally detached from and independent of the bearer of the weapon, who hitherto was the warrior, just as, according to Anderson, fully automatic (if not absolute) knowledge would be totally detached from the scientist, that is, from his or her individuation.
We shall see that the disappearance of the worker, resulting from an initial automatization that gave rise to the proletariat, leads to the disappearance of the proletariat itself, with the new wave of full and generalized automatization, and that this new economic but also epistemic and moral rationality can and must lead:
Only a new jurisdiction for work in the service of an increase of negentropy and a reduction of entropy can lead to a legitimate extension of global security through the creation of new juridical conditions for peace – in what should become an ‘internation’, in the sense we can give to this term deriving from Marcel Mauss,41 and of which the web should become the contributory infrastructure. Such is our proposal in response to Tim Berners-Lee's initiative, ‘The Web We Want’, launched at W3C, to write a Magna Carta for the world wide web.
Kevin Kelly reformulates Anderson's observations in the following terms:
When you misspell a word when googling, Google suggests the proper spelling. How does it know this? How does it predict the correctly spelled word? It is not because it has a theory of good spelling, or has mastered spelling rules. In fact Google knows nothing about spelling rules at all.
Instead Google operates a very large dataset of observations which show that for any given spelling of a word, x number of people say ‘yes’ when asked if they meant to spell word ‘y’. Google's spelling engine consists entirely of these datapoints […]. That is why the same system can correct spelling in any language.42
Hence Google effects a statistical and probabilistic synchronization that on average eliminates diachronic variability.
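What such a purely data-driven ‘spelling engine’ amounts to can be sketched minimally as follows – a hypothetical illustration of Kelly's description, not Google's actual system – in which the only ‘knowledge’ is a table of how often users confirmed a given correction (the observation pairs below are invented):

```python
# A minimal, hypothetical sketch of a purely data-driven spelling "engine":
# it knows no spelling rules, only how often users confirmed a given correction.
# This illustrates Kelly's description; it is not Google's actual system.

from collections import Counter, defaultdict

# Assumed toy observations: (typed_form, confirmed_intended_form) pairs,
# e.g. gathered from users answering "did you mean ...?".
observations = [
    ("langage", "language"),
    ("langage", "language"),
    ("langage", "langage"),      # some writers really meant the French word
    ("recieve", "receive"),
    ("recieve", "receive"),
]

counts = defaultdict(Counter)
for typed, confirmed in observations:
    counts[typed][confirmed] += 1

def suggest(typed):
    """Return the statistically dominant correction, or the input unchanged."""
    if typed not in counts:
        return typed
    # The "norm" is whatever the average user meant most often: diachronic
    # and idiomatic variation is averaged away.
    return counts[typed].most_common(1)[0][0]

print(suggest("langage"))   # -> "language", whatever this writer intended
print(suggest("recieve"))   # -> "receive"
```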
This idiomatic standardization derived from ‘large numbers’, however, poses the same problems as Quételet's average man, as understood by Gilles Châtelet: ‘For Quételet, there is a certain excellence of the average as such […]: the most beautiful face is that obtained by taking the average features of the whole population, just as the wisest conduct is that which best approaches the set of behaviours of the average man.’43 If it is true that linguistic genius is this poiēsis that proceeds from gaps through which the singularly enormous transindividuates the norm, and if linguistic prescription makes law after the fact in ‘enormity becoming normal’ [énormité devenant norme],44 then this is the very possibility of a linguistic prescription founded on a poetic art – constituting as such a quasi-causal legitimacy of the becoming of language, opening its future as the necessity of the artificial ‘arbitrariness of the sign’ and the individuation of singularities. It is all this that the statistical analysis of average written expression eliminates in advance and as a process of automatic transindividuation that, as we shall see, in reality amounts to a transdividuation.
Given that a language evolves, as we have known since Humboldt, in what sense must it evolve? Must it do so according to the dominant practices, that is, in an entropic sense, or on the basis of exceptional interpretations of the preindividual funds in which it consists? Obviously the statistical prescription practised by the Google spelling engine leads to the reinforcement of dominant practices. In so doing, it eliminates at a single stroke all linguistic prescription put in place in the skholeion since the origin of civilization. And it invalidates in advance and eliminates in fact any linguistic prescription founded on:
It is all this that this statistical analysis of average written expression renders impracticable and unthinkable, by establishing a literally dis-integrated relation to language. And this clearly concerns not only the Google spelling engine but also Google Translate, the algorithmic ‘engine’ for translating between languages, in relation to which ‘Peter Norvig, head of research at Google, once boasted to me [Kevin Kelly], “Not one person who worked on the Chinese translator spoke Chinese.” There was no theory of Chinese, no understanding. Just data.’46 But what results from this is a generalized and global degradation of the processes of psychic, collective and technical individuation that are all the written languages on earth, and a destruction of the associated milieus47 in which these languages consist, individually and as a whole – through the work of those interpreters that good translators must always be.48
Speakers, who themselves are always in the final reckoning interpreters whenever in their interlocution they say something and are not content simply to engage in chatter, are in this instance short-circuited – and with them their transindividual relations of the constitution of diversified and idiomatic metastabilities. This is how I interpret Frédéric Kaplan's analysis of the business models of Google's linguistic capitalism,49 which reaches its limits in the dysorthography brought about by the auto-completion function of its spelling engine, and in the semantic impoverishment inevitably generated by AdWords.
These questions are crucial because they are generic for knowledge in general: they concern all forms of noetic life insofar as the latter rests on its analytical exteriorization – which enables the apprehension and the categorization of the exterior as such, and in its various aspects. As such, these questions concern techniques that are now being implemented in every field of scientific knowledge:
Many sciences such as astronomy, physics, genomics, linguistics, and geology are generating extremely huge datasets and constant streams of data in the petabyte level today. They'll be in the exabyte level in a decade. Using old fashioned ‘machine learning’, computers can extract patterns in this ocean of data that no human could ever possibly detect. These patterns are correlations. They may or may not be causative, but we can learn new things. Therefore they accomplish what science does, although not in the traditional manner.50
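What this ‘extraction of patterns’ amounts to can again be sketched minimally – a hypothetical illustration, not any actual scientific pipeline – as a blind scan of a dataset for strong pairwise correlations, with no hypothesis or model of the variables involved (the variables and values below are invented):

```python
# A minimal, hypothetical sketch of "correlative analytics": scanning a dataset
# for strong pairwise correlations without any hypothesis or causal model.
# The variables and numbers are invented for illustration only.

from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Assumed toy dataset: three measured variables, no model of how they relate.
data = {
    "a": [1.0, 2.1, 2.9, 4.2, 5.1],
    "b": [2.0, 4.1, 6.2, 8.1, 9.9],   # roughly 2 * a
    "c": [5.0, 1.2, 4.8, 2.3, 3.7],   # noise
}

# Flag every strongly correlated pair - the "pattern" is found, not explained.
for (name1, xs), (name2, ys) in combinations(data.items(), 2):
    r = pearson(xs, ys)
    if abs(r) > 0.9:
        print(f"{name1} ~ {name2}: r = {r:.3f}  (correlation, not causation)")
```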
I have argued, however, that the understanding implements schemas so that, for example, the analytical fact of counting in thousands requires a passage through the outside, an exteriorization and a tertiary retention constituting a system of numeration on the basis of which it is possible to perform operations on symbolic exteriorizations (not on the things they ‘represent’). These are analytical operations that have been carried out mentally on the basis of interiorizations of the system of numeration,51 and been formalized and themselves exteriorized in the form of rules that may subsequently be utilized without having to make them conscious, that is, without having to again actively internalize them, without reactivating the neuromnesic circuits that lie at their origin. If this is true, then what Kelly describes here, commenting on Anderson, is what happens as a result of delegating such rules to analytical automatons as operations of understanding and cerebral dis-interiorization, that is, of mental dis-integration.
This is what results from short-circuiting the operation of organological integration on which the knowledge of a scholar or a scientist is based. By short-circuiting interiorization, such a delegation disconnects the understanding, as analytical formalism, from reason, the latter being, as we shall argue in what follows, the capacity for interpretation. Yet we have known since Kant that the analytical operations of the understanding are not self-sufficient, not just when it is a matter of producing theory, but also when it comes to producing truth.
Digital tertiary retention constitutes a new epoch of knowledge and of theories. If a theory is constituted by the possibilities it opens up through the retentional modalities of its exteriorization, its transindividuation and its re-interiorization, that is, its individuation in return, after the fact, always late, and if it is tertiary retention that enables this analytical discretization and this interpretative synthesis, then it follows that digital tertiary retention transforms knowledge.
New forms of knowledge arise from a new relationship between the understanding, transformed by a new tertiary retention (itself having always been made possible by a form of tertiarization and grammatization), and reason, as the interpretation of the play of retention and protention enabled by their tertiarization.
When Anderson claims there is no longer a theoretical model, Kelly responds that there is one, but that it is not known as such, as if it were generated by an automated analytical function. And he concludes that we are witnessing the beginning of a new epoch of theory rather than its end:
I think Chris [Anderson] squander [sic] a unique opportunity by titling his thesis ‘The End of Theory’ […]. I am suggesting Correlative Analytics rather than No Theory because I am not entirely sure that these correlative systems are model-free. I think there is an emergent, unconscious, implicit model embedded in the system that generates answers. […] The model may be beyond the perception and understanding of the creators of the system, and since it works it is not worth trying to uncover it. […] It just operates at a level we don't have access to.52
This inadequation between theory and comprehension, or the lateness of the understanding produced by tertiarization (for this is indeed what is involved), is what leads theory and its functional definition (as what Whitehead called the function of reason53) to again be called into question – and Kelly mentions here a remark attributed to Picasso: ‘In this part of science, we may get answers that work, but which we don't understand. Is this partial understanding? Or a different kind of understanding? Perhaps understanding and answers are overrated. “The problem with computers”, Pablo Picasso is rumored to have said, “is that they only give you answers.” ’54 Kelly concludes that in the epoch of ‘big data’ and correlative systems, the ‘real value of the rest of science then becomes asking good questions’. This is indeed how things seem to strike us when reading Greenspan's response to members of Congress on 23 October 2008, to which it is interesting to add this comment by Sean Carroll:
Sometimes it will be hard, or impossible, to discover simple models explaining huge collections of messy data taken from noisy, nonlinear phenomena. But it doesn't mean we shouldn't try. Hypotheses aren't simply useful tools in some potentially outmoded vision of science; they are the whole point. Theory is understanding, and understanding our world is what science is all about.55
I argue, however, by introducing the question of controversy and debate, without which there is no theory, that more than understanding, more than a comprehension, science seeks what I call a surprehension – what Socrates called aporia,56 a discomfiting and perplexing impasse [embarras] that would provoke in him and his interlocutors an anamnesis, and what Aristotle called a thaumasmos, a surprise.
The worker's loss of individuation as described by Simondon, deprived of his or her knowledge by the machine, seems to anticipate the scientist's loss of individuation, deprived of his or her knowledge by intensive computing.
That there is loss here does not mean that Simondon is hostile to the machine. It means that humankind must rediscover the site of its psychic individuality, and that, as tool-bearer, the human was itself only provisionally a technical individual, awaiting the transfer of this tooling to the more accomplished technical individual that is the machine.57 The power of these tools, then, is amplified in order to benefit the human being's own individuation, and not just the individuation of the technical system as what destroys psychic systems and social systems (but also geographical and biological systems) for the sole benefit of capital as entropic economic system.
The Simondonian perspective on the human and the machine means that the human, as psychic individual, and the machine, as technical individual, must constitute a new relation where thought, art, philosophy, science, law and politics form a new understanding of their technical condition, a new form of knowledge that Simondon called ‘mechanology’, which would allow ‘awareness to be raised of the significance of technical objects’.58 It is in this context that he referred to a process of concretization inherent in technical individuation and more generally in the creation of what he called technical lineages.
In the next chapter we will see how Jonathan Crary questions the relationship between technical becoming and social becoming – that is, the future. The analysis of these relationships is precisely the goal of general organology, which at the same time significantly displaces them on several points.
Simondon himself never posed the question of pharmacology. And, probably thanks to the highly necessary critique he undertook of cybernetic metaphysics,59 but also because he failed to see that the question of the pharmakon is always the question of the automaton, he greatly underestimated the question of automatism and the consequences of automatization – as the integration of biological, psychic, social and technical automatisms, an integration carried out by digital retention, and carried out as the disintegration of psychic and social individuals.
The goal of the present work (and its second volume) is to project the conditions of a general organology and a philosophy of automaticity by taking up and displacing such Simondonian questions in the epoch of full automatization and of the proletarianization of the mind and spirit that results from it, and by posing as a first principle (in the wake of Simondon) that the issue is not the toxicity of algorithms (any technics being inherently toxic), but the absence of thinking, by philosophy, science and the law, about what makes this possible, namely, tertiary retention. Taking account of and overcoming this state of fact forms the programme of ‘digital studies’.60
Following Jean-Pierre Vernant and Marcel Detienne, I have repeatedly argued that literal tertiary retention is the sine qua non of the co-emergence of political nomos and of the epistēmē as rational knowledge reconfiguring all knowledge, which is thereafter conceived on the basis of the experience of truth, argued according to the canon of geometrical demonstration.61 Literal tertiary retention constitutes public space as such – that is, constitutes it as the space of publication, the space of expression via spatialized traces accessible to all those who form ‘the public’ (in the form of letters, in the Greek epoch) – and such that the law, as public political space and as in principle constitutive of the polis, is submitted to this new criterion, for every process of transindividuation that is alētheia.
It is only because of this common origin of the polis and the apodictic epistēmē that we can refer to scientific ‘law’. And this is also why, in proclaiming the end of theory and science, Anderson is in fact asserting the end of political psychic and collective individuation – which is not without an echo in the initiative of Peter Thiel, founder of PayPal:
[O]ne of Facebook's earliest backers […], he believes that artificial countries built in international waters on oilrig-type platforms could hold the key to humanity's future.
The aim is to create communities that would be run according to extreme laissez faire ideals […]. The first step will involve a pilot project, off the coast of San Francisco.62
The individuation of political public space is conditioned by the formation63 of each citizen in the skholeion, where citizens accede to the letter (that is, as we shall see,64 where they organologically re-organize their organic cerebral organ). Citizens, by forming themselves à la lettre – through this interiorization of the letter (by reading) and through its exteriorization (by writing), an internalization and externalization that require the acquisition of this competence as a new automatism written into the cerebral organ through learning, through an apprenticeship – can access the consistences that theoretical knowledge forms, and thereby dis-automatize automatic behaviours, whether biological, psychic or social: if consistence is what makes dis-automatization possible, it is accessible only on the basis of an automatization.
Tertiary retention is now digital, and literal as well as analogue tertiary retention have themselves been integrated and reconfigured digitally, thereby opening up another experience of both reading and writing. Given this, to think automatic society – that is, a digital automaton placed in the service not of calculations that disindividuate and disintegrate both the social and the psychic, but of a calculation leading the subjects of this calculation to dis-automatization65 by expanding the experience of individuation beyond all calculation,66 that is, always passing through a form of sacrifice67 – presupposes rethinking law as such. That is, it presupposes rethinking the constitution of the law, in the sense that we refer to the law in constitutional assemblies, and in the sense that a constitution constitutes the basis and the promise of a new process of psychic and collective individuation.
The hyper-industrial state of fact takes what Deleuze called control societies, founded on modulation by the mass media, into the stage of hyper-control. The latter is generated by self-produced personal data, collected and published by people themselves – whether knowingly or otherwise – and this data is then exploited by applying intensive computing to these massive data sets. This automatized modulation establishes algorithmic governmentality in the service of what Crary calls 24/7 capitalism.68
Automatized modulation and algorithmic governmentality are an issue for debate in all manner of forums, under many other names and in very different terms, whether at the Organization for Economic Cooperation and Development, in the European Union or, in France, at the Commission nationale de l’informatique et des libertés or the Conseil national du numérique, and so on. Parliamentary commissions have been set up in Germany and France on these subjects. Many themes emerge, problems that arise one after another from the growth of the data economy, crowdsourcing and automatized artificial crowds, amounting to so many problems posed by hyper-control.69
The coming integration of traceability and robotization remains greatly under-anticipated, and its global consequences remain seriously underestimated, despite recent studies that show that
robots could take over half of our jobs […] within two decades […]. According to two researchers from Oxford, Carl Frey and Michael Osborne, […] 47% of American jobs are in the next two decades likely to be occupied by robots […]. Most workers in transport and logistics, as well as most administrative jobs and factory workers, are likely to be replaced by machines. This is also the case […] for a substantial proportion of jobs in the service sector […]. [For] Jeremy Bowles, in Belgium, 50% of jobs could be affected. The percentages are similar for our European partners.70
Indeed, Jeff Bezos, founder and head of Amazon, announced in May 2014 that his company would be filling its warehouses with ten thousand robots, manufactured by Kiva Systems, which it acquired two years earlier.
Despite these facts, and the declarations of Bill Gates, whose statements on 13 March 2014 to the American Enterprise Institute led to the headline, ‘People Don't Realize How Many Jobs Will Soon Be Replaced by Software Bots’,71 which the International Business Times chose to refer to as the ‘Robot Apocalypse’,72 we in France and Europe continue to reaffirm unemployment reduction targets – an omerta on reality that can lead only to political discredit, to the collapse of the confidence that is considered so precious, and to serious social conflicts.
Given that the consequences of robotization in the digital context of full and generalized automatization have not been properly anticipated, the risk is that these things may occur very quickly and in disastrous conditions. The Pisani-Ferry report in particular is highly negligent on this level – despite the warnings from Le Journal du dimanche of the likelihood that in France three million jobs will be destroyed in the next ten years.73
By revealing the illegal practices of the National Security Agency (NSA), Edward Snowden has exposed a global and subterranean logic and put into perspective the imperative need for an alternative, by sounding the alarm about this situation, dominated as it is by lawlessness. He has thus placed into its true context the question of traceability and of the automatization that it requires, and that it immeasurably amplifies. But if with these revelations the state suddenly returned to the forefront of consciousness – an American federal government that appears to operate systematically outside of any law – these revelations also and immediately functioned as a smokescreen concealing the true situation and the real question of law.74
More than the surveillance conducted by military and police authorities, the fundamental issue is the privatization of data: this is what makes it possible to give up on the rule of law in the name of reasons of state. The NSA network surveillance scandal was possible because there is no law that truly governs the treatment and management of digital traces by platforms, and because in the final analysis it is law in general, in all its forms and as distinct from fact, that is dis-integrated in its very foundations by full and generalized automatization – which the issue of big data makes clear.
These questions cannot continue to be avoided and evaded for long, because full and generalized automatization is leading in an inevitable way to a decline in employment and to the effective collapse of the model of economic rationality that was established by Roosevelt and Keynes in 1933 on the basis of the technological shock that was the Taylorist and partial automatization implemented by Ford in the early twentieth century.
Generalized disintegration – of knowledge, power, economic models, social systems, basic psycho-relational structures, intergenerational relations and the climate system – is engendered by the automatized integration of the technical system, which is now thoroughly digitalized via compatibility and other standards, exchange formats, data formats, bridges, plug-ins, platforms, and so on, and including the whole set of elements of the hyper-industrial milieu, via radio-frequency identification (RFID) tags and other identification, tagging and auto-traceability systems.
The digital enables the unification of all technological automatisms (mechanical, electromechanical, photoelectrical, electronic, and so on) by installing sensors and actuators, and intermediating software, between the producer and the consumer. Computer-aided design (CAD) systems produce simulations and prototypes via computer-generated imagery and 3D printing on the basis of cognitive automatisms; robots are controlled by software that can handle separate parts tagged by RFID75 technology; design integrates crowdsourcing just as marketing is founded on network technologies and network effects; logistics and distribution have become remotely controlled systems operating on the basis of digital identification via the ‘internet of things’;76 consumption is based on social networking; and so on.
It is this complete integration of the technical system, via the digital, that enables the functional integration of biological, psychic and social automatisms – and it is this context that has seen the development of neuromarketing and neuroeconomics. This functional integration leads on the side of production to a total robotization that disintegrates not just public power, social and educational systems, intergenerational relations and consequently psychic structures: it is the industrial economy itself, based on wage labour as the criterion for distributing purchasing power, and for the formation of mass markets capable of absorbing the ready-made commodities of the consumerist model, which is in the course of dis-integrating – becoming functionally insolvent because fundamentally irrational.
All this seems overwhelming and hopeless. Is it possible, nevertheless, to invent, on the basis of this state of fact that is total disintegration,77 an ‘ars of hyper-control’ – for example, by updating Deleuze's position in terms of his support for an ‘art of control’?
In ‘Optimism, Pessimism and Travel’, Deleuze's letter to Serge Daney about control societies and the new powers of control that make them possible, he stated that we must ‘get to the heart of the confrontation’.78 This should consist in an inversion: ‘This would almost be to ask whether this control might be inverted, harnessed by the supplementary function that opposes itself to power: to invent an art of control that would be like a new form of resistance.’79
To invent or to resist? We shall return to this hesitation. The supplementary function involved in this statement by Deleuze comes from Daney's reference to the Derridean notion of the supplement, applying it to television.80 And this is indeed a matter of a politics and a philosophy of the supplement, which, as we will try to show in the second volume, also passes, and now more than ever, through a philosophy of automaticity as well as of invention – much more than of ‘resistance’.
Invention is in Deleuze what creates an event and bifurcation starting from a quasi-causality that inverts a situation of default.81 Here, it is art that invents – an ‘art of control’. Quasi-causality is clearly a kind of therapeutics within a pharmacological situation: the ‘supplement’ capable of ‘inverting’ through its ‘quasi-causal’ logic is obviously a pharmakon. And I have recently recalled that Difference and Repetition thinks in terms that are fundamentally pharmacological.82
Invention is today the challenge of the struggle against the state of fact that is a total dis-integration of the ‘dividuals’ that we will have become, a situation about which Edward Snowden sounded the alarm, according to Glenn Greenwald, who declared on 27 December 2013, at a meeting of the Chaos Computer Club of Hamburg:
It is possible that there will be courts that will impose some meaningful restrictions [on NSA and American espionage conducted via digital networks]. It's much more possible that other countries around the world who are truly indignant about the breaches of their privacy security will band together and create alternatives […]. Even more promising is the fact that large private corporations, Internet companies and others will start finally paying a price for their collaboration with this spying regime. […] But I ultimately think that where the greatest hope lies is with the people in this room and the [technical] skills that all of you possess.83
Greenwald is declaring, then, that the answer lies in the hands of the hacktivists. This assertion is only partially right. What is absolutely correct is to posit that today the question of organological invention constitutes the categorical imperative common to all those struggles – juridical, philosophical, scientific, artistic, political and economic – that must be carried out against this state of fact and for a state of law, and where this is a matter not of resisting, but of inventing. What is not correct is to believe that this organological question is an issue only for hackers – even if it is clear that they have in this regard been pioneers and important contributors.
In addition, there is no doubt that art has a distinct role to play with respect to invention in relation to the organological in general. But Deleuze is far from clear about this, and he conceives this art of control much more in terms of resistance than of invention – where the latter is always, in the aesthetic field, in one way or another, organological. In other words, it is always a question of inventing technically or technologically, and not just artistically. But Deleuze is generally unconcerned with thinking technicity, or cinema, or the pharmakon, and even in his reading of Foucault he has little interest in technologies of power.84
In the field of art, and of cinema, of this industry of dreams that art becomes with cinematography, it is with Jean Renoir that we must think organological invention – for example, when he analyses in the history of cinema the artistic meaning of the passage from orthochromatic film to panchromatic film, concluding that ‘artistic discoveries are almost a direct result of technical discoveries’.85 The question of cinema and of its technicity is a jumbled patchwork, in a way typical of the metaphysical traps that arise whenever technics is in question – including in Deleuze and in Derrida.
Cinema is a becoming that is obviously technical. But cinema is also the very possibility of dreaming: it is an essentially psychic becoming.86 And socialization always presupposes the convergence and projection of psychic desires via technical inventions, that is, via new forms of tertiary retention, which form stages in a process of grammatization – of which full automatization is a bifurcation of unprecedented scope.
These questions are those of arche-cinema,87 which lie at the heart of my interpretation of the allegory of the cave.88 As Jean-Louis Comolli recalls, André Bazin stated that ‘the cinema is an idealist phenomenon’,89 referring both to L’invention du cinéma by Georges Sadoul, a Marxist and materialist film historian, and to Plato's Republic: ‘The concept men had of it existed so to speak fully armed in their minds, as if in some platonic heaven.’90
Now, this is totally confused:
This metaphysics, which is both Marxist and Platonic, relates to what Daney called photology.91 It results from a complete misunderstanding of what is at stake in grammatization and, more generally, in organological becoming, a question posed for the first time in The German Ideology – a text that must be revisited in relation to Derrida's theory of the trace. But the Derridean theory of the trace is not itself sufficient to effectively and concretely think organology and the supplementary invention from which it proceeds as the historical becoming of a threefold individuation that is inextricably psychic, technical and social.
Cinema is what, as arche-cinema, begins as soon as technical life starts to assemble gestures, which are already signifying chains, gestures through which it metastabilizes, by what it makes, ways of doing things, forms of savoir-faire, work-knowledge – whatever is thus made being a tertiary retention, that is, a mnesic exteriorization that operates through the spatialization of the time of a gesture.
It is obvious that this assemblage or montage is anticipated in dreams (and this is what so clearly disturbs the thinking of both Bazin and Sadoul). Arche-cinema is the inaugurating regime of desire, consisting in a dis-automatization of instinct that, becoming detachable – like technical organs, which are themselves always in some way fetishes – is generated from the drives contained by desire. This is so in two senses of this word ‘contain’: the drives are borne within desire as its dynamic element, but their completion is curtailed or limited by being ‘diverted from their goals’.
What occurred in 1895 was the culmination of this primordial technology of the dream that is arche-cinema, and the analogical continuation of a process of grammatization that has not yet come to completion, which probably began in the Upper Palaeolithic, and which has allowed (from 1895 until today) this arche-cinema to enjoy a new epoch, that of the so-called cinematographic art and of the film industry – which then shapes the twentieth century, as seen and reflected upon by Jean-Luc Godard in his highly organological studio.92
What is missing in Deleuze, as in all philosophy, epistemology and most of so-called aesthetics, is an understanding of the stakes of tertiary retention, that is, of technics. But this absence is also found among jurists, and is yet more common among economists – and even anthropologists. It is towards conceiving the role of tertiary retention in the formation of knowledge, and doing so starting from the crucible constituted by total dis-integration, and towards thinking the quasi-causal inversion that all this requires, that we must now devote ourselves – and sacrifice (time).