6
THE CONCEPT OF AN ARTIFICIAL PHILOSOPHY
FROM ARTIFICIAL INTELLIGENCE (AI) TO THE IDEA OF AN ARTIFICIAL PHILOSOPHY (A PHI)
LET’S now approach the problem from the side of philosophy.
We define a trajectory here: the one that goes from AI and cognitivism to an A Phi. AI exists; it must be taken as such beyond the often premature, restricted, or insufficiently radical philosophical critiques that were leveled against it. On the other hand, A Phi does not exist, except as a problematic Idea. Under what conditions would an A Phi be possible? Taking AI, which has precise theoretical claims, as a point of departure, if not as a model, we seek to establish A Phi’s theoretical conditions of possibility or, rather, conditions of reality. It is probable that this trajectory is not simple, that it is not a matter of effecting a transposition, a continuous transfer of the same technologies toward a different object, of continuously relaying the programming of psychological operations through the programming of philosophical operations. Two reasons are opposed to this. On one hand, the object to be simulated or reproduced (or even simply to be experimentally known) represents with respect to cognition a qualitative leap—a Decision—for which AI, in its very foundations, is perhaps not prepared. And, on the other hand, a reason in the opposite direction makes us think that AI is not—in its most intimate telos—what philosophy, what philosophical resistance, believes it detects within it in order to easily reduce it: a simple technological use of science. It may be that, under the name of AI, the scientific use of technology is also concealed, and this might change everything if science’s essence, as we argue here, is wholly different from technology’s.
Instead of proceeding to a simple extension and a transfer of technologies from Cognition to the philosophical Decision, a procedure that is only too obvious and too obviously “ideological,” the only method is to isolate the various parts at play and in struggle within AI and Cognitivism. Before we recombine everything, a rigorous description—which is founded in a radical conception of the autonomy of scientific thought—requires us to proceed critically: by dissociating, at least provisionally, the heterogeneous types of knowledge that divide the field of AI. None of these strands (science, philosophy, technology) is reduced to the other, especially not the strand of science. This method is the only one that allows us to evaluate what a machine can do from the perspective of thought. What can a technology do about thought? This question will find a rigorously grounded response only if we begin by excluding some concepts that are less problems than fantasmic and premature solutions, like the concept of “technoscience,” which refuses to analyze the respective roles of the scientific, the technological, and the philosophical. If, in opposition to this ideological way of proceeding, we “isolate” for example science in its real cause, within chaos in its solitude, then it is the economy of science’s relation to technology and to philosophy that will be transformed by this “nonepistemological” thesis about science. AI’s internal landscape will in turn undergo an earthquake that will remodel it quite differently and will make visible new potentialities of this discipline. There are, at any rate, several possible interpretations of AI in terms of the definition given of science and of its essence. 
So we can hope to pass from an AI to an A Phi—and in a way that is not a simple spontaneous generalization or a sterile and probably falsifying natural extension—only if we first take the trouble to reevaluate the respective relations of the philosophical, the scientific, and the technological.
Instead of immediately attempting a transfer from AI to A Phi and a simulation of philosophy, we will therefore elaborate A Phi’s most general theoretical conditions, its inescapable conditions. We will explore its vicinities, its contours, and its possible models. This problem completely remodels the traditional relations between the machine and thought; it requires the calling into question of their specular fascination. This is particularly true if AI, as it seems, has to introduce a scientific break—scientific rather than epistemological—within a tradition, and if, on the other hand, thought ceases to be defined in a cognitive or cognitivist way in order to acquire its properly philosophical or transcendental form or dimension.
We must, first of all, clarify the “stake” of the (still problematic) project or Idea of an A Phi, which serves as our guiding thread. This stake differs completely from the uses that philosophers have for a long time been making of informatics. The current uses do not bear—as they do here—on the very Idea or Whole of philosophy taken globally (which is possible only on the basis of science; this isolation of the Decision and its invariants as objects of science grounds the possibility of an A Phi). Far from putting philosophy in its very essence in question and in play, they bear on objects it has in common with other forms of knowing (the text and its manipulation). Thus they bear on local or restrained objects of the field of philosophy, not on philosophy as such; and at the same time on overly general objects, on generalities. Philosophy’s uses of informatics are both too restrained and not sufficiently specific. In fact, up to now, philosophers have used informatics for general tasks, which have not been specifically philosophical. Not even for a “textual hermeneutic” in the strict sense, i.e., the essential reelaboration of the philosophical concepts of hermeneutics and of the textual, but for textual givens that are deemed peripheral to thought. On this plane, the new tool makes it no doubt possible to partially eliminate the traditional theoretical dereliction in which the majority of philosophers—if not the most recent among them—have abandoned their own texts. But the intervention of informatics seems to be reduced to an interpretation aid, more than it serves to modify the very concept of interpretation. Aid to philosophical textuality: just an aid, and nothing more than an aid to the text.
More ambitious attempts have also been pursued: to demonstrate or simulate the ontological argument, for example, or to resolve well-defined problems. But these attempts to mechanically demonstrate philosophical thoughts are not related to the very essence of philosophy. In contrast, we postulate here the existence of this essence. In general, we can offer a few remarks on the concept of A Phi and its finalities:
a. Such a discipline, bearing on the essence or Whole of philosophy, on the philosophical Decision, can no longer be an intraphilosophical activity (founded on restrained presuppositions) that conveys and reproduces poorly elucidated ontological presuppositions. It can no longer tackle local problems exclusively. It must begin by calling into question what is understood by “philosophy,” “science,” “technology,” and so forth. This critique presumes that we can isolate philosophy’s essence in some way and dominate it through a form of knowing and technique that would not depend in its turn on philosophy.
b. The concept of A Phi requires then that we no longer have prefixed models drawn from AI (models of tasks, of procedures, of knowledges, etc.) and that we invent them gradually. In general, we maintain that if we are searching for an A Phi (and under this condition), then we can no longer posit, for example, the problem in terms of competition, performance, simulation, and reproduction of already fixed and finite tasks. This is AI’s fundamental posture or, rather, the most general interpretation given of it. In this case, one would take a philosophical problem (the ontological argument, the dialectical Hegelian synthesis, etc.) and would provide a logical analysis or version of it, either algorithmic or heuristic, in order to program it. This impatience leads one to proceed tautologically by simply tracing AI’s tasks and presuppositions. To reject this, we have to have a nonanalytical and nonpositivist idea of philosophy. We have to reject the existence of atomic or quasi-atomic problems that are susceptible to an algorithmic solution independent of their relations to others.
We should thus be wary of transposing AI’s tasks and performances as such to philosophy. Otherwise, we would postulate the homogeneity of philosophical problems to those of cognitive psychology and those of logic. It may be that the entry of philosophy’s essence on the scene will subvert the very nature of the tasks that one would assign to an A Phi. We should not be obsessed from the start with questions of the type: what purpose should an A Phi serve?
•    aid to the reading of texts (which reading?);
•    aid to the interpretation of texts (which interpretation? which textuality?);
•    aid to the philosophical decision (what can this mean?) and to the diagnosis of the “philosophy” involved in such a statement;
•    simulation and production of philosophical quasi systems (how does one recognize a system as “philosophical”?);
•    finally, “rigorous” demonstration (or invalidation) of some famous metaphysical argumentations…, etc.
This position of the problem (what purpose does an A Phi serve, which aid—demonstration of arguments, creation of systems, etc.—to the philosophical Decision?) must be suspended. In fact, the only perspective that will authorize this suspension and, at the same time, respect the autonomy of the philosophical Decision, without imposing an empirical reduction on it, will be the perspective of a transcendental science whose principles and conditions of reality were posited earlier, a science acquired by non-philosophical paths and thus capable of being a science of philosophy. The Idea of an A Phi is a milestone on the path that leads to this science.
To all these questions, only philosophy can in some sense respond, especially if they were elaborated and posed on philosophical presuppositions, as is inevitable, in a sense. In an A Phi, philosophy cannot be a simple passive object, since it intervenes, in any case, in the position of these tasks and objects. We will not say that these tasks must not be proposed to an A Phi. We will only say that the transfer of procedures, of levels of tasks and the very idea of task, the specular context of competition, competency, and performance, postulate the reduction of philosophical problems to cognitivist models of tasks, while the former already act in and are broader than the latter. An A Phi that reaches the height of philosophy, of its complexity and of its essence, can only transform the very tasks that one proposes to it and transform them in terms of the invariants of the philosophical Decision. It will generally refuse to proceed by analogy.
What legitimates at first sight the hope of an A Phi is AI’s increasing orientation toward the problems of life, everyday experience and common sense, toward problems of obscure, blind, and even subconscious reasoning. These problems of common sense, good sense, ordinary practice, and everyday usage are very close in their complexity to the problems of philosophy, much closer, at any rate, than logic, at least so long as we do not have an a priori “scientific” image of philosophy, which almost always means, in this case, logical, atomistic, and behaviorist and “rationalist” rather than real or transcendental. In this direction we can imagine an expert system whose goal would be to simulate and use, to assist, above all, the philosophical knowing, which, even when it is not the knowledge of an expert in philosophy, always has (in itself, at least) the nature of an expert knowledge (concrete, overdetermined, intuitive, quasi unconscious). This is why it is particularly difficult to formalize and represent it in its philosophical concreteness.
Thus we only have a guiding thread at our disposal: the Idea of A Phi, which leaves the situation open. We cannot constitute an A Phi by addition, accumulation, transfer, or analogy. We will have to invent it and not to trace it from what already exists. This is why we have to engage in the process of elucidating its theoretical bases, a process with multiple phases. And perhaps the final result—what we called an A Phi in a slightly “decalcomaniac” way—will hardly resemble what we imagined at the start.
ELABORATION OF A PHI’S CONTENT AND TASKS
This formula covers a triple program: 1/ the inventory of the traditional critiques of philosophy against AI; 2/ the description of the spontaneous philosophies that sustain AI; 3/ the—problematic—extension of AI toward philosophy, the Idea of an “artificial philosophy” (A Phi). What grounds this program, which is inscribed in the vaster program of a science of philosophy?
Instead of describing AI’s codified practices, we will seek its intimate goal, its telos, in view of extending up to philosophy what is only outlined within it. This telos seems to be the following: AI corresponds to a scientific “rupture” or “revolution” in the problem of a science of thought, an experimental science with a technological basis. A whole other thing, by consequence, than recipes for simulating thought. This rupture has precise historical and mathematical conditions, in particular the invention of new logical, mathematical, and technological means that allow thought to be reduced to reasoning and reasoning to arithmetic. This rupture defines an upstream and a downstream. Upstream: the old philosophical and fantasmic project of a (specular) simulation of thought by the machine as well as a simulation of the machine by thought.
The equation thought = machine has to be understood as a restrained mode of the great founding equation of metaphysics and of the philosophical Decision in general: thought = the real; logos = being. It would be easy to demonstrate that the equation thought = machine underwent, on the plane of its most general presuppositions, exactly the same vicissitudes and the same history as its metaphysical matrix; that it is capable of receiving the same interpretations; finally, that it implies the same specular and circular fascination between its terms, which are more or less deferred or delayed depending on whether one moves toward contemporary interpretations and “deconstructions” of this equation. The dyad thought/machine can be deconstructed exactly in the same way as any other dyad. Generally, the reduction of the problem of the thought/machine relations to the matrix and invariants of the philosophical Decision is a preliminary task, which is absolutely necessary for starting to pose this problem correctly and to discern the inevitable Greco-occidental aporias within it.
Yet AI is announced as the chance of finally breaking this circularity, in which no term manages to determine itself and oscillates in an aporetic and amphibological way between its identification with the other and its repulsion. This is an attempt—more scientific or more technological in spirit, this must be examined carefully—to place the problem of these relations on a controllable and experimental terrain and to put an end to the game whereby thought and machine are mirrored in each other. With AI, this is at least the project of putting a science in philosophies’ place and thus changing the terrain, establishing a new type of rigor and a new type of relation to the real. Philosophers have to take this attempt seriously, if only because it proposes to remove from philosophical legislation one of its most fundamental objects or, better yet, the most general condition of a relation to the objects: intelligence or reason. In effect, under the name of “general intelligence,” of “intelligent action,” Cognitive Sciences (even beyond AI) propose nothing short of constituting a science of reason and perhaps even a science of thought; an experimental and technically armed science. This is their long-term ambition, their telos, and, from this perspective, they intend to assume the whole heritage of Human Sciences and to present themselves as their crowning. In the face of what is necessarily a danger for them, philosophers should stop believing that AI contains more artifice than intelligence. To believe that every intelligence can only be artful and Greek before being artificial and informatic is not perhaps the best means of fending off this danger.
This is why AI’s downstream—the scientific downstream—is more important than its philosophical upstream, provided we consider this scientific downstream in the new relations it can establish between itself and philosophy. If we extend AI’s and cognitivism’s telos a little further, and if we return to what they merely outline, it seems that their project can be radicalized, transformed but enlarged. We could, for example, consider them as sections of a cone whose base would from now on be philosophy itself, the philosophical Decision and its invariants, and no longer the simple cognition that is probably (at least in one of its aspects) nothing more than a restrained and transcendent concept of philosophizing reason, and whose opening angle or peak would no doubt be science, but, on one hand, nothing but science instead of the current blends of technology and science and, on the other, the Idea of science instead of the blends of particular sciences like logic, neuroscience, or cybernetics, particular sciences that in any case—and this is why we raise this objection against them—are treated as the paradigm of every thought and function surreptitiously with the transcendent claims of an essence of philosophical reason. Under the name of A Phi, which serves as our guiding thread, we thus try to clear out a possible path that would go from AI as it exists to an authentic science of the most deployed thought, i.e., of philosophy. Philosophy, at any rate, would only be one dimension in a science of thought whose pivot would be a science of transcendental, albeit non-philosophical science. A science of philosophy: such a project rules out that it is a new (Husserlian, Cartesian, etc.) avatar of the philosophy of philosophy, i.e., of the project that is partially realized in the history of philosophy.
We will be wary of unilaterally critiquing AI as philosophers (especially continental ones) often do. On the contrary, we will take it as a symptom to be analyzed and displaced, rather than as a ready-made model to be “transferred” or dogmatically and unduly extended to the philosophical Decision.
How should we proceed? In two ways. We will treat AI and Cognitivism first as compromise-formations between philosophy itself (its essence and its most general invariants) and some poorly elucidated philosophical presuppositions that serve as their basis. So much so that we will seek not only to critique, but to affirm them and to show that they deal with the essence of philosophy more intimately than they themselves imagine.
We will therefore treat them as another compromise-formation—in a new sense that remains to be determined—between their essence-of-science (the science (of) science presumes that science has an essence) and their own understanding of science, their auto-interpretation as scientific disciplines, their will or desire to present themselves as sciences.
Put differently, the strategy we pursue to “pass” from AI and Cognitivism to A Phi consists in opposing twice (to their autocomprehension and to their desire to be philosophy and to be science) what is considered here as the radically and rigorously thought essence. This essence is presented in two heterogeneous forms: the essence of philosophy and the essence of science.
1. To their sedimented philosophical presuppositions, unrecognized as such, and thus to their restrained autounderstanding of philosophy, we will “oppose” the essence of philosophy in order to transform these disciplines from this perspective and to render them coextensive with this essence, to deliver them from their ever illusory and restrictive (empiricist and rationalist) philosophical autointerpretations, which misjudge or repress the deployed essence of the philosophical Decision. The procedure consists in making visible, from inside and outside AI and Cognitivism, philosophy’s full and specific requirements, in particular its transcendental dimension of “decision.”
2. To their autointerpretation as sciences, in which they are thought as mixtures of empirico-rationalist philosophies and of particular empirical sciences (logical and mathematical science, neuroscience, theory of information, etc.), we will “oppose” a more radical concept and, in any case, the “thesis” of the transcendental consistency and autonomy of science-as-thought. Given its immanence (the criteria of scientific thought are nothing-but-transcendental criteria rather than empirical or rational), this concept, which is not acquired by philosophical means, but originarily from itself, will constitute the most difficult ordeal [épreuve] for them and will allow us to evacuate the surreptitious mixtures that are formed from philosophical empiricisms and/or rationalisms and from the essence of science.
We have in fact shown that there is an essence specific to science; a quasi “ontology” proper to sciences—at any rate, a transcendental thought that does not fall under the authority of philosophy and does not engender an epistemology.
What we will create will therefore be neither a practice nor an epistemology of AI. We will ask: under what conditions can AI become a rigorous science of intelligence deployed in its ultimate possibilities, i.e., a science of philosophy? What we thus propose is to inventory the conditions of theoretical production of a science of philosophy or of a (full or enlarged) science of reason, starting from restricted models of AI and of Cognitivism.
The fundamental condition is to restore science’s autonomy vis-à-vis every epistemological recuperation and thus to proceed to something other than an epistemological “rupture” or “revolution” in relation to science. AI suffers in its development from overly limited and encysted theoretical bases, as much on the scientific as on the philosophical plane. The passage to an A Phi entails disrupting AI’s internal economy (sciences, philosophies, technologies) and, above all else—this is the condition of everything—the economy of the relations between science and philosophy.
The implementation of this strategy assumes that AI and Cognitivism are treated in a complex and differentiated way.
• AI is apparently, and initially, treated as a model (perhaps a not yet elaborated but extant model) of a technical and scientific discipline of intelligence. In effect, it is necessary to yield first to appearances, i.e., to take AI and make it take another step, to extend AI to philosophy itself, to proceed experimentally and to see in any case—even if this is not enough—the irreducibility that subsists in the philosophical Decision that is subject to the informatic test. This extension of AI’s procedures toward the ultimate mechanisms and operations of the philosophical Decision (beyond a simple simulation of local reasonings and abstract statements) risks constituting a radical trial for philosophy (something like the heaviest weight). But it is inevitable for two reasons. First, it is better for philosophers themselves to intervene in the management of this experiment, which concerns them directly, but provided they know how to ascend to the last requisites of the philosophical Decision and do not confine themselves to shouldering once more, and naively, this or that “system,” which is adopted without prior examination. Second, it is inevitable that AI does not check its ambitions to simulate natural language, the most general mechanisms of reasoning, and, lastly, the specific knowledges of experts. A last ambition necessarily looms beyond “expert systems”; we must term it Artificial Philosophy (A Phi). Moreover, this ultimate extension of AI is not necessarily a matricide, even if it has the aftertaste of a matricide. It would for example be a way of experimentally verifying Heidegger’s thesis about the affinity of “metaphysics” and “technology”; A Phi represents in some sense the absorption in itself of occidental philosophy, which recovers itself through its ultimate technical possibilities. 
The fundamental problem of this first operation is to fix at its right level—i.e., at the highest level, that of the philosophical Decision—the trial to which AI is for its part subjected, in other words: the exact nature of philosophical thought’s operation, which it should be able to simulate or reproduce. From this point of view, there will be no dearth of naive individuals and hasty enthusiasts who, failing to push the analysis of what a philosophical Decision is sufficiently far and confining themselves to a superficial description of philosophy, will believe they have finally invented the philosophical robot… It is probable that the principle of these illusions is the following: one stops the analysis of the philosophical Decision too early, fixing it on the system of logical rules it can contain and constituting them into “negative” conditions of philosophy’s exclusion. On this basis, it becomes easy (even for a child) to establish a program of rules that prescribe the rejection or the avoidance of this logical code. But if the philosophical Decision is the rejection or avoidance of something, it is not primarily the rejection of logical rules; it is the rejection of ordinary and everyday language. And it is much more difficult to formalize the negation of everyday language (for example, of common sense) than the negation of logic’s rules, especially when this negation is no longer merely logical but also real or transcendental, as is the case in every philosophy that rigorously thinks its essence. It is tedious to have to remind each new premature attempt of this kind that the philosophical Decision does not obey (or reject) some rules without destroying or rejecting others (or without inventing others). It is this transcendental mechanism that AI should be able or unable to reproduce. All the rest is a phantasm.
• But this first operation is still an appearance, even if it is objective. If we want to pose the problem of a massive intervention of informatics, massive and principial [principielle] in the most fundamental mechanisms of the philosophical Decision, in the very operation of philosophizing or thinking, then AI has to be taken, not as a model to be incrementally extended and generalized in an abusive or “ideological” way, but as a symptom to be analyzed and displaced. No doubt as an index of tasks and a reservoir of procedures and inventions, but perhaps also as a symptom, that of the relations between informatics (i.e., technology combined with science) and philosophy (which is simply presupposed in this case). Truth be told, if we exhibit AI’s philosophical presuppositions (this exhibition is, at any rate, necessary in view of an A Phi), we should also exhibit its scientific presuppositions and flatten all these components. The path toward A Phi will be arduous and complicated.
A PHI’S STAGES
Under the name A Phi, we can imagine several practices that we have to select from—three concepts: A Phi I, II, and III.
1/ A Phi I: a simple extension of AI (of its procedures, its finalities, its concept of tasks, its autointerpretation in general) to philosophy. A Phi is then an extension and an outbuilding of AI. This is a pre-dominantly techno-logical, rather than scientific interpretation of A Phi and, to be sure, of AI before it. Techno-logo-centrism is not exclusive within this interpretation; it is dominant. This extension of AI is spontaneous and stems from its autointerpretation as technoscientific mixture. It is a rule that AI and Cognitivism, left to themselves, would interpret themselves starting from science and technology and would consider them united in a new discipline. From our perspective, this autointerpretation of AI and its spontaneous extension to philosophy are specular theoretical effects of the technoscientific mixture in the mode in which it is ordinarily practiced, i.e., of theoretical interpretations that stem from its dominant technological side. In other words, the technoscientific mixture is reproduced in the theoretical mode as mixed or determined essentially by its technological side and understood in a nearly exclusive way through it. This mixture is not then rigorously analyzed—we should say “dualyzed”—on the basis of the inequality of science as determinant and technology as dominant. AI’s immediate and spontaneous extension to philosophy stays within the symptom and is confined to reproducing it without taking the trouble to analyze it. We may recognize here the majority of attempts at A Phi that have already been undertaken or will be undertaken. The result is a “restrained” A Phi, in which the devalorizing models of existing AI are extended to the Philosophical Decision (PhD) and falsify it; and, as an effect of this technologization of the PhD, we will have not a scientific, but a technological limitation of the possibilities of the new discipline.
2/ A Phi II: an extension of AI, no longer as a technoscientific mixture that interprets itself more as technological, but as uniquely technological. In a sense the mixture begins to be shattered or dualyzed, but for the benefit of the most affirmed techno-logo-centrism. It is philosophy, in its Nietzschean or Deleuzean form for example, that attempts or could attempt this philosophical autointerpretation of AI’s technoscientific mixture as purely technological (the reduction of science to technological connectivity) and attempt to extend this interpretation to the PhD. Obviously the result would be tautological. If the pure technological syntax is conflated, as it seems, with the PhD’s syntax in the Nietzschean regime (finished or autoaffirmed metaphysics = finished techno-logy), then the simulation by AI, understood as absolute technology, produces no gain. And, conversely, every PhD is already an artificial autosimulation: PhD = A Phi; every philosophy is artificial. This result can also be read as a generalized A Phi in the technologocentric mode. In general, philosophy is no longer an empirical machine, but is exalted as an overmachine.
3/ A Phi III: an extension of AI whose mixture is now “dualyzed” on the basis of the primacy of its scientific side, which is no longer understood technologically and/or philosophically but starting from itself, or as the real basis of the technophilosophical superstructure that belongs, at any rate, to AI. This is the longest path. It no longer corresponds to a spontaneous, uncritiqued, and “ideological” extension, which leads to a technological overinvestment of AI. Instead, it leads to a transformative extension of AI itself before constituting an extension of the PhD. This transformation is complex and has been sketched out. It produces the most innovative and radical A Phi. Nevertheless the result will be less an “artificial” philosophy—it is difficult not to think of this expression as a simple metaphorical extension, i.e., still under the technophilosophical authority—of AI than a science of philosophy, which will use the technological representation of philosophy as a scientific and no longer technologocentric ingredient. What will thus be founded is less a philosophical avatar of AI than a pre-dominantly technological science of the PhD.
This third solution thus makes science intervene twice and no longer a single time, as is the case in AI’s autointerpretation with the help of the concept of “technoscience”:
• A first time in the use of “reduction” exercised on the PhD, a use that is still philosophical in nature. Science is not exhausted in this function, but we can always consider and draw from it a particular use, the one it acquires when it is technologically realizable, programmable, or usable within the limits and under the domination and anticipation of the technological a priori. This technological side of science (not its technical apparatus, but its use or its investment in technologies) can be easily interpreted in philosophical terms, in terms of the PhD (for example, the Cartesian, Hegelian, Nietzschean…techno-logics), as the technological itself. Moreover, it can be used for the purposes of reduction (“transcendental” or not) in the same capacity as any PhD vis-à-vis empirical phenomena. Applied from this angle to the PhD itself and as such, this use of science is the equivalent of the PhD’s autoheterocritique. It quickly reaches its limits and slips into tautology, since the power of science is then the very power of philosophy. The philosophy of philosophy and the technology of technology extract only their own invariants; they autoextract themselves as invariants. This use of science in terms of a (techno)philosophical reduction is not very fertile, but it is the only use authorized in A Phi I and II, and especially in the latter.
• A second time in a scientific-transcendental use, which is always exercised on the PhD, but such that the type of science’s transcendental truth is no longer, as we have seen, the one that philosophy knows. Already, this “nothing-but-scientific” use no longer represents on its own a first reduction or a first technophilocentric use of science. Furthermore, science in this use, which is nothing-but-transcendental and is also not “meta-physical” or techno-logical as in the preceding case, can act on the PhD without insidiously exploiting its procedures and operations, without simultaneously borrowing its essence. Its nothing-but-scientific efficacy can be formulated as an actual or already complete reduction at the moment when the PhD is affected by it and attempts to resist it by thinking it in its own way.
In its practice and its current theoretical bases, AI is a technologocentric autointerpretation of science, a technophilosophical requisition of its essence, a science that is required under technoexperimental conditions and manipulative finalities, an attempt to manage thought, which is founded on a prior management of science. AI responds to the criteria of science, of scientific thought, only through the criteria of technological production—it is a philosophy disguised as a science by way of engineering.
With A Phi III, we are dealing with something else entirely: instead of applying to the PhD a science that is already interpreted by technophilosophy and that already presents itself in more or less restrained and dogmatic or enlarged and “thinking” autointerpretations, we will apply to it a science reduced to its essence of science and to its transcendental power to manifest the real through the production of knowledges.
Whereas A Phi I and II are subproducts of AI, i.e., of science’s technological requisition, A Phi III corresponds to a real surpassing of AI toward a science of the PhD, a science of philosophy as much as of technology, and in any case a founding discipline of every subsequent attempt to simulate or artificialize philosophy and no doubt “cognition” as well. From this point of view, the science of philosophy will be to A Phi what science is (really, outside every technologocentric autointerpretation) to AI.
A science of the PhD will thus pass through the two typical phases of A Phi III. In relation to an explicitly transcendental philosophy like Husserl’s, what we call science or the nothing-but-transcendental no longer needs to proceed through an operation of reduction. Only the philosophical use of science turns science into such an operation. So much so that if we distinguish, as in Husserl, two levels of science’s intervention, they will not have the same content and will be shifted a notch in their relation to the levels of “Phenomenology.”
Whereas in Husserl science, requisitioned as philosophy, culminates in a transcendental reduction (limited to certain philosophical positions and not yet extended to every philosophical positionality and decisionality), here science, which is no longer philosophically requisitioned, begins from the start through the efficacy of an already actual reduction. And, even as “nothing-but-science,” it no longer proceeds through an operation (decision-position) of reduction. A science of philosophy (and therefore an A Phi founded on this science, i.e., an A Phi that uses at the interior of scientific representation the technological representation of philosophy) is no longer, properly speaking, a classical reduction of the PhD; it is its “dualysis,” its in-differentiation, its unilateralization. A reduction would remain circularly caught in what it must reduce—this is the common lot of the PhD—but science has at its disposal an other-than-philosophical causality on philosophy. Philosophy, as the simple material of a science, is included in the “real-object” or subjected to the “objectivity form” peculiar to science; it ceases to be considered only in an empirical way, from itself in this case. To this material (the PhD or a particular PhD), a technological simulation can already belong, an “artificial” representation or production, local and perhaps even global, but it will no longer be determinant of the very essence of scientific objectivity as is the case in A Phi I and II.
Seen from A Phi III, these three solutions represent stages in the path that goes from AI to a radical science of philosophy and of cognition. The point of departure is the technoscientific mixture of current AI, an autointerpreted mixture in a technophilosophical and thus not-yet-dualyzed mode. A Phi I’s attempts and “results” can eventually be included as materials in this point of departure.
The second stage begins with the analysis of the mixture or with the dismembering of AI into its components. But this analysis is carried out from the perspective of technology alone and presupposes its authority over the whole, whose unity—a technological or unitary unity—is reconstituted at science’s (now complete) expense. Here again, A Phi II can be integrated into the materials and is now no more than one stage in the production of A Phi III’s concept.
Finally, the last stage dualyzes AI’s mixture by constituting science, no longer as one “side” of the mixture, but as the real basis of the technophilosophical “side,” the basis that founds a science of this side. A Phi III (what we call non-philosophy, in other theoretical and more purely philosophical contexts, or the New Technological Spirit in theoretical and more purely technological contexts) is not the truth of A Phi I and II, the stage that would recapitulate them in a superior unity. It is, on the contrary, their radical “analysis,” their dualysis, their manifestation as symptoms, and this manifestation is enough to denounce them as symptoms or mechanisms of the self-defense of philosophy and technology against science. This dualysis of AI amounts to reinscribing it into the essence of science, into their special “relation,” which is the relation of determination-in-the-last-instance of AI by science.
AI appears then as a restrained, autointerpretative—and thus predominantly technophilosophical—form of A Phi III. In order to access A Phi III, we have to pass through the science of philosophy/technology, i.e., we have to first resolve the crucial problem of the unity (of the type of unity) of technological causality and the specifically scientific causality (of science as such, not of this or that local knowledge) in a “real” or concrete machine. We can only resolve this problem by giving a determination-in-the-last-instance of technology by science, rather than a technologist interpretation of science.
The general strategy of this attempt resides in a double restitution of the essence: return from the restrained concepts of the technological and the philosophical to their essence and return from the technophilosophical interpretations of science to the essence of science. A Phi III is then the new synthesis of technophilosophical causality and of scientific causality. Is it a matter of a special “machine” (in the intratechnological sense) capable of this combination? Or, rather, is it a matter of a structure that combines the real basis of science and the most powerful machine that exists, the PhD as autoaffirmation of the machine or as “overmachine,” which uses (beyond informatics) information that is neither continuous nor digital, but the synthesis of the two? Such a machine could “simulate” the technologico-philosophical as such, and therefore every local, less universal, machine or the PhD—this would in fact amount to an autosimulation. But it can only do so on condition that it and its operation are included in a scientific representation of the technological or of the philosophical and that their autosimulation is subjected to the enterprise of their rigorous scientific description.
This double extension must be understood as follows. On one hand, the rational is represented by the technophilosophical side. A machine is rational; it does not necessarily have a universal rationality, but it has a local and restrained one. The only truly universal rationality is the philosophical and/or the technological. There are thus multiple and distinct a priori forms of rationality, which do not necessarily coincide: the rationality of a machine with logical programming, the rationality of the brain, the rationality of games, etc. Simulation is not necessarily possible from one rationality to another, from one region or mode of the PhD to another, unless it passes again through the PhD itself as the invariant essence, which is the most univocal rationality.
On the other hand, we have the side of the “real” or the real basis of A Phi III: it is science, which relies on reason, but is not reduced to it by its essence, which uses the technophilosophical by determining it in-the-last-instance. A Phi III is therefore not an arbitrary machine. It is what a machine that simulates thought becomes when its power of simulation is radically potentialized until it turns into a general autosimulation and when its scientific basis is phenomenally exhibited as such. Instead of a machine or even an overmachine that simulates thought, A Phi III is a science of thought that uses the technophilosophical autosimulation of thought as a simple procedure of scientific knowledge. It is less a machine or a technology of thought than a pre-dominantly technological science of thought. It can only be controlled technophilosophically—rather than logically—by its “superstructure” of technophilosophical representation. By its real basis, it does not pertain to an operation of control because it is transcendental or index sui. Once more, A Phi III can only be reached on the basis of the rigorous dissociation, the dualyzation of the technophilosophical and the scientific as two heterogeneous essences: that of ontological objectivity and that of transcendental reality. The concept of “rational machine” expresses their confusion as much as their positivist flattening, in the same way as the concept of “technoscience.”
These clarifications constitute the grounds and limits of the concept of A Phi in its most rigorous and most complete form. We have to maximize AI’s grip on the PhD and on the whole surface of philosophy’s objects and operations; we have to potentialize its technological and philosophical procedures while knowing that, from this first point of view, A Phi encounters a de jure yet mobile limit that it can only displace without destroying. This limit is internal: the PhD itself as relative-absolute limit or as invariant (tendency-limit). But AI’s extension and potentialization into A Phi discovers—outside it this time, but inevitably—an absolute limit, i.e., something other than a limit: a determination-in-the-last-instance, a causality of science on the technophilosophical side. Once the philosophical critique of AI is deployed, once it is extended to the essence of the technophilosophical, it is then necessary to extract AI’s real-transcendental or scientific kernel and to renew it in its essence as well. This double operation, which is asymmetrical by its procedures and its finalities, founds a rigorous science of philosophy and of cognition.
We still have to show that Generalized Fractality allows us to resolve the problem of a science of philosophy and thereby of an Artificial Philosophy.