Roberto Diodato
Following Jean-Paul Sartre’s analysis, we can say that any consciousness of an image is consciousness in its nonrealizing function, which has its noematic correlative in the imaginary. The “image” is therefore clearly ontologically distinguished from “perception.” Conversely, in the virtual environment, the sense of “having an image,” as distinguished from “having a perception,” is not self-evident, since it is not clear what can be called an image: the virtual body-image, insofar as it is intentionally constituted, is structurally ambiguous. In a virtual environment, it is not true that what is conventionally called the image is immediately given as such to reflection; rather, perception and imagination mysteriously fade into one another, because the percept, faced as perceptual intention, is not properly external: it is neither in consciousness nor in the world; it is external-internal, it is itself an image, which is not a thing-image of the world (a painting, a photograph, a cinematic image, a digital image like TV images), but a body-image, constituting itself in the very interaction.
Unlike what the traditional pictorial or filmic device gives its spectator, what any virtual device induces in the subject-spectator-actor is an incentive to motion rather than its suspension; it is a supercommitment of kinesthetic functions, a feeling of inclusion within the scene and of relationship with the characters involved, a feeling intertwined with a feeling of distinction, because the scene would not exist without his or her action. The voyeuristic drive is replaced by a drive of insertion, of limitless intervention, of omnipotence inhibited only by the limits of the running program. Thus the metapsychological regime of immersion, the processes of identification, of imaginary constitution and distinction of the self, change radically. We should think of aesthetic fruition as deprived of that distance which has been a condition of possibility of an artistically relevant form; we should try to think of fruition rather in the form of a backwash, that is, of the interpenetration of the user’s body into the body of the work and, vice versa, of the work into the user’s body, or imaginary. This implies an emphasis on the pathic and panic dimension of the relationship: being one with the work, which undergoes the effect of my presence and modifies, through the change caused by that undergoing, my feeling.
Saturation seems typical of the virtual environment: a filling effect, a presence effect, as if there were no absence and no distance. The spectator-actor does not present himself or herself as being in a state of lack (virtual only in the sense of potential): there is no effect of suspension of reality, but rather the presence of another reality, its very enhancement, rather than a protection from the real. What is cut off is the regression and suspension of reality and, ultimately, the possibility—always plural—of any identification with the represented object. This is especially true in the narrative arts, through the represented characters: the immersion in them is too strong to allow any projection or identification: the virtual spectator cannot identify with the subject of his or her vision-action. Now, this cut of primary identification implies a cut of the processes of secondary identification, which especially concern the different temporality of the virtual.
On one hand, virtual simulation, although it may have ludic or commercial aspects, gives the virtual artwork the possibility of disconnecting from reproduction as simulation of “real” experience, or experience of “reality.” Artistic production implies the accomplishment of the imaginary, which is always “unreality,” and implies the openness of that split in existence which digital technologies allow. On the other hand, we are confronted with a deep novelty, as the virtual artwork is neither reproductive nor reproducible: being interactive, it embodies the user’s action in a new way, because the user’s action is the very being of the artwork, its ontological structure. This notion of interactivity does not imply a subtraction but an enhancement of the unrepeatability and uniqueness of the artwork (which is no longer an “object” or an “event,” but rather an “object-event”) and makes the notion of authenticity more complex. Actually, the specific virtuality of the virtual body stresses the fact, implied in its very definition, that the digital interactive body-image never turns itself into an actuality in its algorithmic matrix; the virtual, contrary to the potential, configures itself as a problematic complex, a network of tendencies that requires a process of actualization.
Absolutely not; the question here is rather the overcoming, or at least the questioning, of the distinction between the aesthetic and the noetic, because the virtual body-image is both aesthetic and noetic: an emotional noesis, a thinking aisthesis.
Certainly the ontology of this strange object-event, and of its relationship to space-time, remains to be tackled. (What are the conditions of identity for a virtual body? What are its limits? In what sense does it have borders?) Its specific temporality and its connection to human and computer memories are likewise open dimensions that deserve analysis, as they directly affect an ontology of the virtual body, the difference between the virtual and the possible, and the human body–virtual body relationship. But if the specific character of the virtual is, as we have said, to be an intermediate entity between object and event, between thing and image, then virtual bodies represent a hybrid, interactive world, which can be visualized as a synthetic image, an immersive hybrid engaging the corporeality of the user and merging with the virtual body’s image; this hybridization, between the body of the spectator-actor and the virtual space in which it is immersed, is difficult to define in its singularity. It is a world in which the relationship is prior to substance, a world in which an ontology of states of things is not tenable: interactivity does not properly mean (only) interaction, or action between two given substances, but intervention or modification of the matrix that allows the virtual work to exist, and it also has an impact on the aesthetic-noetic dynamics of the technologically hybridized body of the user. This “fundamental” aspect of the interaction introduces, both in the work and in its fruition, an element of imponderability, unpredictability, and potential risk (both for the aesthetic success of the work and for its possible effects on the user), in such a way that the digital, or numeric, that is, “programmatic,” essence of the work introduces indeterminacy.
This is a big issue. My answer is tentative: I think that the ontology of the virtual body reveals that the fundamental category is the relationship (not the relation between given polarities, but rather what constitutes any polarity), even more than substance, but I could not say whether this holds for all entities in the world. In my opinion this can be seen in the analysis of other entities: the ones studied by quantum field theory, or what we call “person,” or what we call “artwork,” or the divine Trinity (which I think offers an ultimate model of the comprehension of reality).
* * *
In our age of new-digital technologies, novel relations between humans and technology emerge and consequently marginalize the role of humans. The new-digital technologies are in fact “no longer extensions or implants . . . but extroflections of the basic human functions that gradually tend to become autonomous and self-operating.”[1] The new-digital technologies, which are based on numerical outputs, therefore lead us out of the dual paradigm that has dominated reflections on technique: technique as a means of compensating for typically human adaptive deficiencies. Such theses, classically expounded by Arnold Gehlen[2] and many others, present technology as a “natural prosthesis,” as originally connected to a “human nature” itself hybridized with the artificial, as in the well-known thesis of André Leroi-Gourhan.[3] New-digital technologies go beyond the human, and rather place the human in need of reconfiguring herself or himself through them and substantially into them, as if around a center of gravity at once shifting and multiple. With reference to such a shift, Jean Baudrillard[4] has introduced the concept of “videosphere” as the overall meaning of the new-digital technologies, hypermedia, and computer systems. Such a concept is altogether rather simplistic and potentially misleading, since the new-digital technologies do not only revolve around television culture and technology but rather, and most importantly, transform processes of analysis and synthesis of and on the notion of appearance. In other words, the message is clear: new-digital technologies, rather than being instruments or explanations of “human nature,” seem to constitute on their own a new tendency of autonomous flows; perhaps the first expression of a previously hidden aspect of the physis, involving in their processes—through structuring and unstructuring—human cognition with its ways of perception, emotions and desires, social exchanges, and the whole body-mind complex. These technological tendencies result in their own increasing autonomy and thus make the human constituents eccentric to them. And in ways both crucial and critical, such tendencies do not submit themselves, as technology was traditionally thought to do, to the power of will and plan as laid down by human constitution. We are then forced to rethink the physis-techne synthesis in a new way, as the place of ethos, of human living, and not as a linear extension of human volitions in the form of tools. In order to understand the new-digital technologies and what they have brought forth as a novel ontological perspective of “telematics,” we need to focus not only on new-media technologies within their own technical capacity, but most importantly to examine the concepts of virtual and virtual body as involving human elements. On the one hand, virtual and virtual body are in general the conceptual basis for new information and digital technologies; on the other hand, they also function as the basis of artistic productions. The artistic productions are thus equally infused with technological and virtual elements which are neither immediately medial nor endowed with a specific communicative purpose. Instead, they produce the sense that appearance-communication can compete with any artistic production as traditionally or classically construed with respect to human cognition and intentionality.
By “virtual body” I mean first of all a digital interactive image, the phenomenalization, in the interaction with a user, of an algorithm in binary format, a writing operation whose sensible appearance hides and exposes at the same time the project rendered in the computational operations that constitute it.[5] The apparition of a grammar, such an image-language implies a characteristic spectrality that affects the visible-invisible relationship and structures the modalities of its use. From this point of view the digital image, which can be multisensory, is not simply an image-of, not only the mimesis of a thing or an image, identifiable or not, and so it does not have a simulated essence. On the other hand, it is not an icon or an original image either; it is rather a genetic relational form belonging to a multiple rendering system. The digital image is not, we could say, exactly an “image,” but rather a body-image, because it is made up of ordered sequences of binary units, or rather of character strings developing on different levels of a syntax which builds the coincidence of these strings and the sensible appearances, at present mainly sound or visual, but in general perceptible. Now we know that continuous undulatory phenomena are also rendered in these discrete sequences: a thin body of a noncontinuous world, a discrete world of data points that show themselves as fluidity and density and saturate perception. Informationally, or in the formal terms of a programming language, the virtual body is certainly an electronic body, and so (to use another metaphor) an atomic aggregate, but the digitalization process makes it peculiarly light: a complex of sometimes remarkable quantities of extraordinarily fast transmittable data (at the highest speed permitted by physical limits), it can assume various incarnations, both structurally identical and phenomenally different. As a hybrid entity, a body-image, its appearance, its existing as image, is essentially interactive. This is a delicate point to which we will have to return, but it at least allows us to exclude from the notion of virtual body those simply photographic or televisual digital images which permit only a passive action that does not affect the properties of their appearing and, most of all, on a different level of the meaning of virtual body, do not permit a retroactive interaction on the structure of the computer memory, in other words an incision on the matrix. The number of different degrees of interactive operativity is obviously very large, and so, correspondingly, is the range of meanings of the notion of “virtual body.” We will approach in detail the strong notion of virtual body that concerns us here because of the novelty of its ontological status.
We will now approach the specific notion of “virtual body,” first clarifying the qualities of the experience of the virtual; we will then define the concept of “virtual” and eventually the concept of “virtual reality.” First of all, the experience of virtual reality is multimedial and interactive. Multimediality indicates a peculiar representational richness of a mediated environment, which can be conceived as made up of two factors: breadth (quantity: the number of senses simultaneously involved) and depth (quality of the perceptions, or sensory information); interactivity designates “how much the users interact in the modification of the form and the contents of a mediated environment.”[6]
There are then different levels of multimediality and interactivity, and as these levels get higher, the virtual-reality experience becomes more and more immersive. We can then claim, with Oliver Grau, that “virtual realities . . . are in essence immersive,”[7] but at the same time we have to consider the paradoxical character of this statement, since physical and mental immersivity, implying the suspension of disbelief together with the identification of the body with the medium, does not coincide with simulation and is, in some ways, opposed to it. In other words, I think that “virtual reality,” precisely because it is immersive, should not and must not be confused with a tendentially perfect simulation of reality, with a simulation that cancels similarity by transforming it into identity (and so cancels itself as such), or with a teleologically definitive transparency of the medium: immersivity can happen, and does happen, but as the quality of an experience that cannot be confused with the one we call “real.” To justify this position, the same question can be examined from a genetic-constitutive point of view: the generation of “virtual reality” means the generation of the possibility of experiencing an environment (characterized as “environment” by a set of “virtual bodies” which are not bodies belonging to the environment or in the environment but bodies coinciding with it) which is able to produce perceptive experiences in the user. By “virtual-reality generator” we can then mean a machine that is able to make the user experience such an environment, to render the environment in a situation. So a virtual-reality generator could be conceived as a generator of possible sensory perceptions, and precisely as a generator of sensory perceptions (visual, auditory, tactile, olfactory, and so on) which are able to simulate an environment-situation that we could prephilosophically, and with sufficient accuracy, define as “real”: in other words, a virtual-reality generator should simulate that “perceptive faith” which seems to be presupposed in our daily dealing with the world. But I believe I have just given a restrictive and ultimately not very interesting definition of environment and, correspondingly, of virtual body, because it tends to equate virtual reality with simulated reality, considering the virtual as part of a simulation or of a mimetic project, for the following reasons: an environment that we define as virtual because of its capability to simulate a real situation turns out to be accurate insofar as it is able to respond in the desired way to each possible action of the user; its accuracy then depends not only on the experiences that users actually perform, but also on the ones they could perform. The evaluation of the “sufficiency” of this accuracy is a problem: can a reality with no differences be simulated? Can a “perfect illusion” be built? Supposing the user is able to make free choices, in the sense of the liberty of indifference, simulation is impossible, because such choices are not computable. If we stick to other metaphysical hypotheses, supposing, for simplicity, that choices are the result of an endless causal series (where the very idea of a series is reductive and inadequate), the simulation of reality will be the more efficient the more the computer is able to calculate the possible actions and reactions of the user, and consequently to pre-build potential interactions among the virtual environment-bodies.
So the more such computation tends to infinity and, correspondingly, such an algorithm is phenomenalizable, the more virtuality simulates reality. From this point of view, the virtual environment is an imperfect Spinozian machine, a set of relationships that constitutes a tendential coincidence of liberty and necessity. Such a coincidence would be possible only in an essentially uncomputable, endless causal network. So the virtual environment tends to produce the experience of a pervasive and persuasive immersion, but at the same time such an experience is relatively aware of its own particular ontological status: it appears as a tendential simulation, and not as a perfect reproduction, and in my opinion it is actually this limit, this void, this lack, that opens the artistically relevant possibilities of the virtualization of imagination.
Its particular interactivity is an evident characteristic of the virtual body, one which distinguishes it among the other types of digital images: the virtual body is an entity that phenomenalizes itself in the interaction. In some ways, interactivity is something the virtual body has in common with any other body, but in other ways it is a peculiar condition. To understand such peculiarity, and so to approach an ontology of the virtual, we must reflect on the concept of the virtual and on the difference between virtual and possible. As a matter of fact, in a general sense the virtual is one of the states of reality, and not its opposite. Nevertheless, the concept of the virtual can be better defined by moving from the concept of the “possible”: while the possible is conceivable as a constituted entity waiting to be carried into effect, the virtual configures itself as a problematic complex, a junction of tendencies imposing an actualization process. From this point of view, the virtual-actual process obviously does not correspond to the realization process of the possible, if the latter is conceived as simply giving matter to a pre-existing form, and in this sense as the constitution of a substance, however dynamic. In any case, the opposition with the (admittedly somewhat trivialized) idea of the possible helps to clarify the interactive quality of the virtual: since the virtual environment develops itself in the interaction with the user, virtual means a dynamic configuration of forces with an intrinsic tendency to actualize themselves in forms which are not totally preconstituted.[8]
The virtual environment we are speaking of, together with all its perceivable qualities (the set of colors, of sounds, of tactile densities, and so on), that is, the environment in which I have the physical sensation of being immersed, is nothing but the actualization of the content of a digital memory, the mise-en-scène of an algorithm elaborated in a binary system. This leads us to consider the relationship between aesthesis and noesis. We are actually facing the possibility of a reduction of aesthesis (as sensory perception) to computational terms, though this does not imply a reduction of the secondary qualities to the primary ones, nor even a possible reduction of the world to a number. We are speaking of an original and reversible solidarity between aesthesis and noesis, which expresses itself in an operative space of this sort: at one end there is a digital description in a computer memory, and at the other end a body outfitted with technological prostheses, with nonorganic extensions of the senses. The body of the user of a virtual environment is a complex structure, a subject-object emerging from a technological project; it is a nearly-cyborg body, similar to the ones pondered and experimented with by several artists, a body rendering itself as a phantasmatic and eminently active entity. There is a lively discussion of these themes among the theorists of virtual reality: in a certain sense, you can come to know a virtual environment only sensorially, through an eminently bodily gaze, but at the same time a virtual environment is, as we said, a mathematization of space, and its images are actualizations of algorithms. It is a paradoxical situation: the identity, the “self” of the user, is at the same time de-corporealized and hypersensibilized: to meet a “thin” body, you have to be outfitted with a “heavy” body; the abilities of the organic body must be technologically increased.
But, in my opinion, it is precisely this heaviness of the disembodiment that does not let us reduce the perspective of virtual vision to a subjective shot directed by ourselves, nor, once again, immersivity to simulation: the human body–virtual body relationship does not suppress corporeality by giving rise to a disembodied mind-gaze able to experience mental products that appear sensible only through technological prostheses. On the contrary, virtual environments, with their “heavy” bodies interrelating with “thin” bodies, tend to heighten the difference, and the awareness of the difference, from the usual body-environment relationships. The user is then conscious of perceiving an imaginary space; he does not have the sensation of experiencing a dematerialized reality. The user experiences a reality felt as “other,” different, and in some way similar to a product of the imagination. The possibility of manipulating one’s own perspective, making it become a place of experience, is matched by the possibility of learning by immersion, to the point of allowing, here too on different levels, the appropriation of points of view belonging to other users. This implies, radically and generally, a crisis of the stability of the abilities of one’s own body, and their redefinition through the relationships among technological prostheses and virtual bodies. In this way, at least an outline of the conceptualization of an ever-changing incarnation of the self is provoked, one which can be affected by the evolution of technology and of programming languages: the rethinking of the figure of the self as the hallmark of its movements, of its residual integrity as the medium of its transformations, of its possible borders in the passages of actions that constitute virtual space.
This leads us to study even more closely the ontological nature of the virtual, with Philippe Quéau reminding us that “the techniques of virtual representation are essentially numerical. Unlike the techniques that are mainly analogical, they do not participate directly in reality”;[9] they take part in it indirectly through the digitalization process, which is circularly made possible by those techniques. So virtual bodies must not be understood as representations of reality, but as realities built in a way that is essentially different from the way other realities are, since the latter are constituted by the circular participation of the living body in the world. Thanks to perception-vision, the world goes through the body and becomes an act, a movement of the body, which might be mediated by instruments of analogical reproduction, and so an image. Virtual bodies are rather “artificial windows giving access to an intermediary world.” Now, in what sense is a “body” a window, meaning by this an internal-external passageway? The metaphor of the window may work if understood in a nontrivial way: it is not a matter of passing through Alberti’s window, for the virtual environment is not (is not only, and is not essentially) a simulated reproduction of reality. The virtual body is rather an environment-window, a sui generis place where the internal-external relationship changes according to different standards, acquiring a relevant power. I will soon come back to the question from an ontological point of view; for the moment let us take a famous quotation from Kandinsky: “Each phenomenon can be lived in two different ways. We cannot choose these ways, for they depend on the phenomena—they are derived from the nature of phenomena, from two of their properties: Internal-External.”[10] As we know, this is true for our own body, but also for everything that appears to us in the way our body shows itself: a phenomenon can be lived, in some way, from a certain distance, it can be perceived as other, it can be world; but the same phenomenon can instead become part of our life, it can engrave itself in it, accomplish itself as its pathos and show in this way, in visibility, its invisibility. All this corresponds to common experiences, which are as selective as they are ordinary: something we perceive might engrave itself in memory and affectivity, might become part of that primary and indemonstrable inside, and might go back to the light of the common world through different means. Something else, the mass of perceived phenomena, at least consciously, might not. But Kandinsky did not only claim that the phenomenon can be lived in two different ways, internal and external, but that this can happen because external and internal are properties of the phenomenon, of the same phenomenon: since being at once internal and external is part of its nature, the phenomenon can be lived as world or as pathos. Now, I do not know whether this position can be maintained for what we generally call “reality,” but it works well for virtual bodies: in a virtual body-environment, in which space itself is the result of an interaction, the world does not happen as if it were standing back or keeping some distance.
It happens in the mode of a sense-feeling of immersion, and the body, since it is perceived as other, realizes the sense of its reality, of its effectuality, as a pathic and imaginary incision, as a production of emotion and desire, to the point where the sensation of reality transmitted by the virtual environment depends mainly on how efficiently emotions are provoked in the user. From this point of view “virtual reality can provide its own, self-authenticating experience,” but precisely as reality, as something different from the user, as an environment in which one can interact, as bodies one can manipulate. So the virtual body-environment acts as an intermediary not only as mediacy between the computer model and the sensory image, but primarily between internal and external, the strange place where the border becomes territory, whose ontological structure must now quickly be articulated.
As we know, one of the most discussed questions in contemporary ontology is the distinction between thing and event, and, correspondingly, the distinction between concrete and abstract. Now, in a virtual environment, what the user perceives as a thing is actually an event, the temporary actualization of something virtual existing, in its actuality, only as an interactive relation function. This leads us to think over the necessity of considering the concept of relation in an articulated way, and the notions of thing and event as relational bonds, without this implying a drift, because the virtual has an actuality of its own, even beyond interaction (it is real exactly because it is virtual). What we have to do is to articulate, at least briefly, the question of the object-event relationship, in order to point out a typical ontological mark of the virtual body as it is here defined. Such a relationship has mainly been conceived (if we decide, to ease our work, to overlook dialectical or neo-idealistic positions) as a form of relationship with two values: the event is an object (or more than one object) that changes. An ontology that admits events often conceives them as changes of an object, thing, or substance provided with some permanence; so it conceptualizes the event as relative to such becoming, even when the becoming object is not clearly identifiable. All in all, this is an inheritance from Aristotle’s ontology, which conceives substance, with its intrinsic dynamism, as a main category. If we start from this assumption, we will have to face the question of the symmetry of event and object, and of the possible conceptual dependence of the category of event on that of object, even if we conclude that the two categories are not conceivable separately. Now, the virtual body, even though it cannot be reduced to a representation, does not exist as a body except in interactivity; it is an interaction, an object-event: an action (a relation of interactivity) which is a body (a virtual body) because it owns the characteristics we usually ascribe to bodies. As time goes by, the virtual body persists through mutations of position, size, shape, and color, but only if certain conditions concerning its interactive nature are given. Virtual bodies are then (like, perhaps, bodies simpliciter) (relatively) monotonous events, but only if certain conditions are given. Pondering such conditions in the field we are interested in leads us to transform questions like “do things like changes exist?” into questions such as “what are the conditions of possibility of changes which are things?,” and so we are brought to consider the typicality of the virtual body (leaving out for the moment the question of the ontological difference between virtual bodies and so-called real bodies). In the case of a virtual body, the event is something unrepeatable, a concrete but thin individual (meaning by this an integrated system), constituted by the interaction of a human body (and so of a body-mind complex) outfitted with technological prostheses, with an electronic computer implemented with an algorithm (in turn rendered in a programming language). Does this concrete but thin individual occupy exclusively one place? If so, what place? Several parts of my technological prostheses, several sensitive surfaces of my body, a certain part of my brain, a computer memory?
In any case, it is a body which allows other bodies in its place; it allows, for instance, my body to pass through it, and if a virtual environment is a virtual body qualifiable as a structured, surfable set of virtual bodies, then a virtual body can contain virtual bodies in its own body, like shadows, rays, angels, phantoms. . . . Now, a virtual body occupies, supposing these words have an intuitive sense, a certain portion of space-time, but not in an exclusive way, because the virtual body happens in the space-time of a nonvirtual body; moreover it multiplies its temporal form: what is its time? Of course it happens at the moment of the interaction, but among its conditions of possibility, or better, in its being a real body, there is the fact of being previously written or engraved on a material base, in a memory.
So a virtual body is and is not itself in place and time, since its eventualization depends on the interaction with the user: can we now claim that reality is interactive in the same way? On this question, David Deutsch writes:
What may not be so obvious is that our “direct” experience of the world through our senses is virtual reality too. For our external experience is never direct; nor do we even experience the signals in our nerves directly—we would not know what to make of the streams of electrical crackles that they carry. What we experience directly is a virtual-reality rendering, conveniently generated for us by our unconscious minds from sensory data plus complex inborn and acquired theories (i.e. programs) about how to interpret them.[11]
What Deutsch affirms is nothing but another form of transcendentalization of the empirical: of what psychophysically affects our “direct” experience of reality, meaning by this the constitution of a fully sensed environment. Facing this situation would obviously require stating a theory of knowledge and an ontology. Here we only want to point out that the virtual body seems to have at least one quality which is different from those of the bodies we usually call (on a commonsense level and in the languages of theories) real. I would say that reality is not interactive in the same way as virtual reality, and that “real” bodies are not events in the same way as virtual bodies, since the virtual body avoids the external-internal dichotomy in a much clearer way than so-called real bodies do. Because of its discrete and interactive nature, the body coincides with its story; it is a process, but not only a sum of numerically different phases, for the weft of the body depends on the interaction. It takes place as an action making sense for a subject, and from such interaction it acquires identity. But it is a relative identity, and consequently a fluctuating one, since it is dependent. Certainly any body, if perceived by my body, is in a situation of interaction; but it seems that being an object, and an external object, it has the characteristic property of not being amendable. This means that I cannot, through a simple act of volition, make an object not be so and so, make it not be what it is: the external world would then be the world of the unamendable, to which all perceivable objects can belong (those in which we are now interested), but also nonperceivable ones. From a theoretical point of view, the situation is different for a virtual body: even supposing that it is possible to separate a “simple volition” from a movement or a perception, and considering that it will be possible, through sophisticated prostheses, to connect virtual bodies directly to the seats of nerve impulses, nothing will forbid amending a virtual body through a simple act of volition. Once this is said, we have to ask ourselves whether such an act is possible in its specificity only within the noninfinite conditions provided for by the matrix, or whether it is possible to implement algorithms permitting a retroaction on the matrix, that is to say a very strong type of interactivity, and if so, in what sense: a program that learns, modifies itself, and develops in its relationship with the user. Given the interactive nature of the virtual, I do not see the theoretical impossibility of this happening, and so I do not see the impossibility of producing a kind of intersubjective communication mediated by the computer memory, which would become, starting from a programmed base, a memory of experiences. Even if we ignore this possible development of the problem, we still have to consider that if unamendability is a characteristic necessary to objects belonging to the external world, then the virtual body does not belong to it. On the other hand, the virtual body does not belong to the internal world: the object-event which it is, is not something I have imagined or dreamed of; it is an environment which is surfable, by me or by other users, produced by a technology, and of such an environment I preserve the awareness of the difference from what I usually call “reality” (of which, as we have seen, a perfect simulation is not possible).
Finally, I would say that the virtual body is neither internal nor external; it is, if you wish, external-internal, considering that this synthesis is not a sum but something else, a proof of the ontological novelty of the virtual body. This leads us to consider it as an object-event, which can be interpreted, ontologically and with the related consequences, as a strange, relatively monotonous event allowing other bodies in its space-time; or as an object-event protracting itself in time according to a four-dimensionalist conception; or as (in a partially Spinozian way, though this supposes that time is an entity of reason and that the relationships among object-events—not between objects and events—are a form of immanent causality) a succession of entity-instants. This last position is interesting, for according to it the permanence of the object in dynamism is a cognitive illusion. This leads us to suppose that virtual bodies can be conceived as discrete not only in space but also in time, as temporal segments which are numerically different, and that their diachronic identity is potentially discontinuous.
The virtual body, in its appearance, namely, as virtual, sums up its story, the history of its phenomenalizing through a series of relationships. These relationships constitute a virtual environment that includes a human body endowed with certain prostheses. The virtual body-image is an ontological hybrid, a thing of the world that is jointly natural and artificial, a strange object-event which fits squarely into the space opened by the famous, powerful, and paradigmatic distinction made in Aristotle’s Physics, when he says that of things that exist, some exist by nature, some from other causes.
“By nature” the animals and their parts exist, and the plants and the simple bodies (earth, fire, air, water)—for we say that these and the like exist “by nature.”
All the things mentioned present a feature in which they differ from things which are not constituted by nature. Each of them has within itself a principle of motion and of stationariness (in respect of place, or of growth and decrease, or by way of alteration). On the other hand, a bed and a coat and anything else of that sort, qua receiving these designations, i.e. in so far as they are products of art, have no innate impulse to change. But in so far as they happen to be composed of stone or of earth or of a mixture of the two, they do have such an impulse.[12]
The virtual body, whose digital appearance obtains only in the interaction, is certainly artificial, produced by technique; and yet it also has, or rather is, an “innate” tendency to change, one independent of its “natural” components but contingent on its own “nature,” since it is structurally an event. In other words, we encounter an artificial-natural hybrid, a quasi-organic system of “living,” what we might conceive, in the language of physics, as a “dissipative system.”
1. Mario Costa, Dimenticare l’arte (Milano: F. Angeli, 2005), 44–45.
2. Arnold Gehlen, L’uomo nell’era della tecnica (Milano: Sugar, 1967), 10–11.
3. André Leroi-Gourhan, Il gesto e la parola (Torino: Einaudi, 1977), 283–84.
4. Jean Baudrillard, “Videosfera e soggetto frattale,” in Videoculture di fine secolo, L. Anceschi, ed. (Napoli: Liguori, 1989), 30.
5. For further reading please see Roberto Diodato, Aesthetics of the Virtual (Albany: SUNY Press, 2012).
6. Jonathan Steuer, “Definire la realtà virtuale: le dimensioni che determinano la telepresenza,” in La comunicazione virtuale. Dal computer alle reti telematiche: nuove forme di interazione sociale, C. Galimberti and G. Riva, eds. (Milano: Guerini e Associati, 1997), 75.
7. Jonathan Steuer, “Definire la realtà virtuale,” 75.
8. See Pierre Lévy, Il virtuale (Milano: Cortina, 1997), 130.
9. Philippe Quéau, “Les voies virtuelles du savoir,” in Costruzione e appropriazione del sapere nei nuovi scenari tecnologici, A. Piromalla Gambardella, ed. (Napoli: CUEN, 1998), 19.
10. Wassilij Kandinskij, Punto-Linea-Superficie (Milano: Adelphi, 1968), 7.
11. David Deutsch, La trama della realtà (Torino: Einaudi, 1997), 110.
12. Aristotle, Fisica, Libro II (192b8), R. Radice, ed., testo greco a fronte (Milano: Bompiani, 2011), 181–82.
Domenico Parisi
Robots as scientific tools are useful because they make it possible to understand “evasive” phenomena in precise terms. A robot is a theory of behavior expressed not in words or mathematical symbols but by using the theory as a blueprint to construct the robot. If the robot behaves like a human being, the theory incorporated in the robot is a good account of human behavior. And it is not an “evasive” account, because we can always open the robot and examine what happens inside it. However, robots can be “toys” which do not tell us much about human behavior and the human mind and, to avoid constructing “toy robots,” one and the same robot should be able to reproduce as many different empirical phenomena as possible concerning human behavior and the human mind. Art is one of the most important manifestations of the human mind. Therefore, in order to really tell us what the human mind is, robots should have art.
To understand what art is by constructing robots, we must construct both robots that create works of art (robotic artists) and robots that expose themselves to works of art (robotic publics, robotic audiences). Robots are physical entities, which can be either physically realized or simulated in a computer. We can learn about the human mind from both physically realized and simulated robots.
What is “known” about art is that some people use their (precious) time to produce acoustic, visual, and linguistic artifacts with no practical value, and other people use their time to go to concerts and art exhibitions and to read poems and novels. Can we construct robots which, spontaneously and not because they have been programmed by us to do so, produce artifacts without practical value and expose themselves to these artifacts? All human behavior ultimately has adaptive value—that is, it leads to an increase in the individual’s survival and reproductive chances. What is the adaptive value of art? Our robots should help us to answer this question.
Robots are useful scientific tools because they allow us to “operationalize” the meaning of the words we use, that is, to translate these words into things which we can observe, count, and measure. A robot can be said to have motivations if the robot does different things that lead to an increase in its survival and reproductive chances and, since the robot generally cannot do more than one thing at a time, the robot’s brain must “decide” which motivation to try to satisfy at any given time. All animals—even the small worm C. elegans, which has a “brain” of only some 300 neurons—have motivations, and they must “decide” which motivation to try to satisfy with their behavior at any given time. But, in addition, human beings have a mental life in the sense that, unlike the brains of nonhuman animals, their brain is able to generate its own sensory stimuli, both linguistic (talking to oneself) and nonlinguistic (remembering, imagining). And human beings may self-generate sensory stimuli and respond to these self-generated stimuli when choosing which motivation to try to satisfy with their behavior at any given time.
Everything is categorically dynamic and open-ended, except mathematical entities.
An emotional circuit is not metaphorically intended because nothing in robotics can be metaphorically intended. An emotional circuit is a piece of the robot’s brain (an artificial neural network made up of neurons connected by synapses) which has specific characteristics with respect to other, more cognitive, neural circuits (for example, it is based on neuromodulation rather than on neurotransmission) and whose function is to make the motivational decisions of the organism more correct, rapid, and adaptive. For example, it can be shown that a robot with an emotional circuit in its brain is better able to shift from looking for food to escaping from a predator or to approaching a robot of the opposite sex when the predator or the robot of the opposite sex appears, compared to a robot whose brain does not have the emotional circuit. The adaptive nature of art, the reason why art exists, is that it makes it possible for human beings to have experiences that cause them to have more effective emotional circuits.
But human beings are highly social animals, and for their social interactions it is useful for an individual to know the emotional state of another individual. What we call “expression of emotions” consists of changes in the external appearance of an individual’s body that are caused by the activation of the individual’s emotional circuits and that can be perceived by another individual. It is not clear whether the expression of emotions is a simple by-product of having emotional states or whether it has become so perceivable, articulated, and differentiated precisely in order to let other individuals know one’s emotional state. This may have implications for art. Art induces a sense of “feeling together” which is adaptive for highly social animals such as human beings.
From a robotic point of view (and also from the point of view of real human beings) it is not clear what “intention” and “intentional constitution” are. But our robotic artists should be able to “create” Duchampian ready-mades, because human beings “create” them. As I have said, the adaptive function of art might also be linked to sociability, empathy, and feeling together. A robot might react to something “found” and not “made” because the context tells the robot that someone else, the robotic artist, is communicating its emotional state through the ready-made.
Duchampian ready-mades pose other interesting problems beyond their conceptual framework. Human beings have “artistic” reactions (reactions which lead to improvements in their emotional circuits) not only with respect to artistic artifacts but also with respect to natural objects (such as a flower or the sea), which also are “found” and not “made.” Robots should help us understand how our reactions to artistic artifacts are similar to or different from our reactions to “beautiful” natural objects, and why some people (or perhaps all of us) think that “someone” is communicating with them through nature or that nature is “animated.”
If we construct robots that have art, this does not mean that art is “reduced” to a “simulated” procedure. As I have said, robots are not really constructed by us; they construct themselves because, like us, they evolve and learn. The ultimate aim of human robotics is to “construct” robots that are indistinguishable from us, so that it is not clear which is the original and which is the copy (the simulation). But, of course, insofar as robots are not exactly like us, they may create artistic artifacts which we like and which may be different from the artistic artifacts created by us. And we may like to “construct” robots which are not exactly like us but which create artistic artifacts we like. (What will these artifacts tell us about ourselves?)
Art is the product of an animal (Homo sapiens) with a given body, given sensory and motor organs, a given brain, and a number of different histories (biological evolution, development, learning, cultural and social history) through which it has created itself and continues to create itself. Robots will let us understand why and how such an animal creates art, in a way that (correctly) appears to us to be so distant from “explaining,” “defining,” and “predicting” in the sense in which science explains, defines, and predicts everything.
* * *
What is the phenomenon of art? Although philosophers, art historians, psychologists, anthropologists, and sociologists all have attempted to answer the question, we still do not seem to really grasp all of art’s nature and vicissitudes. So the question “What is art?” remains evasive, at least if we are interested in a scientific account based on theories and models that are formulated by using objective, well-defined, and operational terms that generate unambiguous empirical predictions to be compared with intersubjectively observable and possibly quantitative facts.
In this chapter, I propose to develop a scientific model of art by constructing robots that have art. What are robots? Robots are physically realized or simulated artifacts that resemble living organisms and behave like living organisms. Why construct robots? Robots can be constructed with two different goals in mind. They can have a variety of useful applications with commercial value, and this justifies the money and effort needed to construct them. Or robots can be constructed as a new way of expressing scientific theories of behavior. Theories of behavior and mind tend to be expressed in words, but words have ill-defined and often ambiguous meanings. Furthermore, theories expressed in words tend to be unable to generate specific and noncontroversial empirical predictions—which, given our definition of scientific theories, is exactly what scientific theories should not be. Robots are a new way to formulate scientific theories about behavior and mind. The theory is used as a blueprint to construct a robot, and the robot’s behavioral patterns are the prediction derived from the theory. And yet in order to avoid circular logic, the theory must be well defined and operational for the robot’s behaviors to be objectively compared with known empirical facts about behavior and mind.
Imagine that we want to construct robots to understand the behavior of human beings. If our robots behave like human beings, then we can suppose that the theory we have used to construct them is a good theory of human behavior. To make this comparison, we must know human behavior in advance, and our theory confirms what we already know. Being human and having art are intertwined. And since art has such a constitutive role in being human, we will never be entitled to claim that we have built human (not just humanoid) robots unless our robots also have art. The robots that we have in mind are not robots whose behavior is programmed by us but rather robots that, like real organisms, autonomously evolve at the population level and develop or learn at the individual level whatever behavior they possess. Therefore, it is critical for us to identify what sort of functions artistic artifacts should serve in our robots’ lives, such that the robots, if they wanted to create artistic artifacts and expose themselves to them, would do so. If we install in our robots a sense of fascination and they start making beautiful objects, we can then assert that art deals with fascination.
So, when will we be entitled to say that our robots have art? There are many possible answers to this question, but I think an important one is that robots that have art must be robots that spend a significant part of their time (and, possibly, money) listening to sounds, looking at visual displays, and reading written texts produced by other robots, with no easily identifiable practical purpose on the part of either the robots that listen to the sounds, look at the visual displays, and read the written texts (the public), or the robots that produce these sounds, visual displays, and written texts (the artists). Let us start with a presupposition: that art deals with emotions. To construct robots that have art in the sense just defined, we first have to construct robots that have motivations and emotions. So we start by describing robots that have motivations and emotions.
Many robots have been constructed that use their time to acquire various types of resources which are important for their survival and reproduction. Imagine a population of male and female robots living together in an environment containing food tokens. The robots have a certain amount of energy in their body that is consumed by some quantity at each time step and, if the energy goes to zero, the robot dies. Thus, in order to survive, the robots must be able to approach and eat the energy-containing food tokens. However, survival is not enough. To reproduce they must also be able to approach and mate with a robot of the opposite sex. The robots’ behavior is controlled by a “brain”—an artificial neural network made up of units (neurons) and connections between units (synapses). Such a neural network has visual input units that encode what is in front of the robot at any given time (food tokens and other robots), other input units that encode the current level of energy in the robot’s body (hunger), and motor output units that control the movements of the robot in response to the input. The input is further elaborated by internal units within the neural network, and a motor response is generated in response to the input. How a robot responds to the input is determined by the quantitative values of the connections that link the units (synaptic weights). The robots of the initial population have neural networks with random connection weights, and therefore these robots are generally unable to eat and to find mates. However, since each robot behaves somewhat differently from all the other robots, for purely chance reasons some robots will be somewhat better than the others at eating and finding mates. These lucky robots live longer and mate more than the other robots. Their offspring inherit the connection weights of their parents with the addition of random changes (genetic mutations) that may result in offspring that are better adapted than their parents. This goes on for a certain number of generations and, as a result of the selective reproduction of the best robots and the constant addition of new variability, the robots progressively evolve the ability both to approach and eat food and to approach and mate with robots of the opposite sex.
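To make the scheme just described concrete, here is a minimal sketch in Python. It is not the model itself: the network is reduced to a single layer, the simulated environment is abstracted into an `evaluate` function supplied by the caller, and all names and parameter values (input and output sizes, mutation rate, share of the population selected as parents) are illustrative assumptions.

```python
import math
import random

N_INPUTS = 4    # e.g., encodings of the nearest food token and nearest mate
N_OUTPUTS = 2   # e.g., left and right wheel speeds

class Robot:
    """A robot whose behavior is fully determined by its synaptic weights."""
    def __init__(self, weights=None):
        # Initial population: random weights, hence (mostly) inept behavior.
        self.weights = weights if weights is not None else [
            random.uniform(-1.0, 1.0) for _ in range(N_INPUTS * N_OUTPUTS)]
        self.energy = 1.0    # the robot dies when this reaches zero
        self.fitness = 0.0   # filled in by the evaluation: matings achieved

    def act(self, inputs):
        """Motor response: sensory input through the weighted connections."""
        return [math.tanh(sum(self.weights[o * N_INPUTS + i] * inputs[i]
                              for i in range(N_INPUTS)))
                for o in range(N_OUTPUTS)]

def mutate(weights, rate=0.05, size=0.2):
    """Genetic mutations: small random changes to inherited weights."""
    return [w + random.gauss(0.0, size) if random.random() < rate else w
            for w in weights]

def evolve(population, generations, evaluate):
    """Selective reproduction of the robots that best eat and mate.

    `evaluate` runs one simulated lifetime (moving, eating food tokens,
    meeting mates) and stores the result in robot.fitness.
    """
    for _ in range(generations):
        for robot in population:
            evaluate(robot)
        # The best robots leave mutated offspring; the others leave none.
        population.sort(key=lambda r: r.fitness, reverse=True)
        parents = population[:max(1, len(population) // 5)]
        population = [Robot(mutate(random.choice(parents).weights))
                      for _ in range(len(population))]
    return population
```

Truncation selection is used here only for brevity; any scheme in which better eaters and maters leave more mutated offspring fits the description above.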
If we observe the robots in their (simulated) environment on the computer’s screen, we see that the robots divide their time appropriately between looking for food and eating and looking for a robot of the opposite sex and mating. We say “appropriately” because, in fact, an appropriate use of time is a critical requirement for our robots—and for real animals. Behavior has two levels, a motivational level and a cognitive level, and both robots and real animals must be able to function well at both levels if they are to survive, reproduce, and live well. Our robots have two distinct motivations, eating and mating, and at any given time they need to decide whether to look for food and ignore the robots of the opposite sex or to look for the robots of the opposite sex and ignore food. This is the motivational level of behavior. Once an appropriate decision has been made concerning which motivation to pursue, the robots must be able to generate the appropriate behavior that allows them to satisfy the motivation chosen at the motivational level. This is the cognitive level of behavior. It is clear that a robot must function appropriately at both levels. If a robot is good at approaching and reaching food but ignores the robots of the opposite sex and never mates with them, the robot will live a long life but it will leave no offspring (copies of its own genes) to the next generation. On the other hand, a robot may spend most of its time approaching and mating with the robots of the opposite sex but may die because its bodily energy reaches zero. This robot too will not leave many offspring, because it will have a short life. The best robots will be those that both make the appropriate motivational decisions and know how to satisfy their current motivation. The first ability may be more important than the second, because wrong motivational decisions may more directly compromise a robot’s (or an animal’s) survival and reproductive chances—the currency of biological evolution—while a robot which is not particularly good at approaching and reaching food or mates may still survive and reproduce. Evolution and genetic inheritance are not the only determinants of behavior, since behavior changes during the life of an individual because of learning and the individual’s experiences. But we believe that biological evolution is crucial if we want to explain behavior and, in particular, art.
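The two levels can be read directly off such a sketch. The following step function, reusing the illustrative `Robot` class above, first makes the motivational decision (eat or mate) from internal and external input, and only then generates the behavior that serves it; the `Percept` structure and the numeric values are again assumptions, not part of the model described in the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Percept:
    """What the robot senses at one time step (illustrative)."""
    mate_visible: bool
    food_inputs: List[float]   # sensory encoding of the nearest food token
    mate_inputs: List[float]   # sensory encoding of the nearest mate

def step(robot, percept):
    # Motivational level: decide which motivation to pursue right now,
    # weighing internal state (hunger) against external opportunity (a mate).
    hunger = 1.0 - robot.energy
    mating = 0.6 if percept.mate_visible else 0.1
    goal = "eat" if hunger >= mating else "mate"
    # Cognitive level: generate the behavior that serves the chosen
    # motivation, ignoring the stimuli relevant to the other one.
    if goal == "eat":
        return robot.act(percept.food_inputs)
    return robot.act(percept.mate_inputs)
```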
As we have stated, a prime requirement for our robots, and for real animals, is that they must be good at choosing which motivation to pursue at any given time. Their motivational decisions must be correct (look for a mate if you are not very hungry, otherwise look for food), fast (immediately stop looking for food as soon as you see a mate), persistent (do not stop looking for food if you are very hungry, even if you do not see food), and not compulsive (stop looking for a mate if there is none in view). If a robot does not function well at the motivational level, the robot will die or will leave no offspring even if it functions well at the cognitive level—that is, even if it is able to generate the appropriate behavior that satisfies the chosen motivation. In fact, functioning well at the motivational level is so important for survival and reproduction that evolution has created a special circuit in the brain of animals that helps them to make more appropriate motivational decisions. The states of this neural circuit are called emotional states or, more directly, emotions. Emotional states change the current strength of the robot’s motivations so that the robot is able to make better motivational decisions.
We can construct robots endowed with neural networks that include an artificial emotional circuit made up of special units. The circuit is activated by various inputs from either the external environment (for example, seeing a mate or seeing a danger) or the robot’s own body (for example, feeling hungry), and it sends its outputs to the main circuit of the neural network, in this way influencing the robot’s behavior. Consider a robot with an excessive tendency to look for food rather than for a mate, and which therefore risks having a long life but no offspring. The emotional circuit of this robot would be activated by the sight of a mate, and this would increase the chances that the robot will stop looking for food and start looking for a mate. Or consider another robot which is very hungry but, this notwithstanding, tends to pursue the motivation to mate, in this way risking its life. The emotional circuit of this other robot would be activated by (strong) hunger, and it would change the robot’s motivational choice, causing the robot to look, more correctly (adaptively), for food rather than for mates. The emotional state in these circumstances would be “anxiety.” We can also construct other robots that have other emotional states. An emotional state of “fear” would be activated by the sight of a danger (say, a predator), and it would cause the robot to choose to pursue the motivation to avoid the danger over any other motivation. The robot’s offspring may be unable to survive unless they are fed and protected from dangers, so the sight of one’s offspring, or of one’s permanent mate who may also feed and protect one’s offspring, may evoke in the robot the emotional state of “love,” which increases the probability that the robot will choose the motivation to feed and protect its offspring or mate over other motivations. These are simple instances of emotional states for our simple robots. Humans have a much larger number of different emotional states, corresponding to their much more complex and articulated repertoire of motivations; furthermore, compared to other animals, learning during life plays a more important role in humans in shaping an individual’s behavior and motivations.
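A minimal sketch of how such an emotional circuit might modulate motivational decisions follows. The unit names (“anxiety,” “fear,” “love”) mirror the text, but the gains and thresholds are assumptions of ours, chosen only for illustration.

```python
def choose_motivation(hunger, sees_mate, sees_danger, sees_offspring):
    """Sketch of an emotional circuit modulating motivation strengths.

    `hunger` is in [0, 1]; the other inputs are booleans.
    All gains and thresholds are assumed values.
    """
    strengths = {
        "eat":    hunger,
        "mate":   0.5 if sees_mate else 0.1,
        "escape": 0.0,
        "care":   0.0,
    }
    # "Anxiety": strong hunger boosts the eating motivation, so a very
    # hungry robot stops pursuing mates and looks for food instead.
    if hunger > 0.7:
        strengths["eat"] += 0.5
    # "Fear": a visible danger (say, a predator) overrides everything else.
    if sees_danger:
        strengths["escape"] = 1.0
    # "Love": the sight of offspring or a permanent mate raises the
    # motivation to feed and protect them.
    if sees_offspring:
        strengths["care"] += 0.8
    # The strongest (emotionally modulated) motivation wins the decision.
    return max(strengths, key=strengths.get)
```

The design point is that the circuit does not generate behavior itself; it only reweights the strengths among which the motivational level chooses.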
As we have described so far, the emotional circuit may be activated by input both from the external environment and from within the organism’s body. But the emotional circuit involves the organism’s body in other ways. The emotional circuit does not only help the brain react appropriately to the input: it also activates the body’s internal organs and systems (the heart, the stomach, the respiratory and circulatory systems, the hormonal system) and the muscles controlling the movements and postures of the face, the sound-producing organs, and other parts of the body, and, in turn, it receives inputs from these internal and external parts of the body. Emotional states are “felt” states because of these inputs from the body, due to the interactions of the emotional neural circuit with the rest of the organism’s body. But what interests us here is that the changes which occur in the external shape, color, sound-producing organs, and movements of an individual’s body as a consequence of the activation of the individual’s emotional circuit can be perceived by another individual. In this way emotional states are not only subjectively felt but are also expressed and made accessible to others. The external expression of emotions may be just a by-product of the states of the individual’s emotional circuit and its interactions with the rest of the individual’s body; but for complexly social animals such as humans, which live in a social ecology made up of other humans rather than simply in a natural ecology, the expression of emotional states becomes an important mechanism for social interaction. Through the expression of emotions an individual can know the emotional states of another individual, and this is important because, by knowing the emotional states of the other individual, the first individual can better predict and influence the behavior of the other and can adapt its own behavior accordingly. In turn, expressing one’s emotional states can be useful for letting other individuals know one’s emotional states and in this way being helped by them to satisfy one’s motivations. Neurophysiological research has shown that human beings resonate emotionally with one another, in the sense that an individual can share the emotional state of another individual by perceiving the external expression of that state. This sharing of emotional states may become a collective phenomenon which involves many individuals and which lets them coordinate their behaviors toward some socially shared goal and expect mutual help.
Humans create artifacts: that is, they modify and shape the environment and construct objects that are added to naturally existing objects. Most human artifacts are practical artifacts, that is, artifacts that allow humans to better satisfy their practical needs. Let us go back to our robots that need to eat and to mate. Eating in these robots is very simple: they approach a food token and, when they touch it, the food token disappears. It is considered eaten, and the robot’s bodily energy is increased by some fixed amount. But consider another robot that has a somewhat more complex adaptive pattern and needs to construct tools in order to eat.
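The eating mechanics just described are simple enough to state directly in code. The sketch below reuses the Robot from the earlier sketch (or any object with an `energy` attribute); the grid positions, the energy increment, and the tool flag are assumed details, with the flag standing in for the more complex, tool-using adaptive pattern.

```python
FOOD_ENERGY = 0.3  # fixed amount of energy per eaten token (assumed value)

def try_to_eat(robot, robot_pos, food_tokens, needs_tool=False, has_tool=False):
    """A food token touched by the robot disappears (it 'is eaten') and the
    robot's bodily energy increases by a fixed amount.

    The needs_tool flag sketches the more complex adaptive pattern in which
    the robot must first construct a tool in order to eat.
    """
    if robot_pos in food_tokens and (not needs_tool or has_tool):
        food_tokens.remove(robot_pos)
        robot.energy = min(1.0, robot.energy + FOOD_ENERGY)
        return True
    return False
```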
The next step is to connect constructing artifacts and having emotional states. As we have seen, external stimuli can evoke emotional states that help our robots make better motivational decisions. The sight of a mate may evoke an emotional state that makes it more probable for a robot to choose to pursue the motivation to mate rather than other motivations. The sight of a predator can evoke an emotional state that increases the probability that the robot will try to escape from the predator instead of continuing to look for food. The sight of its offspring or permanent mate can evoke an emotional state in the robot that causes the robot to feed and remain in proximity to its offspring or permanent mate instead of doing something else. Now imagine that our robots construct artifacts meant only to stimulate their senses, causing an emotional state in both the creating robot and the other robots that come into contact with the artifacts. These “emotional artifacts” are the artifacts of art: they can be two-dimensional (pictures) or three-dimensional (sculptures or buildings) visual objects, sounds (music), or spoken and written texts (poems, novels).
Why should our robots construct artistic artifacts? Why do they need to stimulate the senses? It is important to answer this question because, if we want to actually construct robots that have art, our robots should have an overall adaptive pattern in which making artistic artifacts is adaptive for them. Therefore, we must start with a population of robots that do not construct such artifacts but that evolve and learn to create them. The robots we have described at the beginning of this chapter are purely practical robots. Why should they construct artistic artifacts? What is the “added value” of artistic artifacts that may explain why humans began making them so early in the history of the species?
Our research strategy is to generate a series of hypotheses on the possible function(s) of artistic artifacts for humans, to build robots based on these hypotheses, and to see if the behaviors exhibited by humans with respect to artistic artifacts can be reproduced in the robots. Here are some hypotheses:
Artistic artifacts allow the robots to objectify and make explicit for themselves their own emotional states, and this can be useful for the robots to know themselves better, which in turn can allow the robots to generate more adaptive behaviors.
Artistic artifacts allow the robots to experience strong emotions, and to become familiar with them, without the dangers associated with strong emotions.
Artistic artifacts remind the robots of pleasurable experiences and therefore induce pleasurable experiences.
Artistic artifacts create shared emotional states in groups of robots that allow the robots to better coordinate among themselves and to make the behavior of the other robots more predictable.
Exposing oneself to artistic artifacts makes a robot emotionally more sophisticated—that is, more responsive to its own emotional states and to the emotional states of others.
These are all hypotheses about the origin of art in humans and, of course, they are not mutually exclusive. To test them, we should construct robots that incorporate these hypotheses and see whether our robots develop the behavior of constructing artistic artifacts and of exposing themselves to these artifacts. The general idea behind all the hypotheses is that artistic artifacts help the emotional states of an individual to better regulate the individual’s motivational decisions, and we have seen that making appropriate motivational decisions is a critical requirement for survival and reproduction. So our robots should spontaneously begin to construct artistic artifacts. If they do not, our idea should be revised or abandoned.
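Operationally, the proposed test might look like the following sketch. The fitness machinery, the measured statistic, and the names used here are our assumptions about how such an experiment could be implemented, not the authors’ code.

```python
def test_art_hypotheses(population, evolve, time_spent_on_art, generations=1000):
    """Sketch of the proposed test: evolve a population of robots whose
    emotional circuits can be affected by self-made artifacts, then check
    whether a tendency to construct, and expose themselves to, artifacts
    with no practical use emerges spontaneously.

    `evolve` (selection plus reproduction based on survival and offspring)
    and `time_spent_on_art` (fraction of a robot's lifetime devoted to
    artistic artifacts) stand in for the simulation machinery.
    """
    for _ in range(generations):
        population = evolve(population)
    mean_art_time = sum(time_spent_on_art(r) for r in population) / len(population)
    # If no such tendency emerges (mean_art_time stays at zero), the
    # general idea should be revised or abandoned.
    return mean_art_time
```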
We have sketched an approach to constructing robots that have art. Our robots are very simple and we have described only some basic ideas for constructing them. But our method is clear. Unless our robots develop a tendency to spend some of their time producing artifacts with no practical use and exposing themselves to these artifacts, we will not be entitled to claim that we have constructed robots that have art.
But even after we have constructed robots that have art, many important questions remain open, and the validity of our approach should be measured in terms of its capacity to provide answers to these questions. We conclude the chapter by briefly describing these questions:
Our robots that have art should also tend to expose themselves to natural objects such as flowers, natural landscapes, and individuals of the other sex in the way they expose themselves to artistic artifacts, and with an attitude similar to that with which they expose themselves to artistic artifacts. Furthermore, their practical artifacts should have properties, such as a particular shape or color, that do not serve a practical purpose but trigger in the robots the same responses generally caused by artistic artifacts. And, finally, the robots should exhibit other types of human activities, such as playing and entertaining themselves, that show some similarities to their responses to artistic artifacts.
Artistic artifacts appear to evoke one particular type of response on the part of the individuals who expose themselves to them, a response which focuses on the perceptual properties of the artifact and does not consider its possible use or other practical implications. These properties are sometimes referred to as “form,” and they explain why the notion of art is historically linked to the notion of form. It is interesting that something similar also occurs in the case of science. A scientist responds to some particular phenomenon or fact by focusing on the observed or otherwise known phenomenon or fact, without considering its practical implications. In this sense, both art and science are “speculative.” However, it is also important to understand how art and science differ. The differences may be many. One might say that an artistic artifact is processed by both the cognitive half and the motivational/emotional half of the mind, while a fact or phenomenon is processed only by the cognitive half of the scientist’s mind. Science needs emotional detachment while art needs emotional participation. Or one might say that a fact or phenomenon is processed by the scientist’s mind for at least one practical purpose, that is, the purpose of being able to predict other facts or phenomena, while artistic artifacts do not even have this practical implication. However, one has to consider that predictability can also play some role in artistic artifacts and, in fact, “beauty” is sometimes explained in terms of symmetry, where symmetry implies predictability.
We have talked about art in general but art varies as a function of society, epoch, and its relation to other human activities such as religion and power. Therefore our robots should be able to reproduce not only art in general but also the different ways in which art manifests itself in different societies, epochs, and in its relation to other human activities. If we want to understand humans by constructing robots that behave like humans we should be concerned with, and reproduce with our robots, not only the phenomena studied by psychologists and neuroscientists but also the phenomena studied by social scientists. In this case we should be able to reproduce not only art but also the place of art in human societies. And we should be able to address two interesting questions about today’s art: (1) If art is based on the introduction by the artist of variations in inherited schemes, can art exist today if inherited schemes tend to be rejected? (2) Why does art today tend to become marketing, that is, the production of artistic artifacts to “sell” them rather than to express something with them?
Another important research question is whether different types of artistic artifacts, for example, music versus visual displays versus literary works, play different functions for our robots. Although all robotic and human cognition appears to be action-based, music may more strongly imply physical movement on the part of the listener, who follows and anticipates rhythm and melody, than visual displays and literary texts do. Architectural works are experienced differently from other artifacts (for example, by entering into buildings) and may evoke specific feelings of enclosure, protection, respect, and community.
A final research question in the study of art is: How are human artistic artifacts related to the specific characteristics of humans: their body (size, shape), their sensory and motor organs, their adaptive pattern (sociality, mental life, etc.)? Will robots that have different bodies—different sensory and motor organs and different adaptive patterns—develop artistic artifacts different from human artistic artifacts? For example, what types of artistic artifacts would be constructed by robots that do not have human-like hands; robots with less sophisticated visual capacities and, say, more sophisticated smelling capacities than humans; robots that move through media such as air and water rather than over surfaces; robots that are much smaller and lighter than humans? If we are able to construct robots that produce artistic artifacts unlike human artistic artifacts, the artifacts produced by the robots may suggest new types of artistic artifacts to humans.
Peter Lunenfeld
Unimodernism describes the ways in which modernism in all its variants and historical strains comes together with the networked cultures of electronic unimedia.
Unimodernism assumes that which we archived as early modern fervor, high modern sophistication, and postmodern pastiche will now coexist co-temporaneously in global networks, accessed at the whim of the downloader and deployed as the user sees fit to be uploaded yet again in an ever-increasing blur of style churn.
“Tool fatigue” is a term I first heard in the late 1980s to describe the difficulty people had then in dealing with the rapid advance of digital technologies. Obviously, that was before the absolute explosion of the digital worldwide in the past quarter century, so my objective answer would have to be no.
What happens is that each new generation adopts some strain of the digital as its own, often letting earlier and still vibrant technologies languish. Thus you’ll have “pro-am” users defining themselves not by any capacity to create social media tools—as hacktivists have long maintained is vital—but instead as “content providers” first and foremost. Perhaps the next generation will look at these contemporary “users” with disdain and spark a DIY, programming-centric movement.
The issue here is less “creativity” than it is distribution and audience. The fact that food is the new rock and roll, that young people aspire to be chefs in the way that they used to want to be lead singers, demonstrates that there is a huge demand for the “real.” It just so happens that the food truck, the pop-up store-front, and the home-based restaurant are the “new” venues for the nonvirtualized and the artisanally produced. That digital technologies are used to support and promote these kinds of ventures simply demonstrates that they are indeed in and of their own, networked moment.
I see no evidence of the Web reducing consumerism. It may eliminate whole categories of shops (witness the decline of book stores and the devastation of video stores in the United States, for example), but global networks introduce new ways to meld leisure time with shopping—everything from commercial pop-ups in on-line television (click to purchase what the starlet is wearing) to the emerging marketplace for virtual goods (buy a magic axe from a professional Chinese gamer to outfit your World of Warcraft avatar).
I do not think a purely symbolic interpretation of culture is even possible, and I know that it would be inadequate. Markets and networks are both means of distribution, and, as in your earlier question about “creativity,” I think that cultural production cannot be separated from modes of dissemination and use. Unimodernism became for me, and I hope will become for my readers, a way of thinking in, through, and with networks about the cultures that they were simultaneously downloading and then uploading.
No cultural innovation comes without cost, and complaints about art and media flattening experience are older than Plato. Every era has a different ratio of the sublime to the banal. What that balance will look like, and how the future will assess the unimodern contribution, will be up to contemporary makers and users, artists and audiences, readers and writers, downloaders and uploaders.
* * *
The visionaries of the early twentieth century transformed the look and feel of culture, not supplanting the past’s bracings of oak and edifices of marble so much as adding the sheen of industrial materials like concrete, glass, and steel. By the 1920s and 1930s, the audience for modernism was equally an audience for the machine aesthetic: the hard, unembellished lines of El Lissitzky’s graphic design; the clanks and atonality of Alban Berg’s opera Wozzeck; the sensuous curves of Marcel Breuer’s chromed steel tubing in his “Model B32” chair; the assemblages of tubes, pistons, and levers that compose the Fernand Léger painting “Nude on a Red Background”; the severity of Rudolph Schindler’s untreated wooden beams intersecting with unadorned canvas-covered sliding door frames; and the comic yet sinister factory where Charlie Chaplin works in the film Modern Times. Regardless of media, artists sensed the change, and filled their work with the sights, sounds, and even smells of industry, figuring the machine as central to the cultural ground of the twentieth century.
While industrial machines popped a hundred years ago, information has emerged as the key figure for this new century. There are historical parallels between the emergence of the machine aesthetic in the first decades of the twentieth century and the nascent aesthetics of a digitized, unimodern culture in the twenty-first. The second half of the nineteenth century developed a market economy that produced and consumed machines. The early decades of the twentieth century saw artists, architects, and designers responding to this fever of material production by figuring the machine in their art, architecture, and design. The second half of the twentieth century, in turn, became an ever-accelerating feedback loop of information.
Thus, we should not be surprised that the past few years have seen our culture machines producing information-based art, architecture, design, and media; a digitized, interconnected society produces objects and systems that deal with software, databases, and the invisible flows of communications technology and computing algorithms. The great-grandchildren of those obsessed with Victoriana in the 1920s may look back with bemusement on their forebears’ archaic tastes, but they are the ones flocking to modernist emporiums like the Conran Shop and Design Within Reach to purchase the highest expressions of the machine age at the very moment that the info-aesthetic is on us.
If we accept that the digital computer is our culture machine, we can understand the ways in which information has popped to the forefront of our consciousness by using figure/ground relationships to analyze how electronic databases have transformed our expectation of stylistic “progress” and warped our cultural memory. When image, text, photo, graphic, and all manner of audiovisual records are available at the touch of a button anywhere in the wired world, we experience not multimedia but in fact unimedia. In a unimediated environment, the ordered progression through time is replaced by a blended presentness—what literary theorists would refer to as the triumph of the synchronic over the diachronic. One reason our faith in progress has waned even as the future continues to manifest itself on our desktops and in pockets stuffed with smart devices is that this blending produces what I call a state of permanent present, which impairs our ability to appreciate the present, much less produce a new, better future.
This sense of permanent present affects not only the present and the future; it also has an impact on the ways we remember the past. Contrast our moment with Classical Rome, where a senator like Cicero could become famed for his mnemotechnics, or the practice of memory. Twenty years after the fact, he could recite, word for word, speeches that he had heard on the senate floor. In a period before the wide availability of paper for taking notes, a trained memory was of inestimable value in governance and commerce. Print transformed this situation, and by the Enlightenment, the arts of memory were already obsolete. If anything, the culture machine allows for even the outsourcing of our memories, with audio files, image banks, and video storage added to the archive. The effects of all of this storage go well beyond the memory of personal experience to encompass our memories of mediated experience as well. The universal database transforms the direct linkage between the object in time and the actual memory of that time. It creates a co-presentness of eras, with a predominance of the modern. It is for this reason that I say we are in a unimodern, rather than post- or late-modern period.
Unimodern production follows an arc first traced by Duchamp and his ready-mades. After Duchamp’s Fountain (1917), it is the presentation of the object that defines that object’s function within culture, with the shaping and molding of context coming to the fore. This is not news, and in fact, those defining the differences between the high modern moment and what followed it hinged their definition precisely on this elevation of context to parity with the text itself. It has only been within the past decade that the combination of computers and communication networks has been robust enough to contribute to the creation of context. This context takes many forms, especially in relation to popular media, from the preplanned marketing of tie-ins from music CDs, television spin-offs, and lunch boxes, to the efflorescence of discursive communities generated by fans. In certain cases, these all combine to create something far more interesting than the backstory and more complicated than synergistic marketing. This is the “hypercontext,” a dynamic, interlinked communicative community using networks to curate a series of shifting frames and content. The addition of greater levels of information to an object or system is not simply an additive process but a transformative one. It transforms objects by augmenting them and situating them in vastly larger hypercontexts, and when done in the proper spirit it makes them exemplify what we can now see as an emerging, global uniculture.
Being able to tell figure from ground in this environment of hypertrophied, transtemporal bricolage becomes a vital part of negotiating the use of the culture machine. When the whole of popular culture from the last hundred years is finally brought under the disciplinarity of the universal database, it all becomes ground, and the refiguration of its parts becomes a veritable economic necessity. Those who are capable of refiguring in a way to attract an audience become fantastically powerful wealth generators—from hyperstylized director Quentin Tarantino to hyperintellectualized architect Rem Koolhaas, from Japanese Superflat artist Takashi Murakami to U.S. lifestyle guru Martha Stewart. I have mentioned artists, designers, and directors here, but being able to flip between ground and figure is central to everyone’s use of the culture machine. What we all, from world-famous designer to weekly blogger to occasional taker of digital snapshots, need is a catalog of strategies to help us understand what we download and contribute to what we upload. The ways that we figure words, sounds, images, and objects from the ground of information will define how and what we are able to produce with the culture machine. The key is to understand that we are constructing a uniculture from unimedia with unimodernism as the aspiration.
What follows here catalogs some of the unimodern strategies that these unimedia follow in this unicultural era.
We are now so deeply entrenched in the era of word processing that we have forgotten how revolutionary the development of dynamic text was for the production of literature. That the culture machine can reformat your work while you are typing it, that you can grab chunks of it and rearrange them, that you can search for terms and replace them, and that the process of adding and editing is essentially one of unfinish—these are all the modes under which we work, so instantly ingrained that we have forgotten just how new they are. When you add in hypertextuality, the ability to link and jump from one section of a text to another, or from one text to an entirely different one, you have one of the defining qualities of the unimodern culture machine.
Hypertext showed the way by making the link integral to the construction of meaning. The creation of meaning via juxtaposition is ancient, of course, but the modern era’s refinement of collage in still images and then montage in the cinema elevated the status of the meanings produced through these processes. The televisual era introduced a randomness to the juxtapositions. If Soviet filmmaker Sergei Eisenstein’s dialectical montage was about the deliberate production of effect through cinematic editing, channel zapping on television was closer to the experimental “cut-up” fiction of Brion Gysin and William Burroughs in the late 1950s. Gysin, a painter, and Burroughs, a novelist, created texts and then literally cut them up into pieces, reassembling the fragments at random, giving up a large measure, though by no means all, of authorial control.
The earliest attempts at hypertext tried to marry the randomness of the cut-up technique to a restricted universe of potential connections, thereby establishing the technoliterary equivalent of a forced card in magic. You had choice as a user/reader, but your choices and paths were often predetermined by the author. The advent of the World Wide Web broke open these closed text worlds, creating the freedom to jump around with “real” randomness. One of the earliest net.art text pieces understood the new environment perfectly, linking every word on a Web page to a domain that contained that word—a far more inventive concept in 1996, when there were thousands rather than billions of pages in ether space. What is new in the world is that text more and more becomes something that is linked to anything, words become the building blocks of augmentations, the whole world develops labels like those at museum exhibitions, and each label links to another one describing, advertising, or commenting on another text, another image, another object. The hyperlinking that starts with text as far back as the 1940s’ experiments of Bush and Turing becomes the default mode of figuring “meaning” in the world. What happens with text moves on to sounds, then to images, and finally to physical objects.
For the culture machine, it is as though everything that happens in the realm of the visual happened years before, first with text and then with sound. Sound is cheaper and easier to store, manipulate, and upload than images, and so it has been that digital technologies have transformed not only the media that the music arrives on but also the very aesthetics and content of that music. The shift from analog to digital is about much more than the shift from vinyl albums to CDs, and then to free-floating file sharing. The proliferation of cheap synthesizers and editing suites enabled by digital technologies spread this meme to musicians and producers worldwide, and the music itself began to change. By the time the culture machine eventually simulated and subsumed these sound-generating and sound-organizing modalities, an entire generation of listeners was creating sampled, remixed, digitally processed, digitally accessed music. From the now-quaint “You’ve Got Mail!” AOL voice-coder greeting, to the advent of audible interfaces and game soundscapes, to the popularity of pop snippets as personal audio identifications in cell phone ringtones, there has been a proliferation of audio cues within work, play, and mobile environments.
The unimodern soundscape owes a huge debt to hip-hop culture. The origins of hip-hop are to be found in the analog arena. In the 1970s, disc jockeys in the Bronx cut back and forth between turntables with vinyl records on them, mastering their ability to “drop samples” and use the turntables themselves to generate new sounds—the ubiquitous “scratching” of that era. But within a decade, the culture machine started to absorb and simulate these analog techniques, and the digital sample became the music’s building block, and remixing became the aesthetic strategy of choice. Hip-hop and high tech are inextricably bound together, offering a sterling example of the street finding its own uses for technology.
One place to see the hip-hop collage aesthetic collide with post-Napster file sharing is the phenomenon of the mash-up. Mash-ups meld two or more recordings into a new entity, as famously done in Danger Mouse’s mash-up of the Beatles’ White Album, a defining work of the 1960s’ rock era, with Jay-Z’s rap epic The Black Album (2003) to create The Grey Album (2004). The result was widely distributed because of the Web, file sharing, and the proliferation of sound and image editing tools. The ability to download vast archives of music, whether accessed legally or (more likely) illegally, allowed for an explosion of mash-ups. The fad, for it was a fad, eventually died down, as Web-driven phenomena frequently do, but mash-ups were proof that huge audiences were playing with their culture machines, mixing, matching, pasting, and then getting that unimodern material out into the unimodern world.
What happened in text and sound inevitably spread to the realm of the image. The explosion of cultural production that mash-ups reflect has in turn transformed our understanding of the meanings of words like “print” and “publish.” We print much more than text these days. The first major shift came in the era of desktop publishing. In the digital realm, text and image are just strings of ones and zeros, indistinguishable as information, and made manifest only by the medium in which they are eventually released. So an image could be fluid in an animation, printed on paper as a screen, encompassed in a resizable window with surrounding text, or blended in a graphic with those same typographic elements, which could themselves be animated as a motion graphic.
In his book Lifestyle (2000), the designer Bruce Mau refers to “Postscript World” when he discusses the radical transformation that the culture machine brings to our visual environment. With the development of “page description languages” like Postscript from Adobe Systems, there is “no longer any distinction between text and non-text, image and non-image.” Surfaces are “now described in one language. Everything is now image.” Postscript World announced itself with the desktop publishing phenomenon, in which the image on the monitor looks like the page that the printer will produce, and vice versa. This was the software/hardware combination that brought us the acronym WYSIWYG, for What You See Is What You Get.
The previously independent realms of word and image were now brought together under the sovereignty of Postscript World. What had once been the realm of obscure pasteup artists, burly press operators, and black-clad design gurus became a commonplace at every office worldwide. In 1970, only the most design savvy knew what people meant by the term “font”; three decades later, second graders talk about their favorite letterforms with a passion formerly reserved for toy trains and paper dolls. When images and words are both expressed in the same code, the distinction between them erodes, and people speak with images and paint with type. As the Postscript World came to embrace the mutability of Photoshop as well as the development of animation and motion software such as embedded digital video, centuries-old distinctions between media forms dissolved in turn and created unimodern unimedia, the digital soup that the networked culture machine pumps worldwide.
From words to sounds to pictures to moving images, the networked computer has transformed the production of culture. The next new thing that is in fact already here is the “printing” of objects. The Postscript World of image/text printing has become part of an even larger system of computer fabrication, or “fabbing,” in which what was once restricted to two dimensions is extruded in three. WYSIWYG, “What You See Is What You Get,” is being followed by an era of what I call WYMIWYM, for “What You Model Is What You Manufacture.” Just as WYSIWYG allowed new freedoms to graphic designers and two-dimensional image makers, the WYMIWYM era of computing allows architecture and industrial design to play with form and iteration, and make complex extant forms easier to manufacture profitably. In other words, what the computer did to the flat, two-dimensional fields of painting, photography, and graphics is now happening in the three-dimensional realms of sculpture, industrial design, and architecture, as artists, designers, and architects develop forms on the computer, and then fabricate them with three-dimensional printers.
An architect like Greg Lynn can use three-dimensional printing to do everything from creating maquettes, or small-scale models, of buildings to making prototypes for designs for a line of flatware commissioned by the Italian design manufacturer Alessi. When the fabbing specialists at the design collaborative Machine Histories worked with artist Pae White to create a complex bedframe for an exhibition, they worked with solid Corian, usually a surfacing material in kitchens and bathrooms. The object, “untitled” (2006), was so intricately worked by Machine Histories’ unique tool paths that it felt airier than one would ever expect a headboard to be. The deft carving and intricate detailing went beyond what handwork could have accomplished, and serve as a reminder that expertise in three-dimensional fabrication will indeed bring on a new material culture for the twenty-first century. This is all the more true because art, design, and architecture students are now getting exposure to 3-D modeling tools along with large-scale 3-D printers, extruders, and other computer-aided manufacturing in school, and you can bet that they will fill their own studios and ateliers in the future with the smaller, cheaper 3-D printers that are already in development by the manufacturers.
These WYMIWYM objects obviously figure informationalism in their production process, but as they themselves become linked into larger networks, through the incorporation of sensors, transmitters, and augmentation, they begin to attain autonomy. From mute objects and closed spaces, they become nodes in the network, aware of their place and time, and capable of communication from the minimal to the maximal. The incorporation of radio frequency identification devices (RFIDs) and microcontrollers into formerly quotidian objects enlivens them in an almost magical way. Like the animated brooms in Walt Disney’s Fantasia that come alive when Mickey Mouse accidentally enchants them as the Sorcerer’s Apprentice, there is a glamour, in its magical rather than fashionable sense, inherent in these new, augmented objects and spaces.
The explosion of WYMIWYM objects and spaces will bring about an efflorescence of style, just as WYSIWYG publishing did. Much of it will be excruciatingly bad, worse even than bad desktop publishing because it will have more dimensions to fill with its awfulness, but this is to be expected and embraced. Much that is wonderful will also be discovered, and perhaps some of what makes us wince will eventually earn at least grudging respect for its exuberance. But the ability to follow a program, in the architectural sense of an overarching vision, that the WYMIWYM era allows can engender the opposite problem from that of too much unstudied pluralism: it can also allow for the figuration of information in too perfect a form.
Karl Kraus, a Viennese modernist in the early 1900s, once complained that art nouveau living spaces were so fully integrated that they allowed their inhabitants no “running room” for the imagination. In the emerging clusters of entertainment design and experience design we see the resurgence of the totalizing impulse. The Disney World model of complete design integration from food to signage to people mover to thrill ride to collectible souvenir moves centrifugally outward from its Orlando home, becoming the de facto model for new experiences within entertainment capitalism. One factor contributing to the rise of entertainment and experience design is the computer itself, which allows for an unprecedented merging of design disciplinarities along with a sharing of communication and information across design groups, participating companies, and geographic space.
The impact of these intersecting design and technology schemas is to be found everywhere, from the branding overkill of themed resorts like Paris Las Vegas to Jean Nouvel’s seamlessly integrated galleries of indigenous art at the Musée du quai Branly in Paris, France. Here, as in so many other hyperdesigned spaces around the world, interface and object, building and Web presence, as well as commodity and brand identity all swirl together in unimodern, digitally enabled Postscript documents and WYMIWYM environments.
The figuring of informationalism into form has been our preoccupation in this section, and these forms—as words, sounds, images, objects, and even spaces—serve as semantic building blocks for the syntactic ways with which we will “speak” with these media. The secret war between downloading and uploading is predicated on the idea that the message and its meaningfulness need our full attention as well.
People’s willingness to embrace unfinish differs by age and class—that is to say, by who can afford it in the first place. Sometimes the adults who design systems can forget how much younger users are invested in finding ways to fill their downtime. Television, music, and video games can all be seen as preemployment time fillers for adolescents, and even those self-styled “rejuveniles” who are choosing not to abandon the games and pastimes of their youth. But those with the desire and access to the culture machine can kick-start their own do-it-yourself (DIY) movements. There are deep desires to categorize and annotate one’s own life as well as the lives of one’s friends and community. This moment is not about professional narratives so much as the development of new tools to create letters, diaries, photo collages, and home movies.
At their best, these DIY archives transform lived experiences not into commodities sold back to us but into realized memory traces that we construct ourselves and communicate to communities of interest. These actions indicate that the desire for the personal rather than the professional archive is ever expanding. From the mimeograph machine, to the advent of videotape, to fax technologies, to public access cable television, each new communication technology brings with it a new potential for participation. Think of the copier machine, which was a huge boon in the punk era, when fans produced zines (the small magazines and fan letters created out of a sense that Rolling Stone and the other major magazines would never “get” punk). The computer has encouraged the growth of new forms of DIY, hacktivist, and even craftivist culture.
Take, for example, the crafting Web site etsy.com. It is composed in almost equal measure of three apparently unconnected concepts: an enthusiasm for alternatives to mass-produced objects, e-commerce capacities inspired by the success of eBay and Amazon, and the gestalt of a summer craft fair in Vermont. Etsy has grown by attracting a young, primarily female user base that is interested in making, selling, and buying handmade objects. The site’s rhetoric and design schema are carefully considered to attract just such a demographic, of course, but there is also a sense that Etsy would not and could not exist without the authentic excitement of its users for a space that could not have ranged as widely before the Net provided the affordances for such a community. One of the interesting evolutions of the site has been the growth of the “buy local” option that allows members to develop place-based networks as well as national and international ones. Etsy’s users want to create a different relationship to their material possessions, carve out a space in which makers can communicate and trade, and build what essentially become microeconomic relationships that are personal rather than corporate.
MAKE, a magazine, Web site, PBS television series, book line, and succession of public “Faires,” takes DIY concepts and makes them available in an ever-expanding set of interrelated media. Mark Frauenfelder, MAKE magazine’s founding editor-in-chief, brought a great deal of credibility to his publishers when he proposed a concept for engaging with the remarkable explosion of objects made by and with the culture machine. Frauenfelder had been involved with the cyberpunk print fanzine Boing Boing. After migrating to the Web as boingboing.net, it grew into a huge “directory of wonderful things,” as Boing Boing says in its masthead. The site’s studied eccentricity, the indefatigable energy of the four principal bloggers, and the bloggers’ worldwide network of interesting collaborators exposed both Frauenfelder and his boingboing.net readers to everything from long and serious discussions about culture jamming to a prototype for a polite umbrella that contracts to avoid poking other people in the eye.
Frauenfelder’s next move was to create a separate entity to concentrate on the making of this kind of culture—a twenty-first-century hybrid of Popular Mechanics and Martha Stewart Living. MAKE magazine’s first issue came out in 2004, and since then it has covered everything from crafting interactive fashion to creating personal lighter-than-air dirigible flying robots.
The emphasis is on producing new and networked objects, and the response was strong enough that Frauenfelder and his coworkers decided that they could expand into producing live events to bring together their community, offering demonstrations and workshops, and growing the number of people interested in these new DIY phenomena. The resulting events, called MAKER Faires, drew from other communities, like the DIYers who have been such a huge part of the Burning Man festival in the Nevada desert, and became social spaces that blended consumption and production, fan and maker, and online interaction with real-life excitement. The point here is less the commercial success and long-term viability of the Etsy and MAKE DIY communities than the ways in which their very existence points toward a future of blended real and virtual communities devoted to the material production of culture along with its integration into more open spaces of commerce, trade, and exchange.
The ease with which people can build a like-minded community combines with the ability to share component software as well as report on process and results. There are knitters using networks to expand their discussions about their craft, the open-source software and hacker communities, and then interesting hybrids like “modders,” as those doing electronic modifications call themselves. These people take mass-produced objects and change or modify them in a way to “make personal” the products of an advanced technological society. The sheer amount of craft and obsession that went into the process of remaking an iPod out of hardwood, including a working jog wheel, boggles the mind, but it is a quintessential mod. This is a physicalized metaphor for remix culture—taking something, adding one’s own spin, and putting it back out into the world (with mods, it is often just pictures of the object and its production process). But the more bit-driven realms of remix culture differ in that the remixes are then sent back out into the world to be remixed again themselves in a recursive and ever-unfinished loop.
Certain media are either emboldened or diminished by the expectation that “in the future” they will become somehow that much more than they already are. Games, for example—like comic books, or “graphic novels” as the recent rebrand would have it—have long been in just such a situation. Although there is no area that the computer as culture machine has come to dominate so thoroughly, games are still seen in many quarters as forever on the verge of crossing over into a realm of deeper meaning and greater cultural impact. Part of this tentative embrace of the gaming medium is that the worlds that games create have steep entry costs—not so much in terms of money or even access, but rather temporally. To master the skills required to play proficiently enough to enjoy gaming itself is merely the first investment of time. The next, and perhaps most serious in terms of this discussion, is the time needed to simply explore the game space sufficiently to see it as more than a fragment. This can be ten, twenty, forty, or even eighty hours of commitment. That strikes committed gamers as a fine value for the money invested in the purchase of the game, but the sheer time demanded tends to deter the uncommitted or “casual” gamer, much less the bystander who might be interested in the experience, yet cannot justify such an expenditure of time. In this, gaming is quite different from the cinema, where a 90-to-150-minute commitment is all it takes to be part of the “experience.”
One way to understand this divergence is to realize that for all their narrative conventions, games are not best understood as interactive stories. To get a feel for what matters in gaming it is worth revisiting their earliest history, before gaming’s visuals came to rival the realism of cinema and television. Although there was a tic-tac-toe game and a tennis simulator in the 1950s, it was really Spacewar!—developed by students at MIT in 1962 for their own amusement—that stands as the urtext of gaming. With two armed ships shooting at each other while spiraling down a gravity well, Spacewar! established a few conventions of gaming that remain powerful today. These include conflict, time limits, and graphic interaction.
The game itself was a useful way to gauge the speed and accuracy of the Digital Equipment Corporation’s PDP minicomputers, and the company began to ship later units with the game in the core memory. This ensured an ever-growing group of users, who would go on to create later pioneering games for arcades and the growing home market, including Pong, Space Invaders, and Pac-Man. Arcades, consoles, computers, and handhelds—these and more were the material substrate of gaming. Over the years, designers have configured their games for single players, for a few players arranged around a television, or for millions spread out worldwide on the Net in massive-multiplayer configurations. What has not changed, no matter what the era or configuration, is the importance and specifics of game play.
There is no question that games have become a fantastically successful part of the culture machine’s impact. For their players, there is no denying that gaming brings a level of enjoyment equaling sport and a level of immersion that comes to rival architecture itself. The power of gaming to involve the committed, then, is hardly worth discussing. The longer-term issue is whether those gamers will in turn affect the culture as a whole or whether the ludic experience will be restricted to its own, hermetically sealed world. As haptic and other interfaces become more widespread in the wake of Nintendo’s success with the Wii system, whether or not those casual players become more involved with other forms of game play remains to be seen.
Two other arguments tangential to play itself have dominated discussions about gaming. The first is the effect of violence in the game space on violence in the real world, and the second is about the influence of gaming’s twitch culture on cognition. The first is an argument about content for the most part, and while it has a great appeal for parents concerned about exterior influences as well as the politicians who cater to these voters’ concerns, this is a contention that holds less and less interest as “shooters” become more and more a specific genre of game rather than an overarching category. The neuroscience and cognitive science studies on gaming are still coming in, and critics, depending on their preconceptions, divide into two camps, either bemoaning the splintering of attention that video games bring in their wake, or lauding the response time and multitasking skills that games engender in their most avid players. These are all serious issues, spanning the range from the sociological impact of repetitive actions to the neural conditioning that distinguishes gaming from other media. In the context of the assertions offered in the rest of this chapter, however, I would say that the pressing issue is whether individual games or games as systems can accrete in such a way as to create what one could call ludic stickiness.
One game that was indeed sticky involved players running around a huge and unconventional map of the world, working together to deploy resources and innovative technology to make not just their team but rather the whole globe a better place. More than a generation ago, the polymath futurist and designer R. Buckminster Fuller (of geodesic dome fame) proposed this multiplayer “design science process for arriving at economic, technological and social insights pertinent to humanity’s future envolvement [sic, a signature Fuller neologism] aboard our planet Earth.” Originally called the “great logistics game” and then the “world peace game,” it was best known simply as the “World Game.” Inspired in part by the war gaming that planners engaged in to prepare for the hot battlefields of World War II and the colder, yet protracted, conflicts with the Soviet Union that followed, the World Game was a revamping of these strategies to think about how best to use resources to ensure planetary happiness.
Often laid out on the unfolded polyhedron of Fuller’s own Dymaxion map, the game used a synergistic rather than competitive play strategy to determine how best to harness the natural resources of the planet. Fuller’s map gives a better sense of the relative sizes of the continents than the usual Mercator projections and, even more subversively, has no natural “up” or “down,” which de-privileges people’s usual expectations of maps and the sense of space that they project. Fuller maintained that the goal was to “make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone.” The World Game was a product of postscarcity thinking and 1960s utopianism, played without benefit of networks and computer simulations, but its essential message—that humans working together have the potential to craft a better world—resonates, and more than ever looks like a prototype for the networked effects of simulation and participation.
Simulation and participation drive everything from figuring information to the fabbing of WYMIWYM objects; they make possible the mixing and mashing of open-source sound and imagescapes; and they shape the ways that we work as well as the ways that we play. It is my hope that the detailed listing of all these manifestations of the computer as culture machine in aggregate proves the existence of the unimodern unimedia posited at the start of this chapter. In keeping with the spirit of this project, I hope to not simply identify unimodernism but to point toward ways in which its unimodern unimedia might deepen meaning and engagement with the world, art, and each other. What we need to confront is the explosion of information that computer networks engender.
Understanding the changes wrought by computer-inflected technologies points to the huge difference between processing data and designing its output. This conceptual clarity will also help us to categorize what kind of culture we are actually constructing in the twenty-first century. If we divide the last century into early modern, high modern, and postmodern strands (roughly 1900–1919, 1919–1973, and 1973–2001, respectively), the culture machine’s ubiquity has braided all three (and more) into what we have already identified as unimodernism. The twenty-first-century culture machine’s modernisms exist simultaneously in an ever-present database, ready to be deployed or redeployed in the cultural equivalent of just-in-time production.
The single most important issue is to ensure that the uniformity of substrate that the computer brings to culture does not produce a stultifying sameness of content. To do so, it is worth revisiting Karl Kraus’s concept of running room. In the original German, the word is Spielraum, the roots being Spiel, or “play,” and Raum, or “space.” So whether running room or play space, the concept brings with it a sense of exploration, imagination, and engagement with the unexpected. The sheer productive capacity of unimodern unimedia can and should be able to carve out this Spielraum. Running room is different from the touted benefits of diversity, however, because diversity is often another way to describe the offerings in a bazaar. If the diversity that is being offered is simply in the realm of consumption, it remains just that: consumption.
The play space I am discussing will be located within twenty-first-century capitalism, but it has to offer the choice not to buy and especially the option to make. That is one reason that the open-source community is so crucial to the future of running room. Free culture as a gift exchange offers a real challenge to the inherited affordances of market economies. The generosity of online communities serves as a way to access the powers of the always already available archive of the unimodern culture machine without falling prey to the notion that the market defines everything and that the imagination must be tied to its precepts.
We have already seen how unimodern unimedia has exploded access and content in our cultural archives. This expansion has in turn led to more opportunities for collaborative multi-authorship. This kind of unsigned multiple creatorship is reminiscent of the Greek myths and the Great Wall of China. Both the myths and the wall took centuries to build, and thousands of people contributed to the effort. We build multiple-author works as well, but now we call them Linux, Wikipedia, Flickr, Tumblr, and communal blogs. These are the cultural forms that show us a future in which we could all potentially contribute to the creation of things and systems vastly larger than ourselves. This has frequently been the effect of religious devotion, of course, and those who have been to a barn raising have experienced similar kinds of emotions.
We know how the memes of simulation and participation competed with as well as built on each other: simulation enabled functionality, and participation brought that functionality to ever more people. This was the promise of computing, and the cultures it has engendered differ radically from those we inherited from a half-century of television viewing. The previous regime offered, and continues to beguile us with, an ever-increasing plenitude of narrative entertainment (again, it does not matter whether that entertainment was called a situation comedy, the nightly news, a shopping channel, or a reality show—it was and is all entertainment); it creates habits of mind and modes of consumption.
The development of ever more complicated and intertwined systems of delivery, the downloading syndrome, can lead to a proliferation of meaning-lite, if not outright meaningless, content. That is why, even when setting out to celebrate the best of the culture machine and its products, there is an underlying fear that unexplored avenues will shut down in the face of an inexorable yet barely perceptible pressure to do less rather than more. The arguments here shuttle between the past, present, and future, and one of the fears they address is that, no matter what they want, people may end up getting a machine that emulates their televisions, but with a cell phone and credit card shopping grafted onto it.
Combine stickiness and unfinish, however, and what you create are ever-enfolding and expanding interconnections of hypercontexts. Those who want to do new work with the culture machine must ensure running room for the imagination as well as playful space for mindful downloading and meaningful uploading. This is the unimodern dream—less grand than its predecessors perhaps, but no less worthy.
Alessandro Lanni
The new platforms on which we consume information—in a sense broad enough to go far beyond the daily news—and on which we produce content ourselves are still quite recent resources, prone to constant change; it is these changes that keep them obscure. Furthermore, if I think of two social networks such as Facebook and Twitter, they are apparently shaped on one side by the plastic push of the mass of users and on the other by the business companies that run them. Sometimes these two pressures converge in a common direction; at other times one affects the other, so that either the users condition the companies or, vice versa, the companies impose their own orientations on the users. In short, the in-progress nature of the new sources of information does not allow us to propose any reliably profitable strategies: as with any solution, it is up to us to choose the most suitable tools. That said, both familiarity with the tools and competence in the topics that concern us are surely a necessary basis for improving our use of the 2.0 platforms.
When Clay Shirky speaks about information overload as a failure of the filter, he is pointing both to a theoretical comprehension and to a practical way out. But in order to learn how to select, we need at least to have clear ideas: entering the actual Harvard Library without knowing how it works causes the same displacement that we experience on the Web.
Independence of thought is essential in both the analog and the digital media landscapes. I do not think that things are different now, and the Kantian motto “Sapere aude” remains a useful compass even more than two centuries later. Surely, the more often we use these instruments to obtain and produce information and knowledge, the greater our responsibilities will become. The great agencies of knowledge, schools first of all, will have to cope with this disintermediation.
I do not know whether the dynamics of the network are self-regulating. There are cases in which central governments manage to control social networking platforms. Certainly, the top-down mechanism of mass media communication must today confront competing instruments that follow different rules. From a general point of view, when we enter the dimension of Twitter, for example, we all start from scratch; but once inside, the paths diverge, and the forms of power that count are different from the ones that counted in the traditional mass media, though both remain a matter of power.
The reference to the re-tribalization brought about by what McLuhan called the electric media is by now an obsolete image within reflection on the media. The disintermediation produced by the Web, by social media, and by the increasing portability of connection tools is in any case an epochal novelty, one that contributes to redefining the roles of, and relationships among, producers, owners, certifiers of knowledge, and simple users.
Kant criticizes the eighteenth-century idea of an objectivity of beauty traceable in almost explicit rules. Rather, the Kantian beautiful is grounded upon the Urteilskraft, which grants aesthetic judgments both subjectivity and universality, or at least the very aspiration toward them. All the same, on Twitter, generally speaking, it is difficult to find a shared background that is also objective. Everyone determines his or her own timeline flow, depending on interests, tastes, and preferences, and all of that is totally subjective.
The filter is a decisive notion for many of the arguments we share about the Web today. Who filters? How does one filter? On what grounds? How does the construction of knowledge, whatever it is, change if the filter is no longer certified objectively from the outside? All these are issues that should enter philosophical discussion, by virtue of the change in the constitution of knowledge, that is, in the way it is fed and ratified.
I would say it is exactly the other way round. Twitter is a space in which sense can be constructed (though not necessarily) in a free way. For sure there are structural constraints, like the 140-character limit; nonetheless, the possibility is always there to test new paths in which a transcendental subjectivity (that is, collective and no longer individual; a cooperative subjectivity, I would say) may be recognized.
The way in which sense is determined on a platform such as Twitter makes me think of Pragmatism: sense is determined less by virtue of an external reference than by virtue of a communitarian trigger. For sure there is an oscillation between the empirical and the transcendental, and it is bypassed when using Twitter.
* * *
One of the major topics (and problems) within the transformation of our information diets—a reaction first to the digitalization of knowledge, and later to the circulation of this knowledge through the Web—is the filter. How do we select information in the face of the constant, ongoing assault to which we are subjected from the many devices through which we shape much of what we currently know? How do we distinguish quality information from cheap information? How do we sort out the so-called information overload? What is important for us and what is not? The discussion around these questions is growing at a speed that, beyond the quality of single contributions, is surely a sign of the centrality of the topic within the common doubts and frenzy awakened by the transformation in progress, which we are all going to face whether willing or not.
Nevertheless, the purpose of this chapter is not so much to add a piece to the puzzle as to focus on the philosophical content of many of the questions above. I want to stress that the issue of the filter as we experience it, for example, on Facebook and Twitter, is a philosophical issue at heart. On top of that, I would like to emphasize how this issue has been at the center of philosophy since its origins in the West about 2,500 years ago.
First let us consider how we decide what keeps plurality together. How do we categorize and sort out a chaos in which there is no a priori organizing principle? If we go over the texts that try to rethink the nature of the Web and of knowledge in the Web age, the great interrogations of philosophy are apparently always around the corner.
We are not interested in deciding whether Clay Shirky[1] is right or wrong when he says, “It’s not information overload. It’s filter failure,” but we realize that the terms of the discussion have not gone much further than, for example, Kant’s “transcendental schematism,” when the philosopher was searching for a principle to hold intellect and experience together. That principle, which in the third Critique would be developed into the capacity to judge, Urteilskraft, marked a great aesthetic and epistemological way out, one with which the story of schematism eventually came to an end.[2]
Now, to show how much philosophical depth lies within mechanisms that seem utterly removed from it, let us take an ever-growing social network like Twitter, the microblogging Web site used by millions of people around the world, which defines itself as follows: “Welcome to Twitter. Find out what’s happening, right now, with the people and organizations you care about.” Signing up is as easy as it gets: a username (for instance, @alessandrolanni), a password, and you’re in.
But “in” what, exactly? Nothing. When you sign in, you’re into nothing. Or better, you’re inside something that can potentially become something else but has no shape yet. Our TimeLine is literally empty because we have not yet started to follow the feeds that will compose the plurality of knowledge filtered by Twitter. Sure, there are some suggestions (follow that actor or singer, follow this important magazine), but we are basically alone, and it is up to us to build a vision of the world sub specie Twitter.
It is from this kind of tabula rasa that our vision of the world starts, according to the blue bird’s social network. While information inputs run one after another before our eyes, a personalized stream of knowledge takes shape, depending on our selection. There are no a priori categories to prescribe the way to bring multiplicity back to a concept. On Twitter, our ever contingent and changing interests are what dictate the rules of our knowledge. So a kind of “historical a priori” comes into play, one that keeps knowledge and individual interests closely together.
When you sign in, you have an empty box that you fill with your own prejudices, your own choices, and so on. An absolute TimeLine is therefore impossible; no two users face the same stream of tweets. The multiplicity that we find on our computer screen or on our smartphone is always pre-interpreted; it is a multiplicity for sure, but one ordered according to a grid that we ourselves imposed, a grid that excludes what we are not interested in and admits only what we believe may be relevant. There is no view from nowhere, no absolute knowledge on Twitter. The information descending from the social network is necessarily partial: what I know is what I choose to know.
The filter, then: the greatest problem of the horizontal, open, participatory Web. Who is to filter? On what basis? Why should we agree a priori that my filter is better than yours? Can we even say that? Maybe not, since the filter has its own unavoidable historicity and contingency. There is no best selection principle; it is simply not possible. At the same time, if everyone acts alone and is perfectly at ease in this solitude of choice, what permits us to compare one Twitter-based knowledge with another? If everyone chooses his or her own way to tweet, if everyone inhabits a finite dimension without external rules imposed by an arbiter, if everything changes, how can one criticize a given use of Twitter, even a failed one? Where to start? Ultimately, if Twitter does not exist as shared knowledge, does this mean that “anything goes”?
If there is no shared space, if sense is unstable and precarious, it is hard for critique to get through and decide about right or wrong, about the “best” or “worst” use of the social network, if we do not care about “correct.” If Twitter does not exist, who is to criticize? The risk is a move from knowledge to power; whoever is more persuasive, and not always by virtue of rational or even reasonable arguments, will impose a stronger vision.
Apropos the preoccupation around information overload, David Weinberger cleverly wrote, “We shouldn’t freak out about information overload because we’ve always been overloaded, in one way or another.”[3]
What does it mean to recognize that we have always been overloaded with information, with data assaulting us every day? In our view, it means bringing up an issue over which the most advanced minds of the Web revolution are at each other’s throats: a question that has crossed the whole of Western philosophy, from Plato’s Sophist to Kant’s Critique of Pure Reason, and that seeks its answers in the twenty-first century as well.
Weinberger focuses on the question in this way: “If our social networks are our new filters, then authority is shifting from experts in faraway offices to the network of people we know, like, and respect.”[4]
In sum (and abruptly): in the 2.0 era, the great problem of the Metaphysical Deduction elucidated in the first Critique is blown away. When Kant strove to ground the constitutive categories of objective knowledge on the sturdy pillars of logic (Aristotelian logic, at that time), he was dealing with the very guarantee of authority that is currently being divided among single individuals. Reputation on the Web is earned and maintained (or lost) every day: there is no absolute criterion to certify the quality of the filter.
The Filter Bubble is a book published in 2011. Its author, Eli Pariser, is an activist in his early thirties, one of those figures we can picture even without knowing them, thanks to all the magazine profiles of recent years. He did not write a philosophy book; it is rather a complaint about the change occurring through the revolution in personalized information diets and behaviors online.
Given the freedom to select content and knowledge that the Web arranges, one might think that our horizon of freedom is wider, but, as Pariser states clearly, it is not. It is true, to some extent, that we are media to ourselves, that we are filters free from principles set elsewhere, that the only choosers responding to our immediate interests are we ourselves. Yet the longtime director of MoveOn explains why things do not work that way. Web moguls like Google have been working all along to “direct” this freedom. As of today, many companies strive to guess what we like in each area (from food to politics) in order to profile an information filter targeted for marketing, that is to say, to sell us the information and products we are supposedly interested in.
If, in the 2.0 era, the a priori of the categories of knowledge is nullified, and each a priori, each principle for selecting the multiplicity, is contextual, historical, and dependent on our obviously contingent interests, there is nonetheless a “pole” that works to prepackage this multiplicity. If Google results are preshaped, prechewed, then knowledge itself is under some form of external control. If the Mountain View search engine acts not only as a “form of experience” (like space and time in Kant) but also as a facilitator that treats our partial and immediate interests (previous surfing, queries, online shopping) as absolutes, then something in the new mechanisms fails.
New-generation filters—Pariser is considering Google and all the tools that select information on the basis of our preferences—“are prediction engines, constantly creating and refining a theory of who you are and what you’ll do and want next”[5] (emphasis added). In a few words, here the philosophical core of the filter issue is proposed anew, just as philosophy addressed it 2,500 years ago. The filter tells us who we are, what world we know, and how we know it.
There is a militant streak in Pariser’s book, a wish to warn readers and Web surfers about the transformation of recent years. We are not interested here in the evaluative side of the essay; we will leave that to others. The Filter Bubble helps us to focus on the filtered nature of the reality descending from the Web (the social Web, but also search engines like Google). The filter is ours, always and forever, even though the multiplicity we face is not as neutral as we might think.
If Dan Gillmor’s catchphrase “we are the media”[6] expressed the consciousness that we are all able to create a potentially worldwide message through the new media, the new catchphrase—besides the call “hey there, we’re here”—should be “we are the filter.” The value of our individual and/or collective presence on the Web resides more than ever in the capacity to gauge quality through the filter, to select, distinguish, and establish a hierarchy of contents. I am the filter I select.
In this era, with the affirmation of what the Catalan sociologist Manuel Castells called “mass self-communication,” everybody, at least potentially, is able to shape content by himself or herself.[7] The key word is “self.” One tends to forget, however, that the fruition of content, too, the shaping of a knowledge scenario, resides in the self.
All of this could be read in light of some famous propositions from Ludwig Wittgenstein’s Tractatus logico-philosophicus: “The limits of my language mean the limits of my world” (5.6). Writing about one hundred years ago, the Viennese philosopher affirmed the total mirroring of language and world, while identifying a nameless edge in that “my,” in that hidden subjectivity. Slightly further on, Wittgenstein writes, “The subject does not belong to the world but it is a limit of the world” (5.632).
Let us ask: isn’t it precisely the worry that the world will turn into my world that inspires all those who criticize the personalization of knowledge through the Web?[8] Solipsism, the risk of becoming closed monads, is always around the corner according to the “apocalyptic” critics who warn the naive users.
However, there is a further question: does some kind of multiplicity, selectable and filterable through our categories, even exist? In light of what was said before, the answer is no. Without an ordering principle there is no multiplicity to look at; the multiplicity is rather a retrospective byproduct of our reconstruction of the mechanism of knowledge on the 2.0 Web. Let us say it again: our TimeLine, on Twitter but also elsewhere, is empty if we do not apply a filter, some selection principle that starts a journey headed somewhere. Without a decision, there is no multiplicity descending from that decision.
The passage from chaos to sense is the great mystery of humankind. How is it that words (or concepts, depending on latitudes and eras) make things speak? Here, the experience of sense rising from the disarray of social networks confirms the same philosophical question. How is it that empirical multiplicity turns into science? How does noise turn into signal when the central system (Kant’s “I think”) that donates sense to confusion, which on its own terms is nonsense, is blown away, as it is on the Web? The problem of synthesis is brought up to date.
Many people are concerned by the absence of a higher-level principle to regulate sense on Twitter. This strain of thought posits that the possibility of making sense is lost forever, and in so doing it denies the daily experience of millions of people who are indeed able, for better or worse and whatever its quality, to organize some knowledge on the Web. Knowledge does not disappear; the problem, for those who worry, is that the objective principles that underpin knowledge are at risk.
“The filter exists in any case, and it is understood in retrospect; it intervenes in a second moment,” writes Sergio Maistrello, one of the leading interpreters of what has been happening on the Web in recent years.
It is collective and distributed, based on the activity of assimilation and relaunch at every single node of the Web. It follows that the mere presence of a piece of content on the Web says nothing about its quality; there is room for every kind of idea, even the most aberrant, and it is up to the community of interconnected people to distinguish what holds public interest from digital trash, what is useful from what is inappropriate.[9]
The problem, as it is often framed, seems comparable to what Kant would call the problem of “subsumption.” If a principle that somehow allows the organization of millions of tweets disappears, can one still make use of all the excess knowledge into which the Web projects us? Many critics of the Internet see this as one of the major cognitive problems to which the current information revolution exposes us. But must a philosophical question stop at the level of this reasonable worry? I do not think so.
In his constant pursuit of a mechanism that could hold knowledge together, Kant probed first logic and later aesthetics in search of the unifying principle.
So now we find ourselves in a condition similar to the philosopher’s: we have to decide what holds our knowledge together in the Twitter era, where the summa of knowledge now based on Twitter’s TimeLine comes from. Should we appeal to personal taste, to contingent interest? Should we appeal to “I like it this way” or “this is what interests me”? It is possible, sure, but philosophy would then be put aside.
If we would rather question what is changing (and what stays the same) within the present revolution, we may try to discuss what appears before us as given data.
For example: does a multiplicity even exist without any principle of selection? Does information exist—no matter how overabundant, flawed, or scarce—without a “we,” or better, without an effective “I” that places itself at a center of gravity, one among the infinite centers of the universe[10] described by the change the Web 2.0 has brought? Maybe not, and Twitter’s own initial void demonstrates it. Without millions of Twitter users, not only would there be no content posted every second in the short form of 140 characters; there would be no content at all, because no one would activate the mechanism of selection, the filter. “Thoughts without content are empty, intuitions without concepts are blind,” wrote Kant in the first Critique. In the era of mass self-communication, to underline this mutual dependence, we could translate the Kantian dictum as: “filters without tweets are empty, tweets without filters are blind.”
“Information overload is just filter failure” means not only that we must acknowledge the limits of the content that runs on Twitter, but also that we must be conscious that our selves as filters count: without filters there is no information, much less information overload.
Every day we build millions of paths across the Internet, and however imperfect they are, we cross a territory that is also our territory, our map.
In this dimension it is hard to appeal to something from outside (as Aristotle’s logic was for Kant) to govern our navigation, precisely because it is our world, and its limits are established by all our friends and by those we follow on Twitter.
1. Clay Shirky is one of the most distinguished authors on the effects of the Web on individuals and society. His books include: Here Comes Everybody: The Power of Organizing Without Organizations (New York: Penguin, 2008) and Cognitive Surplus: Creativity and Generosity in a Connected Age (New York: Penguin, 2010).
2. One must recall Emilio Garroni (1925–2005) and his pivotal work on the third Critique, in particular Estetica ed epistemologia (Roma: Bulzoni, 1976) and Senso e paradosso (Bari: Laterza, 1986), where he develops a new interpretation of Kantian aesthetics beyond the study of beauty.
3. David Weinberger, Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (New York: Basic Books, 2011), 10. Even though this is not the proper context, it would be interesting to delve into Weinberger’s distinction between filter out and filter forward.
4. David Weinberger, Too Big to Know, 10.
5. Eli Pariser, The Filter Bubble (New York: Penguin Press, 2011).
6. See his celebrated We the Media (Sebastopol, CA: O’Reilly Media, 2004).
7. Manuel Castells, Communication and Power (New York: Oxford University Press, 2009).
8. My mind goes back to Cass Sunstein’s celebrated Republic.com (Princeton: Princeton University Press, 2001), but also to several studies published over the following decade.
9. Sergio Maistrello, Giornalismo e nuovi media (Milano: Apogeo, 2010), 51.
10. Some describe this transformation as a “Copernican” revolution of cosmic proportions. I think it is more appropriate to refer to Giordano Bruno’s idea of a universe with a multitude of centers rather than a universe in which the Sun simply replaced the Earth as its cornerstone.