Discussion

Alex Williams: I find this very interesting, the idea James puts forward of formal languages being a kind of technology, a technology which isn’t just a way of organising intuitions about the world, but instead has the potential to surprise us. Things could come out of them which are not simply reducible to the input, in the sense that you can learn from them, and that’s really intriguing. My question, given that abstraction of some kind seems to be at the core of this (in that it’s not just expressive of common sense) is how are we able to form these languages in the first place—formal languages that are not just an abstract system, are not disconnected from the world entirely to the point of just being a game. They have some degree of traction within the real and yet they still have abstraction. I wondered if you could expand on this, especially given the current currency of the ideas of philosophers such as Giuseppe Longo, who present a story about how abstractions, mathematics for example, get ‘into’ us from the outside.

James Trafford: Yes, it’s a good question; I think it’s tricky, partly because there is no clear methodology for approaching it. I think the short way of thinking about this would be to suggest that you take the ‘good’ bit of grounded cognition, where you can understand conceptual activity as essentially grounded in some sense. We can think of this as an essentially associative structure where there is a relationship between this and the way in which a cognitive system is in bodily interaction with the environment through various things like gestures, etc. But this isn’t all that there is to cognition, reasoning, and so on. Even six-month-old children, and even rats, seem to have some sort of generalized association relating to negation—there are a bunch of experiments suggesting that even very young children have some sort of abstract concept of number, for example. These processes of cognition are all going to be pretty mundane, and, of course, we use things like elementary language to begin to structure these associations and to modify them in different ways—systematizing social interactions and that kind of thing. But that’s not the whole story—this is the point of the account of doxastic conservatism—because we can use the technologies of formalization to combat this, and think of it in terms of a generative process which expands the resources of our language, differentiates our abilities to reason, and so on. In other words, we can develop new abilities that far surpass our initial biosocial conditioning. For example, I think that the traditional idea that we can tether logic to some sort of independent logical structure really holds this back by presuming we have some kind of transparent access to semantics. Instead, we need to think of logic in terms of rational constraints, which are open to processes of revision and navigation, and then whatever we understand by logical consequence and semantics is going to need to be far more flexible.

AW: I’m also interested from the point of view of the geometric aspect of Longo’s work: How is it that you go from some kind of bodily movement through space, or eye-tracking, the ability of predatory animals to track the motions of their prey, to this kind of conceptual structure? The question is, in a crude sense, how do you transpose from these physical gestures towards some kind of cognitive or rational process—i.e. make the ur-abstraction or original abstraction? And having begun such processes, how is it that the ur-abstraction is preserved, maintained throughout (with mathematics, for example) thousands of years of transformations and elaborations? How do you preserve the originary abstraction?

JT: Basically, the initial processes of cognition of these things are going to be grounded in the world in some sense. For example, something approximating negation is going to be learnt by children, possibly as a very basic incompatibility relationship: something can’t be both red and green at the same time. And what happens is that you’re going to get this introduction into representational structures or semantic structures that are associatively conditioned by input, by the habit of these things being triggered. But then we’re using language to systematize those structures, to revise them, and using something like logic to construct community-level norms.

Tom Trevatt: I was thinking of ways of drawing this whole conversation into the more mundane sense of the aesthetic, of art. Ray spoke a bit about this notion of Prometheanism. There are ways of connecting that to things Benedict was saying earlier, about the role of design; and there’s a way in which you can think of design as being precisely, in its original function, to do with changing the world. This is a very important aspect of design, but also, as Benedict pointed out, within this is a disavowed sense of design as, increasingly, designing people’s behaviour; and even reshaping their subjectivities—are they comfortable with this?

To bring in art, now, what’s the relation of art to a promethean Marxism? Can it be a tool within this sort of project of remaking ourselves, remaking the world? Or is it something to do with the structure of art—is its resistance to a purposiveness, to instrumentality, going to be a block to that, limiting art to being a supplement to something supposedly more instrumental? So basically, how do we situate art in relation to that wider project?

Ray Brassier: This is where the relationship of the aesthetic to the conceptual becomes relevant, though also problematic. Art, whatever physical medium it’s operating in, ultimately must have some conceptual content. It must operate conceptually on some level. Now, you can have a sophisticated understanding of conceptualization that does not identify conceptual content with propositional content, so this doesn’t have to be a banal claim. For instance, if you ask: How does a piece of music embody thinking? Obviously it can do so in an incredibly complicated way. The fact that its conceptual content cannot be linguistically recoded in any kind of digestible propositional form doesn’t mean that it doesn’t have conceptual meaning. The argument to the contrary seems to run: Here’s a piece of music which is obviously cognitively sophisticated in terms of everything that is actually going on in it—but how could this possibly be conceptual if by ‘conceptual’ you mean something that can be encapsulated in propositional form? I think this is an unnecessary simplification. Although everyday conceptualization—whose primary function is practical or communicative—uses linguistic resources, this doesn’t mean that every thought can be straightforwardly individuated in terms of a set of propositions. And this is as true for literary art as for music or the plastic and performing arts. If concepts are understood in terms of their functional role, then perhaps what distinguishes the thinking peculiar to art consists in constructing nonpropositional functions by making materials—linguistic, sonic, plastic, etc.—do things we don’t expect in ways we couldn’t have anticipated. Art is the construction of function, as opposed to the relaying of preestablished function.

By obliging you to conceptualize its nonpropositional content, art may make you think about the status of sensation, about what hearing is or what seeing is, and ultimately, what feeling is. So art can perform a subversive epistemic function which feeds into Prometheanism’s broader emancipatory agenda. It seems to me that modernism in art is the idea that by challenging clichéd ways of perceiving, you can encourage people to think about the way they see things, rather than continue to see the world as it is generally accepted to be. This is to begin to understand things differently, but also to expose the various invisible mechanisms that condition habitual perception. This link between cognitive subversion and emancipation is what I would like to retain from the modernist ethos.

So I would say that art can play an emancipatory role consonant with the Promethean imperative, but not by proclaiming this ideal in some platitudinous sense. Promethean art need not be about ‘Prometheanism’: it can challenge our beliefs about ourselves and our world in a way that invites us to remake both without having to enunciate this as a propositional injunction. Art can further the Promethean project without subordinating itself to it as an extrinsic goal. In other words, it can be useful, but precisely by preserving its uselessness. It is the subversion of designated function that functions revolutionarily, not the assertion of the need for a revolutionary function. Perhaps this is a cliché but I still believe that genuinely revolutionary art has to be abstract in the sense of uprooting default modes of thinking and feeling. It should challenge bourgeois epistemic norms, by which I mean cultural conventions stipulating what we ought to think or ought to feel. This is not a new idea, but I want to defend it, and insist that art uses the sensible to teach you to distrust your sensations—don’t trust what you feel!

Peter Wolfendale: There’s a Deleuzian distinction between art as communication and art as composition which is really useful here. I think Deleuze’s critique of the communicative use of art, where the art is giving a message to the viewer, as fundamentally reducing art to this sort of discursive register, is valid. But you can easily go too far in that direction and say that therefore art cannot have any sort of conceptual element to it. I think the important point is that, even if you view art in terms of composition of affects and percepts, concepts are materials of composition. You can compose with concepts.

In relation to James’s presentation, I wanted to return to the whole logic question, and why I think Brandom can answer the kind of things you are talking about. In Between Saying and Doing Brandom gives a really complicated story about what logic is and how we can have logical abilities.1 His basic idea is called logical expressivism, and it’s the idea that what logic does is enable us to make explicit—it’s a tool for making explicit what’s already implicit in what we say. This doesn’t mean there has to be a single logic—there can be different logical vocabularies. I think this is interesting in relation to your idea of doxastic conservatism. What I would add is that, if you take a sort of Sellarsian approach to semantic content—if you see the semantic content of a nonmathematical expression as constituted by its being involved in material inferences, then you see these material inferences as being fundamentally, principally, non-monotonic. You can tell quite an interesting story about what it is to be caught up in a given horizon of possibility, because basically what you get with non-monotonic inference is that the inference is only good as long as you accept a whole bunch of assumptions. Another way of explaining that would be in terms of the shift from non-monotonic frames of inference to purely monotonic frames of inference, which is what you get in mathematics. So that is the way I saw your talk about semantic activation and desemantification, and how this relates to logic. What I want to say to you is, do you really resist this sort of semantic picture? Or do you think that there’s an alternative semantic picture?
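[The non-monotonicity Wolfendale mentions here can be illustrated with a standard textbook example, the ‘Tweety’ case, which is not one used in the discussion itself. A material inference is good only so long as background assumptions hold; adding a premise can defeat it, whereas formal-mathematical consequence is monotonic under premise expansion:]

```latex
% Defeasible (non-monotonic) inference, written with the
% standard 'snake' turnstile: adding a premise can defeat
% a previously good inference.
\[
\mathrm{Bird}(t) \mathrel{|\!\sim} \mathrm{Flies}(t)
\qquad \text{but} \qquad
\mathrm{Bird}(t),\ \mathrm{Penguin}(t) \mathrel{\not\!|\!\sim} \mathrm{Flies}(t)
\]
% Monotonic consequence, by contrast, obeys the structural
% rule of weakening: extra premises never spoil an inference.
\[
\frac{\Gamma \vdash A}{\Gamma,\ B \vdash A}\ (\text{weakening})
\]
```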

JT: To an extent I might disagree with some of this picture, though it’s going to depend upon how it’s cashed out. One thing that’s really important is that I don’t think that you can define the meaning of logical terms using inferential rules if they are supposed only to explicate underlying implicit linguistic moves. There’s a whole set of issues that are raised within this understanding of inferentialism. These concern, for example, the ways in which the sort of proof-theoretic characterization of logic in Dummett is connected with anti-realism and its emphasis on provability. Then, of course, there are issues relating to what we say about bad inferential patterns such as the famous example of ‘tonk’. This relates to how we understand meaning to be defined by certain inferential roles; which roles get to count as defining meaning, and that kind of thing. We need, for example, some way of ‘weeding out’ the inferential rules that seem like they should be kosher, but clearly aren’t. In this sense, I think we require a very flexible understanding of the way in which the semantics of expressions is determined by inferential rules. I do think that this is entirely possible though, when we move to an understanding of rules in terms of rational constraints. Also, if we start with Gentzen’s symmetric sequents, then there’s a clear sense in which the syntax can be seen as generative of semantics (using a generalized form of Lindenbaum-Asser constructions), and so, semantics is, in a sense, internal to the way in which we construe syntactical rules, i.e. without reference to an external reference or independent reality.
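[The ‘tonk’ example Trafford refers to is due to Arthur Prior. Its two rules each look like a legitimate inferential definition, yet taken together they trivialize consequence, which is why some principled way of ‘weeding out’ such rules is needed. In a standard presentation, not one given in the discussion:]

```latex
% Prior's 'tonk': the introduction rule borrows its form from
% disjunction, the elimination rule from conjunction. Chaining
% them licenses the derivation of any B from any A.
\[
\frac{A}{A \ \mathrm{tonk}\ B}\ (\mathrm{tonk}\text{-}I)
\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\ (\mathrm{tonk}\text{-}E)
\qquad
\text{hence}\quad
\frac{A}{B}
\]
```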

PW: It feeds into what was said about functional classification: there are levels of functional classification of logical operators that are not so fine-grained as to pick out an operator within a particular domain—they don’t distinguish between classical negation and intuitionist negation.

Robin Mackay: James talked about the operations of desemantified formal languages; then Alex talked about the fact that we can’t evade the task of finding some new ways to grapple with the abstractions that surround us. How desemantification, abstraction from, let’s say, the manifest image, actually works has emerged as a key question.

Now, if aesthetic experience is never preconceptual, if there’s no ultimate guarantor of the distinction between abstract and concrete, then surely all we are dealing with is abstractions (plural) from X to Y. Certainly as far as animal perceptual behaviour is concerned, we can say that it already operates through abstractions (Longo; but also, already, Bergson’s point about perception always being a subtraction from what is given).

New abstractions can come to command and direct the substrate from which they emerged, and even potentially modify it in an enduring way. It would be foolish to think that any abstract model can ever perfectly fit the reality it’s trying to grasp, and therefore there is always a gap where novelty can slip in, there’s always a kind of grinding of the gears in which new problems are produced, no matter how powerful the purchase of a given abstraction may be. Surely that production of new problems simply is the story of collective cognition.

Because whatever may take place on an individual level in terms of the loosening and shifting of these given abstractions only becomes truly significant when it takes place within a collective cultural context. And it seems to me that art (institutionally understood) simply doesn’t do this: as an experience or set of experiences, it doesn’t have a long-term relation to behavioural feedback loops or to collectivity which would enable it to make its abstractions real. Art doesn’t do that whereas, for example, computer games, music, popular use of technology, and so on, certainly do—they manifestly change behaviours, modes of thought, gesture, action. Look at smartphones, Facebook, Twitter: the way that people interact with the world psychologically, physiologically, is genuinely being changed. It’s not philosophers who are doing that, and it’s not artists; it’s corporate design and strategy, and technology. These produce real change and real, novel problems, pathologies even. Understanding the concretizing processes of abstraction, the way in which they make themselves real, is the only way one can hope to direct them in the Promethean sense that Ray suggested. And it’s not clear how art might contribute toward that task given its obsession with providing indeterminate ‘spaces of reflection’.

Benedict Singleton: My question is actually related, and is about Prometheanism. That idea seems very important to all of you. What I wonder is how it relates to whether there’s a sort of contradiction between certain accounts of embodiment, of cognition or whatever, and the rejection of the myth of the given. I think it was suggested that there was some sort of tension between the two.

RB: I don’t think there’s necessarily a tension between acknowledging the embodied aspect of cognition and rejecting the myth of the given. The tension can be dissolved by distinguishing between conceptualization and representation, or thinking and mapping. The hypothesis that brains originally developed as navigational mechanisms suggests that the most elementary function of cognition consists in representing the environment well enough to escape predators and seize prey. So from a biological point of view, representation’s primary task is to map the organism’s environment. To the extent that all thinking developed as a function of biological locomotion, it’s correct to say that mapping is thought’s originary function.

But even if thinking originates in movement, it would be a genetic fallacy to insist that all thinking is necessarily subordinated to navigation and the plotting of trajectories in space-time. This may well be representation’s most elemental function, but concepts are not representations, and the concept’s role is not representational. It’s crucial to distinguish these two aspects of cognition: the conceptual and the representational. Concepts are individuated in terms of their inferential role, whereas the role of representations is mapping. In one sense, the myth of the given consists in conflating concepts with representations, or inferential function with mapping function.

Of course, this is not to say the two modes of functioning are wholly disconnected. We need to study the neurobiological mechanisms that underlie our conceptual capacities using the resources of empirical science. But the contemporary discourse of ‘embodiment’ is not motivated by any regard for naturalism. It’s not concerned with understanding the neurobiological basis of cognition. Embodiment is understood in a phenomenological register as ‘lived experience’ and inflated into an irreducible datum, an unexplained explainer. This is precisely what a neurobiological understanding of embodied cognition ought to displace. There is a way of understanding the constraining role that our biological legacy exerts upon our conceptual capacity, but it does not consist in reducing conceptualization to representation. To do so is to fall back into the empiricist variant of the myth of the given by treating concepts as representations. This is also to confuse reasons with causes. The world causes representations and representations may cause occurrences in the world, but even if concepts supervene on representations, they are neither caused by things nor can they be the causes of things. We have to use concepts to understand the neurobiological processes that underlie conceptualization. But to identify those concepts with the processes they are being used to investigate is to engender paradoxes which threaten science’s epistemic integrity.

BS: I think that perhaps in the account of Prometheanism there may be a contradiction between the articulation of conceptual structures and lived experience, which may be related to how we can understand abductive thinking as being a kind of going back before going forward.

RB: Yes, perhaps there is. Abduction can be understood in terms of cognitive neurobiology because it is rooted in association. Associative synthesis is at the heart of connectionism. There’s good reason to believe animal representational systems are basically connectionist. I think James mentioned the kinds of associative networks that develop these preestablished pattern configurations to synthesize information. Churchland says something similar in A Neurocomputational Perspective: a prototype vector is activated as the best explanation for a heterogeneous perceptual input.2 He proposes a neurocomputational account of abductive inference. Such inferences facilitate the sort of recognitional prowess exhibited by higher organisms. But I would insist that there is a difference in kind between the connectionist machinery at the heart of animal representational systems, whose operations are associative and Humean, and conceptual explanation in the strict sense, whose functioning is inferential and Kantian. In other words, we can explain how animals are engaged in predicting their environment, and we know that we rely on some of these same neurobiological resources in predicting our own environment, but we are also able to conceptualize these resources such that they become reflexively articulated. We are self-conscious about the ways in which we are engaged in predicting the world, in a way in which animals arguably are not (I don’t mean ‘self-conscious’ in a phenomenological sense here; I just mean reflexive). So although there is a definite continuity from sentience to sapience, there’s also a discontinuity or phase transition: something important happens in between sea slugs and humans—there’s a difference in the kind of cognition at issue, rather than a mere difference in the degree of cognition, although I realize some naturalists would want to deny this.

Nick Srnicek: Building on Ben’s comment regarding the origins of rational thought, I want to think about how we understand the ends of rational thought. I think there’s a huge open question here. For instance in Reza’s work I think there’s this idea of ramifying the pathways of conceptual consequences, and tangentially approaching a complete picture. Sellars is right I think to highlight this normative aim of a complete and true picture of reality, and I think that answers to the regulative ideal of rationality. But is this aim of scientific knowledge sufficient to underwrite political projects? I’m not sure it is. And so I think there’s this big question about what the normative end of this stuff might be: What is accelerationism for?

RB: But isn’t it part of the definition of acceleration that we don’t know where it might lead? How could we anticipate its end on the basis of our current cognitive resources? Still, I agree we must be able to rationally conjecture certain necessary characteristics of this end, otherwise it collapses into an ineffable alterity, which we have no good reason to pursue. But this is not to say we can predict it.

AW: There is certainly an element of that in Negarestani’s idea of navigation—the notion that there isn’t a predetermined end point or teleological objective, but there is a certain teleodynamic directionality. I think this is part of what Nick Srnicek and I want to extract from Reza’s work, to help develop a kind of leftism which is distinct from traditional Marxism—the idea that we don’t know what the ultimate end will be, and that processes of navigation, in thought and action, inevitably open up new horizons, horizons which may be well beyond our present imaginings.

RM: Isn’t acceleration to do with hooking the project of human emancipation to the essentially psychotic project of scientific rationality which demands that, if any path can be explored, then it must be…? Which is not an end as such, it’s just a protocol for escape.

BS: I think this relates to something which is a nice bit of corporate cosmology around platforms, and how companies like Facebook and Google actually work; it completely wrongfoots traditional business theories, because they are not products or services.

This relates to how we understand platforms, and how the design aspects regarding platforms relate to plot form. One of the interesting things about platform logic is the following: take Facebook moving into Africa—‘the next billion Facebook users’, as they are quite explicitly calling it. Is there a rationale for it? They just know that if they do it there will be more stuff to do. I mean, it opens up a massive space of possibility. Because they are in a particularly capitalist position, they are able to persuade other people to give them money to do it. Effectively, that kind of platform logic seems to be something which is in line with certain of these ideas about acceleration and so on: you build it because then you open up further possibilities….

AW: That’s why epistemic accelerationism is still accelerationism: there is this psychotic element to it. It does not become reasonable. It is rational but it does not necessarily become reasonable.

1. R. Brandom, Between Saying and Doing: Towards an Analytic Pragmatism (Oxford: Oxford University Press, 2010).

2. P. M. Churchland, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (Cambridge, MA: MIT Press, 1989).