CHAPTER 2

The Troubled Water of the Ideal Language Paradigm

Specters of the Ideal Language Paradigm

The Ideal Language Assumption

The idea of an ideal language traces back to the early history of Western philosophy.1 According to Aristotle,2

Spoken words [that is the sounds] are the symbols [that is the signs] of mental experience [including affections of the psyche, of the soul, and thoughts], and written words are the symbols of spoken words. Just as all men have not the same writing, so all men have not the same speech sounds, but the mental experiences, which these [that is the written and spoken words] directly symbolize, are the same for all [humans], as also are those things [in the empirical world] of which our experiences [including concepts, thoughts, …] are the images [such as representations, likenesses, …].

Aristotle’s account presupposes the existence not only of substances, but also of real universals (forms or essences realized in matter) accessible to human minds. Such forms are the basic objects of scientific inquiry, and the corresponding universal concepts are the building blocks of language and knowledge. Hence, the structures of humanity’s universal concepts are isomorphic with the fundamental structures of reality. The isomorphic structures can be described in an ideal language into which all human languages are translatable. We call the view expressed in the citation of Aristotle the isomorphy thesis.3

In the time of Leibniz and Descartes, constructing an ideal language was a project of great concern to philosophers.4 Leibniz, whose interest in the Chinese language is well known (Perkins 2004), thought that in five years a few selected persons might be able to devise a universal language (characteristica universalis), a kind of alphabet of human thoughts (Leibniz 1677).5 In a letter to Mersenne, November 20, 1629, Descartes wrote:

I would dare to hope for a universal language very easy to learn, to speak, and to write. The greatest advantage of such a language would be the assistance it would give to men’s judgment, representing matters so clearly that it would be almost impossible to go wrong.

After the appearance of symbolic logic, Frege gave the ideal or universal language project a new impetus with explicit reference to Leibniz.6 Frege (1892) specified that an ideal language should be objective (individual and poetic aspects are eliminated), exact (each expression has exactly one reference and one sense), structured or compositional (the reference and sense of each expression can be “calculated” from the reference and sense of its constituting parts), and bivalent (each sentence is either true or false). Sinn and Bedeutung are the two aspects of the meaning of words. Translations of these words into English vary; we choose sense and reference. Bedeutung is related to the empirical-synthetic denotation or extension. Sinn is related to the conceptual-analytic designation or intension. The sense (Sinn) of a denoting expression is a procedure for determining its reference, that is, the knowledge a speaker or writer must possess in order to use the word correctly.

Frege emphasized that Sinn is objective, like the conceptual content of the Platonic forms or ideas, as distinct from the inner sense of the content in the head of a particular individual (for which Frege used the word Idee). Science and philosophy should be objective in the sense that each expression should have a well-defined Sinn and Bedeutung, and an ideal language should be developed for that purpose.

Frege’s definition of an ideal language formed the basis of Russell and Wittgenstein’s logical atomism,7 which, in retrospect, laid the foundations for analytic philosophy. According to them the use of natural languages leads to confusions, pseudo-problems, and systematic errors. Such problems can only be avoided in a logically regimented language. Russell states: “Every philosophical problem, when it is subjected to the necessary analysis and purification, is found either to be not really philosophical at all, or else to be, in the sense we are using the word, logical” (1914: 42). In the next stage, Carnap and Tarski shifted the emphasis to the notion of a formal or symbolic language. Carnap says: “When I considered a concept or a proposition occurring in a scientific or philosophical discussion, I thought that I understood it clearly only if I felt that I could express it, if I wanted to, in a symbolic language” (1963: 6).

The development of an ideal language as Frege had proposed turned out to be no easy matter. In the same article where Frege (1892) introduced the distinction between sense and reference, he also identified a major problem: intensional contexts in a natural language do not behave truth-functionally, and substitution salva veritate is not generally allowed. Hence, logical regimentation of natural languages cannot be enforced. In the past century, various proposals have been canvassed, but none has been satisfactory. In addition, there is an ongoing discussion in analytic philosophy concerning the (alleged) necessity of compositionality.8

Metaphysicians in the history of Western philosophy were often committed to the ideal language paradigm, in particular to the assumption of universal categories or “ideal” concepts. Among contemporary continental philosophers, Levinas is committed to a strong notion of ideal language. For Levinas, cultural meaning is in the last analysis only what remains as leftover fragments after the unity of ideal meaning is broken up. Ideal meaning is grounded in the Judaeo-Greek tradition, in particular the former. Merleau-Ponty and other French philosophers believe that the unity of Being resides in the mutual understanding among human beings and the mutual penetration of different cultures. Levinas resolutely opposes this pluralistic notion of Being and the multiculturalism derived from it.9

As Graham has shown, traces of the ideal language assumption can be found in classical China as well. “The Mohist Canons, which consistently use only one particle for one function, and the same word in the same sense in syntactically regular sentences which sometimes defy current idiom, is plainly the result of a deliberate decision, like the cleaning up of English in the 17th century by the Royal Society” (1989: 404).10

Among sinologists, Hansen proposes that one construct an ideal language and use it to deal with Chinese terms and thought in general. He writes:

The “ideal” goal of an interpretive theory can be represented as a formalized semantic theory for the entire corpus found in the texts. It would translate each expression of the corpus into a formula in a calculus from which one could “calculate” the logical consequences and presuppositions. The calculus used for this purpose would have to be particularly precise and clear. (1983: 8)

Traces of the ideal language assumption can be found in numerous publications, as the following citation illustrates:

Once all the dao-meanings expressed in the 76 Laoian uses of the term dao are philosophically determined and classified under the framework of one theoretically formal typology, the actual task of interpreting the meaning of each dao-type in its theoretical and semantic relationships to all other dao-types can thus be carried out in a systematic and critical manner. (Phan 2007: 260)

In chapter 4 we present our alternative to the ideal language paradigm, assuming family resemblance of the referents of any general term, both within and across traditions. The latter assumption is a necessary precondition for intercultural (and comparative) philosophy to be possible.

Perhaps the most significant feature of an ideal language as proposed by Frege is the assumption that precise meanings are possible and can be striven for. In our view, the notion of precise meanings makes no sense for natural languages such as Chinese and English, including philosophical language. For example, one might think that the word “bachelor” has a precise meaning, namely “unmarried man.” Although one might stipulate the meaning of “bachelor” in such a way, the precise meaning of “unmarried” and “man” has yet to be given. And one has to accept that the Pope is a bachelor. It is possible to propose definitions specifying necessary and sufficient conditions for the use of a word, but one cannot give such definitions for all the words that are used in the definitions. This is an example of the problem of complete description, called the frame problem in Artificial Intelligence (van Brakel 1992).11 It is impossible to give all the necessary and sufficient conditions for the knowledge or application of a concept or a rule, or the cause of a particular event, or the style of a work of art, and so on.

Hence there is no such thing as complete knowledge of a concept. One cannot quantify the degree of knowledge of a concept. Knowing part of something is different from partly knowing something: knowledge of a concept is a matter of partly knowing something, not of knowing part of something. For example, Maloney (1990: 57) holds that “understanding a concept is a matter of degree” and compares the “limited knowledge” of scripts (brief characterizations of prototypical examples of types of objects for the purpose of knowledge representation in Artificial Intelligence) with the limited knowledge of young children (about chairs, for example). However, it is nonsense to compare the limited knowledge of a chair-script with the partial knowledge a young child has of chairs (if brought up in a chair-environment). Maloney ignores the fact that children’s partial knowledge of chairs is built up from intimate relations with chairs. This relational practical knowledge has little to do with the descriptive knowledge in a chair-script in Artificial Intelligence.

For specific applications, artificially designed “ideal” or formal languages with precisely defined meanings make sense, for example, those invented in mathematical logic and computer science. But in order to judge the relevance and success of such ideal-language-tools, experts have to communicate in a metalanguage, which cannot but be an extension of a natural language such as Chinese or English. As Lyotard remarks:

All formal languages have internal limitations. Hence, there are no formal or scientific criteria to define the properties of a “good” axiomatic system. The metalanguage used to describe an artificial (axiomatic) language is “natural” or “everyday” language. (1984: 42)

Assume that analytic philosophers and hermeneuticians each develop an ideal language for conducting intercultural philosophy. When they met, they would have to use an “ordinary” natural language such as Chinese or English as a metalanguage to discuss their respective ideal language proposals.

Case Study: Machine Translation

The history of machine translation in the past half century shows how the ideal language paradigm failed in a particular case. The project of machine translation started in the 1950s, motivated by Cold War hopes of automatically translating Russian texts into English. The aim of the project was to develop a universal interlingua or pivot language. “We are assuming that an interlingua can be designed which models the conceptual level of understanding that people possess independently of any natural language” (Cullingford and Onyshkevych 1985: 76). More modestly, it was hoped that machine translation could be based on “a collection of formal and substantive linguistic universals” (Arnold and Sadler 1990: 195). However, as Nagao points out in a book on machine translation: “There is in fact little theoretical basis for the overall structure of the semantic primitives, and alternative systems will differ according to differences in the perspectives of researchers” (1989: 77). The search for a universal interlingua had to be abandoned.

In the second stage of research on machine translation, it was accepted that one had to develop software for every pair of source and target languages separately. The so-called transfer technique became dominant, involving three dictionaries, one for analyzing the source language, one for transferring, and one for generating the target language. The word interlingua might still be used for the transferring stage, but now limited to only one pair of languages. However, it appeared that there were no workable principles of compositionality that were generally applicable. In addition, issues of discourse context, nonlinguistic context, the need for common sense knowledge, inferences drawn about the intentions of the author or about underlying cultural factors, and so on, still presented irresolvable problems.

In the third and most recent stage, the ideal language assumption is given up completely when statistical data-driven approaches are used. This approach is only suitable for languages for which one has access to a large quantity of bilingual parallel data. In addition, the statistical nature may lead to unpredictable errors, which require human evaluation of the result. There can be no doubt that further development will improve machine translation, simply because the databases with bilingual translations evaluated by humans will increase in size. Hence, more and more sentence fragments and even whole sentences can be extracted from a database, in particular for narrowly restricted domains.
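To illustrate the contrast with the interlingua idea, the following minimal sketch (our own illustration, not any particular system; the fragment pairs and the greedy matching strategy are merely hypothetical) shows how a data-driven approach retrieves previously evaluated fragment translations from a bilingual database rather than passing through a universal pivot language.

# A minimal illustration (hypothetical data and function names): a
# "translation memory" that covers the input with the longest fragments
# found in a bilingual database of human-evaluated translations.

bilingual_memory = {
    ("the", "way"): "dao",                  # hypothetical aligned fragments
    ("cannot", "be", "told"): "bu ke dao",
}

def translate_by_fragments(tokens):
    """Greedily match the longest known fragment at each position;
    pass unknown words through untranslated."""
    output, i = [], 0
    while i < len(tokens):
        for length in range(len(tokens) - i, 0, -1):
            fragment = tuple(tokens[i:i + length])
            if fragment in bilingual_memory:
                output.append(bilingual_memory[fragment])
                i += length
                break
        else:  # no fragment matched
            output.append(tokens[i])
            i += 1
    return " ".join(output)

print(translate_by_fragments(["the", "way", "cannot", "be", "told"]))
# prints: dao bu ke dao

As the database grows, longer fragments, and eventually whole sentences, can be matched directly; but nothing in such a procedure appeals to an interlingua or to universal semantic primitives.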

However, recent theoretically motivated research also shows that, at the level of the translation of single sentences, there are still numerous difficulties that cannot be properly handled. For example, a recent human evaluation revealed that one of the best commercial systems for English-Chinese translation failed to generate satisfactory translations for around 50 percent of the prepositions (Sun 2010: 134).12

Nagao’s conclusions, published in 1989, still seem valid:

It is simply impossible to provide definitions of the scope of linguistic phenomena, to define clearly what language is and what constitutes legitimate and illegitimate linguistic expressions. … Stated bluntly, language is a massive conglomeration of exceptions, and any competent machine translation system must therefore be … open ended and evolutionary. (11–12)

And Richards’ remark of 1953 still rings true: translation is “very probably the most complex type of event yet produced in the evolution of the cosmos” (250).

It may seem that the failures of machine translation do not exclude the possibility that an ideal language might be developed in the future. However, these failures strongly suggest that dependence on the specifics of particular natural languages cannot be avoided.13

Shared or In-Between Language

Habermas’ theory of communicative interaction allows an unlimited exchange of interlocutionary roles and the freedom to move to metatheoretical levels and to call into question any originally accepted conceptual framework.14 But all these possibilities are set within the bounds of one transcendental ideal language.15 Apel, who shares a number of Habermas’ views, writes:

A [universal] language game must be postulated by which, in principle, communication with all language games and forms of life is possible. … The postulated language game must provide itself a paradigm or ideal norm for judging all other language games. (1994: 102)

One can also find traces of the ideal language assumption in those theories of interpretation or communication that advocate that parties work together on a shared or common or in-between language. Gadamer positions himself explicitly against the ideal language assumption,16 but he often speaks of a common language.

Every conversation presupposes a common language, or, better, it creates a common language. … Reaching an understanding on the subject matter of a conversation means that a common language must first be worked out in the conversation. (1989: 371)

In any form of dialogue, we are building up. We are building up a common language. (1984: 63)

Gadamer’s idea of a common language is associated with his well-known notion of the fusion of horizons:

We can now see that this [i.e., fusion of horizons] is what takes place in conversation, in which something is expressed that is not only mine or my author’s, but common. (1989: 390)

Understanding is always the fusion of these horizons supposedly existing by themselves. (305).

One might think that Gadamer’s “common language” need not be understood as a universal language: every conversation or encounter or interpretation might have its own common language. However, the principle of fusion of horizons and of working out a common language applies to any set of languages. Hence, a step-wise process can be envisioned that moves to a universal common language in which all horizons have been fused. This consequence is seen more explicitly in the writings of some of the writers influenced by Gadamer, for example, Charles Taylor, whose views are discussed in chapter 9. Merleau-Ponty expressed similar views:17

It is a question of constructing a general system of reference in which the point of view of the native, the point of view of the civilized man, and the mistaken view each has of the other can all find a place—that is, of constituting a more comprehensive experience which becomes in principle accessible to men of a different time and country. (1964: 120; cf. 122)

The idea of a general system of reference seemingly contradicts what Merleau-Ponty says elsewhere concerning the impossibility of a universal grammar and linguistic universals.18 But he thinks that “a general system of reference” is possible because of the existence of “a sort of lateral universal which we acquire through ethnological experience” (120). Communication with other traditions is possible in the existential field of lateral relationships, in which the unity of the human spirit exists, “in the echoes one awakes in the other” (139).19

In recent decades there have been numerous proposals similar to those of Gadamer and Merleau-Ponty, which use one language or one kind of interpretative situation to form the basis of transcending the differences between the language to be interpreted and the language of the interpreter. For example, Tully assumes that an intercultural language for the middle ground where people meet is possible and needed:

The participants are gradually able to see the association from the points of view of each other and cobble together an acceptable intercultural language capable of accommodating the truth in each of their limited and complementary views and of setting aside the incompatible ones. (133)

Burik (2009) advocates new possibilities of reading by allowing the interplay of differences and the in-between of these differences and their relationality.20 He says that Derrida, Heidegger, Laozi, and Zhuangzi can be compared in the margins of everyday thinking.21 However, Burik consults only English translations of the Chinese and French sources and discusses these matters in English. Burik works in the periphery of twenty-first-century English. His “in-between language” is nothing other than the language of the interpreter raised to the status of a universal metalanguage.

Instead of speaking of cultural differences, which brings with it the notion of cultural identity, Jullien (2012) advocates a discourse in terms of distance (écart) and productivity (fécondité) on which he builds his notion of l’entre (in-between), an autoreflexive space. Instead of the usual focus on truth in the Western traditions, the focus should be on effect, output, yielding (effet, rendement). A tradition that does not transform itself is dead. We are sympathetic to Jullien’s program of deconstruction from the outside (dehors) by going “via China,” in place of the more common deconstructions from within the Western traditions. However, Jullien assumes that a particular language (in his case the French language) suffices to make possible such an in-between discourse in terms of écart and fécondité.

Schrag puts forward the idea of transversality (1992: 152) that encompasses all the “good things”: recognizing otherness, mutually acknowledging the necessity of adjustments and accommodations, developing self-understanding, encouraging shared responsibility, avoiding hegemony and war. Jung Hwa Yol (2013) used the idea of transversality or lateral movement for a similar optimistic view of “a world republic of philosophy,” in which the “transversal exchange of ideas and values” (465) is not hindered by differences in language and tradition. Along similar lines, Wu Kuang-ming has used Schrag’s work. His label is “inter-versal sensitivity.” Wu’s method aims to reveal “China-West differences” (2010: 202).22 Another variant is Zhang Xianglong’s proposal that we “must achieve the inter-paradigmatic condition in order to activate the cross-cultural comparative situation” (2010: 99).

All these proposals still contain traces of the ideal language assumption,23 because they require one language suited for interpretation across traditions. We use the expression “ideal language paradigm” to include every proposal to the effect that there is one language that could serve as the ultimate basis for interpretation across traditions. Roughly speaking, we find commitment to the development of one ideal language in the narrow sense in analytic philosophy and a commitment to one language “in-between” in continental philosophy. Philosophers engaging in intercultural philosophy tend to assume that an overarching or in-between language or “intercultural ‘common’ ground” (Tully: 14) is necessary (and possible).24 We argue that it is not necessary to presuppose a shared language in a theory of interpretation. Imposing a common language will lead to the hegemony of the language of one tradition or to other asymmetrical effects. Without sharing a language, interpretation and communicative interaction are already happening. To put this approach into a catch phrase: there is no need to speak the same language (the title of chapter 5).

Lyotard’s Approach to Language

What about postmodernist philosophers, such as Derrida and Lyotard, who stress the “disconnectedness” of language? They certainly provide arguments against the narrow notion of an ideal language. But are they exempt from the specter of the ideal language paradigm? Let us consider some of Lyotard’s observations and proposals concerning language.

Drawing on Wittgenstein’s notion of language games, Lyotard argues that there is no such thing as the/a language: “There is no unity of language. There are islands of language, each of them ruled by a different regime, untranslatable into the other” (1993: 20). In this remark it would seem that Lyotard essentializes his language isles or language games. Because he assumes that there is a fixed way of following the rules of language games on each language isle, Lyotard speaks of incommensurability between language games.25 This is not a correct understanding of language games, which are open-ended across time and individuals, and are multi-interpretable.26

In The Differend: Phrases in Dispute (1988), again borrowing Wittgenstein’s notion of language game, Lyotard introduces his notion of the “phrase regime” and adds to it the agonistics of struggle and conflict. The so-called consensus can only be established on the basis of exclusion. Each phrase is an event, which is a “radically singular happening.” Examples of phrases include silence, a mathematical equation, a sentence, grunting. Examples of phrase regimes include describing, reasoning, displaying humor. Phrases cannot be translated from regime to regime, but they can be linked according to a certain end or purpose.27 Ends are fixed by genres, not by grand narratives.28

Genres are incommensurable, leading to différends (differends).29 Lyotard seems to take the classification of genres as fixed. However, in an intercultural context, one has to allow for alternative classifications. Even the phrase regime or genre of classification may not have a similar well-defined application in all traditions.30

A differend is a wrong done because the discourse of one party is excluded from the outset by the dominant discourse that determines possible (imposed) linkages. One becomes a victim when, due to unfavorable discursive conditions, one cannot express one’s loss or suffering as a “damage” that deserves restitution (indifference to suffering is the limit of the ethical).

The multiplicity of stakes, on a par with the multiplicity of genres, turns every linkage into a kind of “victory” of one of them over the others. These others remain neglected, forgotten, or repressed possibilities. (1988: 136)

The stakes bound up with a genre of discourse determine the linking between phrases. They determine them, however, only as an end may determine the means: by eliminating those that are not opportune. (84)

It has been suggested that Lyotard claims that the aim of the philosopher is to find new rules for forming and linking phrases that are able to deal with the differend disclosed by the feeling of justice.31 But it remains unclear what legitimates the philosopher’s proposals for new linkages and phrases. One step toward justice might be calling for “mininarratives” that would replace a metanarrative. Lyotard writes:

Each genre of discourse would be like an island; the faculty of judgment would be, at least in part, like an admiral or like a provisioner of ships who would launch expeditions from one island to the next, intended to present to one island what was found (or invented, in the archaic sense of the word) in the other, and which might serve the former as an “as-if intuition” with which to validate it. Whether war or commerce, this interventionist force has no object, and does not have its own island, but it requires a milieu—this would be the sea—the Archipelago or primary sea as the Aegean was once called. (130–131)

This archipelago metaphor has received criticism. For example, Rasch asks: “Why does the faculty of judgment not have an island of its own?” (2000: 96). Is the exercise of judgment not at the same time exercise of language, tacitly assuming a “genre, or at least ‘a concatenation’ of genres, that is, an island, or at least a cluster of islets?” (Rasch: 96). For Lyotard, judgment, as opposed to language (phrase/genre/discourse), seems to occupy the place of an ideal language. In what way is the admiral meant to handle the differend he encounters in his attempts to bridge the gaps between genres? Is Lyotard suggesting that the role of a philosopher, or of a politician, or of an artist, is to navigate the uncharted waters between formally irreconcilable genres of discourse? Lyotard (following Kant) still seems to be appealing to the faculty of judgment as an ideal regulative genre of discourse, which allows for the possibility of just communication among the archipelago of incommensurable phrase regimes, although, as he says: “a universal rule of judgment between heterogeneous genres is lacking” (xi).

Instead of the deus ex machina of an admiral, we propose the following: two genres of discourse, X and Y, meet at sea. X and Y each devise a new language in the light of the differend (not the same for X and Y, but observed as similar). Then X and Y each communicate with the other in their own discourse. This would conform to Lyotard’s view that a just condition of life allows their appearance and flourishing, without trying to resolve the differend by means of litigation, that is, by imposing on X and/or Y rules that do not belong to X’s or Y’s own language games. This does not remove the differend in a concrete situation (cf. the discussion of the Waitangi treaty in chapter 5); removing it is impossible.

Are There Universals?

Cultural, Cognitive, and Philosophical Universals

We use the term “universal” with reference to the (allegedly) universal features of humans and human practices. Among such features we include cognitive and other capacities, conceptualizations (concerning language(s), animals, rites de passage, body parts, etc.), emotions, moods,32 understanding each other, and so on. Human universals may be further classified into cultural, linguistic, logical, philosophical, and cognitive universals. As we show in this section, all types of universals are congeners of the ideal language paradigm and exemplify Aristotle’s isomorphy thesis. Universals would provide the meaning of the words and grammar of the (universal) ideal language.

Cultural universals refer to (inter) human behavior, actions, activities, or practices that have been observed in all human life-forms. Using language is the most noticeable universal feature of human forms of life. Linguistic universals (a special class of cultural universals) are features (allegedly) shared by all human languages. Another special class of cultural universals is logical universals: logical constants and operators, logical truths such as the law of noncontradiction. It is not easy to distinguish between linguistic, cognitive, and cultural universals. A strict separation will presuppose a particular theory of language.

Cognitive universals can be said to be the atoms of cognitive science (cf. logical atomism). It is assumed that there exists a fixed number of cognitive domains (COLOR, EMOTION, etc.), each containing a fixed number of culturally invariant (or even species-invariant) basic concepts.33 All the domains together make up a closed system. All action/behavior fits a certain cognitive model. Cognition is, so to say, just a quantity of stuff for which scientific laws (giving its statics and dynamics) need to be found. Characteristic of universals as natural categories is that the words for them are short, are learned earlier than other words, are usually introduced ostensively, and are most frequently used.34

Under the influence of the hegemony of cognitive science and the extension of the idea of biological evolution to the cultural domain, cultural universals are often reduced to cognitive universals. Cultural and cognitive universals are, as it were, taken to be derivatives of the natural kind “human being.” If the human brain is part of nature, there is in principle no distinction between studying biological or chemical kinds and embodied natural kinds such as experiencing or perceiving color or emotion. Features of interhuman interaction are also explained in terms of biological adaptation and the principle of survival of the fittest. This leads to the following picture. There is a hierarchy of natural kinds; each natural kind can be divided into other natural kinds and is itself part of a more encompassing natural kind. All these natural kinds hang together. Whatever the essence of pink or angry is, it will be explained in terms of other natural kinds, just as the biophysical and biochemical properties of the natural kind cell or gene will be explained in terms of other natural kinds. The alleged aim of science is to identify the essences of all these natural kinds and to give an account of how contingencies can be understood as variations of locally present natural kinds.

Categorical or philosophical universals stem from the history of Western philosophy. It is claimed that abstract concepts such as justice, beauty, causality, knowledge, truth, the right, the good are universals. The distinction between philosophical and cognitive universals is fluid. Plato’s universals (ideas or forms) apply to both beauty and horses. The so-called descriptive metaphysics proposes a detailed set of philosophical universals. Strawson writes:

It is possible to distinguish a number of fundamental, general, pervasive concepts or types of concept, which together constitute the structural framework, within which all ordinary thinking goes on. … I have in mind such ideas as those of space and time, object and property, event, mind and body, truth, sense and meaning, existence, identity, action, intention, causation and explanation. (1990: 312)

The items listed in the last sentence of the citation provide typical examples of (proposed) philosophical universals.35

In the history of Western philosophy, discussions about universals focus on universals and particulars (the question of the one and the many), realism and nominalism (the question whether universals are mind-dependent), and universalism and relativism (the question as to whether traditions share universals). Our discussions primarily concern the last cluster of problems. Further, we argue against essentialistic (metaphysical realist) theories of universals, which are directly associated with the ideal language assumption. There are family resemblances between human practices and conceptualizations, but these similarities do not harbor universal core meanings that are shared by all human beings.

Quasi-universals contrast with universals. In the present context we assume the word universals to refer to concepts and behaviors allegedly shared among all humans (including philosophical reflections shared among all human traditions). The phrase quasi-universal has been used in numerous contexts. Perhaps the nearest to our use is the use of “reasonableness” (in a Confucian context, in a Western context, etc.) as a quasi-universal in the sense of being not global but regional. Such quasi-universals are necessary, but only in a particular Umwelt. However, this notion of quasi-universal presupposes universally shared notions (of reasonableness in the example). Our notion of quasi-universal has a number of important characteristics that distinguish it from all other uses of the phrase we know of:

Quasi-universals are family resemblance concepts. Like other FR-concepts, a quasi-universal has no core and is open-ended in its use (see chapter 4).

Quasi-universals connect notions from a limited number of traditions by extension of FR-concepts.36

Each tradition has its own label for a “shared” quasi-universal. However, these labels do not name identical concepts.37

Quasi-universals fulfill a heuristic and necessary role in the interpretative practice.

Quasi-universals are revisable as a consequence of the continuing process of interpretative practice.

Data and background underdetermine the choice of quasi-universals in every particular case. For example, translation of classical Chinese into English or Japanese may draw on different quasi-universals.

The notion of quasi-universal and extension of FR-concepts is more fully developed in chapter 10.

Linguistic Universals

Chomsky’s idea of universal grammar has been the dominant view in linguistics for several decades. This view assumes that grammar (syntax) is hardwired in the brain and fixes properties that all natural languages share. All languages would share the same “deep structure.” However, empirically orientated linguists have always been skeptical of this idea. For example, after providing a long list of grammatical universals,38 Hockett remarks: “Although we tend to find these patterns in language after language, it is entirely possible that we find them because we expect them, and that we expect them because of some deep-seated properties of the languages most familiar to us” (1963: 23). In the margin of mainstream linguistics, numerous “exceptions” have been reported: the subject-predicate scheme is not universal;39 some languages seem to consist mainly of verbs; others seem to consist only of proper names, predicates and prepositions.40 It is not the case that all languages have nouns, verbs, adjectives, and prepositions.41

In recent years mainstream linguistics has become less universalistic. In a so-called target article in Behavioral and Brain Sciences, Evans and Levinson write:42 “There are vanishingly few universals of language … diversity can be found at almost any level of linguistic organization [sound, grammar, lexicon, meaning]” (2009: 429). Almost every language that is carefully analyzed displays some phenomena that cannot be adequately treated in terms of existing concepts. At present, decent descriptions are available for only about 0.2 percent of the full range of linguistic diversity. Hence one may expect more variation to be seen in the remaining 99.8 percent (432).

Merleau-Ponty has given a priori arguments that linguistic science is no more than an addition to one’s own language, extending the assumptions of one’s own language to that of others (1973: 26). He remarks that, although Husserl warns the grammarian against universalizing his own theory, Husserl forgot that “the table of fundamental categories of language” (that is, the negative, the plural, the existential proposition, and so on) demanded by his pure grammar “bears the mark of the language which he himself spoke” (25–26). Like Benveniste, Merleau-Ponty provides a long list of linguistic “facts” that “prove” the lack of universality among languages.43

In the Language of Thought hypothesis, which is primarily concerned with semantic universals, we find a fully developed idea of an ideal language, containing all linguistic, cognitive, and cultural universals needed as basic terms to build all concepts. Chomsky writes:44

The speed and precision of vocabulary acquisition leaves no real alternative to the conclusion that the child somehow has the concepts available before experience with language and is basically learning labels for concepts that are already part of his or her conceptual apparatus. … The child approaches language with an intuitive understanding of such concepts as physical object, human intention, volition, causation, goal, and so on. These constitute the framework for thought and language, and are common to the languages of the world. … It is beyond question that acquisition of vocabulary is guided by a rich and invariant conceptual system, which is prior to any experience.

We argue that the universals mentioned in the above citation are at best quasi-universals in the sense that they are only applicable to a limited set of languages or traditions and that their meaning varies with the language and conceptual schemes in which they are expressed.

Case Study: Basic Emotions

Mainstream psychology presupposes that all emotions can be defined in terms of a small number of basic emotions. Basic emotions are taken to be natural kinds or cognitive universals in the sense that they are readily recognized and referred to by all humans. Hence, words for basic emotions are easy to learn and to translate between languages. In addition, it is taken for granted that there are a small number of prototypical facial expressions corresponding to a small number of basic emotions.45 Ekman writes:46 “Regardless of the language, of whether the culture is Western or Eastern, industrialized or preliterate, these facial expressions are labeled with the same emotion terms: happiness, sadness, anger, fear, disgust, and surprise.”47

Let us look at some details that led to this thesis of pancultural basic emotions. In this kind of study, agreement is, as one might expect, highest among speakers of English, followed by speakers of other languages, roughly in the order of their “distance” from the global discourse. For example, data concerning the Minangkabau people are often cited to support the pancultural thesis, but these data were “gathered by Karl Heider in the Indonesian language from bilingual Minangkabau in Padang, West Sumatra” (Ekman et al., cited in van Brakel 1994: 219n21). Virtually all reported cross-cultural data have been gathered from literate people (often college students, if not psychology students). In cases where the degree of education or contact with Western societies was included as a variable, scores of recognition of the so-called basic emotions dropped when participants were less aware of Western culture. Furthermore, experiments of the forced-choice type are used in most cases. Such experiments cannot tell whether a labeled facial expression corresponds to the concept expressed. In addition, in the most influential studies on which the thesis of universality is based, the alleged agreement is only achieved with specially selected photographs of highly stereotyped, uniform, posed expressions.

The data collected from the preliterate Fore people (New Guinea) were reported to support the pancultural thesis in the “official” publication of Ekman and his collaborators. However, one of the collaborators reported separately on the methodological pitfalls of their research with the Fore (Sorenson 1975).48 In Ekman’s report, only the data for Fore who (also) spoke Melanesian pidgin were included. As one could have expected, their responses showed more similarity with Western responses than those of monolingual Fore people.49

Ekman and his collaborators used three different methods, none of them acceptable by scientific standards, according to Sorenson.50 One of the major problems was:

The interrogative style had not evolved as a significant part of the Fore communicative repertoire. … Among Fore direct questions were usually considered hostile provocations and answers were not expected. … The least acculturated [to Western ways] were most afflicted; they often seemed bewildered, even fearful, in the face of the kind of interrogative communication which Westerners take so much for granted. (Sorenson: 365–366)

Still the Fore people were eager to conform to what they were requested to do. Hence: “They were quick to seize on the subtlest cues for an indication of how they should respond and react” (368). Schoolboys were used as interpreters (intermediaries between the researchers and the older Fore people), whereas the suggestion that free exchange of information was “cheating” (in the experimental context) was quite alien to the Fore concept of cooperative relationship (364).

Ekman seems to be saying that there are always one-to-one translations across languages for “basic” emotion terms. However, it might well be the case that in the Ekman-type experiments, what is at issue is not universal recognition of “the same” basic emotions, but familiarity with Western forms of life (perhaps indirectly by speakers of a local pidgin language).

A neutral conclusion might be that it is simply impossible to gather “pure” non-Western data. What the data may show is that people often make appropriate guesses of other people’s emotions and congeners, even cross-culturally; but this is a far cry from stating that there is universal agreement on what, say, a prototypical sad expression is, or how this facial expression is understood. What can be defended is that there are family resemblances of mutually recognizable human practices, but there exist no human universals characterized by some sort of essence.51

Wierzbicka (1999) has leveled a different kind of criticism at the pancultural thesis. She holds that, as to specific emotions, there are no cultural or cognitive or semantic universals. However, she argues that emotions can be defined in a universal metalanguage that uses universals such as TO FEEL (but not, say, TO BE ANGRY).52 According to Wierzbicka, all languages have a word for English to feel (covering both bodily and cognitively based feelings). Ganjue 感觉 would be the modern Chinese label for the universal TO FEEL; similarly, mahsūs would be the Hindi expression for the universal TO FEEL.53

Among the emotion words and congeners that Wierzbicka and her collaborators discuss, we select one example, Wierzbicka’s (2004) criticism of Nussbaum (2000) with regard to the latter’s assumption that grief is a human universal. Grief is the center of Nussbaum’s attention, but numerous languages have no word for grief in the modern American sense of a “short-term suspension of normal life,” that is, the idea that grief should not be allowed to spill over into the rest of one’s life. Wierzbicka gives examples of languages in which there are words perhaps having some family resemblance to “twentieth century American grief,” but which do not share a common core.54

For example, the Chinese ai is usually translated as grief, sadness, or sorrow, but it actually focuses on the other person, not on the loss event (Wierzbicka: 593).55 In Tolstoy’s War and Peace, there are two deaths and several “griefs.” But from a Russian point of view the bereaved persons do not feel grief but several other emotions, which Tolstoy refers to with various metaphors.56 Even in English, Nussbaum’s notion of grief is a recent phenomenon: “modern English has exorcized woes, sorrows and griefs from the fabric of normal life.”57

Wierzbicka is right to draw attention to the fact that grief is not a universal with the same core meaning across traditions and time. Nevertheless, by rephrasing grief and congeners in other languages in terms of a more abstract universal language, she fails to allow for family resemblances at a lower level and remains a universalist at one remove. It is not at all obvious that the meaning of Chinese ganjue and Hindi mahsūs is equivalent to English to feel.58 So why should TO FEEL be the universal and not mahsūs or ganjue?

The research and discussions concerning basic emotions show that empirical evidence concerning universals is often unreliable. There are always similarities or family resemblances between “emotion” words in different traditions, but they do not share core meanings.59 In other words, it cannot be justified to consider presumably basic emotions as universals.

Is “Standard” Logic Universal?

That logic is universal has been taken to mean: first, that logical operators and quantifiers, such as OR, IF … THEN, ALL, are universals (even if not all of them are marked in all languages); second, that logical truths such as the law of noncontradiction and modus ponens are universally obeyed.60 The available justification of logic, the discovery of alternative logics, and disagreements about the interpretation of ethnologics show that there is no good reason to assume standard logic to be a universal constraint and criterion for such things as what it is to have a language or to be a person. We discuss these three issues in reverse order.

According to reports of Evans-Pritchard (1937), the Azande people believe that witchcraft is a hereditary feature that is localized in a physical substance in the belly. A male witch will transfer the substance to all his sons, a female witch to all her daughters. It seems to follow from these beliefs that if one finds one indisputable witch, one would be able to predict which descendants would also be witches. However, according to the Azande, this depends on other factors as well. Western scholars have considered this view “paradoxical.”

If one wants to learn from the Zande case concerning universality of logical rules, one of the difficulties is how their reasoning is to be formalized. In one reconstruction (in English) it is possible to conclude that the Azande do not use Western logic, but perhaps a different “alternative” logic in situations involving witchcraft. In another reconstruction the Azande employ the method of reductio ad absurdum, and the example presents no problem for a defense of the universality of logic.61

But perhaps this is not an issue of particular views of logic and styles of argumentation. What is “given” is a report by Evans-Pritchard (1937: 12, 65) of a number of conversations he had with Zande people. Assuming that the utterances of the Azande (in particular those concerning witchcraft) have been given good enough translations, one may ask: Why embark on a project of identifying premises and conclusions in the recorded speech of the Azande and draw conclusions of the form “the conclusion follows from the premises” (or it does not)? Is the appeal to logic relevant in the study of the reasoning of the Azande concerning witchcraft? (Should every bit of language be reconstructable as a coherent argument?) Does a logical conclusion force one to adjust one’s beliefs or actions accordingly? (Can what one does with a number of premises itself be constructed as a premise or conclusion?) What is wrong with entering into contradictions or paradoxes? (Perhaps the Azande themselves do not have that sort of worry?) Instead of commentators’ focusing on issues of logic (to which Evans-Pritchard devoted less than 1 percent of his text on Zande witchcraft),62 it may be more interesting to look at the inconsistencies in the account of Evans-Pritchard or at the social context of both the Azande and the Western belief systems.63 For example, neither Zande practitioners of magic nor Christians at prayer have a very clear grasp of the logical implications of their beliefs.

At least six different interpretations have been given in the relevant literature to “explain” the (apparent) anomaly in the Zande reasoning. It is generally assumed (with vague references to Evans-Pritchard’s conversations with the Azande) that the “hereditary physical feature” is a necessary and sufficient condition. However, even if it seemed to Evans-Pritchard that the Azande thought in this way, there are indications that this is not what the Azande themselves believed (i.e., the condition being necessary and sufficient). For example, according to the Azande, it is possible that the witchcraft-substance is “cool,” that is to say, it is not effective. It is surprising that none of the “specialists” has favored this commonsense explanation. It would seem that ascribing an explicit logic to the Azande (as most commentators do) is at best a model for the purpose of ordering a particular interpretation. Even if the model that entails contradictory beliefs were the “correct” one to be ascribed to the Azande,64 it is not clear what conclusion might follow from this “fact.” This case study shows again the difficulty of carrying out research that aims to prove or disprove universals across human life-forms.

A range of alternative logics have been proposed and developed in the course of the twentieth century, in response to developments either in science or in the philosophy of logic and mathematics (Haack 1996). Standard logical theory assumes that a sentence or proposition is either true or false. Slightly more liberal views allow that a third possibility may arise, the neither case. In addition to “true” and “false,” a third truth value is assumed, for example “possible” (many-valued logic) or “indeterminate” (quantum logic).65 This opens the possibility of ascribing alternative logics across traditions. One even has to leave open the possibility that one may have to ascribe paraconsistent logic. Paraconsistent logic rejects the principle that a contradiction, p ∧ ¬p, entails everything; its dialetheist versions also deny the unrestricted validity of the law of noncontradiction, ¬(p ∧ ¬p). Paraconsistent logic might be relevant for interpreting Western and non-Western thinkers who (allegedly) indulge in contradictions.66

That a contradiction might be true, or that dialetheism (the view that there are true contradictions) makes sense, may be a threatening idea. How do the truth (and falsity) conditions of, for example, negation and conjunction, work in paraconsistent logic? They work just as one would expect them to work. ¬α is true if and only if α is false; and vice versa. Similarly, α ∧ β is true if and only if both conjuncts are true, and false if at least one conjunct is false. Further, if α is both true and false, so is ¬α, and so is α ∧ ¬α. Hence, a contradiction can be true (though false as well).

The most common objection is that a contradiction cannot be true since a contradiction entails everything, and not everything is true. However, granted that not everything is true, the argument from α ∧ ¬α to an arbitrary β need not preserve truth, and so is not valid. Moreover, there are no direct arguments in favor of the Principle of Non-contradiction. More subtle arguments suggest that some crucial notions, for example, truth, validity, or rationality, require consistency. These are serious issues that deserve attention, but contradictions should be taken seriously as well (Priest 1995).
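To make the preceding truth conditions concrete, the following sketch is our own illustration of a three-valued, LP-style semantics in the spirit of Priest (1995), in which a sentence may be true, false, or both; the encoding and the function names are ours, not Priest’s.

# Our illustration of an LP-style paraconsistent semantics: a sentence's
# value is the set of classical values it receives, so "both" = {True, False}.
T = frozenset({True})
F = frozenset({False})
B = frozenset({True, False})

def neg(a):
    # negation is true iff the sentence is false, and false iff it is true
    return frozenset({not v for v in a})

def conj(a, b):
    # conjunction is true iff both conjuncts are true;
    # false iff at least one conjunct is false
    out = set()
    if True in a and True in b:
        out.add(True)
    if False in a or False in b:
        out.add(False)
    return frozenset(out)

def at_least_true(a):
    # "designated" value: the sentence is (at least) true
    return True in a

alpha = B                                  # suppose alpha is both true and false
contradiction = conj(alpha, neg(alpha))    # alpha and not-alpha
print(at_least_true(contradiction))        # True: the contradiction is (also) true
beta = F                                   # an arbitrary falsehood
print(at_least_true(beta))                 # False: the inference from the
                                           # contradiction to beta does not preserve truth

On such a semantics a contradiction can be true (though false as well), and the inference from α ∧ ¬α to an arbitrary β fails to preserve truth, which is exactly the point made above.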

It is not unusual to find much attention paid to the justification of both factual knowledge (epistemology) and moral judgments (ethics) in introductory books to (Western) philosophy. However, the question of the validity of logic (and its rules of inference) is rarely, if ever, raised. One might think that this is because the justification of logic is self-evident, but this is not the case.

How does one convince somebody who refuses to accept a conclusion that is “obviously” correct? For example, modus ponens: if p → q is true and p is true, then it must be the case that q is true. This is obvious and trivial. However, one cannot produce a proof to convince someone of the logical truth of this inference pattern without already assuming, among other things, modus ponens.67
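The circularity can be displayed with a simple exercise (our own illustration, not from the literature cited): one can check by brute enumeration of truth values that modus ponens has no counterexample, but accepting that check as a justification already uses the very inference pattern it is meant to justify.

from itertools import product

def implies(p, q):
    # classical material conditional: p -> q is false only when p is true and q is false
    return (not p) or q

# look for an assignment in which p -> q and p are true while q is false
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and p and not q]
print(counterexamples)   # []  (no counterexample)

# To conclude "therefore modus ponens is truth-preserving" we reason:
# if the enumeration yields no counterexample, then modus ponens holds;
# the enumeration yields no counterexample; therefore modus ponens holds --
# which is itself an application of modus ponens.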

The problem of justifying this or that logic has led Dummett (1991) to argue that a theory of meaning (of a language) is more fundamental than logic (and also more fundamental than the rest of philosophy, including metaphysics). The first task of such a theory of meaning is to elucidate the meaning of the logical constants. Somebody who denies the law of the excluded middle, p ∨ ¬p, is perhaps giving a different meaning to “or” and/or “not.” Yet how can a theory of meaning be provided that makes argumentative communication possible between those who are for, and those who are against, the law of excluded middle?

To dispel any doubt concerning the “unproven” status of logic, consider the following statements by respected (analytic) philosophers, all of whom have made original contributions to mathematical logic or the philosophy of logic, and all of whom state explicitly that there is no such thing as a final justification of logic.

Principles of deductive inference are justified by their conformity with accepted deductive practice. A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule we are unwilling to amend. (Goodman 1972: 64)

Even topic-neutral truths [like the truths of first-order predicate logic] emerge from experience, or, better, from practice (including reflection on that practice), and belong to conceptual schemes which have to show their ability to function in practice. (Putnam 1994: 161)

Every truth of elementary logic is obvious (whatever this really means). … I am using the word “obvious” in an ordinary behavioral sense, with no epistemological overtones. (Quine 1986a: 111; 1986b: 82)

We conclude that the basis of correct reasoning is logical deduction, but there is no logical justification of deduction. In the end, one has to resort to intuitions that are considered self-evident. This does not imply that such intuitions of what is self-evident should not be relied upon, or could not change, or that the variety of intuitions embedded in different lifeworlds would necessarily converge to one horizon or one end of inquiry. Something like standard logical reasoning is a characteristic of most interhuman communication, but it is not possible to stipulate one universal standard to be applied mechanically and independently of context. The existence of alternative logics shows that even if standard logic is assumed to be a universal, one has to be prepared for exceptions in certain domains (Zande witchcraft, Christian beliefs, Gongsun Long, quantum mechanics). Logic, like beliefs, is something that is ascribed to a speaker in actual discourse or as part of interpreting ancient (or any other) texts.

Logic in Classical Chinese Traditions

Chinese logic, with a history of 2,500 years, definitely came into existence in the 1890s, although details are still a matter of debate (Kurtz 2011) and sources such as the later Mohist Canons remain under investigation (Graham 1978, 1989; Fraser 2007, 2012; Robins 2010). The translation and naturalization of European logic was a necessary condition of the possibility for the discovery of what we have since come to understand as its Chinese counterpart (Kurtz: 10). As every thinker has a method (fangfa 方法), an implicit logic can be unearthed.

Aristotle’s syllogistic logic was not systematically developed in classical China. However, this does not mean that classical Chinese texts contain no logically valid argumentation. In fact, one may explain the absence of interest in technical formalistic logic by the fact that the grammar of classical Chinese is more logical than that of European languages. Graham writes:68

Classical Chinese syntax is close to symbolic logic: it has an existential quantifier (yu [you 有, ∃x]), which forbids mistaking existence for a predicate and is distinct from the copulas. (1989: 412; cf. 403)

Furthermore, classical Chinese is strictly logical with regard to negation. In Chinese a double negative amounts to a positive, as logic requires (classical Greek is untidy on this point). Although Graham has been called a relativist, he is a convinced universalist concerning logic. He falsifies the Sapir-Whorf thesis as follows: “That symbolic logic is a Western discovery confirms that our thought has not been permanently imprisoned by Indo-European language structure” (1992a: 79). According to Graham, there can be linguistic relativity with respect to grammar, but not with respect to logic.69

Robins (2010) has argued that the Mohists were not moving in the direction of formal logic (the latter preserving truth in inferences using sentence-like entities), but that does not exclude “being logical” in other ways.70 Robins argues that Later Mohist logic focuses on applications of names or phrases (ci 辭) to things, “constrained by relations that are recognizably logical in nature, … notions analogous to those of consistency, contradiction, and entailment” (281).

One noticeable feature of Chinese texts is analogical reasoning.71 Reding suggests that formal and analogical reasoning do not necessarily oppose one another (34).72 According to him, Chinese also employs variables (39); for example, in Gongsun Long’s bai ma fei ma 白馬非馬, bai and ma are variables. Reding’s major thesis is that particular syntactic structures may serve as standards of validity. Therefore, Chinese reasoning is rational, even if Chinese scholars did not put it in terms of logical validity. The more transparent sectors of a language provide criteria for the more obscure parts (45). For example, an issue concerning necessary and sufficient conditions can be dealt with in terms of narrative language aided by logical intuition. Analogical arguments in classical Chinese draw on better logical intuition than was available in Europe (because of the logical untidiness of European languages).

Concerning the issue of counterfactuals in classical Chinese, there is also an extensive discussion. Can one “prove” that shi 使 (make that, cause) is used to introduce conditionals? One cannot, because there are numerous possible interpretations or translations of a text. However, even if there is no particular character to indicate a counterfactual, this does not exclude the possibility of counterfactuals actually being used. For example, Daoist satire “thrives on skilful manipulation of counterfactuals” (Wu Kuang-ming 1987: 88).

Many disagreements arise because a text allows for many self-consistent interpretations, including interpretations using conceptual schemes that have never been even tacitly mentioned in the tradition of the texts being studied, for example the subject-predicate scheme. Cheng Chung-ying has argued: “Chinese language has an implicit structure of first-order predicate logic” (2007: 558). Han Xiaoqiang (2009) holds that there are no subject-predicate sentences in Chinese. Lohmann (1972: 333) takes the view that Chinese sentences are feature-placing sentences, which according to the analysis of Strawson (1959) are not subject-predicate sentences. According to Graham (1992a: 72), the subject-predicate scheme may be relative to language, but logical operations (including first order predicate logic) are universal. All four scholars might be right, because the prima facie structure of a language provides the possibility of alternative rational (imaginative) reconstructions.

Even if one takes it for granted that standard logic is universal, except for specific exceptions relative to the background of its overall validity, it should be no surprise that disagreements may arise on matters of detail, because the logical structure of a statement is not self-evident, but is itself a matter of interpretation. An argument in a natural language can be claimed to be valid if there is one logical translation that is valid. In another logical translation it may be invalid. The choice of how to formalize a natural language is not a matter of logic.

One might think that each grammatical sentence should have a particular logical structure. For example, Graham (1992a: 76) renders the following phrase of Wang Chong 王充 (CE 27–c. 100), wuwu busi 物無不死, as “No thing does not die” (context: organisms).73 Graham suggests that the logical structure of the Chinese sentence and the English translation is identical, even though the meanings of wu 物 and si 死 are quite different from the “corresponding” English words. As argued for in detail by Harbsmeier (1998), “logical operations were always the same in China as they were here” (Graham 1992a: 76).

For Graham, logic is independent of syntactic structure (75). We agree that this is true as seen from the perspective of logic. But “translating” natural language into logical formulae is always already interpretation. One is interpreting grammar, assigning a grammatical structure to the Chinese sentence in terms of wu 無 and bu 不. For example, in classical Chinese there are a large number of particles for negation (Pulleyblank: 103–111). Their differences have not been fully understood. Hence it is an assumption that both wu 無 and bu 不 function as logical negation (¬). In principle (though not in practice), the Chinese and the English sentence may be given different logical translations.74
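As an illustration of the point (our own formalization, not Graham’s; the predicate names are ours), Wang Chong’s phrase and its English rendering can be regimented in at least two classically equivalent ways:

¬∃x (Thing(x) ∧ ¬Dies(x))    or    ∀x (Thing(x) → Dies(x))

The equivalence of the two regimentations holds only if both wu 無 and bu 不 are read as classical negation (¬); a different reading of either particle would block it, so even the choice of logical form is already part of the interpretation.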

Concerning the (alleged) universality of logic, we conclude:

1. There can be no logical justification of the universal validity of deductive logic, let alone of any other logic.

2. Something like logical reasoning (in the sense of first-order predicate logic) is a characteristic of most interhuman communication (and survival),75 but there is not one universal standard to be applied to every situation.

3. In interpretative practice, one should be prepared to encounter situations where conformity to, say, modus ponens or the law of excluded middle cannot be ascribed to the speaker or author. But such situations are exceptions relative to the quasi-universal expressed in the previous point.

4. Because ascribing logical standards to a corpus is also part of (underdetermined) interpretation, different logical interpretations of classical Chinese texts, for that matter, are possible.