7 / VII   To infinity and beyond?

To conclude, I return to reckon once more with the central questions that motivate this book. Some of these are specific to number systems and numerals. Some of them are far broader. And by “reckon,” I do mean both that I hope to think about them and come to some sort of judgment—with the proviso that reckoning is always an ongoing process, never complete.

Is your number system weird?

On the surface, this question may seem weird, and you may be thinking that its author is weird too. But I mean it seriously. We have these numerals, 0123456789, and we know how to combine them using place value. We begin our thinking lives, as infants, surrounded by numerals—if you grew up in any Western country you were surely exposed, almost from infancy, to brightly colored letters and numbers to look at, chew on, and think about.1 One result of this exposure is that, largely unconsciously, you came to accept that your numerals are normal and natural. This exposure is also the root of human ethnocentrism, that feeling we all experience where things to which we have been exposed habitually are easy to take for granted, while unfamiliar experiences produce reactions ranging from mystified wonder to disgust. It’s not good, but ethnocentrism almost surely is inevitable.

So my first reaction to the question of whether the Western numerals are weird might be that every number system is weird, viewed from the outside. This is a form of particularism, the notion that ideas and practices are unique configurations existing at particular points in time and space. Particularism is widespread in the social sciences and humanities, and for good reason: there are lots of things that are local and unique but are nonetheless highly worthy of our attention. There is a lot of emphasis in popular and everyday discourse about science on generalizability, in a way that suggests that we ought, always, to be looking for the most generalizable facts about the world; and by that logic, only universals ought to be of interest. But generalizability is always about generalization to some set of cases or examples—we do not presume, for instance, that biologists who study tapeworms ought to be able to generalize their findings to earthworms or manatees or orchids. There’s nothing wrong with studying the particulars of something.

But, you might rightly object, sometimes we really do want to compare like with like, apples with apples, tapeworms with tapeworms, numbers with numbers. In chapter 1, I outlined some of the things in which we might be interested, including identifying rarities in numerical systems—things that only happen very rarely, but nonetheless that do happen. For instance, the vast majority of the world’s languages have a base of ten, with twenty also being popular. A sample of 196 languages in the World Atlas of Language Structures revealed that 125 languages are decimal, 20 vigesimal (base-20),2 22 a mixed decimal-vigesimal, 20 restricted (having few enough number words to need no base), 4 having a system using body parts as numerals, and 5 having some other base (Comrie and Gil 2005). Decimal systems are found on every inhabited continent, across dozens of language families. This simply cannot be a coincidence. We would rightly conclude that, while “normal” is too loaded a word, “common” certainly applies to decimal number word systems. In contrast, within that sample, only one living language, Ekari (also known as Kapauku), spoken in western New Guinea, has a base-60 (sexagesimal) system.3 We’re fairly used to thinking sexagesimally because our time system has base-60 elements, but we certainly don’t have any special number words or symbols for 60 or its powers (3,600). In fact, the only other language I’m familiar with that has a sexagesimal structure is Sumerian, last spoken millennia ago in Mesopotamia, whose structure influenced the large round numbers on the Weld-Blundell prism discussed in chapter 2 (Powell 1972). Do we conclude, as some have done, that Ekari numerals were borrowed from Sumerian (Pospisil and Price 1966)?4 Barring any historical or other linguistic evidence (of which there is none), the only way we could do that is to show that base-60 numeration is so weird that it could only have happened once. But how would we know that? If it happened once, why could it not have happened twice? Because our knowledge of premodern languages is so limited, because written records are extraordinarily partial and skewed toward elites, and because historical linguistics depends on modern languages to reconstruct past ones, we have no idea, simply on the evidence, how many times sexagesimal numeration was invented. We need some body of theory to help us explain why some notations are common and others are rare, before we could possibly answer the question of what is weird and what is not.

We actually know very little about what kind of sample the world’s 7,000 or so present-day number word systems constitute, out of all the linguistic variability that has ever existed in the domain of number. Most of that variability is permanently lost to us, and even the parts that we can reconstruct using historical linguistics depend on the survival of elements of past languages in their modern descendants. We are on firmer ground—though still not completely secure—with regard to the world’s numerical notations. They’re visual and relatively permanent, and we can be reasonably sure that there weren’t any base-structured notations (the five basic types I have outlined) prior to the advent of literacy around 5,500 years ago. But if the Cherokee numerals I described in chapter 5 can come so close to slipping out of our knowledge, though invented less than two hundred years ago, there’s every reason to think that there are many more for which evidence does not survive—or has yet to be found.

Thus, the answer to whether a number system is weird or not depends partly on the accumulated empirical evidence of number systems past and present, but also on some set of theoretical principles about what sorts of processes we imagine led to their development. Ten years ago, I would have sworn up and down that induction was the best approach in the historical disciplines—that while we can never aspire to absolute truth, when in doubt we should stick to what we can know directly. But a purely inductive approach only handles well the sort of data that we have at hand. I don’t think all is lost, but we need reasons to hypothesize that our comparisons are good ones. There are good theoretical reasons to suppose that some things, like subitizing, are panhuman, and that some structures, like decimal notation, were just as common cross-culturally in the past as they are today. And, while it’s difficult to reason about things for which there is no direct evidence, if all known actual numerical notations fit into the five basic and common structural types (even if we can imagine numerical notations with other structures, and even if there were really weird ones that didn’t survive), one reason for it may be that the others weren’t “good to think with”—that they violated some general principle or norm that human brains prefer. But this brings us to a second issue, which is the prospect, frightening to some, joyous to others, that comparison across time scales may always fail to account for the particular weirdness of modernity.

Is the past like the present?

Or, rather, should I ask: Was the past like the present?

This may seem like a bit of linguistic chicanery (fair enough), but I also want to draw our attention to something that historians, archaeologists, historical linguists, and for that matter geologists and cosmologists are trained to be aware of—that we can only study the past in the present, through its traces. But at the same time, to deny the past its own reality is to commit a sort of radical present-centrism, which we would never tolerate in other forms of ethnocentrism. No one doubts that there were, once, people, 100 or 500 or 5,000 or 10,000 years ago, who lived lives and did many of the things that we do. One need not be a radical realist to commit at least that far.

So, better then to ask (though more clumsily): to what extent can we establish whether the range of variation in aspects of the human condition today is similar to, or different from, the total range of variation at all times and places? We can constrain ourselves a bit by restricting ourselves to Homo sapiens sapiens, so that we do not have to deal with many of the problems of our evolutionary history. This would be an arbitrary convenience for an analysis that aims to look at human cognition.

Donald Brown’s remarkable text Human Universals is one of the most underappreciated works of anthropological theory of the past few decades (Brown 1991). Universalism is not in vogue in anthropology, partly for understandable reasons and partly due to contingent and ephemeral discipline-specific biases. The central premise of the work is that one of the more important things that anthropologists can do to theorize the human condition is to pay attention to invariant aspects of human existence, and that these are more numerous than we might think. As I discussed briefly in chapter 1, Brown is keenly aware that universals are time-situated. He notes that some universals are clearly former universals: things that used to be true of all humans, but are no longer so. Prior to the development of agriculture (which the archaeological record tells us happened independently in many different parts of the world), all humans were forager-hunters, but today almost no one is. This is not a triviality—people 15,000 years ago were, to the best of our ability to discern, biologically and cognitively very similar to us, and if something (foraging-hunting) that held for millennia can change, we ought to be very careful asserting that something must be true of all humans, forever and always. Conversely, Brown identifies new universals: things that were formerly rare or at least nonuniversal, but are now universal. The domesticated dog is one proposed new universal Brown gives us, but we can think of other potential ones in an age of rapid modernization and globalization. The point is clear, then: any theoretical framework that aims to assert broad truths about the human condition must be aware of the degree to which its claims are time-specific.

Henrich, Heine, and Norenzayan’s (2010) article “The Weirdest People in the World?” draws on a host of insights from cross-cultural anthropological and psychological research to argue, correctly, that too many of the findings of the behavioral sciences are limited by the fact that the samples from which they generalize are excessively Western, Educated, Industrialized, Rich, and Democratic (i.e., WEIRD). Just as the fictional Nacirema seem hopelessly weird to anthropology undergraduates until they realize that their customs are actually those of Americans (Miner 1956), most of the academy is populated by WEIRD people who just haven’t realized how weird they are. To their formulation, though, we need to add a sixth letter, M—at the risk of losing a beautiful acronym—because the populations studied by behavioral scientists are almost universally Modern. If the populations today are not like the populations of the past, then any generalizations based on ethnographic data, psychological experiments, or surveys alone will be problematic if extended beyond their time. For anthropologists, not only is the ethnographic record a biased sample of all societies that have ever existed, it is biased to an unknown degree and in unknown ways. To put it another way, we do not know the size of the statistical universe of societies of which the ethnographic record is a sample. Without denying that WEIRD societies are profoundly weird, past societies may be equally weird, but in ways not currently apparent to us.

Thus, for any topic, for any domain of experience, across any branch of anthropology (or any other human science), to build theory is inevitably to choose between a present-centered particularism (which is fine, as long as we recognize it for what it is) and a temporally broad, cross-cultural comparativism that seeks to expand our sample of human variation. So, for instance, an anthropology of the state that takes its start from the work of Eric Wolf (1982) and that regards Western globalization as the principal subject of study is very different from one that situates contemporary state theory in a temporally deep framework, such as recent work by James Scott (2017) or David Graeber and Marshall Sahlins (2017). Wolf’s work starts with the presumption that the relevant history for understanding the modern state is the development of global systems of wealth extraction and power inequalities starting around 1400 or so—i.e., capitalism and its predecessors, centered in Western thought. It is extraordinarily important for counteracting, systematically and empirically, the notion that the ethnographic record reflects pristine peoples untouched by historical and social forces. But in doing so, it also separates us from the deeper past. In contrast, the more recent comparative work on the state takes the view that understanding these inequalities requires a consideration of the millennia of deep, pervasive inequality that preceded them, while noting equality and antihierarchy where they existed, and insisting that inequality is not inevitable. There’s lots to take from both these perspectives, and theoretically they share a common substrate of historical materialism, but they entail very different views about the value of studying large swaths of humanity.

One could object at this point that for some topics, there’s good reason to think that the past and the present might be really different. So let’s take that idea seriously, and think about two different possibilities.

Hypothesis 1: The present is really dissimilar to the past. Perhaps the present system of globalization, industrialization, mass media, etc. is radically different from anything that has existed previously. In many ways, this hypothesis is so obvious that it’s taken for granted in many circles. It can’t be limitless difference (we’re all human, after all), but it sure could be big enough to warrant treating the past couple of centuries as incomparable to earlier times. Now, if you’re an ethnographer, depending on whether you’re an optimist or a pessimist, you can interpret that in two ways. You might say, “Well, as an ethnographer, that means I don’t need to worry about anything that happened in the past—basically, archaeology is a radically different subject matter because it deals mainly with periods that are completely unlike how anyone lives today.” On the other hand, you might say, “Uh-oh, if the past is really that different, then I need to be aware that I’m just describing a small sliver of humanity, not just in space but in time.” While the first answer goes against the idea of a broad, temporally deep approach, the second embraces it; but they rely on the same insight.

But the idea that the past and present are dissimilar is not the only possibility.

Hypothesis 2: The past and the present are not so different after all, but the data and methodologies used to learn about them are. The problem is then an epistemological one, not an ontological one. Perhaps the nature of the data in the ethnographic record is such that comparable phenomena existed in the past but simply haven’t survived, or are not amenable to archaeological or historical analysis. Ethnographic data are collected at particular moments, and ethnography records data at a micro scale compared to what archaeology records. You get different sorts of things than you do as an archaeologist, for whom even a span of 50 or 100 years—whole generations—is considered brief. This is related to the problem that Martin Wobst called the “tyranny of the ethnographic record” over forty years ago (Wobst 1978). In particular, Wobst was challenging the notion that we should primarily use models and methods based on ethnographic data to explain archaeological evidence for hunter-forager behavior. It wasn’t that he thought the past and present were different—it was that he thought that archaeology and ethnology were different. But again, there are two possible approaches. Our hypothetical ethnographer could say, “Method largely determines the questions we ask and the answers we get. As an ethnographer, what I get is really going to be incommensurable with what my archaeologist buddy gets, even if we’re working in the same region. The archaeology of hunter-foragers is thus irrelevant to what I do.” On the other hand, another ethnographer might say, “Uh-oh, if I’m honestly interested in getting past my methodological perspective, I’d better find some other complementary perspectives to work alongside mine.” Again, two diametrically opposed answers to the same observation.

I am sympathetic to Wobst’s argument insofar as it recognizes that ethnographic analogies may misconstrue prehistoric data, in particular because the methods and time scales of the two subfields differ, and insofar as it attempts to rebalance the weighting of ethnographic and archaeological data. But, to the degree that Wobst’s arguments have served as a rationale for archaeologists and ethnographers to ignore one another on the assumption that their datasets are incompatible, they are fundamentally misapplied. I am arguing here for a much closer coordination of archaeological and ethnographic data, not despite the fact that they are different, but because they are different, and because we need to explain that difference if we are to have any success in understanding the range of constraints on social formations. I am far more sympathetic to the call for cross-cultural comparisons of archaeological as well as ethnographic data of the sort that the archaeologist Peter Peregrine (2001, 2004) has advocated, adding a diachronic dimension to a previously synchronic enterprise.

The archaeologist André Costopoulos goes still further, and argues that our total knowledge of the universe of possible configurations of human societies (including ethnographic, historic, and prehistoric data) is “a sample of the universe of possibilities whose relationship to that universe is unknown to us. The extent and composition of the universe are completely unknown. Even for the tiny portion of the space that is documented, there are significant disagreements about the number of objects and the ways in which we can circumscribe them” (Gray and Costopoulos 2006: 151). This is a serious challenge to all comparative social scientific research. The answer, again, is that comparative research cannot be conducted in a theoretical vacuum. Cross-cultural research questions acquire validity as part of frameworks of research, rather than as a random inductive search for patterning. For numeral systems, an analysis informed by what we know about numerical cognition is justified as the foundation of the search for patterns in the world’s known number systems, past and present.

Why is there no medieval anthropology?

But the problem is still deeper, because even if data from prehistoric archaeology were used systematically to complement our knowledge of the ethnographic record, anthropology has never asserted as its purview, or, more cynically, has actively removed from its purview, a wide swath of societies known principally through historical evidence rather than through archaeology, and in particular, that constellation of societies falling under the somewhat inapt but hardy label “medieval.” The study of the so-called ancient civilizations such as Egypt and Mesopotamia is disciplinarily diffuse but includes numerous anthropological archaeologists. There are dozens of archaeologists trained jointly in anthropology and classics departments, and/or who teach interdisciplinarily in both fields. Hundreds of classical archaeology students every year get their archaeological training primarily in anthropology departments. Similarly, there are hundreds of early modernists in anthropology: people who focus on Spanish colonialism in the New World, for instance, or world systems theorists, or people interested in Atlantic World / diasporic studies, or really most of the folks who would describe themselves as ethnohistorians. The Middle Ages, understood roughly as the period from around 500 to 1500 CE, is the only period to be so deeply underrepresented in anthropological theory and empirical research.

One problem that medieval historians face is that “the medieval” is a nebulous object, subject to both scholarly and popular imposed definitions that satisfy no one. “Medieval India,” “medieval Japan,” “medieval Islam,” etc. all refer to very different social configurations and chronological periods. If there were a purely chronological definition of “medieval,” then one would really need to include New World societies as well. Hardly anyone talks about the “medieval Maya” in the way that people seem very happy to talk about “medieval Japan.” As of October 2018, the former phrase has only 8 results in Google Scholar, while the latter has 8,800! There is no methodological justification for including “medieval” Mesoamerica but excluding medieval Islam from anthropological investigation—this is strictly a matter of arbitrary disciplinary divides that, at worst, can be construed as racialized. Fortunately, medieval scholars have recently begun to construe a “global Middle Ages” that might include the Americas, or at least to think beyond the traditional bounds of Europe and the Mediterranean to consider broad patterns of culture contact (Jervis 2017; Keene 2019). But anthropologists have not had much to say on the matter—we have not been part of this discussion.

We have very little conception of what a medieval anthropology would look like. Only a handful of anthropologists have given serious consideration to Old World societies between 500 and 1500 CE (e.g., Goody 1983; Hastrup 1985; Hodgen 1952; Macfarlane 1978). Raoul Naroll’s largely unheeded call for “holohistorical” work, which would insert historical data into broader cross-cultural analyses, could help fill that gap (Naroll et al. 1971; Naroll, Bullough, and Naroll 1974), but only if anthropologists acknowledge that anthropology cannot be a fully comparative discipline without this data. But the medieval is almost entirely out of our grasp. I recall being amazed when, as a graduate student working on historical issues in anthropology, I first encountered A. L. Kroeber’s once-classic textbook Anthropology and realized it had a whole chapter on the invention and diffusion of the zero in medieval India, the Middle East, and Europe (Kroeber 1923). This sort of subject matter is foreign to contemporary anthropology. This is an odd gap, to say the least, for a discipline that purports to be a holistic comparative study of human behavior. In many small (and not-so-small) history departments, medievalists get the dubious honor of teaching Western civilization courses that start with Sumer (in which anthropological archaeology has numerous specialists) and end with the twentieth century (of which the vast majority of cultural anthropologists have some knowledge). It’s not that I think that I, or any other anthropologist, would do a better job than a medievalist would of teaching such a course, nor would I want to do so. But if I were going to construct a “world survey” anthropology course, it would be very challenging to come up with relevant material written by anthropologists or anthropologically trained archaeologists that focuses on the millennium of history in which medievalists specialize. But I can’t think of any valid conceptual or methodological reason to exclude the medieval from the anthropological.

To be fair, there are some notable exceptions. The social anthropologist Jack Goody, whose work on literacy I have already discussed and which is an important precursor to this work, treated the medieval as a subject for serious study, across domains as disparate as literacy, food, and marriage (Goody 1977, 1982, 1983). Alan Macfarlane, with doctorates in both history and anthropology, writes on witchcraft, capitalism, individualism, and marriage from a comparativist anthropological and historical perspective in which the medieval receives heavy attention (Macfarlane 1970, 1978). Historical anthropology in Iceland is similarly weighted toward the medieval, partly because of the unique population history of the island (Hastrup 1985; Byock 1990; Durrenberger 1992). In Americanist anthropology, there are two audacious, if ultimately peripheral, efforts of note that incorporate the medieval. Margaret Hodgen, whose career was sadly derailed from what it might have been by her unwillingness to participate in anticommunist loyalty oaths in the 1950s, wrote a series of important papers on the subject of cultural diffusion and innovation using medieval and early modern data (Hodgen 1945, 1952, 1974). In his later years, Gordon Hewes, a true four-field holistic anthropologist and a student of A. L. Kroeber, amassed thousands of pages of material for a comparative history of the seventh century CE, of which only small and programmatic fragments were ever published (Hewes 1981, 1995). I mention these last two not because they are important, but rather because they have had such marginal influence within anthropology.

Yet we must be wary not to simply use “medieval” as an indicator of stage without consideration of chronology. In his massive, almost primordial volume of anthropology, Primitive Culture, E. B. Tylor wrote, “Little respect need be had in such comparisons for date in history or for place on the map; the ancient Swiss lake-dweller may be set beside the medieval Aztec, and the Ojibwa of North America beside the Zulu of South Africa” (1871: 6). At a glance Tylor is allowing for a broad anthropology including historical societies, but only at the cost of locking past peoples into stages, only comparable to others of the same type. This dehumanizing comparativism is far worse than analyses that look only at the present, because data are ripped out of any context that might be relevant to understanding them, and retrofitted to a rigid hierarchy of social standing. These problems of unilinear cultural evolutionary theories are so well known in modern anthropology as to need no further exposition.

The archaeologist Shannon Dawdy, in an essential article on modernity, ideologies of the past and present, and the way in which subjects of inquiry are included and excluded from anthropological knowledge, confronts a similar problem in her account of “clockpunk anthropology”—one that treats chronology, not by ignoring it or turning it into stages, but by recognizing that past and present concerns are not so different (Dawdy 2010). Dawdy’s problem, as a historical archaeologist, is a different one from mine, in that her subject matter, the analysis of ruins from relatively recent American cities, is separated by disciplinary disjunctures from its counterparts in antiquity. She rightly bemoans the neglect of “modern” ruins simply because some archaeologists regard them as insufficiently archaeological to be of interest. Dawdy’s work shares with my argument a broad commitment to comparativism as a framework for breaking down rigid periodizations and disciplinary silos. Challenging what she sees as a rigid divide between modernity and antiquity, she argues that archaeologists who work on prehistoric and historic periods have much more in common than either of them normally allow. We need to be able to ask better questions, such as:

Are the differences between ancient and modern cities simply those of scale and tempo, or are they truly of kind? Are grid patterns and secular subjects such whole new inventions? Or totalitarian architecture and panoptic public spaces? What would ancient Greek and Roman urban sites reveal about our own spaces? Or those of Tenochtitlan and Teotihuacan? Most archaeologists of antiquity decline to consider the possibility of such modern phenomena as racialization, capital accumulation, or terrorism in the past—to look for such things in antiquity is not only anachronistic but also offensive. The past is not supposed to share these dystopian aspects of our present and recent past. The deep past is, for many, a utopian refuge. (Dawdy 2010: 364)

These are important questions indeed, and important problems! From my perspective, there are similar omissions in the medieval period, and in much of antiquity that is considered “historical,” because anthropologists, archaeologists, historians, philologists, and others all seem to have agreed upon a division of labor whose effects on our ability to ask good comparative questions—about number systems or anything else—are pernicious.

I have spent a lot of time in this book talking about the Roman numerals, partly in antiquity but chiefly in the late medieval and early modern period, just when they were beginning to be seriously challenged for dominance in Europe by the newfangled (at least to Europeans at the time) Western numerals. There has never been a full-length English monograph on the Roman numerals, which is surprising given their antiquity, their widespread use, and their continued cultural importance. Nor do I claim that this book serves this purpose. To write a history of the Roman numerals would be to grant that they are sui generis, to reaffirm that they are a subject of antiquity and the medieval, and perhaps even to undermine my own credibility to comment on these matters (as I am neither a classicist nor a medievalist by training). Instead, by framing the decline of the Roman numerals as merely the most prominent case among a comparative set of notations, ancient, medieval, and modern, and governed by the same sorts of constraints and cognitive factors, I hope to historicize these phenomena, but only to the degree they warrant. And we do not know that degree in advance—it can only be established through investigation.

What is the future of numeral systems?

There is a strange paradox in the progressivist framework under which much of Western thinking about technology operates. On the one hand, there is, for many of us, a perception that time is speeding up, that social change is happening more rapidly now than ever before, and perhaps even that the rate of acceleration is itself accelerating. Partly this is surely a cognitive bias derived from a combination of a certain “good old days” brand of traditionalist narrative in the media, techno-utopian futurist notions of progress, and the fact that at the scale of the life course, one’s formative years seem to pass more slowly than one’s later life. This logic leads to some varieties of transhumanism, a viewpoint that contends that, in the near to medium future, technology’s interface with the human body and specifically the human brain will accelerate beyond the point where we can be reasonably considered human, and ultimately, through superhuman intelligence, will lead to a technological singularity the scope of which we present humans can hardly fathom (Kurzweil 2005; Bostrom 2014). Under this frame of reference, we have absolutely no reason to expect that the numeration systems of today will be anything but laughable to humans perhaps only a century in the future. Futurists may not be thinking, specifically, about the “Numerals of the Future,” but as numerals are the product of modern human cognition, we can readily see that the numerals of today may not be suitable for the posthuman tomorrow.

On the other hand, there is an almost perverse certainty, in many domains of existence, including number systems, that the people of today have come up with the One True Answer to many central problems, and that essentially no further progress can ever be made in them. Much mockery was made of the political scientist Francis Fukuyama when he published The End of History and the Last Man (1992), signaling his belief (which he now largely rejects) that with the fall of communism, liberal democratic capitalism was the final social configuration of human societies. The notion that the next millennium, or five, or ten, of human existence will have a single mode of production and a single form of political decision making is nonetheless still widespread. Partly this is just a failure of our imagination (although, for the record, many such alternative models are available, across political spectrums and philosophical frameworks, if you care to look). The idea that we are at the “end of history” of numeration is also widespread. As the cognitive neuroscientist Stanislas Dehaene remarks:

If the evolution of written numeration converges, it is mainly because place-value coding is the best available notation. So many of its characteristics can be praised: its compactness, the few symbols it requires, the ease with which it can be learned, the speed with which it can be read or written, the simplicity of the calculation algorithms it supports. All justify its universal adoption. Indeed, it is hard to see what new invention could ever improve on it. (Dehaene 1997: 101)

All right, you might respond: what would such a new “killer” number system, one that might replace the Western numerals, look like? And I might demur that I’m not a futurologist, simply noting that it’s unlikely that ten thousand years from now, our descendants will still be using ten digits with place value. While there are notations, like the Egyptian hieroglyphs or the Roman numerals, that survive for a millennium or two, even these undergo tremendous change over their period of active use—so why would we expect that change to have stopped? My job is not to make up new systems, just to describe them. But I can see that this response, although accurate, is not the best possible one, because several numerical notations explicitly designed to be better than Western numerals (for some value of “better,” for some purpose) are already on offer.

First, there are systems developed by groups like the Dozenal Society of America. Founded in 1938, it was originally called the Duodecimal Society of America for some decades until it was decided that this was too decimal-centric. The Society advocates for base-12 arithmetic in place of base-10, principally because 12 is a highly composite number with many factors (1, 2, 3, 4, 6, and 12), making work with multiplication and division more convenient (Andrews 1944; Brost 1989). For numerals, converting to base 12 would simply be a matter of adding single digits for 10 and 11 (the Dozenal Society has its own preferred symbols for these). But that’s a fairly minor change in the big scheme of things—for most of us, if we ever think of “alternative number systems,” choosing a different base is a pretty obvious variant. Dozens (or at the very least, tens) of science fiction writers have created weird number systems for alien or future human societies (Pohl 1966), and most of them just use a different base along with ciphered-positional notation.5 You may be familiar with hexadecimal numerals used in computing, whereby the letters A through F represent 10 through 15, so that 1A = 1 × 16 + 10, or 26, as an accommodation between the binary nature of electronics and the constraints on conciseness that make actual binary numbers too long for the human eye and mind. But let’s get weirder.
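
A brief aside before we do: the following is a minimal sketch, in Python, of what “just use a different base with ciphered-positional notation” amounts to. The dozenal digit glyphs used here (“X” for ten, “E” for eleven) are stand-ins of my own, not the Dozenal Society’s preferred symbols.

```python
# Sketch: render a non-negative integer in any ciphered-positional base.
# "X" and "E" below are placeholder glyphs for ten and eleven, not the
# Dozenal Society's actual symbols.

def to_base(n, digits):
    """Return n written with the given digit symbols (base = len(digits))."""
    base = len(digits)
    if n == 0:
        return digits[0]
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

DOZENAL = "0123456789XE"          # base 12
HEXADECIMAL = "0123456789ABCDEF"  # base 16

print(to_base(26, HEXADECIMAL))   # '1A': 1 x 16 + 10 = 26, as above
print(to_base(144, DOZENAL))      # '100': one gross, written dozenally
```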

In 1905, the mathematician and tidal scientist Rollin A. Harris (1863–1918) published a now nearly forgotten article entitled “Numerals for Simplifying Addition” in the American Mathematical Monthly (Harris 1905). Harris begins by noting that many of the number systems of antiquity are self-evident in meaning, while the digits 0 through 9 are obscure in their meaning, and simply have to be learned. In terms of the typology I outlined in chapter 1 and elsewhere, the systems he’s interested in are cumulative-additive—using repeated signs that are added together, while Western (which he calls “ordinary”) numerals are ciphered-positional. Harris proposes, then, to reform our digits, keeping the decimal, place value system intact but reshaping the digits themselves using combinations of vertical strokes for 1, horizontal strokes for 2, and circles for 5, partially along the model of the Syriac numerals (figure 7.1). Because these signs are additive—the structure of each individual sign combines fives, twos, and ones as needed to reach any number up to nine—they can be read just as a Roman VIII can be read as a five followed by three ones.

Figure 7.1

Proposed “simplified” numerals along with several ancient analogues (Harris 1905: 66)
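
To make the additive logic concrete, here is a small sketch that decomposes each decimal digit into fives, twos, and ones, largest first; it illustrates only the structural principle, not Harris’s actual sign shapes, which are shown in figure 7.1.

```python
# Sketch of the additive structure behind Harris's proposed digits: each
# decimal digit is built from circles (fives), horizontal strokes (twos),
# and vertical strokes (ones), taken greedily from largest to smallest.

def harris_parts(digit):
    parts = {}
    for name, value in (("circles", 5), ("horizontals", 2), ("verticals", 1)):
        parts[name], digit = divmod(digit, value)
    return parts

for d in range(10):
    print(d, harris_parts(d))
# e.g., 8 -> one circle, one horizontal, one vertical (5 + 2 + 1), readable
# at sight much as Roman VIII is a five followed by three ones.
```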

By adopting these new signs, Harris argues, arithmetical errors will be reduced, the numerals will be learned more quickly and with less effort, and children will more rapidly learn addition. A modern cognitive scientist could argue (although we don’t have any experimental evidence) that they offload more of the reader’s and writer’s cognitive load to external representations, and thus require less internal, mental cognitive work (see, e.g., Zhang and Norman 1995). A semiotician might note that these signs, although not exactly iconic, are more highly motivated than our present, opaque ones. They preserve all of the advantages of the Western numerals and add to them a new additive clarity in the sign forms. But of course, their adoption never happened—in fact, Harris’s work does not seem to have been cited by anyone (except me) in the century or more since its publication. We can look at this system and see that it is, in some sense, superior to Western numerals, retaining all their advantages while improving on others—and, at the same time, we can see that it would have been very difficult indeed for this early twentieth-century innovation to be adopted and replace the Western numerals. As clever and innovative as it is, Harris’s system seems to be a solution to a problem that no one was facing.

In 1947, the amateur mathematician James E. Foster showed that it was possible to have a number system that was infinite and concise but had no zero symbol—rather, it used a T for 10. So, for instance, counting up from 98, we have 99, then 9T (9 tens + ten), T1 (ten tens + one), T2 (ten tens + two), and onward up to 1,000, which is 99T (nine hundreds, nine tens, + ten) (table 7.1). This system is, at first glance, very strange, but it is totally unambiguous, decimal, positional, and infinitely extendable. Most of its numerals are identical to Western numerals, but numbers that we would represent with zeroes look very different. It is also more concise than Western numerals—compare 100 to 9T or 1,000 to 99T.

Table 7.1

Zeroless numerals from 1 to 120 (11T) (after Foster 1947)
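
A minimal sketch of the conversion, assuming Foster’s “T” for ten: this is the standard bijective base-10 algorithm, which shifts by one at each step so that the remainders map onto the digits 1 through T rather than 0 through 9.

```python
# Sketch: convert a positive integer to Foster's zeroless decimal notation,
# a bijective base-10 system whose digits run 1-9 plus T (= ten), with no zero.

def to_zeroless(n):
    digits = []
    while n > 0:
        n, r = divmod(n - 1, 10)   # shift by 1 so remainders 0..9 map to digits 1..T
        digits.append("123456789T"[r])
    return "".join(reversed(digits))

for n in (98, 99, 100, 101, 120, 1000):
    print(n, to_zeroless(n))
# 98, 99, 9T, T1, 11T, 99T -- matching the sequence described in the text
```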

You might object, at this point, that 9T is completely opaque to English speakers who are used to 100. But then again, who’s to say that the user of this system would be an English speaker? In creating this system, Foster was no reformer or technocrat—he was the author of “Don’t Call It Science” (1953) and “Mathematics Need Not Be Practical” (1956), and thus an advocate, following G. H. Hardy’s classic A Mathematician’s Apology (1940), of a humanistic, deeply impractical mathematics. Unlike Harris, Foster did not aim to advocate that such a system should be adopted, but to show that it was infinitely extendable, had a single representation for each number, and was easily manipulable as a mathematical system. But the advantage of conciseness is there nonetheless, if that’s your interest.

As it turns out, while Foster was the first to invent such a system, his creation has apparently been independently reinvented several times since. The mathematician Raymond Smullyan, in his classic Theory of Formal Systems (1961), called such systems lexicographically arranged—their order is from shortest to longest and, within strings of a given length, “alphabetically” from lowest to highest. Robert Forslund (1995), seemingly unaware of Foster’s precedent, redevelops it and then argues that premodern inscriptions and texts using such systems might have been misidentified as errors, calling for “archaeologists specializing in the interpretation of these ancient documents to examine this usage.” Fortunately for us, there do not appear to be such longstanding errors of interpretation, but Forslund’s general point is correct: even if this system was never used, there is no reason why it could not have been invented.6 After all, he invented it unaware of Foster’s creation!

The mathematician Vincenzo Manca (2015), building on Smullyan’s lexicographic ordering, invented a system identical to Foster’s (using X instead of T for 10, but otherwise the same) and noted a further benefit. These systems are known as bijective numeration systems—unlike Western numerals, where one can (as in identification numbers) insert leading zeroes (so that 1, 01, 001, 0001, etc., all represent 1), this system gives each number a unique representation, so any set of strings can be readily ordered. There are no leading zeroes, since there are no zeroes at all. Manca’s use for this sort of system was to produce an ordering of DNA sequences, treating the four letters A, C, G, T, which represent the bases, as the digits of a sort of base-4 numeration system. Unlike Foster, who started with a concept without any notion of practical utility, Manca started with a practical problem and then hit on the same solution.
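
As an illustration of the sort of ordering Manca describes, here is a sketch that treats DNA strings as bijective base-4 numerals. The particular assignment A = 1, C = 2, G = 3, T = 4 is my own illustrative assumption; what matters is that no digit is zero, so every string receives a distinct positive integer, and sorting by that integer yields Smullyan’s lexicographic arrangement—shorter strings first, then “alphabetically” within each length.

```python
# Sketch: index DNA strings as bijective base-4 numerals (A=1, C=2, G=3, T=4).
# With no zero digit, every string maps to a distinct positive integer, and
# sorting by that integer gives the lexicographic (shortest-first) order.

VALUES = {"A": 1, "C": 2, "G": 3, "T": 4}

def dna_index(seq):
    n = 0
    for letter in seq:
        n = n * 4 + VALUES[letter]
    return n

for s in ["A", "T", "AA", "AC", "TT", "AAA"]:
    print(s, dna_index(s))
# A=1, T=4, AA=5, AC=6, TT=20, AAA=21: every two-letter string sorts before
# every three-letter string, with no leading-zero ambiguity.
```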

I’m not mentioning this system because I think it is the numeral system of the future, or because it’s so wonderful that we should adopt it. Frankly, my goal is to flummox you. Staring at this weird system, bewildered, is about as close as we can get to the mindset of those Indian, Middle Eastern, or Western European mathematicians upon first encountering decimal, positional notation between the sixth and eleventh centuries. How strange it seemed to them at first glance, this idea of place value with the zero. It was a subject of bafflement. It took centuries to be fully accepted, and it needed to be explained in great detail to new users. The zero, in particular, was especially confusing, and was described by one late twelfth-century author as a “chimaera”—a monstrous digit, a number and yet not a number (Burnett 2002). The philosopher of science Helen De Cruz argues that some numerical concepts, like the zero, may be difficult to accept or adopt because they are deeply counterintuitive, although, once accepted, their weirdness may be appealing (De Cruz 2006: 317). That feeling you probably have right now, that this monstrosity with Ts for 10s couldn’t possibly be workable, possibly mixed with a protective paternalism for the beloved Western numerals, is what often happens when novelties get introduced. It is partly because their unfamiliarity requires effort—the QWERTY effect discussed in chapter 4. But it is also because the functions for which this new system would be useful are not those most relevant to our present concerns.

But perhaps you want a system that actually has a demonstrable, current use. As I argued in chapter 1, there is a gap between the imaginable and the attested—those things that the human mind is capable of imagining are numerous, but many of those things never find their way into use, because of cognitive and functional constraints. Only a small set of “stable engineering solutions satisfying multiple design constraints” (Evans and Levinson 2009: 1) survive and thrive—because they are solutions to human problems, and because they are perceived as sufficiently relevant to warrant their adoption. There is such an alternative numeral system, and you’ve likely seen it before if you’ve ever looked inside electronics, although you don’t necessarily know what it all means (figure 7.2).

Figure 7.2

680-ohm resistor using electronic color code (“680 ohms 5% axial resistor” by bomazi is licensed under CC BY-SA 2.0; source: Wikimedia Commons)

Almost every electronic device uses resistors, like the one shown here, to create resistance (measured in ohms) to the flow of electric current in the device, and they almost all have a set of colored bands to indicate their resistance, standardized internationally. The first two or three bands represent some numerical quantity, using different colors to indicate the numerals 0 through 9. So, in figure 7.2, the leftmost band is blue, representing 6, and the next is gray, for 8. The third band is the multiplier band—it uses the same color system, but represents powers of 10—so in this case, brown is 1, or 10¹ = 10. The product of the first two bands and the multiplier, 68 × 10, gives the resistance, 680 ohms. The fourth band uses a different color scheme to represent tolerance, in this case gold meaning ±5%. It also serves the useful function of marking the end of the numeral, showing the reader at a glance to start reading from the other end of the resistor.

The other advantage they have is that color bands are highly distinct and (except to the color-blind) visually salient at small scale—resistors are generally only a few millimeters thick and it is not always feasible to print highly readable numerals on or near them. The colors are useful for human eyes in the context of a nonlinear text medium (resistors can be oriented all sorts of ways within a circuit) where bright color visible at small scale can be readily processed. Cross-culturally, numerical notations almost never use color directly to represent differences in numerical value (Chrisomalis 2010: 365). It is probable that color serves some semantic function in the encoding of the Inka khipu records (Hyland 2017), but not, apparently, to indicate differences in numerical value. But in most written traditions—whether incised in wood or stone, imprinted in clay, or written with ink on some flat surface—using color to denote semantic difference is rarely necessary. Without color, resistor numerals are structurally similar to someone saying “I make 56K a year”—56 being the digits, K being the multiplier for 1,000. They combine a ciphered-positional structure for the significant digits and then are multiplicative-additive for the power. But unlike “K,” which is the only multiplier used in English in this way (as discussed in chapter 6), resistor numerals have a distinct multiplier color for each power, its digit value equal to the exponent of 10 being represented.

Table 7.2

Resistor numeral values

Color     Digit value    Multiplier value    Tolerance
Black     0                         1        —
Brown     1                        10        ±1%
Red       2                       100        ±2%
Orange    3                     1,000        —
Yellow    4                    10,000        —
Green     5                   100,000        ±0.5%
Blue      6                 1,000,000        ±0.25%
Purple    7                10,000,000        ±0.1%
Gray      8                         —        —
White     9                         —        —
Gold      —                       0.1        ±5%
Silver    —                      0.01        ±10%
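
As a minimal sketch of how the values in table 7.2 are read in practice, the following decodes a resistance from its band colors; the function and its simplifications (the tolerance band is ignored) are mine, not part of any standard.

```python
# Sketch: decode a resistor's value from its color bands, using the digit and
# multiplier values of table 7.2 (the tolerance band is ignored here).

DIGIT = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
         "green": 5, "blue": 6, "purple": 7, "gray": 8, "white": 9}
MULTIPLIER = {"gold": 0.1, "silver": 0.01,
              **{color: 10 ** value for color, value in DIGIT.items() if value <= 7}}

def resistance(*bands):
    """Last band is the multiplier; those before it are significant digits."""
    *digit_bands, multiplier_band = bands
    significant = 0
    for color in digit_bands:
        significant = significant * 10 + DIGIT[color]
    return significant * MULTIPLIER[multiplier_band]

print(resistance("blue", "gray", "brown"))       # 68 x 10 = 680 ohms (figure 7.2)
print(resistance("yellow", "purple", "yellow"))  # 47 x 10,000 = 470,000 ohms
```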

Resistor numerals are really most useful for representing round numbers—those with one or more trailing zeroes—in a very concise way. In this way they are similar to scientific notation, which is principally used to express large or small round numbers (6.02 × 10²³) but is fairly useless for numbers without lots of zeroes. To write 478,239 on a resistor would require six bands of different colors—4, 7, 8, 2, 3, and 9, followed by the multiplier band for “×1.” But no one makes a 478,239-ohm resistor—that kind of precision isn’t needed most of the time. You might have a 470,000-ohm resistor, though, in which case you just need the 4 and 7, along with the multiplier for 10,000. Thus, resistors rarely need more than two or at most three “digit” bands, plus a multiplier band, to indicate the resistance—even for really high resistances up into the millions of ohms, which are actually quite common in everyday use. In many cases conciseness in numeration is not an issue, as I have shown repeatedly, but on a tiny resistor, being able to convey the resistance in just a couple of visually salient colored bands that are hard to misread is very valuable indeed.

Again, I don’t suppose that everyone in a century or two is going to use color-coded bands for digits and a multiplier for representing numbers generally. Nonetheless, you have, in your home, numerous instances (possibly hundreds) of a radically different numeration system from the Western numerals, one that was invented in the mid twentieth century and is still used decades later in electronic devices throughout the world, albeit hidden from ready view. People—real people who design, install, and repair electronics—find them more useful than Western numerals for this purpose. We don’t need to be transhuman to use them—they were designed by humans to be read by humans.

Not only are there imaginable alternatives to Western numerals, but some of them have actually been adopted and are in use. We should bear in mind that many innovations will fall under the pressure of frequency-dependent biases, as we saw in the case of the Cherokee numerals. Even so, the fact that invention in the domain of number continues apace—and perhaps has even increased in pace, although that would be harder to show—demonstrates the continued vitality of human numerical inventiveness. We should thus feel liberated to ask under what conditions and in what contexts a seemingly ubiquitous “killer” system like the Western numerals can have its triumph upended. Knowing how it came to be universal is one step. These conditions are, I have shown, likely to be a combination of cognitive-structural and social ones, linked through ongoing processes of reckoning—evaluating, thinking, judging, deciding. The next step is to ask how this system came to be believed to be irrevocably triumphant. We can hardly foresee the details, but we should check our arrogant assumption that the narrative has ended forever. We are not at the end of the history of numbers, and we never will be.

What are the limits on human variation?

In his startling essay Fragments of an Anarchist Anthropology, David Graeber sets out a forthright agenda preparing anthropology to aid in producing forms of sociopolitical anarchism (Graeber 2004). No armchair theorist, Graeber has explicitly political goals: to demonstrate the feasibility of alternative political and economic solutions to those currently prevalent, and to actualize processes that will bring them about. Graeber and I share the view that anthropology is the only discipline well suited to examine the totality of human behavior and social organization, the only discipline whose explicit comparativism permits an honest investigation of the nature of inequality, violence, power, hierarchy, knowledge, and the state, and the interactions among them. We share a conviction that the sociologists, philosophers, and literary theorists who have dealt with these questions lately have done so poorly, and from a Western-centered viewpoint. It is very easy for those living in modern Western societies where states are large, powerful, and oppressive to imagine that there are no alternatives. Graeber’s vision of an anthropology that does not yet exist is a discipline that seeks to present such alternatives, while recognizing that not everything is possible—and that it is thus imperative to circumscribe the possible within the imaginable.

Graeber also moves explicitly to include historical and archaeological time scales in his work—for instance, in his account of debt over the past 5,000 years (Graeber 2011). This is not to deny the value of the study of living people in all their complexity. Ethnography is one of the methods by which I and some other anthropologists produce contingent knowledge about present societies through sustained participant observation and other systematic interaction with living people. However, it is quite dangerous to equate the range of variability in human behavior today with the range that has existed in the past, or the range that might exist in the future. We run the risk of failing to observe social formations that once existed but no longer do, anywhere, and thus of constraining our ability to think about alternative solutions. In analyzing how societies resist state institutions, we must go beyond those ways used by people living under and resisting a particular form of domination, that which is particular to the past several centuries. These are five-hundred-year solutions to ten-thousand-year problems.

We do not know, nor is there any immediate prospect of knowing within any contemporary body of anthropological theory, what the constraints are on configurations of social inequality in human societies. Most of the literature I have discussed, including the numerical evidence, has identified constraints in cognitive and linguistic domains, not social ones, but that is largely because social constraints have not been conceptualized as such, rather than because they do not exist. Assuming that there are none evokes without warrant the specter of dystopian futures as much as it allows the prospect of utopian ones. It is imperative, if anthropology is to make any contribution to the social sciences in the twenty-first century, to answer the question, “To what degree, and by what processes, is variability in the degree and intensity of social inequality created?” Such an enterprise requires not only that we understand the range of variability in contemporary inequality but that we remind ourselves that this range is not fixed chronologically.

David Aberle’s (1987) distinguished lecture “What Kind of Science Is Anthropology?” outlines a historical theory of anthropological constraints, rightly noting that “the historical constraints on a system are ever-changing, since some of the novelties of today that are incorporated into the system become the constraints of tomorrow” (Aberle 1987: 554). Recognizing that environmental and functional constraints (including cognitive ones) do play some role in constraining human behavior, Aberle insists that although anthropology cannot be a predictive science, it can and should be a probabilistic, explanatory historical science along with geology, historical linguistics, evolutionary biology, and cosmology. I agree with this fully. His insistence on the value of historical reconstructions using synchronic ethnological data is appropriate, but reconstructions that are not aware of the possibility that the past may be different from the present will almost surely be flawed. It is as if we were to argue for evolutionary taxonomy without paleontology even where the fossil record is abundant. Where historical or archaeological data are available, it is appropriate to use them to reconstruct the past in a more direct, less inferential manner. And so, while anthropology must be comparative, it must be comparative in its totality, across many time scales: ethnographic, historical, archaeological, evolutionary.

This is a call, then, for a macro-anthropology to parallel our current attention to micro-anthropology. Macrohistorical scholarship, for many historians, raises the specter of historians such as Arnold Toynbee (1934–1961) and Oswald Spengler (1926) who, copious though their work may be, lack the rigor that characterizes thorough historical scholarship. The challenge of such work is principally that its scope is too broad, seeking not only to unify all of world history but every subject of world history in grand narratives or epoch-spanning cycles. I do not claim, just because numeral systems are amenable to comparative analysis, that every subject or domain of experience must be as well, or that the history of numerals helps explain the history of all domains. I do insist, however, that anthropological theory ought to be grounded in the broadest range of data we can have available, and that that includes evidence from all historical periods.

Over the past century and a half, anthropology has been no stranger to asking big questions—theoretical ones that drive forward the discipline, even if unanswered, simply because they are asked. Yet anthropology over the past several decades has retreated from asking these big questions in favor of the local, the historically situated, and the contextual, overshadowing the need for anthropologists to develop and use theories of culture and behavior. This timidity is understandable as a reaction to racist and colonialist excesses, but it denies anthropology any reentry into relevance in understanding the human condition. As a result, anthropological contributions to the human sciences have been limited in a way that would have been unthinkable fifty years ago.

While we are living in the here and now, we are part of much larger-scale and longer-term processes: the “long now” that encompasses all the variability in behavior and knowledge of our species over the millennia of its existence. This phrase, coined by the musician and futurist Brian Eno, a founder of the Long Now Foundation, emphasizes the value of the macro scale to our understanding of the present (Brand 1999: 28). Anthropologists should recoil at the proposition that the local and the particular are all that we do well. Anthropology is the only discipline that claims to be able to study humanity at any time and in any place, in all its variability and sameness. We are gravely in need of a theory of power and inequality, one built by anthropologists with all the data that we are willing and able to gather.

This, for me, is the strongest rationale for the broad, historically inclusive, evolutionarily informed formulation of anthropology that has predominated in North America for the past century, and which ultimately has much deeper origins in Enlightenment and early evolutionary analyses of human behavior (Balée 2009). It is a call for holism; not, as Roy Ellen (2010) argues perceptively, for a vague, meaningless, undivided presumption of unity, but for a methodologically well-supported rejection of disunity among the subfields and topical specialties. I cannot see how “unwrapping the sacred bundle” is anything but detrimental to what anthropology has to offer the social sciences (Segal and Yanagisako 2005). To this formulation I would add historical anthropology, much-neglected yet crucial for integrating past and present. An anthropology that is ethnography and nothing more is unlikely to be relevant for explaining social configurations and remedying social problems beyond the local and contemporary.

Similarly, if anthropology is only a borrower rather than a lender of theory, a discipline whose social theories are borrowed rather than built from our data, then it will cease to have much relevance in the eyes both of other social scientists and of the general public. In this I share with the social anthropologist Matti Bunzl a concern that anthropology’s lack of generalizing focus renders us irrelevant (Bunzl 2008). But the problem is not that disciplinary trends swing like a pendulum between generalizing and particularizing. Generalization has been out of fashion in anthropology for two generations now. The real problem is that we have reified this dichotomy and forgotten about diachronic cross-cultural comparison. Because most universalists and particularists presume that the ethnographic record will be a good basis for theorizing—for universalists because of the astonishing sameness of humanity, and for particularists because the past is no more and no less unique than the present—they neglect sources of data that are diachronically situated. An anthropology that aims to solve human problems cannot restrict itself to a tiny sliver of human variability.

Numerical notations are a particularly tractable domain of experience for this perspective. We have ample textual and archaeological evidence for their use over five thousand years of recorded history, and tantalizing evidence such as tallying going back tens of thousands of years earlier, into the Upper Paleolithic. Their materiality offers us a foothold into their contexts of use (Overmann 2016). They vary, but they do so in constrained ways, with the same five basic structures recurring multiple times independently, so they are neither so universal nor so variable as to be uninteresting. They have understandable histories—trajectories of development, use, and abandonment—that allow us some insight into more general cultural-evolutionary processes of long-term change. They have fruitful and persistent connections to language, a domain for which there is already a well-accepted framework for historical analysis (first philology, and now its descendant, historical linguistics). And they reflect a key interest in cognitive science—numerical cognition—where the study of links between language, notation, perception, and memory is more than half a century old (Miller 1956). But I do not think, despite these advantages, that numerals are the only or even the principal subject amenable to this kind of analysis. Broadly comparative and historical approaches to anthropological data are available, if we have the courage to try them.

In his last words published during his lifetime, the archaeologist Bruce Trigger said in an interview, “I look forward to the day when knowledge of human behavior has reached the point where archaeologists are not only able to understand social variation in the past but can help to construct credible models of societies that have never (yet) existed, in order to broaden and inform public discussion of future alternatives” (in Yellowhorn 2006: 324). Here I think Trigger was exactly right. I would expand his point to include all social scientists, not only archaeologists. We must not think that the present is the best guide to the future, just because yesterday is more distant from tomorrow than today is. Change counts, and a holistic, comparative, and historical anthropology is witness to diachronic and social processes at a multimillennial scale. An anthropology that aims to be relevant to contemporary life must, therefore, build on its foundations as a science of constraints and seek to become an anthropology of change.

Notes

  1.  This may have had a greater impact on you than you imagine. Some research shows that among individuals with color-grapheme synesthesia—people who perceive letters and numerals as having inherent colors—the specific associations they had were strongly correlated with the colors of the most popular letter and numeral toys widely available in their infancy (Witthoft and Winawer 2006, 2013).

  2.  Most of these are in fact quinary-vigesimal (mixed base-5 and base-20), or decimal-vigesimal (mixed base-10 and base-20). Only a few languages have totally distinct words for 1 through 20 without any subdivisions or structures.

  3.  Hammarström (2010: 32) also reports that Ntomba (a Bantu language of the Democratic Republic of the Congo) historically had a base-60 structure, though it no longer does.

  4.  Despite the time and distance of the purported connection, this was a serious debate for a time in the 1970s, occupying considerable attention, as charges of racism flew in both directions among different parties to the dispute (Bowers and Lepi 1975; Pospisil and Price 1976; Bowers 1977).

  5.  A notable exception is Ursula K. Le Guin’s first published science fiction story, “The Masters” (first published in 1961), in which a six-fingered future (or past) human species uses base-12 cumulative-additive numerals, much like the Roman numerals, but where the invention of a zero and ciphered-positional notation brings chaos (Le Guin 1975). I do not think it is a coincidence, in mentioning this rather obscure early work, that Le Guin was the daughter of A. L. Kroeber and was heavily influenced by his anthropology.

  6.  Just before this volume went to press, however, a new preprint suggested that the Maya numerical notation was originally a bijective positional system in which the sign for 0 was actually a sign for 20 (Rojo-Garibaldi et al. 2020). I’m doubtful (no archaeologists or linguists were involved in the study), but it is not inherently implausible.
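  To make the structural point in this note concrete, here is a minimal sketch in Python (my own illustration, not a procedure drawn from Rojo-Garibaldi et al. 2020) contrasting ordinary base-20 positional notation, which requires a zero sign, with bijective base-20 notation, whose digits run from 1 to 20 and which needs no zero at all:

  def to_standard_base20(n):
      # Ordinary positional base-20: digits 0-19; a zero sign is required.
      digits = []
      while n > 0:
          digits.append(n % 20)
          n //= 20
      return list(reversed(digits)) or [0]

  def to_bijective_base20(n):
      # Bijective base-20: digits 1-20; assumes n >= 1; no zero sign needed.
      digits = []
      while n > 0:
          d = n % 20
          if d == 0:
              d = 20
          digits.append(d)
          n = (n - d) // 20
      return list(reversed(digits))

  print(to_standard_base20(400))   # [1, 0, 0]
  print(to_bijective_base20(400))  # [19, 20], since 19 x 20 + 20 = 400

  In a bijective system every positive integer has exactly one representation, which is why a distinct sign for 20, rather than for 0, would be sufficient for a fully positional notation if the preprint’s reading were correct.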