INTRODUCTION

WHAT IS LOGIC?

WE LIVE in a world of constant change: armies collide, empires decline, sometimes whole civilizations slide into oblivion. Does anything last forever? The Apostle Paul says three things last (faith, hope, and love), but we would suggest a fourth constant in our lives: the laws of logic.

We all have a sense of logic; it shapes us every day. Yet its nature is deeply mysterious. Logic isn’t like language, varying from culture to culture. Logic is like arithmetic—tricky yet objectively true. Just as the number seven has always been prime to every culture that has ever defined prime numbers, so the most common methods of deductive reasoning have always been valid. Of course, not everyone sets forth a logician’s definition of validity in the first place, and not everyone pursues the idea to its further reaches. But those who reflect on it always arrive at the beginnings of the same abstract realm, a realm infinitely complicated yet implicit in much that we do—a realm of form, structure, and pattern discovered twenty-three centuries ago. The nature of that discovery was strange, just as logic itself is strange.

For one thing, though everyone uses logic, not everyone studies it (just as, though most people walk, not everyone studies walking). Logic as a discipline begins only with the ancient Greek philosopher Aristotle, and peculiar as it sounds, all modern studies of logic in the sense of deductive validity (meaning logical necessity) descend from his efforts. The deductive validity of argumentation was studied by later Greeks, by later Romans, by Arab physicians serving powerful caliphs in the tenth century A.D., and by medieval theologians working in various European universities. It is now studied by computer programmers the world over. Yet all these studies owe their origins to exactly one person: Aristotle.

Most people today learn logic out of books. Yet all these books were written by people influenced by other books, and all the books have a lineage that leads back to the same original inspiration. The lineage always points back to the same Greek thinker, who flourished in the fourth century B.C. There is simply no historical record of anyone ever studying validity in the logician’s sense of the term except Aristotle or people directly or indirectly influenced by Aristotle.

How can this be? If the truths of logic are objective and culturally invariant, why does the study of logic show up only in particular times and places, like Greece in the fourth century B.C.? And why do all known studies of logical validity lead back to the same original source?

There is one thing to keep in mind from the start: logical discoveries usually depend on individual insight, but logic as a discipline requires something more—insight with an audience. Logicians need other people who are willing to listen, and audiences are a consequence of social forces—forces that affect large numbers of people quite apart from individual will. As a result, logic has a social history as well as an abstract one. Logic considers unchanging truths, but the extent to which large numbers of people will ever really explore these truths still depends, in part, on their social setting. And one’s social setting turns on various factors—political, economic, technological, and even geographical. The history of logic is a mix of the abstract and the mundane.

THE STRANGE NATURE OF LOGICAL VALIDITY

When it comes to the early study of logically valid reasoning, much depends on what we mean by “valid.” We can get the basic idea from a pair of examples:

All cats are cool.

Felix is a cat.

Therefore, Felix is cool.

And:

All fish can swim.

My neighbor can swim.

Therefore, my neighbor is a fish.

The difference between these examples is easy to see.

In the first example, if the first two statements are true, then the third statement must also be true. (If all cats are cool, and Felix is one of them, then Felix must be cool.) But in the second example, even if the first two statements are true, the third could still be false. To revert to a bit of ancient phrasing, in the first example the third statement follows from the other two whereas in the second example the third statement doesn’t follow. The ancient Romans expressed this difference by saying non sequitur (“it doesn’t follow”).

Logic studies the difference between examples of this sort, but there are infinitely many of these examples in which, if the first statements are true, the last must also be true. These are the examples logicians call valid, and by valid they mean something specific. Logicians study arguments, and an argument, to a logician, isn’t a quarrel but an attempt at proof. An argument consists of reasons, called premises, and a point to be proved—the conclusion. For a logician to call an argument valid, then, is to say exactly this: if the premises are true, the conclusion must also be true. But notice that the question of validity is strictly hypothetical in this sense: to ask whether an argument is valid isn’t to ask whether any of its statements are true but only whether, if the premises were true, the conclusion would have to be true as well. As a result, a valid argument can consist entirely of false statements, even whimsical ones, like this:

All hedgehogs are laborious.

My landlord is a hedgehog.

Therefore, my landlord is laborious.

If the terms of the argument are meaningful at all, then the argument is still valid. In consequence, logic isn’t really about whether any of these premises or conclusions is true or false but only about abstract connections. It is about how the truth or falsity of some propositions would connect with the truth or falsity of others. It is just this fact that makes logic mysterious.
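The definition can even be made mechanical. As a minimal sketch of our own (the function name and the sample arguments are invented for the occasion, not drawn from any historical source), a short program can try every possible assignment of true and false to an argument’s basic statements and call the argument valid exactly when no assignment makes all the premises true and the conclusion false:

from itertools import product

# A sketch of the logician's definition of validity: an argument form is
# valid when NO assignment of true/false to its letters makes every
# premise true while the conclusion comes out false.
def is_valid(premises, conclusion, letters):
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # a counterexample: premises true, conclusion false
    return True

# "A or B; not A; therefore B" is valid: no counterexample exists.
print(is_valid([lambda r: r["A"] or r["B"], lambda r: not r["A"]],
               lambda r: r["B"], ["A", "B"]))   # True

# "A or B; A; therefore B" is invalid: A can be true while B is false.
print(is_valid([lambda r: r["A"] or r["B"], lambda r: r["A"]],
               lambda r: r["B"], ["A", "B"]))   # False

Notice that the checker never asks whether any statement is actually true; it asks only what would happen if the premises were true, which is just the hypothetical question the logician asks.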

Logic is the study of these abstract, hypothetical connections—connections involving logical necessity. But where do the connections come from, and why do we see them at all? Are the connections really objective features of the arguments themselves or just features of our making? How we answer these questions will help to determine just what we think logicians have been doing since they first turned logic into a discipline.

When we speak of objective truths, we often mean an accurate description of physical facts, like an accurate measurement of the Eiffel Tower’s height or of the base of the Great Pyramid at Giza. But when we speak about logic, we are speaking of something different: we are speaking of connections between statements, propositions, or assertions. Of course, we can sense patterns in these connections (and surely it is the patterns that matter), but the question now is what makes some patterns valid and others not. In the first and third examples from before (the one about Felix and the one about the landlord), we see the following pattern:

All As are B.

C is an A.

Therefore, C is B.

It seems any argument fitting this pattern is logically valid; here’s another familiar example that also fits the pattern:

All men are mortal.

Socrates is a man.

Therefore, Socrates is mortal.
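In modern terms, the reason this pattern can never fail is a matter of simple set inclusion: if the As all sit inside the Bs, then anything found among the As must also be found among the Bs. A brief sketch of our own (the names and sample data are whimsical inventions) makes the point concrete:

# Our illustration of the pattern "All As are B; C is an A; therefore
# C is B," read as set inclusion: if a_set lies inside b_set and c
# belongs to a_set, then c must belong to b_set.
def conclusion_holds(a_set, b_set, c):
    assert a_set <= b_set   # premise 1: All As are B (subset)
    assert c in a_set       # premise 2: C is an A (membership)
    return c in b_set       # conclusion: C is B (guaranteed true)

men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Felix"}
print(conclusion_holds(men, mortals, "Socrates"))   # True, and must be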

Yet our ability to think logically can’t depend on learning just a few of these patterns. On the contrary, there are infinitely many valid patterns, as well as infinitely many invalid ones, yet somehow we are able to classify many of the simpler examples into one of two groups—valid or invalid. So how do we do this? What goes on when we distinguish the valid from the invalid? Are we merely repeating something we have been taught?

As it turns out, a sense of the most basic patterns can’t be taught (or so it seems) for the direct reason that no one can follow such a lesson without sensing some of the patterns already. To learn anything, we need to have a sense of logic, because every lesson is a pattern in itself. Even if we are taught all sorts of patterns (patterns like the one involving Felix, hedgehogs, or Socrates), this kind of teaching is useful only if we can also draw conclusions—conclusions from the presence or absence of the patterns. This is the same problem over again. When we draw a conclusion from the presence of a pattern, should we draw the conclusion validly or invalidly? How can we tell which is which unless we already have a sense of what counts as valid? Admittedly, we might be taught to follow rules, but to follow a rule we still need to see what that rule logically implies, which involves a pattern too.

For example, we might be taught to follow the rule that, whenever we find some object A, we ought to do B; nevertheless, to follow this rule we still need to watch for A and then do B, and thus the whole procedure seems to assume that we already sense the following pattern, which logicians call “modus ponens”:

If A, then B

A

Therefore, B

In other words, some sense of the valid and invalid seems to be already innate, and it is exactly this innate sense that makes human beings teachable in the first place.1
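The validity of modus ponens itself, at least under its truth-functional reading, can be confirmed by brute force. In the following sketch of our own (nothing here comes from the historical record), we check every combination of truth values and find that whenever both premises are true, the conclusion is true as well:

from itertools import product

# A brute-force check that modus ponens is valid under the material
# reading of "if A, then B": in every row of the truth table where both
# premises are true, the conclusion is true too, so the assert never fires.
for a, b in product([True, False], repeat=2):
    if_a_then_b = (not a) or b     # material reading of "if A, then B"
    if if_a_then_b and a:          # both premises true in this row?
        assert b                   # then the conclusion must hold
print("modus ponens: no counterexample in any of the four rows")

Of course, even this little verification already relies on logical reasoning to interpret its result, which leads to the next puzzle.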

Logic is present in countless mental operations, but if we ask how we really know its patterns to be valid, the question turns out to be apparently unanswerable. The reason is that all possible answers must still take some of the patterns for granted, at least in particular cases. We still need the ability to recognize a valid argument before we can prove anything else to be a valid argument. Any answers we come up with will still have to be logical answers, and what counts as logical is precisely the point at issue. The only way to justify the patterns is to invoke an argument embodying another pattern; yet if we challenge the reasonableness of all patterns, no matter where they appear, we hit a dead end, since patterns are how we tell what counts as an answer and what doesn’t. Thus it appears that some aspects of logic must remain forever undemonstrated. Moreover, we hit a similar dead end if we turn the question around and ask of the valid patterns, “What makes them so?”

WHAT MAKES A VALID ARGUMENT VALID?

Many people think certain patterns of logic are valid only because our brains happen to work in a particular way; they think that if our brains were wired differently, logic would become illogic, and illogic would be logic. As a result, they imagine the principles of logic to be nothing more than effects of our brain structure. Logic is the way it is (people suppose) because of our brains.

The idea that logic is simply a consequence of the human brain has always been appealing, but the trouble is that it seems to put the cart before the horse. It leads us to mix up what depends on what. True, our brains work in a particular way, but they do so for a reason: the way is often useful. Our brains function usefully whenever they solve the puzzles that need solving—how to grow food, build shelters, or find water—and it is logical reasoning that allows this.

What this observation apparently shows, however, is not that logical patterns are valid because of the structure of the brain. Instead, it shows the opposite: our brains have their useful structure because the patterns are valid. Logic doesn’t come from our brain’s mechanism; instead, our brain’s mechanism appears to come from the very nature of logic.

Suppose different patterns were valid. Suppose everything we now think of as logical were illogical and vice versa. In that case, human brains would have had to evolve differently, or our early ancestors would have perished. Of course, many behaviors might turn out to be useful in unexpected ways (even illogical behavior), but if different patterns had always been the logical ones (and if all our current patterns had been the illogical ones), then a brain that failed to reason according to the different patterns would have been useless, perhaps even dangerous. If all the logic of our ancestors had been mistaken, they would have died off. They would have failed in their efforts to manipulate their surroundings and to find water, food, and shelter. And what this consideration shows is that the very idea of a human brain evolving in useful ways (logic being one of the useful ways) still seems to assume that the demands of logic have shaped the brain—and not that the brain shaped logic. Certain patterns aren’t valid because our brains happen to prefer them; instead, our brains prefer these patterns because the patterns are already valid. Logic helps to define what counts as a functioning brain in the first place, and consequently it seems impossible (without circularity) for the brain to define logic.2

As thorny and difficult as analyzing the basis of logic might seem, is there any other way to explain why the valid patterns are valid? Historically, many people have tried to explain why logic is the way it is, but most of these attempts have come to naught. Indeed, it seems most of them must come to naught, and we can see this last point better if we return for a moment to the idea of usefulness.

We said a moment ago that logic is useful; if so, couldn’t we say that a logical pattern is valid precisely because it is useful? And couldn’t this usefulness be the real reason some patterns are logically valid and others not? This new answer is initially attractive, but, on reflection, it appears to be just as incoherent as the last, because it once more puts the cart before the horse. Logical patterns aren’t valid owing to their usefulness; instead, their usefulness is owing to their validity. Whatever tricks of reasoning our ancestors devised to endure in a difficult world, the tricks were useful because they were logically correct (not correct because they were useful). Again, we seem to be mixing up what depends on what.

If we don’t reason in logically valid ways, then we often get useless results—we get nonsense—and sometimes we make serious mistakes about how the world works. We harvest at the wrong times or drive the car in the wrong direction or fail in trying to operate a computer. And in that case, what follows is that the usefulness of our reasoning depends on its validity, not the reverse. To put this point abstractly, if giving up A causes you to lose B, what follows is that B depends on A, not vice versa. If giving up food would cause you to lose your life, then your life depends on the food. Just so, if giving up logic causes you to lose the ability to think usefully, then your ability to think usefully depends on logic, not the other way around. The patterns aren’t valid because they are useful; they are useful because they are valid.

In fact, logical patterns seem to fall into a special category, a strange and sometimes bewildering category—the category that we might call the collection of life’s ultimate truths. We can describe valid and invalid patterns and show that some are tied to others; we can also ask how the idea of validity might be connected with other logical notions, like the idea of necessity. (We say the conclusion of a valid argument follows as a matter of “logical necessity.”) And we can even ask whether all valid patterns might be characterized by a more general set of logical rules, rules that might be captured in an abstract logical system. Logicians argue about these matters all the time, and they often disagree.3 Nevertheless, the key point is that all these studies still assume that there is indeed a difference between the valid and invalid, and this difference has been recognized for thousands of years. And it is just this difference that seems immune to any explanation whatever, whether in terms of the structure of our brains or the considerations of usefulness or any other physical fact. Why does the difference between the valid and invalid exist at all?

Of course, we sometimes disagree about what counts as logically valid—and so do professional logicians—but these disagreements still assume that the difference between the valid and invalid is real. And most of the disagreements seem to involve complicated examples (where it is easy to become confused) or abstract examples (where it is hard to know just what is being discussed).4 On the other hand, when we stick to simple examples (like the ones involving Felix, hedgehogs, or Socrates), we find vast, general agreement among many different people in many different times, and the validity of these examples is no less conspicuous today than it was in the ancient world of Aristotle. The validity of the pattern in question, at least in ordinary contexts, is just as obvious as the fact that two and three make five. Our knowledge of these simple cases remains, despite whatever questions anyone might pose (even questions from a professional logician) about other, esoteric cases. (It is fallacious to argue that, just because we might be mistaken about esoteric cases, we can’t know the validity of the simple ones.)

Moreover, our knowledge of these simple cases of validity doesn’t seem to depend on having any knowledge of a more difficult and abstract logical system (like one a logician might invent), any more than our knowledge that two and three make five depends on knowing the finer points of set theory in mathematics. Most people know the sum of two and three even though they don’t know any formalized axioms of arithmetic; it follows that their knowledge of the sum can’t depend on knowing the axioms, interesting though the axioms are. Formalized mathematics of the sort now studied in mathematics departments didn’t emerge until the nineteenth and twentieth centuries, but it would be absurd to say that Isaac Newton didn’t know the sum of two and three simply because he hadn’t embraced such a system. Just so, most people can distinguish simple cases of validity and invalidity, and it likewise follows that this ability is independent of the formal techniques of professional logicians.

More generally, the human species has long had basic logical intuitions in many particular cases, intuitions that have proved remarkably invariant over the centuries; in many such cases, there has never been the slightest reason to suppose that the intuition is incorrect.5 In the twentieth and twenty-first centuries, logicians, mathematicians, and computer scientists have also constructed alternative logical systems (often called alternative “logics”), but the existence of these systems in no way undermines the straightforward intuitions we have already invoked. Instead, most of these systems, if sound, concern a different subject matter (a point we shall be discussing in chapter 5).

But how do we explain this? How do we account for this durable difference between the valid and invalid? Indeed, we might well wonder how there can be any such explanation in the first place. After all, any explanation (it would seem) must already take the difference for granted. To say anything meaningful about the difference from the start, we must still speak logically, and this assumes we already have an ability (at least a rudimentary one) to distinguish between what is logical and what isn’t.

THE DIVINE-COMMAND THEORY OF LOGIC

Before we leap to this last conclusion too quickly, however, there is another curious possibility we ought to consider, a special one contemplated and investigated over many centuries and still of interest today: maybe certain patterns are valid because they come from God. If we think of God as creator of the physical world, then couldn’t we also suppose that God created logic? Maybe valid patterns are valid because God says so.

Not everyone believes in a god, of course, but both believers and nonbelievers can still ask: If there is a god who created the world, could this god also be the cause of why logic is the way it is? Religion seeks to answer many of life’s other questions; might it also offer an answer to the question of what makes an argument valid? Is there a way to think of logic such that the discipline rests ultimately on a truth of religion?

This new idea might be called the divine-command theory of logic, and at its core is the suggestion that logical necessities exist only because God commands them. Even if the useful depends on the logical (a piece of reasoning is useful because it is logical, not the other way around), and even if the utility of our very brains depends on what counts as a valid pattern (our brains work only because they recognize these patterns), all these things might, nevertheless, still depend on God’s all-powerful will. God might still be the creator of logic.6

This divine-command theory seems entirely plausible when considered in itself, but the trouble is that it appears to make nearly all other talk about God—most of the central questions of theology—futile. The reason is that it deprives God (if there is one) of any rational qualities. The divine-command theory seems to make God thoroughly and irreducibly arbitrary; it excludes from the idea of God any objectively rational qualities that would make a human being revere a god in the first place. And in that case, we might as well worship some other arbitrary force that affects our lives, like the force of gravity or the burning of the sun. The “god” in question would just be a blind power, and to say that the existence of this power then “explains” why logic is the way it is would be to say virtually nothing. We might as well say that logic is the way it is simply because some force or other—a force we can never understand—has made it so.

We can see the apparent emptiness of this approach better if we turn for a moment to a similar sort of divine-command theory suggested in the fourth century B.C. by the Greek philosopher Plato when he pondered the nature of morality. Plato asked whether things are right or good only because the gods said so. (The classical Greeks believed in many gods.) Plato suggested his version of the divine-command theory indirectly in one of his dialogues, the Euthyphro. He had the philosopher Socrates pose a question about “holiness”: “Is a thing holy because the gods love it, or do the gods love it because it is holy?” By analogy, we might now ask, “Is a thing logical because the gods love it, or do the gods love it because it is logical?” Plato wondered whether things might be holy only because the gods said so.7

Ultimately, Plato rejected the divine-command theory when applied to morality because its logical consequence was to make the right and the good depend on the gods’ will even if this will was arbitrary. The theory’s effect was to deprive the gods themselves of any real goodness (according to the theory, to call the gods “good” would only be to say that the gods loved themselves) and to deprive their actions of any real rightness (to call their actions right would only be to say that they loved their own actions). Instead, Plato wanted to suppose that the gods, if any, had objectively good reasons for their actions and had objective moral qualities. In consequence, he supposed that rightness and goodness were qualities the gods recognized, not qualities they merely invented. Even if the gods seemed arbitrary, he supposed, this was only because the reasons for their actions often surpassed human understanding.8

The divine-command theory of logic gives a similar result. If being logical is part of being reasonable, and if God creates logic, then to say God is reasonable and has “good reasons” for his ways is only to say that God’s reasons are his reasons, however arbitrary. And in that case, God would be equally reasonable if he worked in opposite ways. Why, then, doesn’t God will the opposite? There could never be an answer. The divine-command theory seems to force us to abandon the notion of a just and reasonable god and replace it with the idea of a merely powerful, even capricious one.

Even if we approach logic from the standpoint of religion or theology, the ultimate nature of logic still has a certain inexplicability about it. We can say which patterns are valid, and maybe even reduce some of them to general rules, but we apparently contradict our tacit assumptions (even religious assumptions) if we try to explain why the patterns should exist at all.9

LOGIC AS CULTURALLY INVARIANT

All these considerations were anticipated centuries ago by medieval philosophers who, working in the tradition of Aristotle, insisted that logic was the common tool of all the sciences.10 Their meaning? That all analyses, explanations, understandings, methods, procedures, sciences, and rules must take some notion of logic for granted. This is true even if the explanation or understanding is religious; some sense of logic is always presupposed. On the other hand, if we seek a further basis for this presupposition, all our efforts end in stalemate. We get nowhere, and we see this consequence today even when we try to understand different cultures.

It is sometimes asserted that different cultures have different “logics” and that the key to understanding another culture is to comprehend its logic. Now, if this is just a highfalutin way of saying that different cultures have different beliefs or sets of beliefs, then the underlying assertion is true. Logicians, however, don’t study beliefs; they study how beliefs are connected. They study how some beliefs are inferred from other beliefs, and so they study methods—or patterns—of inference. (And they can study these patterns, by the way, whether the things being connected are beliefs, sentences, symbolic strings, electronic pathways, or what have you. The forms of the patterns don’t depend on the sorts of entities they connect, and as a result, as long as the parts of the inference can be properly labeled true or false, the exact nature of the things being labeled is irrelevant.)11

Admittedly, there is sometimes a certain vagueness in determining what counts as a belief and what counts as a method of inference; nevertheless, if there were nothing in common between two cultures, not even their inferential methods, then what would it mean for one culture to “understand” another? How would understanding be possible at all? What would disciplines such as sociology and anthropology be about?

To put this point generally: to understand another culture even partially is presumably to understand how a different tradition or experience would lead people to draw different conclusions about the world. And what does it mean to understand someone else’s conclusions? The only thing this could mean (it seems) would be an understanding of how different conclusions would follow logically from different premises. Yet, in that case, we must once more assume some sort of logic in common. Without something in common, how would we even distinguish another culture’s reasonings from a merely random collection of its opinions? (We might look for words that correspond to the ones we use to introduce a premise or a conclusion—what logicians call indicators—like “therefore,” “because,” “hence,” and “for the reason that.” But how would we even know how to translate these expressions except by finding them embedded in something we already recognized as a logical inference?) One of the overriding aims of social science is to grasp how a common human nature can still result in widely varying ways of life. The mind’s ability to distinguish the logical from the illogical is part of that common nature.

In all these remarks, we have merely considered the many ways in which the invariant nature of logic is presupposed. Logic is the common tool of the social sciences no less than the physical sciences, and it can’t follow as an outcome from any other discipline—disciplines like physics, neurology, theology, anthropology, sociology, or linguistics—because logic defines what it means to “follow” in the first place. Nor can it be a consequence of anything else, because logic defines what counts as a consequence.

All the same, there is a further point here—and this last point is perhaps the strangest of all: none of these remarks really proves logic to be universal or invariant, not in the least. None of them proves such things unless we already invoke logic in the proof, and this is surely “proof” only in the sense that it preaches to the converted. We still end up assuming what we are supposed to be justifying.

If we try to show that logic is somehow more correct than illogic, we must still make an argument. But what kind of argument must we make, logical or illogical? This is the same problem once again. If we try to justify logic logically, we end up arguing in a circle—a point made long ago by the ancient Greek philosopher Epictetus. (When asked for a reason to regard logic as “useful,” Epictetus replied by asking, in effect, whether the reason he gave should be “logical”: “You see how you yourself admit that logic is necessary, if without it you are unable even to learn this much—whether it is necessary or not.”)12 The best we can say is that, if invariant logical principles don’t exist, then nothing else can remain comprehensible, because all our methods of comprehension already assume some sort of logic in advance.

Epictetus’s observation (which comes from the first century A.D.) might be put in its most extreme form as follows: suppose all laws of logic, whatever they are, were to change in the next ninety seconds; suppose all disciplines by which we now try to analyze the world were suddenly to undergo a corresponding change so that the consequences of an alternative logic were to ripple through our universe of ideas like a series of unsettling aftershocks. In that case, the change would be utterly inexplicable, unpredictable, unanalyzable, and unfathomable. Why? Because all our methods of explanation, prediction, analysis, and comprehension already depend on things that are presumably changing. The change in question would be entirely mysterious. Logical laws might still be variable, of course, but all our methods of understanding assume otherwise. In essence, logic is a horizon beyond which none of our earnest and self-reflecting arguments can help us see.

This last, ultimate result is admittedly eccentric, but it is this very eccentricity that, we believe, establishes the point we most want to stress: logic is strange. Logic is indeed one of the strangest things in the world, if it is even “in” the world at all. Our attempts to understand the world certainly depend on it, but logic’s principles are beyond the here and now, beyond the local and material, beyond the arbitrary and accidental. Logical relations are apparently timeless, placeless, and independent of what any human being happens to think. Or so we implicitly assume (whatever the time and place) whenever we try to draw a durably correct inference about what really follows from what. (There are other ways of viewing the ultimate nature of logic, of course, but the view we offer here is at least as plausible as any other, and it has one of the longest of all pedigrees.)13 After all, logic is useful, but only because it helps us discover new connections between propositions. And this assumes the connections are indeed “there” to be discovered, independently of whether we already believe in them.

LOGIC AS TIMELESS AND PLACELESS

We might mockingly refer to the notion that logical relations are somehow transcendent as positing a sort of Never Never Land where the patterns of logic reside. Mockery aside, however, this way of talking is only a metaphor for expressing an altogether different idea: the patterns of logic aren’t “in” a land of any sort. They don’t have a place or a time. Perhaps, though, we can picture this idea better if we think for a moment, by way of contrast, about physics.

The truths of physics are what they are only because the contents of the physical universe happen to behave in a particular way. This behavior is indeed a matter of time and place. Physical objects have locations. But logical truths are independent of this behavior of time and place—logical truths are true regardless. An argument’s validity doesn’t depend on whether the things it describes happen to exist in the physical world. There need not be any actual laborious hedgehogs. Valid forms of argument remain valid come what may, and this fact of their validity is also a truth but a truth of a different sort. We can describe such truths according to laws and principles, and these become the laws and principles of logic.

Of course, we sometimes make mistakes about such laws, just as we sometimes make mistakes in arithmetic, but the behavior of physical objects doesn’t count as evidence for or against these judgments any more than the behavior of physical objects counts as evidence for or against the idea that seven is a prime number. (If we count up seven physical things and find them evenly divisible by four, we just suppose that we must have miscounted; just so, if we determine that all wicked witches are irritable and that the Witch of the West, though wicked, is plainly not irritable, then we just suppose we must have made some faulty determinations.) Unlike physics, logic isn’t about how things actually are; it is only about how they could or would be. Logic is a science of coulds and woulds. As medieval scholars would have explained it, physics is about contingency, but logic is about possibility and necessity.

The distinction between the contingent world of physics and the necessary one of logic goes back to the ancients, and anyone who draws the distinction today soon embarks on the study of something decidedly odd and, in some respects, ineffable—logic itself. Yet this ineffable something is apparently real, because without it nothing else would make sense.

The first people to leave a written record of these matters were the classical Greeks, and from that distant, shimmering moment the story of logic begins. Our word “logic,” by the way, comes from the Greek logos, but the term didn’t acquire its present meaning until the second century A.D. In Aristotle’s day, in the 300s B.C., the meaning of logos was only that of principle, thought, reason, story, or word—the last usage appearing at the opening of the Gospel according to John (originally in Koine Greek): “In the beginning was the Logos, and the Logos was with God, and the Logos was God.” When Aristotle first invented logic as a discipline, he didn’t even have a name for it; he called his studies “analytics.”

Yet we often forget just how singular the beginning of logic really was. We all use logic, but not everyone studies it. In fact, we often speak loosely of ancient Indian logic or ancient Chinese logic, but when we do so we are often confusing “logic” in the broad sense of argumentation with true logic in the specific sense of validity. Various ancient peoples studied argumentation. They studied debate, reasoning, controversy, disputation, refutation, and deliberation, and they recorded their studies in writing. But only Aristotle inaugurated a study of the validity of arguments in isolation from an argument’s other features. Why, then, did no one before Aristotle study logic in this way?

THE SOCIAL HISTORY OF LOGIC

To uncover logic’s peculiar origins as a discipline, and indeed, to answer most other questions about its history, we shall need to think about some of logic’s abstractions in their own right, thus acting like logicians. We shall need to examine these abstractions in detail. But we shall also need to see how particular social conditions tended to encourage logical discoveries. For example, why did logic in the strict sense (meaning the study of deductive validity) start in classical Athens, and why did Athens remain the center of logical studies for generations? The answer turns out to depend partly on classical Greek geography, which encouraged the growth of democracy. Aristotle’s logic began, in fact, as a reaction to Athenian democracy and to the argumentation of the Athenian Assembly.

To take another example, why did the logic of the ancient Stoics in the late 200s B.C. gain momentum only after the collapse of the old Greek city-states and after the triumph of the new imperial regimes of the Hellenistic age? Stoic logic, which modern logicians now recognize as the propositional logic that runs computers, derived from a search for a rational, eternal law—a “law of nature”—and it gained popularity only when a new cultural outlook had emerged, one that stressed introspection and personal meditation. This introspective emphasis was actually a consequence of the political absolutism of the time.
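The remark that Stoic propositional logic “runs computers” is meant quite literally: a processor’s arithmetic circuits are, at bottom, compositions of truth-functional connectives. Here is a minimal sketch of our own (the names are invented for the illustration) of a one-bit adder built from nothing but “and,” “or,” and “not”:

# Our sketch of a one-bit "half adder" built from the propositional
# connectives alone: the sum bit is an exclusive "or," the carry bit a
# conjunction. Real hardware composes circuits of exactly this kind.
def half_adder(a: bool, b: bool):
    total = (a or b) and not (a and b)   # exclusive or: the sum bit
    carry = a and b                      # conjunction: the carry bit
    return total, carry

print(half_adder(True, True))   # (False, True): one plus one is binary 10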

Again, why did the most important work on induction and scientific method only appear during a much later period—after the wars of religion in Europe in the sixteenth and seventeenth centuries and then, in a further burst of effort, in the years surrounding the First and Second World Wars? This time the answer lies in political turmoil provoked by the growth of trade. The rising commercial classes of early modern Europe instigated devastating fanatical violence in the sixteenth and seventeenth centuries, so much so that intellectuals like René Descartes undertook a new search for the rational foundations of belief. But these same commercial classes then ushered in, in the succeeding centuries, a fresh approach to reason and evidence that now underlies the logic of modern science.

To cite yet another example, why did the study of rhetorical frauds and sophistical ploys, inaugurated by Aristotle in ancient times, remain largely undeveloped for more than two thousand years after his death until being finally revived in the nineteenth century by the eccentric English philosopher Jeremy Bentham? (Despite occasional advances in the intervening centuries, Bentham complained, “From Aristotle down to the present day . . . all is blank.”)14 The reason for this revival was the rise of public opinion as a modern political force. Popular opinion carried increasing weight in the political struggles of the late eighteenth and early nineteenth centuries, and the response of Bentham and his followers was to catalogue the many different ways in which devious speakers of his day had tried to divert public attention and thwart the public good.

Finally, why did symbolic logic, which manipulates signs by mechanical rules and underlies modern computing, remain largely unexplored until the mid-nineteenth century, just as the Industrial Revolution gained momentum? As it happens, mechanization and symbolic logic are intimately connected. Symbolic logic is essentially a consequence of an age of machinery, and it has given rise, in turn, to a new generation of machines—the logic machines we call computers.

There are solutions to the many curious puzzles of logic’s long history, and in the pages that follow we shall seek them out. Our method, however, will be to look at social forces—forces that have played out during formative periods—and we shall look at the forces behind four broad categories of logic as understood by professional logicians today: classical deductive logic, inductive logic, the analysis of informal fallacies, and modern symbolic logic.

Logic underlies every attempt to scrutinize the world, but the study of logic has emerged only in precarious ways. Without a series of peculiar accidents—accidents of geography, trade, and politics—logic as a discipline might never have existed. Many individuals helped to make the discipline possible, including the ancient Stoic philosopher Chrysippus; the medieval theologian Peter Abelard; the modern thinkers René Descartes, David Hume, Jeremy Bentham, George Boole, Augustus De Morgan, John Stuart Mill, Gottlob Frege, Bertrand Russell, and Alan Turing; and, of course, the enigmatic Aristotle. But behind each of these figures there was also a willing audience. Logic is in many ways philosophical, and many of its visionary contributors could be equally called philosophers. But logic is also a discipline in itself, whose history we mean to examine as a window into its inner nature. All things considered, then, what provoked people to study the different branches of logic in the first place? And where, in the end, will the discipline lead us?