9

SYMBOLIC LOGIC AND THE DIGITAL FUTURE

THE INDUSTRIAL Revolution, beginning in the late eighteenth and early nineteenth centuries, has given the world new conveniences, factories, and cities—and also a new kind of logic. For good or ill, the visible effects of industrialism are now everywhere around us, but the abstract effects of industrialism are around us too. One of the most profound of these abstract effects is symbolic logic, which is a consequence of an age of machinery. Symbolic logic emerged from a nineteenth-century world of mass production and large mechanical operations, and in the twentieth and twenty-first centuries it has given rise, in turn, to the new world of modern digital computers.

Before the nineteenth century, farsighted thinkers had long toyed with the idea of a fully symbolic logic, but they had never turned any such project into reality. Only with the advent of large-scale manufacturing did symbolic logic finally take shape. The first fully symbolic systems were laid out by the English logicians George Boole (whose algebra now underlies the operations of modern computers) and Augustus De Morgan, both of whom published major books in 1847, just as the Industrial Revolution in England was in full swing. And the reason for symbolic logic’s growth was the Industrial Revolution itself.

The Industrial Revolution convinced large numbers of logicians of the immense power of mechanical operations. Industrial machines are complicated and difficult to construct, and unless used to manufacture on a large scale, they are hardly worth the trouble of building. Much the same can be said of modern symbolic logic. Symbolic logic relies on abstract principles that are highly complicated and, at first, difficult to connect with common sense. Nevertheless, once a system of symbolic logic has been constructed, it can embrace a vast array of theorems, all derived from only a few basic assumptions, and suitably elaborated, it can supply a clear, unequivocal, and mechanical way of determining what counts as a proof within the system and what doesn’t. The Industrial Revolution convinced many logicians from the nineteenth century onward that complicated mechanical systems are truly worth building, and the consequence has been to capture large new areas of valid reasoning, areas that had long been intuitively obvious to human beings yet never before reducible to formal technique.

Symbolic logic has already had far-reaching effects. Symbolic logic first arose out of abstract changes in mathematics, and mathematics seems, likewise, to have been deeply influenced by nineteenth-century industrialization. But symbolic logic has since brought ever-larger swaths of everyday reasoning within the reach of mechanical procedures, and it has changed the way logicians and mathematicians think about proof. In addition, it has had a large impact on the daily lives of ordinary citizens. Just as machines first gave rise to symbolic logic in the nineteenth century, so, in the twentieth and twenty-first centuries, symbolic logic has given rise to a new generation of machines: digital computers, which are essentially logic machines whose programming languages are analogous to the symbolic languages invented by logicians. The effects of modern symbolic logic are now felt around the world, but what first encouraged this new branch of the discipline was the nineteenth-century success of large-scale industry.

THE IMPACT OF THE INDUSTRIAL REVOLUTION

The Industrial Revolution began to gain strength in Britain in the late eighteenth century, but its impact was first felt on a massive scale in the nineteenth century. One of the most influential of all its consequences was to change the physical appearance of millions of people. The nineteenth century was the first time in human history when large numbers of ordinary people could own more than one suit of clothing—clothing manufactured on large industrial machines. Earlier, most people in most parts of the world had darned or mended the one set of garments they would continue to wear through much of their lives.

Odd as it sounds, this new surfeit of mass-produced clothing would deeply affect the thinking of logicians, as would the other mechanical wonders of the industrial age. Nineteenth-century inventors would ultimately lay down most of the basic economic and social patterns that would continue to dominate the developed world ever after. The patterns were the result of the machines.

Among other things, nineteenth-century mechanics designed a new generation of steam engines, fired by coal, and they greatly expanded the use of iron and steel. They also produced colossal public works. Nineteenth-century builders erected more structures in stone and metal than all previous ages put together, and they may also have put up more buildings in the Gothic style than did all the peoples of the Middle Ages, and more architecture in the style of ancient Greece and Rome than did all the ancient Greeks and Romans. Paradoxically, many of these developments, which would serve as mechanical models to logicians of the future, were actually stimulated by changes in agriculture, especially in Britain and in the nations that traded with Britain.

Britain had long imported cotton from various parts of the world, but in 1793 in the United States, Eli Whitney invented the cotton gin (short for cotton engine), which separated cottonseeds from cotton fibers. The result was that American cotton plantations, which exploited slave labor on a massive scale, dramatically increased their output. Cotton had long been shipped to Britain for spinning, but its new abundance encouraged British manufacturers to build more mechanical mills. By 1830, nearly all cotton spinning in Britain had been mechanized. The driving force behind this change—one that would then cause other trades to mechanize too—was the desire of millions of people for new and inexpensive clothing.

The early mills had used waterpower and had relied on components of wood and leather, but iron soon replaced wood, and steam power, generated by burning coal, replaced water. (In earlier times, Britain’s inhabitants had used wood as their primary fuel, but Britain’s forests had long since been depleted.) Cotton, iron, and coal thus became the three fundamental pillars of the early years of the Industrial Revolution, and in all these areas the British had special advantages.

The most important advantage was Britain’s dominance of the seas, on which the cotton trade depended. In fact, no point in Britain is more than seventy miles from the sea, and the nation has many accessible ports. In addition, Britain had ample deposits of iron and coal, and the coalfields, like those of Newcastle upon Tyne, were often adjacent to the seacoast. This made the transporting of coal comparatively easy.

Beyond these factors, British farmers had already increased their harvests, beginning in the eighteenth century, and this advance in British farming (sometimes called the “agricultural revolution”) had made it possible for a proportionately smaller rural population to feed a much larger urban one. Parliament’s many Enclosure Acts, especially from 1760 to 1830, had made the nation’s farms more efficient by making them larger, though the practice of consolidating farms had also thrown great numbers of peasants off the land. These peasants drifted to the cities in search of work, and in so doing, they gradually formed an immense urban proletariat. Crop rotation and new metal tools also added to the productivity of farms, and much of the displaced peasantry, settling in the ever-expanding cities, then found work operating complicated machinery, often under appalling conditions. (In the early years of the Industrial Revolution, children sometimes worked in the mills sixteen hours a day, and in their drowsiness, they sometimes fell fatally into the machinery.)

New forms of transportation also emerged during this period, especially railroads, which first appeared as an easy means of moving coal from the pitheads of English and Welsh mines to nearby docks for loading onto ships. Inspired by the use of rails to ease the way for horse-drawn coal carts (and then by the use of steam engines, which replaced the horses), inventors like the Englishman George Stephenson saw the possibility of building railroads for city-to-city transport. In 1830, his locomotive Rocket carried passengers between Liverpool and Manchester at the startling speed of seventeen miles an hour. Stephenson used a steam engine to pull the railcars, and steam engines rapidly appeared in other industries too. Steam engines soon became the reliable beasts of burden of the new mechanical age.

A primitive and inefficient steam engine had already been available since the early 1700s (developed by the English blacksmith Thomas Newcomen for drawing water out of mines), but in 1784, after many years of experiments, the Scottish inventor James Watt patented a rotary steam engine. Watt’s engines soon supplied power to water pumps, potteries, textile mills, and flour mills. Later inventors made further improvements in the efficiency of steam engines, and gradually they became the dominant source of power for ships. Eventually, Britain would invest in steam power more than any other nation on earth, and by 1900 three-fourths of the world’s steam engines would be located in Britain. Nevertheless, other nations soon imitated this example, especially Belgium, Germany, and the United States, all of which had ample coal deposits. (By contrast, nations like France, which had less access to coal, lagged behind.)

The resulting industrial world of the nineteenth century, out of which symbolic logic would finally emerge, was largely a place of cast iron, still dependent on a plentiful but brittle metal until yet another crucial invention, which appeared in the 1850s: a new method of making inexpensive steel. It was a transformation in steelmaking from the 1850s onward that finally made possible the world of durable machines and tall skyscrapers we now see around us.

In 1856, the English engineer Henry Bessemer, looking for ways to improve gun-making, explained a new technique for producing steel in large amounts. Steel is hard yet malleable iron, with only small portions of carbon, usually less than 1.7 percent. The earlier method of steelmaking had been a laborious process called puddling, in which small quantities of molten iron were stirred to remove impurities. Bessemer, however, found he could decarbonize iron in large amounts in a new way, by mechanically forcing a blast of air through the molten metal. In the 1860s, other inventors (especially William and Friedrich Siemens) perfected an alternative technique, the open hearth process, which would become the most common method of steelmaking through much of the twentieth century. The resulting manufacture of cheap steel gave rise to a whole new generation of machines, more durable than in the past, and it also made possible the first skyscrapers, which depended for their support on internal skeletons of inexpensive structural steel.

In all these changes, nineteenth-century inventors and engineers were transforming the earth, and they brought forth many other inventions too: diesel power, photography, telegraphs, telephones, petroleum-based chemicals, roadways of macadam (named after the Scottish engineer John L. McAdam, who surfaced roads with compacted layers of crushed stone to form a smooth and lasting surface), and inexpensive bicycles (which, for the first time, gave ordinary working people, on their days off, a chance to become tourists). Nineteenth-century inventors also produced the rotary printing press, thereby conjuring into existence mass-circulation daily newspapers (along with all the sound and fury of an active and sensationalist Fourth Estate). It was out of this mechanical whirl of nineteenth-century industry that modern symbolic logic finally developed.

THE ORIGINS OF SYMBOLIC LOGIC

The seeds of symbolic logic go back farther than the nineteenth century; they can be traced at least as far back as the seventeenth century, to the work of the German philosopher Gottfried Wilhelm Leibniz, who aimed at a logical calculus he said would be “mechanical.” In particular, Leibniz was interested in geared machinery that could be used to solve mathematical problems, and he was especially drawn to the idea of an artificial language that would allow people to express all observable facts unambiguously and to make from them all valid deductions.1 Nevertheless, most of Leibniz’s notes on these matters remained long unpublished, and only in the nineteenth century did this mechanical tendency finally become real. Ideas about symbolic logic had been, at best, conjectural, but in England in 1847, Boole and De Morgan both published major books that laid out fully symbolic systems, and they did so just as the Industrial Revolution was fully under way.

De Morgan had been a brilliant student at Cambridge University but had refused to submit to religious tests then required for a fellowship, and so he had decided to accept an appointment as professor of mathematics at the new University of London, a position from which he resigned twice over matters of principle. He disliked official organizations, preferred to decline honors, stubbornly refused to declare his beliefs as a condition of holding a job, and refused an honorary law degree from Edinburgh. His great work was in mathematics and logic, but he also wrote extensively on the history of science and was much admired for his wit. (Remarking on metaphysics, a subject in which he also took much interest, he nevertheless warned the prospective student, “When he tries to look down his own throat with a candle in his hand, to take care that he does not set his head on fire.”)2

Boole was equally unconventional. His father had been an impoverished shoemaker who, nonetheless, had a deep interest in science and mathematics. His mother was a lady’s maid. Boole’s formal education ended in the third grade, yet, by the age of sixteen, he had already won a post as assistant master of a school, where he taught Latin and Greek, among other subjects. By twenty-three, he was publishing original mathematical research. His work won recognition from the British Royal Society (founded in 1660 for the advancement of science), and he then produced his Mathematical Analysis of Logic (1847), which laid the basis of what we now call Boolean algebra. The effect of his studies was to codify all the compounds treated earlier by the Stoic logician Chrysippus into a notational system that could be manipulated mechanically using the same methods by which one manipulates equations in an algebra class. (We considered Chrysippus’s logic, now called propositional logic, back in chapter 4.)
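The core of Boole’s insight—that the truth of compound propositions can be computed by arithmetic-like operations on symbols—can be illustrated with a short sketch. The encoding below (truth values as 0 and 1, connectives as arithmetic on them) is in the spirit of Boolean algebra but is not Boole’s own notation, and the validity test by exhaustive enumeration is a modern convenience:

```python
from itertools import product

# Truth-functional compounds treated as arithmetic on {0, 1}, in the
# spirit (though not the notation) of Boole's algebra of logic.
def NOT(p):        return 1 - p
def AND(p, q):     return p * q
def OR(p, q):      return p + q - p * q
def IMPLIES(p, q): return OR(NOT(p), q)

# Modus ponens, one of the compounds Chrysippus studied: from "if p then q"
# and "p", infer "q".  The form is valid just in case no assignment makes
# both premises true (1) and the conclusion false (0).
valid = all(not (IMPLIES(p, q) == 1 and p == 1 and q == 0)
            for p, q in product([0, 1], repeat=2))
print(valid)  # True
```

Checking all assignments in this way is exactly the kind of rote, rule-bound manipulation—requiring no grasp of what the propositions mean—that Boole’s algebra made possible.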

In addition, Boole’s techniques could capture a good deal of Aristotle. (Aristotle’s logic, which is a logic of classification by way of categorical syllogisms, was described in chapter 3.) All the same, Boole’s life, swift in its beginnings, was unexpectedly brief. Despite his lack of a university degree, Boole was appointed professor of mathematics at Queen’s College in the city of Cork, Ireland, where he taught for fifteen years, but he then died suddenly after insisting that he deliver a scheduled lecture even though his clothing had been soaked in a rainstorm.

Working independently but from the same historical impulse, these two curious professors took the crucial steps that would finally usher in a new world of logical relations.

The distinguishing feature of a modern symbolic logic is that all parts of an argument are rendered with symbols expressly designed for analysis. Of course, the use of symbols as variables goes all the way back to Aristotle. (In “All As are Bs,” A and B are symbolic variables.) With modern symbolic logic, however, there are no remaining bits of English, Latin, Arabic, or Greek (the remaining bits being mostly what the medievals called syncategorematic words, which help to form a proposition or connect different propositions together but without themselves being either subjects or predicates of a proposition). De Morgan’s and Boole’s systems were different from each other, and neither had the scope and power of the more far-reaching system later developed by the nineteenth-century German logician Gottlob Frege or the power of the system developed independently of Frege’s by the American philosopher C. S. Peirce.3 Nonetheless, De Morgan and Boole had shown the way. A categorical syllogism, as expressed in the sort of notation now used in textbooks of symbolic logic, might look like this:

(x)(Ax ⊃ Bx)

(x)(Bx ⊃ Cx)

__________________________

(x)(Ax ⊃ Cx)

Modern symbolic logic takes this as an expression of the English utterances,

All As are Bs.

All Bs are Cs.

Therefore, all As are Cs.

Of course, the symbols look complicated and difficult at first, but the basic idea is to make the argument mechanical, and the symbolism is less intimidating once one sees how the specific symbols can be translated:

(x) can be read “for all x.”
Ax can be read “x is an A.”
⊃ can be read to express “if-then.”
Bx can be read “x is a B.”
Cx can be read “x is a C.”

In addition, the long horizontal line can be rendered “Therefore.” As a result, the whole symbolic argument can be read this way:

For all x, if x is an A, then x is a B.

For all x, if x is a B, then x is a C.

Therefore, for all x, if x is an A, then x is a C.

Symbolic logic treats this as an expression of “All As are Bs; all Bs are Cs; therefore, all As are Cs.” (This translation holds good so long as the interpretation of the initial English version is Boolean, meaning we make no assumptions about whether the classes of As, Bs, and Cs have members; the As, Bs, and Cs might stand for unicorns, griffins, or chimeras, whether or not such creatures truly exist.)
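The mechanical character of this form can be demonstrated by brute force. The sketch below checks the syllogism over a small finite domain, reading “All As are Bs” in the Boolean way, as a subset relation that makes no existence assumptions; the three-element domain and the set encoding are illustrative choices of mine, not part of any historical system:

```python
from itertools import combinations, product

# Brute-force check of "(x)(Ax > Bx); (x)(Bx > Cx); therefore (x)(Ax > Cx)"
# over a small finite domain, reading each class as a subset of the domain.
domain = [0, 1, 2]
subsets = [set(c) for r in range(len(domain) + 1)
           for c in combinations(domain, r)]

def all_are(xs, ys):
    # "(x)(Xx > Yx)": every member of xs is also a member of ys
    return xs <= ys

# Count interpretations that make both premises true and the conclusion
# false; for a valid form there should be none.
count = sum(1 for A, B, C in product(subsets, repeat=3)
            if all_are(A, B) and all_are(B, C) and not all_are(A, C))
print(count)  # 0: no counterexample exists over this domain
```

Empty sets are included among the candidate interpretations, so the check also confirms that the form holds even when the classes have no members—the unicorns and griffins of the parenthetical above.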

Other symbols can be used in place of the ones given here (and the notational systems of De Morgan and Boole were different; the notation we use now, with some modifications, comes from a series of conventions worked out by the English logicians Bertrand Russell and Alfred North Whitehead in their Principia Mathematica, 1910–1913, and initially developed by the Italian mathematician Giuseppe Peano).4 The important thing, however, is that, because ordinary words are excluded, our knowledge of what any words mean is excluded too. This is what makes symbolic logic fundamentally different from the logic of the past.

With earlier sorts of logic, to determine whether the validity of an argument had been proved, we still needed to understand the meanings of the words—at least some of the words—and this meant we still needed to think. With symbolic logic, by contrast, we don’t need to think at all, or rather the only thing we need to think about is whether the symbols appear in the order specified by the rules that govern them. In consequence, so long as the proof is sufficiently spelled out in a symbolic language, the task of verifying it is strictly clerical. What we look at is simply a matter of form—and purely form. The form is a pattern in the placement of the symbols, and as a result, detecting the pattern’s presence has nothing to do with knowing what any of the symbols signify. In this respect, determining whether an argument’s validity has been proved within the system is strictly mechanical, and in our age, it can certainly be done by machines. (We should add that, among professional logicians, the notion of a mechanical procedure is now more precise and nuanced than it was in the nineteenth century.5 Nevertheless, what many nineteenth-century thinkers aimed at was a system that would involve the repetition of clerical operations in the manner of a machine, and symbolic logic made this possible.)
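A toy example makes the clerical nature of proof checking concrete. In the sketch below, formulas are inert data, and each proof line must either be a premise or match the modus ponens pattern against earlier lines; the checker compares shapes only and never consults meanings. The tuple encoding and single inference rule are illustrative simplifications, not a full proof system:

```python
# A toy proof checker: each proof line must either be a premise or match
# the modus ponens pattern against earlier lines.  Formulas are inert
# tuples; the checker never consults their meaning.
def follows_by_mp(line, derived):
    # line follows by modus ponens if some earlier formula has the shape
    # ('->', a, line) and a has itself already been derived
    return any(isinstance(f, tuple) and f[0] == '->' and f[2] == line
               and f[1] in derived
               for f in derived)

def check(premises, proof):
    derived = list(premises)
    for line in proof:
        if line in derived or follows_by_mp(line, derived):
            derived.append(line)
        else:
            return False
    return True

premises = [('->', 'P', 'Q'), ('->', 'Q', 'R'), 'P']
print(check(premises, ['Q', 'R']))  # True: every step is licensed by pattern alone
print(check(premises, ['R']))       # False: R matches no pattern yet
```

Nothing in the checker knows what P, Q, or R stand for; verification is pure pattern matching, which is precisely why a machine can do it.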

Algebra was moving in just this direction when De Morgan and Boole wrote; it was becoming purely a calculus of symbols, independent of whether the symbols stood for numbers, and so both thinkers saw the deep formal connection between mathematics and logic. They saw that symbols could be manipulated to solve logic problems just as symbols had long been manipulated to solve problems in arithmetic. Of course, different scholars in different ages might have arrived at this same insight if clever enough; Leibniz had already been thinking along these lines. All the same, the increasing use of machinery in the nineteenth century tended to emphasize the point. Why? Because nineteenth-century machines showed the immense power of mechanical operations. Every section of society had been touched in some way by this new and mysterious power, and logicians saw this power too.

Much depends, to be sure, on precisely what one means by the words “mechanical” and “mechanically,” and as it turns out, these are words on which the nineteenth-century logicians often relied. For example, in his preface to The Mathematical Analysis of Logic, Boole quotes John Stuart Mill on the importance of the mechanical in logic: “Whenever the nature of the subject permits the reasoning process to be without danger carried on mechanically, the language should be constructed on as mechanical principles as possible.” (This passage comes from Mill’s System of Logic, 1843, and Mill adds by way of caution that, when the reasoning can’t be carried on safely in a mechanical way, the mechanical approach should be avoided—and Boole agreed.)

As to Mill’s intention in using the word “mechanical,” Mill explains it as a matter of limiting one’s thinking strictly to the placement and arrangement of symbols and not being distracted by thoughts of what the symbols might mean.

Soon after Boole published, De Morgan wrote to him (again, in 1847) and explained that his own system aimed, likewise, at a mechanical approach. De Morgan remarked, “I have employed mechanical modes in making transitions with a notation that represents our head work.”7 In short, the aim of this mechanical approach was to make reasoning easier by reducing the mental labor of thinking about the objects of the reasoning—and by replacing such thinking, instead, with the manipulation of symbols according to fixed rules.

Now, in speaking here of the Industrial Revolution as an important historical stimulus for this innovation in logic, we don’t mean to discount the role of individual insight on the part of the logicians. Perhaps the most important insight of all was one we shall come to shortly, the insight of Frege when he hit on the idea of qualifying an entire symbolic assertion with a quantifying expression, such as “for all x” or “for some x” (the technique called “quantification”). Nevertheless, we do mean to assert that the Industrial Revolution was a cause of symbolic logic’s development just as individual insight was a cause. By analogy, the Battle of Waterloo depended on the insights, proclivities, and decisions of specific individuals: of Napoleon, of the Duke of Wellington, and of many other people. But the occurrence of the battle also resulted from the existence of much larger social forces, forces that shaped British and French society and that led these nations (and other nations) into the series of collisions we call the Napoleonic Wars. Just so, the history of logic is made up of individual insights by individual logicians, but it is also deeply influenced by social changes. To treat logic’s history as if it were only a matter of individuals would be like treating political history as if it were only a matter of individuals; it would be like writing political history as if it were only a story of shrewd, villainous, or heroic characters, with no social analysis.

When it comes to social changes, then, the most profound social change of the nineteenth century was, by far, that millions of people witnessed the practical power of mechanical operations as the Industrial Revolution unfolded. And it was out of these millions that the logicians of the age were recruited. Intellectual movements usually require various scholars learning from one another, but such movements are typically sustained in the first place by things larger than individuals. Many people can arrive at similar ideas at different times, but it usually takes something common to them all to make them work in concert. The Industrial Revolution served as an enormous suggestive influence, and it persuaded both logicians and mathematicians alike that they might achieve a more powerful result if they could summon the patience to construct a larger and more complicated system, one in which questions of what had been proved and what hadn’t could be answered mechanically. We can get a better sense of this aspiration if we now look at how symbolic logic uses a more complicated technique to get a more powerful result.

THE LOGIC OF RELATIONS

Symbolic logic’s most fundamental effect has been to bring together ever-larger areas of valid reasoning under a single set of formalized rules. Once we see how this effect is accomplished, we can also see why the techniques needed to make it happen would chiefly appeal to an age already captivated by the immense power of modern machinery.

For example, here is an argument that looks valid intuitively but that doesn’t lend itself to the analysis of Aristotle or Chrysippus (meaning it is neither a categorical syllogism nor a propositional form of the sort studied by Chrysippus):

Tom is to the right of Dick.

Dick is to the right of Harry.

Therefore, Tom is to the right of Harry.

If the premises are true, the conclusion must also be true, and this remains so no matter what names we substitute for Tom, Dick, or Harry. But the argument also depends crucially on the expression “to the right of.” If we substitute something different for that expression, the result is often invalid:

Tom is the uncle of Dick.

Dick is the uncle of Harry.

Therefore, Tom is the uncle of Harry.

In our everyday reasoning, we often link individuals to other individuals by way of relations, like “to the right of”; nevertheless, these arguments are neither categorical syllogisms nor the propositional structures of Chrysippus. And their logic depends on which relations we employ. (Mathematics is full of such relations: “is equal to,” “is greater than,” “is the square root of,” “is a factor of,” and so on.) One of the things symbolic logic does, then, is capture this sort of reasoning, and it does so in addition to capturing Aristotle’s syllogisms and Chrysippus’s propositions. Symbolic logic achieves this synthesis—a grand synthesis of the Aristotelian, the propositional, and the relational—through two different maneuvers.

The first maneuver is to add an implicit premise that characterizes the information we already have (and have tacitly assumed) about the relation “to the right of.” We know what it means for something to be to the right of something else, and we also know what this information implies if something in addition should turn out to be to the right of that. So we try to capture this information in a further premise. But we capture it in a particular way—by employing a second maneuver—which is to frame this further premise as a generalization that applies to any individuals we might name. If we put the two maneuvers together, we get this:

For any individuals x, y, and z, if x is to the right of y, and y is to the right of z, then x is to the right of z.

This is a generalization that characterizes the logical import of “to the right of”; however, it also applies to any individuals, x, y, or z. If we then combine this new premise with the two others we already have, we get the following argument:

For any individuals x, y, and z, if x is to the right of y, and y is to the right of z, then x is to the right of z.

The individual Tom is to the right of the individual Dick.

The individual Dick is to the right of the individual Harry.

Therefore, the individual Tom is to the right of the individual Harry.

Other relations can also be handled in this way: by capturing the specific relation in a premise that holds good for any x, y, z, and so on.
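The two maneuvers can be mimicked mechanically. In the sketch below, the general premise about “to the right of” becomes a rule applied blindly to particular facts until nothing new follows; the names, the pair encoding, and the fixed-point loop are my illustrative choices, not a historical notation:

```python
# The general premise "for any x, y, z: if x is to the right of y and y is
# to the right of z, then x is to the right of z" applied as a mechanical
# rule to particular facts, repeated until no new facts emerge.
def close_under_transitivity(facts):
    facts = set(facts)
    while True:
        new = {(x, z)
               for (x, y) in facts
               for (y2, z) in facts if y2 == y}
        if new <= facts:
            return facts
        facts |= new

facts = {('Tom', 'Dick'), ('Dick', 'Harry')}  # (a, b) means: a is to the right of b
print(('Tom', 'Harry') in close_under_transitivity(facts))  # True
```

The conclusion “Tom is to the right of Harry” falls out of rote rule application, with no appeal to our intuitive grasp of spatial arrangement. Note that the analogous rule would be unsound for “is the uncle of,” which is exactly why that substitution wrecks the argument above.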

It is easy to see why techniques like these would have become interesting chiefly in an age of machines. The new argument we have just constructed is rhetorically complicated, so complicated, in fact, that no thoughtful intellectual of the ancient world would have spent much time with it. (An ancient gibe against logicians of the past was that they always acted like men eating crabs, dismantling the shell with a great deal of labor only so they could eat a tiny morsel of meat.)8 Ancient logic always derived from public speaking, and no ordinary audience in ancient times would have paid much attention to someone who spoke in so cumbersome and convoluted a manner.

If we think back for a moment to Aristotle’s syllogisms or Chrysippus’s common forms, what leaps out immediately is their rhetorical simplicity. Aristotle’s categorical syllogisms never have more than three key terms, and the classic forms explored by Chrysippus rarely use more than three elements in forming their compound propositions. (The complex dilemma is the most complicated of the traditional forms, and it stops at four elements: A, B, C, and D.) Ancient logic, then, was close to everyday speaking and thinking, and though our everyday reasoning certainly does make use of special relations like “to the right of,” we usually manipulate these relations intuitively, without expressing them or even thinking of them as involving a complicated general premise that applies to any individuals, x, y, and z.

This is where nineteenth-century industrialization made such a difference. Nineteenth-century logicians were much more willing to submit to the apparent tedium of a new approach because it promised a further reward. If we approach argumentation in an admittedly tedious but mechanical way, making explicit everything we usually grasp intuitively, without analysis, we can construct a system that captures much larger areas of valid argumentation, and we can bring out its apparent form. It is as if we were building a complicated, steam-driven loom. Building such a loom is laborious, and putting it together requires forging some odd-looking components. If, having finally built the loom, we were then to use it to manufacture only one piece of cloth, the whole exercise would be pointless. Yet, if we brought the loom up to full speed and set it running for prolonged periods, we would then be able to manufacture cloth in large quantities, and we would have a machine of great power. Just so, building the abstract system of symbolic logic requires tedious labor and odd components, but if designed correctly, it can then express a vast array of theorems, and the proposed proof of a theorem can be checked mechanically.

As it turns out, this same impulse toward mechanical procedures also appeared in much nineteenth-century mathematics, and it was the mathematical version of the impulse that had particular influence on logic.

THE EFFECT OF THE NEW MATHEMATICS

Nineteenth-century mathematics is certainly a vast subject, but one of its chief tendencies was an emphasis on new axiomatic systems. Of course, Euclid’s axioms for geometry go back to the fourth or third century B.C. But the axiomatic systems of the nineteenth century were different in that their starting points were more difficult and therefore less intuitive.

The axioms of nineteenth-century mathematics required greater mental abstraction. To be sure, Euclid’s starting points require attention; they, too, demand a certain measure of abstraction, and before we can even enter into Euclid’s world, we must first realize that a point, by definition, has no parts and that a line, by definition, has no breadth. Even to think of these things, we must partly abstract from our ordinary experience of three-dimensional objects. But Euclid’s basic ideas are not so deep that they elude the grasp of ordinary, literate citizens; on the contrary, Euclidean geometry has been a traditional part of a liberal education for more than a thousand years. (Boethius included geometry among the seven liberal arts back in the sixth century A.D. and so did Martianus Capella in the previous century.)

By contrast, when it comes to the nineteenth century’s axiomatic systems, the starting points are harder. Arithmetic had been carried on for thousands of years without axioms and without any general, systematic proof that its procedures were sound. Only in 1830 did this situation start to change, when the English mathematician George Peacock published the first part of his Treatise on Algebra. And for the rest of the century, mathematical logicians closed in on an axiomatic arithmetic that finally emerged in the work of Giuseppe Peano. Peano’s axioms can now be used to prove that two and three do indeed make five, but knowing the sum intuitively, or knowing it by counting it out on one’s fingers, is easier than proving it from the axioms. (The axioms are not insuperable, but one must first master what is called the “successor” function, and one must also master the idea of mathematical induction.)
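The flavor of Peano’s approach can be suggested in modern programming notation. The sketch below is ours, not Peano’s: numbers are built from zero by repeated application of the successor function, and addition is defined by recursion on that structure, so that “two plus three” is computed without any appeal to counting on one’s fingers.

```python
# A sketch (ours, not Peano's notation) of building arithmetic from zero
# and the successor function alone. Numbers are represented as nested tuples.

ZERO = ()

def succ(n):
    """The successor of n: one more application of the successor function."""
    return (n,)

def add(m, n):
    """Peano-style addition, defined by recursion on the second argument:
    m + 0 = m, and m + succ(n) = succ(m + n)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def numeral(k):
    """Build the successor-style numeral for an ordinary integer k."""
    n = ZERO
    for _ in range(k):
        n = succ(n)
    return n

def value(n):
    """Convert a successor-style numeral back to an ordinary integer."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

# Two and three do indeed make five:
print(value(add(numeral(2), numeral(3))))  # 5
```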

As a result, whereas Euclid’s geometry generally starts from intuitive ideas and then reasons its way to a series of complicated and elaborate theorems, nineteenth-century systems tended to start with complicated ideas and continue forward from there. In present-day mathematics, foundational axioms and principles are matters of great abstraction and delicacy (often involving set theory), and though they can be used to prove simple equations of arithmetic, they can also be used to prove deep theorems about infinities. Despite the increasing abstractness of the axioms, however, these nineteenth-century systems also made it easier to assess the correctness of a mathematical proof through the mechanical manipulation of symbols.

Throughout the nineteenth century, this delicate relationship between logic and mathematics was complex and varied, and it sometimes seemed to go back and forth in different directions. Boole, for example, was trying to solve logic problems by using the methods of mathematics (specifically, those of algebra), whereas Frege, working later in the century, was trying to reduce parts of mathematics to logic itself. Nevertheless, in both logic and mathematics, thinkers of the period were fashioning complicated systems out of obscure and odd-looking components, and they were trying to devise an intelligent way of manipulating symbols so that a wide range of their fields’ questions could be put to rest in the unthinking manner of a machine.

As it happened, mathematicians of the period often took particular interest in machines that could do mathematical calculations. De Morgan witnessed a demonstration of Thomas Fowler’s ternary (or “three-valued”) calculating machine in 1840 and left behind a detailed description of it. He was also a friend of the mathematician and inventor Charles Babbage and of Ada Lovelace, both of whom collaborated on the plan for the “Analytical Engine,” an immense steam-powered mechanical computer that was never built. The Analytical Engine would have consisted of thousands of metal moving parts, and it might have been the size of a railroad locomotive. (Most experts today say the machine was simply impossible to construct given the technology of the age, and some have suggested that the power plant alone might have shaken the rest of the engine to pieces.) Ada Lovelace, who had been De Morgan’s student, worked out much of the theory behind the machine, and she said its inspiration came from Babbage’s earlier and more limited plan for an arithmetical calculating machine (the “Difference Engine”) and from a still-earlier invention in France of a programmable loom. Back in 1801, the French weaver Joseph Marie Jacquard had used punched pasteboard cards to control the operation of a mechanical loom that wove intricate designs in silk.

As Lady Lovelace wrote in 1842, “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.” She calls the Analytical Engine “the material and mechanical representative” of mathematical analysis. She adds, “In enabling mechanism to combine together general symbols in successions of unlimited variety and extent, a uniting link is established between the operations of matter and the abstract mental processes of the most abstract branch of mathematical science. A new, a vast, and a powerful language is developed for the future use of analysis in which to wield its truths.”9

Of course, in speaking here of machines, we don’t mean to suggest that mathematicians and logicians of the period typically wanted their work to be done by machines (though Lovelace and Babbage certainly had some such project in mind).10 On the contrary, the dominant aim behind the new axiomatic systems of the nineteenth century was to increase mathematical and logical certainty; mathematicians and logicians alike wanted to purge their disciplines as much as they could of the possibility of error.11 But their route to this certainty was by way of mechanical procedures, and this was because they saw the mechanical as essentially rote and unthinking. Much of this impulse came from the observation that the mechanical could be carried on with less awareness—or with less “consciousness,” to use Mill’s word—of the things one was reasoning about. The less the verification of a proof required you to be conscious of anything other than the placement of the symbols, the easier it would be to detect mistakes.

In fact, this was Leibniz’s idea back in the seventeenth century: make mistakes in reasoning more easily detectable. As Leibniz expressed this view in 1685, “The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right.”12

Leibniz’s aim? To find an error “at a glance.” This was the aspiration that was to recur in the work of nineteenth-century logicians. Their basic strategy was to reduce the thought required to detect errors and to achieve this result by increasing the thought that went into a logical system’s design.

In many ways, this approach mirrored nineteenth-century industry. The essence of nineteenth-century industry was the clever and carefully thought-out design of operations that, in themselves, were thoughtless. The machines repeated a series of otherwise insignificant motions, but because the motions were cleverly arranged, the results were highly significant. The machines did what they did even though none of the machines could think. Simple, repeated movements were carefully combined in a complex system.

Symbolic logic does this too. Given a clever arrangement of rules for the symbols, we can express proofs for a large array of theorems. But when it comes to assuring that the proofs are correct, the symbols might mean nothing at all; what matters is their arrangement. The power is in the patterns, and what makes these otherwise meaningless patterns significant is that they do indeed resemble the structures that we discover in our thinking, the structures we call reasoning (or, to use De Morgan’s term, they resemble our “head work”). What the Industrial Revolution did, then, was show in a spectacular way how unthinking things could be cleverly arranged to achieve an intelligent result. (In our time, many observers believe some machines, such as electronic computers, do indeed think, provided that the machines are sophisticated enough; but this is an idea we shall come to shortly.)

To an outsider, these nineteenth-century pioneers in logic and mathematics might almost look like a collection of mad scientists who scribble down weird ciphers and speak in languages no one else can understand. Their starting points were increasingly remote from ordinary intuition, and few of their fellow citizens could know what they were up to. (As for Babbage, in his later years, some people thought he was indeed mad, though it seems more probable that he just became cranky. He tried to mount a public campaign against street musicians, especially organ grinders, because he objected to the sound, and he used to drive away children from his home because he regarded them as noisy; the children retaliated by collecting before his home and imitating the sounds of street musicians. While still a young student at Cambridge, Babbage had joined the Extractors Club, devoted to extracting its members from the madhouse in case any should be involuntarily committed.) All these thinkers were giving to their complicated efforts the same sustained and minute attention that the engineers of the period were giving to industrial production, even when the potential benefits of these undertakings were obvious only to a few.

To be sure, much of what we now say about nineteenth-century logic and mathematics could also be said about the work of any scientist in any historical period; scientists often work on things that few other people understand, and they often work with demonic energy. The sheer power of science, however, was far more obvious to the generations of the nineteenth century than to those of earlier times. As a result, there were many more people who were convinced that these mechanical efforts could prove fruitful. Millions of people saw what complicated industrial systems might do, and out of these millions, the mathematicians and logicians of the age emerged.

The historical mechanism we now propose is, once more, necessarily conjectural. We can’t go back and ask the thinkers of the period, “Would you ever have invested such time and energy in these complicated mathematical and logical systems if you had never seen the immense power of complicated machines?” And even if we could go back and ask this question, the thinkers themselves might simply not know the answer. Nevertheless, the correlation between industrialization on the one hand and the development of abstract algebras and symbolic logic on the other is striking. Peacock, Boole, and De Morgan all came from England during a period of intense industrialization, and later in the century, as Germany industrialized, one sees the eminent German figures of Frege, Georg Cantor (the inventor of set theory), and Richard Dedekind (an important contributor to algebra and set theory). (As one might expect, all these thinkers worked in universities, which increasingly supported the advancement of industry through science.) As for Peano, he taught for most of his career at the University of Turin, in Italy, and he, too, flourished at a time of increasing industrial development. (While he was teaching in Turin, the Automobile Factory of Turin, whose acronym in Italian is “FIAT,” began to build its first automobiles.) And in America, as the United States industrialized in the wake of the Civil War, the philosopher C. S. Peirce worked out a complicated symbolic logic of his own, and in 1886 he foresaw the possibility that mathematical and logical problems might be solved by machines that used electrical switches.13

Certainly these same thinkers pursued many different lines of inquiry, and some of them also entertained deep divisions of opinion (especially concerning the foundations of mathematics). The mere fact of industrialization no more entailed the existence of any of these more particular developments in logic and mathematics than it entailed the existence of more particular developments in the history of politics. Nevertheless, industrialization was a crucial background condition in both logic and politics, and in the case of mathematics, the development of symbolic logic, spurred by industrialization, meant that many of the most important twentieth-century debates in the philosophy of mathematics were conducted squarely within the framework of fully symbolic systems.14

In sum, science, system, and mechanical procedures all became recurring themes of nineteenth-century industry, and they likewise became powerful recurring themes in mathematics and logic. All the same, this mechanical tendency in logic (whatever its causes) would have been far less effective were it not for the ability to capture, in symbols, the logic of relations. And what finally made this possible was Frege’s invention of quantification.

THE IMPACT OF QUANTIFICATION

The essence of quantification is to express a proposition with variables and yet qualify this proposition by saying exactly how many things can stand in the place of these variables. When we say,

if x is to the right of y, and y is to the right of z, then x is to the right of z,

we say something with variables; yet we haven’t said just how many things can stand in the places of x, y, and z. The utterance still has no quantification. We quantify the utterance by adding an initial phrase,

For any x, y, and z . . .

This phrase applies to the whole utterance, and now we can say (in the parlance of logicians) that the phrase “binds” the variables; it tells us how many things can stand in their place. It means that, any time the utterance uses the variable x, y, or z, the usage applies to any x, y, or z. Symbolic logic attaches these quantifying phrases as prefixes, and in effect, Frege used two different quantifying phrases: (a) “For any x,” and (b) “For some x.” The first of these phrases can also be read, “For all x,” and the second can also be read, “There exists an x such that” or “There is at least one x such that.” (Notice that, in all these uses, we read “some” to mean “at least one.”)15

A common symbolism for these prefixes, as it now appears in many logic textbooks, looks like this:

(x)   “For all x . . .”
(∃x)   “There exists an x such that . . .”

Suppose, now, that we define another symbol to stand for the relation “to the right of”:

R   can be read “to the right of”

Also, suppose we establish a way of symbolizing what is to the right of what:

Rxy   can be read “x is to the right of y”

We need a few more elements here to complete the project. First, we need a way to indicate exactly how far down a string of symbols a quantifier is supposed to apply, and this is done by brackets or parentheses. Thus,

(∃x)(∃y)(Rxy)

or

(∃x)(∃y)[Rxy]

This means that there exist an x and a y such that x is to the right of y. For convenience, it will also be useful to add symbols for conjunction and disjunction. (We can add these symbols in addition to the one we supplied just a moment ago for “if-then,” ⊃, even though we already know from chapter 4 that each of these operations can be defined in terms of the others as long as we add a symbol for negation.)16 We can add, then, these further symbols:

&   can be read “and”
v   can be read “or”
⊃   can be read to express “if-then”
~   can be read to express “it is not the case that”

For example,

Rxy & Ryz   can be read “x is to the right of y, and y is to the right of z.”
Rxy v Ryz   can be read “x is to the right of y, or y is to the right of z.”
Rxy ⊃ Ryz   can be read “if x is to the right of y, then y is to the right of z.”
~Rxy   can be read “it is not the case that x is to the right of y.”

Once all this is done, we are ready to symbolize the first premise of the argument about Tom, Dick, and Harry, the one that expresses the tacit, general principle that governs how the relation “to the right of” actually works. We can symbolize it in the following way:

(x)(y)(z)[(Rxy & Ryz) ⊃ Rxz]

“For any x, y, and z, if x is to the right of y, and y is to the right of z, then x is to the right of z.”
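The mechanical character of such a formula can be illustrated with a short program. The sketch below is our own, with an invented assignment of positions: over a finite collection of individuals, the quantified statement is checked by brute force, testing every choice of x, y, and z.

```python
from itertools import product

# A sketch of checking a quantified formula mechanically over a finite
# domain. The positions below are an invented example: each person gets
# a coordinate, and "x is to the right of y" means a larger coordinate.

position = {"Tom": 3, "Dick": 2, "Harry": 1}

def R(x, y):
    """Rxy: x is to the right of y."""
    return position[x] > position[y]

# (x)(y)(z)[(Rxy & Ryz) -> Rxz], with "if-then" rendered as (not p) or q
domain = list(position)
transitive = all(
    (not (R(x, y) and R(y, z))) or R(x, z)
    for x, y, z in product(domain, repeat=3)
)
print(transitive)  # True
```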

To complete the argument, all we need now are symbols to stand for the individuals Tom, Dick, and Harry; these symbols are called constants. Thus,

t   can be read “Tom”
d   can be read “Dick”
h   can be read “Harry”

Putting it all together, we get this:

(x)(y)(z)[(Rxy & Ryz) ⊃ Rxz]
Rtd
Rdh
Therefore, Rth

“For any x, y, and z, if x is to the right of y, and y is to the right of z, then x is to the right of z; Tom is to the right of Dick, and Dick is to the right of Harry; therefore, Tom is to the right of Harry.”

Of course, to make the system mechanical and strictly clerical, we also need the crucial rules that tell us when we can validly infer a conclusion from its premises and when we can’t—some rules of inference that we know to be valid, like modus ponens or modus tollens. (These rules will also need to include a way of passing from statements with variables to statements with constants, and vice versa.) We shall also need rules to tell us when a string of symbols is to count as a genuine statement within this system and when it is merely gibberish; these are the “formation rules” that tell us what counts as a “well-formed formula.” Frege did all this, but he did one crucial thing more: he constructed his system with the aim of turning at least part of mathematics into a form of logic.
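To suggest how clerical such rules can be, here is a toy sketch of ours: a checker for modus ponens that inspects only the arrangement of symbols, never their meaning.

```python
# A toy sketch of mechanical rule-checking: modus ponens applied purely to
# the shapes of formulas, which are represented as nested tuples ("->", P, Q).

def modus_ponens(premise1, premise2):
    """From 'if P then Q' and 'P', infer 'Q'; otherwise report failure.
    The check is purely clerical: it compares arrangements of symbols."""
    if isinstance(premise1, tuple) and premise1[0] == "->" and premise1[1] == premise2:
        return premise1[2]
    return None

conditional = ("->", "P", "Q")          # if P then Q
print(modus_ponens(conditional, "P"))   # Q
print(modus_ponens(conditional, "R"))   # None: the shapes don't match
```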

FREGE’S NEW FOUNDATION FOR MATHEMATICS

Frege’s idea was that, if arithmetical objects like numbers could be defined in terms of some basic notions of logic and expressed in a precise symbolism, then perhaps all arithmetical truths could be shown to follow from a few logical rules and assumptions. This is the thesis (still controversial) called “logicism,” according to which at least some parts of mathematics, if not all, are reducible to logic. (Frege focused on arithmetic, but other logicians, like Russell and Whitehead, have tried to cast the net farther.)

Why undertake this exercise in the first place? Frege answered that doing so would increase the certainty that our current mathematical inferences are correct. This was all part of his grand scheme to place mathematics on firmer foundations, and he had already pointed to examples of mathematical reasoning that had run off the rails in earlier times because mathematicians had relied on something that had seemed obvious, intuitively, but without carefully deducing it from earlier premises. Frege wrote, “In making a transition to a new assertion one must not be content, as up to now mathematicians have practically always been, with its appearing obviously right, but rather one must analyze it into the simple logical steps of which it consists—and often these will be more than a few.”17

In effect, Frege imposed a new and highly demanding conception of what should count as a formal proof, either in mathematics or logic. For Frege, a proof had to be expressed in abstract symbols whose form and logical implications were governed by a set of rules that had already been specified in advance. This conception, generally called formalization, has dominated mathematics and symbolic logic ever since. But he then wanted to apply this new approach to achieve a further goal: he wanted to show how, according to his new, formalized method, arithmetic could be reduced to logic.

Not everything went the way Frege had hoped. Such a system would have to be internally consistent; its rules and axioms18 couldn’t result in contradictions if the system was to be logical in the first place. Yet the system’s rules and axioms would also have to be powerful enough to entail all the theorems that need proving; we don’t want a system that proves a few things but whose axioms are so weak that they leave many other truths unproved. And in assessing this further quality, the power of the system’s axioms, logicians now ask if such a system is “complete.”

That is, can every logical truth expressible according to the formation rules actually be proved from the axioms? If so, the axioms are “complete”; otherwise, they are not. Frege devised a notation that allowed variables to stand in for individuals, and these individuals could be represented as having properties or relations (the properties or relations being symbolized by “predicates”). The resulting system (“first-order predicate calculus”) turned out, on later examination, to be both consistent and complete. All logical truths expressible in the system were shown to be provable, and the workings of the system generated no contradictions.19 On the other hand, to make the system strong enough to capture arithmetic, Frege had to go a step further: he resorted to variables that applied to relations and properties (called “predicate variables,” thereby generating a “higher-order logic”), and Kurt Gödel showed in 1931 that no such system could be proved to be both consistent and complete. More exactly, no system rich enough to include arithmetic as a consequence can be shown within that system to be both consistent and complete. Some statements of the system will have to remain unprovable, or if provable, the system will be inconsistent. (This is one of the paradoxes of formalization that we noted in chapter 6.)

Instead, mathematicians can show that such a system for arithmetic is consistent and complete if the systems for various other branches of mathematics are, and vice versa; but these proofs of consistency are “relative,” meaning that, at most, they tie one branch of mathematics to another. One branch is thus consistent relative to another; all the same, none of these proofs is “absolute,” meaning that none of them ties these properties of a mathematical system to logic alone. But what is the practical import of this research?

As a practical matter, formalization has increased the confidence that mathematicians and logicians now feel in the more abstract reaches of their disciplines. Most areas of mathematics are now formalized by tying them to Frege’s basic system (to first-order predicate calculus, as modified by Russell and Whitehead) along with a version of set theory. Paradoxes (formally called “antinomies”) can still arise, but mathematicians have acquired considerable experience in anticipating these paradoxes and have a variety of specific devices for heading them off.

On the other hand, no technique of formalization seems likely to alter our rational confidence in something so basic as modus ponens or the disjunctive syllogism when used as a form of argument in ordinary cases. (Exotic cases are a different matter, and these are the areas where classical and nonclassical logics sometimes clash.) The reason is simple. If such a system ever did contradict either of these two basic methods in ordinary cases (either modus ponens or the disjunctive syllogism), it would be more reasonable to suspect the system than to suspect the method. Such systems are delicate from the start, and they are vulnerable to many kinds of errors in their construction. Rather than doubt the validity of ordinary instances of modus ponens, the more reasonable approach would be to doubt the complexities of a particular construction.20

Yet symbolic logic has had another practical impact too—in fact, a colossal one that now affects us every day: the digital computer.

THE INVENTION OF DIGITAL COMPUTING

Just as machines encouraged the development of symbolic logic, so symbolic logic finally encouraged the building of new machines—machines to manipulate the symbols. And the first precise description of a programmable digital computer in its modern form (different from that of Babbage and Lovelace) came from someone utterly steeped in formalized logic, the English mathematician Alan Turing.

Turing hit on the idea in the 1930s, when he was still in his twenties. Turing had studied mathematics and logic at Cambridge (where, later, he would also study briefly with Wittgenstein) and then at Princeton University in the United States. He conceived of a computer as a machine that would inscribe or erase symbols according to an established set of mechanical rules. The rules would require different actions depending on the internal state of the machine at any one time, and they would determine just how the machine would react to each new symbol that was fed into it. So imagined, the machine would change internally as it read new symbols, but only according to the rules, and these rules would, in turn, dictate what new symbols the machine would print out. In consequence, the machine’s output would be a complex result of both the rules and its own internal state, as altered by the symbols going in. This is exactly what we see in a programmable computer today.
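Turing’s conception can be suggested with a short simulation. The sketch below is ours, and its rule table (a machine that flips 0s and 1s until it reads a blank) is an invented example; the point is that the machine’s next action depends only on its current internal state and the symbol under its head.

```python
# A minimal sketch of Turing's conception: a machine that reads a symbol,
# consults a rule table keyed by (state, symbol), writes a new symbol,
# moves its head, and changes state. The rule table here is invented:
# it flips 0s and 1s until it reads a blank ("_"), then halts.

# rules: (state, symbol) -> (new_symbol, head_movement, new_state)
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def run(tape, state="flip", head=0):
    """Run the machine on a tape (a string) until it reaches 'halt'."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run("0110_"))  # 1001_
```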

The symbols going in correspond to your keystrokes on a keyboard; the symbols the machine prints out correspond to the images you see on a screen. And Turing’s mechanical rules correspond to the software program your computer happens to be running. But one can also understand the process from another standpoint—from the standpoint of symbolic logic as embodied in a formalized system.

The computer’s program corresponds to the rules of inference that govern a system like Frege’s; the program allows one set of symbols (the inputs) to be translated as logical implications into another set of symbols (the outputs). (The program thus operates like modus ponens, which allows one proposition to be inferred logically from another.) At the same time, the specific computer language the programmer uses to write the program corresponds to the formation rules of Frege; for Frege, the formation rules tell us what counts as a well-formed formula and what doesn’t, and they govern what propositions the logical system can accommodate.

Just so, the rules of the computer language tell us what counts as a genuine statement within the system and what counts as gibberish. In effect, then, every programmer does, on a smaller scale, the same thing Frege was seeking to do for logic as a whole. Every programmer lays down the equivalent of inference rules that allow the deducing of some propositions from other propositions, and once the computer is programmed, the operator’s keystrokes then correspond to the premises of an argument (the axioms and postulates of a formalized system), and the outputs on the screen correspond to a series of logical conclusions (the system’s theorems).

Turing worked out his idea for this kind of computing even before the electronic components to put it into practice had been invented. And his guiding thread was always the notion that machines could be made to think. He believed there were few real differences between the intellectual life of human beings and the behavior of machines—as long as the machines could be made sophisticated enough. As a result, his aim was to make the machines sophisticated. (Turing advanced this last idea—that machines think—by way of a thesis now called the “Turing test.” In his view, the true test of whether a machine thinks is whether it can generate results that are indistinguishable from those of human beings. If the machine seems to reason as well as we do, then it thinks as much as we do.)21

The proposition that machines can think still motivates much scientific and philosophical research, but in one crucial respect, we believe Turing’s approach was mistaken—in assuming that, in designing a reasoning machine, one is therefore designing a thinking machine. Reasoning isn’t necessarily a form of thinking, and we believe that in supposing otherwise Turing was misled by a fundamental confusion about human behavior. In fact, even when human beings reason quite logically and skillfully, they aren’t necessarily thinking—or else they are thinking very little.

To explain:

Consider, first, what happens in a machine. It seems obvious that a modern computer can be programmed to evaluate many forms of argument, and based on this evaluation, it can print out a list of which forms are valid and which are not.22 Of course, if any human being could work out such a list, we would certainly call it “reasoning.” On the surface, at least, it seems no more strange to say that machines that evaluate reasoning are indeed reasoning than it does to say that adding machines add or sewing machines sew.
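A sketch of such an evaluation, in our own illustration: each argument form is tested over every assignment of truth values, and a form counts as valid when no assignment makes all its premises true and its conclusion false.

```python
from itertools import product

# A sketch of a machine that classifies argument forms as valid or invalid
# by brute force over all truth-value assignments. A form is valid when no
# assignment makes every premise true and the conclusion false.

def implies(p, q):
    """Material 'if-then': false only when p is true and q is false."""
    return (not p) or q

# Each form maps an assignment (p, q) to (list of premises, conclusion).
forms = {
    "modus ponens":             lambda p, q: ([implies(p, q), p], q),
    "modus tollens":            lambda p, q: ([implies(p, q), not q], not p),
    "affirming the consequent": lambda p, q: ([implies(p, q), q], p),
}

def valid(form):
    for p, q in product([True, False], repeat=2):
        premises, conclusion = form(p, q)
        if all(premises) and not conclusion:
            return False
    return True

for name, form in forms.items():
    print(name, "is", "valid" if valid(form) else "invalid")
```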

On the other hand, the machines do all this quite mechanically by rigid adherence to rules, and the point we commonly forget is that, if we could do it mechanically by rigid adherence to rules, we wouldn’t call it thinking. On the contrary, we would call it reflex.

Consider: we do many things unthinkingly—breathing, standing, walking, sometimes even talking—yet the point we often overlook is that one of the chief purposes of logical technique at its best is to reduce the evaluation of arguments to the same sort of unthinking behavior. This was symbolic logic’s purpose from the start. The whole idea was to replace behavior that required a great deal of thought with other human behavior that required much less thought. Symbolic logic was intended from the outset as a labor-saving device, just as a machine was a labor-saving device. The labor to be saved was the labor of thinking. The aim was to escape the difficulties of thinking through intractable disputes about what was valid and what was not.

Alfred North Whitehead (Russell’s collaborator) once expressed this aspiration very ably: “By the aid of symbolism, we can make transitions in reasoning almost mechanically by the eye, which otherwise would call into play the higher faculties of the brain.” Whitehead stressed the difference between thinking behavior and skillfully mechanical behavior: “It is a profoundly erroneous truism . . . that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”23

In fact, one of the chief aims of logic in its systematic development is to render logical judgments as close to brute reflex as possible. This is why we study forms. Forms leap out at the eye. But once we spot them, the thought required is immediately reduced because we know what to do. We know which rules to apply. To say this isn’t to say that logic makes us unthinking, but only that it saves our thinking for other matters—not for determining the mere cogency of arguments but for anticipating where the arguments are going or why they exist at all. In this respect, then, logic is like walking. The better at it we get, the less we need to think about it and the farther it takes us—to the contemplation of new vistas.

This is why some people reason much better than others; they have trained themselves to reason in this reflexive way. They have taught themselves to be aware of arguments as arguments and to look for logical structure, for premises and conclusions. But the more they train themselves in this endeavor, the less conscious it becomes. Eventually, it is nearly thoughtless. It becomes like typing (the better you type, the less conscious you are of the individual letters) or like making music (the better the musician, the less the musician thinks of fingerings and bow strokes and the more the musician contemplates pathos or radiance). Thus, to put the point about machines in another way, the argument that machines think usually rests on an analogy to human behavior, but the analogy misstates what human beings really do. We say to ourselves, “If I had to do the calculation the machine just did, I’d certainly have to concentrate very hard.” But of course if you had the wiring the machine has, you wouldn’t concentrate at all. You could do it without thinking.

The analogy to human behavior, then, supports the opposite inference: not that machines think, but that they dont think, because, when human reasoning is most mechanical, it involves the least conscious thought. (Machines behave mechanically, and when we behave mechanically, we don’t think either.)24

However this may be (and perhaps we are wrong and Turing was right after all), Turing was steadfast in his attempts to develop mechanical reasoning, and his explorations also turned out to have a profound effect on World War II.

After studying at Princeton, Turing returned to England and shortly afterward joined Britain’s secret code-breaking effort at Bletchley Park in Buckinghamshire, a crucial operation during the war. Turing devised rapid mechanical methods for deciphering the coded messages of the German Enigma machine, the German military’s chief coding device, and his insights allowed British and American ships to avoid German U-boats for prolonged periods during the Battle of the Atlantic. Turing, in fact, saved many sailors and Allied ships and helped to undermine the U-boat campaign. For his services during the war, Turing was awarded the Order of the British Empire, but he was later arrested by British police for “gross indecency” when he admitted candidly that he had had a homosexual affair. To avoid prison, Turing agreed to injections of the hormone estrogen, which rendered him impotent, and in June of 1954, he committed suicide. Such was the end of a man who did far more than most to help shape the future of the world.

Where the brave new world of computers will finally take us is still unclear, no less unclear than the final outcome of the Industrial Revolution itself. The building of intricate machines over the last two centuries has led, in turn, to the building of intricate logical systems, and the logical systems have turned out to have consequences of their own that are ever harder to foresee. Logicians, like everyone else, have been carried along with the tide.

Much of the earth is still being changed by this process, and even in the early days of industrialism, many thoughtful citizens, especially of a romantic bent, had wondered whether the inexorable march of analysis and invention might actually do more harm than good. The Industrial Revolution brought conveniences, medicines, better food, and mighty towers. But it also created blighted landscapes, fouled streams, dying seas, and terrifying new weapons. Many people now have similar misgivings about the ultimate effects of computers.

Strangely, the same questions we now pose about the beneficial or harmful effects of industrialization and digital technology can also be posed about logic. Does logic truly make us better, or might it make us worse? Will it do us good, or might it do us harm?

Such questions can be asked today, but they were also asked nearly a thousand years ago in medieval France, especially when medieval thinkers tried to sort out the competing claims of reason and faith. And so in considering these last, troubling issues, we propose to turn back to a medieval version of these same questions as expressed in the tragic love story of the great, twelfth-century logician Peter Abelard—and his ardent student Heloise.