In this chapter we briefly study classical computation, presenting the ideas in roughly chronological order. We start with boolean functions and logic, first introduced by George Boole in the late nineteenth century. In the 1930s, Claude Shannon studied boolean algebra and realized that boolean functions could be described using electrical switches. The electrical components that correspond to boolean functions are called logic gates. Composing boolean functions becomes the study of circuits involving these gates. We will begin by studying boolean functions in terms of logic; then we will show how to translate everything into circuits and gates. The material, up to this point, is now considered standard and is contained in every introductory computer science text. But after this we look at some ideas that are usually not part of the standard introduction.
In the 1970s, the Nobel Prize–winning physicist Richard Feynman became interested in computing and, for a few years in the early 1980s, he gave a course on computation at the California Institute of Technology. These lectures were eventually written up as Feynman Lectures on Computation. Feynman’s interest in computation was partly due to his interaction with Edward Fredkin and Fredkin’s idiosyncratic views of physics and computation. Fredkin believes that the universe is a computer, and that since the laws of physics are reversible we ought to study reversible computation and reversible gates. But even though Fredkin’s overarching thesis is not widely accepted in the physics community, he is recognized for having some brilliant and unconventional ideas. One of these is the billiard ball computer. Feynman’s book includes a discussion of reversible gates and shows how any computation can be performed by bouncing balls off one another.
We take Feynman’s approach. It turns out that reversible gates are exactly what we need for quantum computing. The billiard ball computer led Feynman to think of particles interacting instead of balls. It was the inspiration for his work on quantum computing, but we include it here mainly because of its sheer simplicity and ingenuity.
In the late nineteenth century George Boole realized that certain parts of logic could be treated algebraically—that there were laws of logic that could be expressed in terms of algebra. We adopt the now standard way of introducing boolean logic by using truth tables for the three basic operations not, and, and or.
If a statement is true, then its negation is false, and conversely, if a statement is false, then its negation is true. For example, the statement 2 + 2 = 4 is true, and its negation 2 + 2 ≠ 4 is false. Instead of giving concrete examples of statements, we often let the symbols P, Q, and R stand in for them. So, for example, 2 + 2 = 4 might be represented by P. The symbol ¬ stands for not; if P represents the statement 2 + 2 = 4, then ¬P stands for 2 + 2 ≠ 4. We can then summarize the basic properties of negation using our symbols: If P is true, then ¬P is false. If P is false, then ¬P is true.
To make things even more concise we can use the symbols T and F to denote true and false respectively. We can then define the properties using a table.
The symbol for and is ∧. If we have two statements P and Q, we can combine them to form P ∧ Q. The statement P ∧ Q is true if and only if both of the component statements P and Q are true. We define and by the following table, where the first two columns give the possibilities for the truth-values of P and Q and the third column gives us the corresponding truth-value of P ∧ Q.
The symbol for or is ∨ and is defined by the following table.
Notice that P ∨ Q is true if both P and Q are true, so P ∨ Q is true if either one of P or Q is true and also if both are true. This is the or that is used in mathematics and is sometimes called the inclusive or. The exclusive or is defined to be true if either one, but not both, of P and Q is true. It is false if they are both false, but is also false if they are both true. The exclusive or is denoted by ⊕. Its truth table is below.
(Later we will see why the symbol for the exclusive or resembles a plus sign—it corresponds to addition modulo two.)
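These four operations are easy to experiment with in any programming language. Here is a quick sketch in Python, using `True`/`False` for T and F; on booleans, `!=` plays the role of the exclusive or:

```python
from itertools import product

# Print the combined truth table for and, or, and exclusive or.
print("P      Q      and    or     xor")
for p, q in product([True, False], repeat=2):
    print(p, q, p and q, p or q, p != q)

# Negation has a single input.
for p in [True, False]:
    print("not", p, "=", not p)
```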
We start by showing how to construct the truth table for any binary expression. For concreteness we will construct the truth table of ¬(¬P ∧ ¬Q). This is done in several steps. First we write down the table for the possibilities for P and Q.
Then we attach columns for ¬P and ¬Q, writing in the appropriate truth-values in each case.

Next we add a column for ¬P ∧ ¬Q. This is true only in the case when both ¬P and ¬Q are true.

Finally, we get to the column associated with ¬(¬P ∧ ¬Q). This statement is true if and only if ¬P ∧ ¬Q is false.
Omitting the intermediate columns corresponding to the intermediate steps gives the following table.
Notice that the truth-values in the table for ¬(¬P ∧ ¬Q) are identical to the truth-values in the table for P ∨ Q. They have exactly the same truth-values in every case. We say that the statements ¬(¬P ∧ ¬Q) and P ∨ Q are logically equivalent. We write:

P ∨ Q ≡ ¬(¬P ∧ ¬Q)

This means that we need never use ∨. Every case where or occurs can be replaced using expressions involving ¬ and ∧.
What about the exclusive or, which we write with ⊕? Can we replace this by an expression involving only the use of ¬ and ∧? We can, and we will now show how to do this.

We consider the truth table for P ⊕ Q.
We look for entries of T in the third column. The first occurs when P has value T and Q has value F. An expression that gives us a value of T only for those particular truth-values of P and Q is P ∧ ¬Q.

The next value of T in the third column occurs when P has value F and Q has value T. An expression that gives us a value of T only for those particular truth-values of P and Q is ¬P ∧ Q.

These are the only places where T occurs in the third column. To get an expression equivalent to the one that we want, we now join all the expressions we have generated so far using ∨s, so

P ⊕ Q ≡ (P ∧ ¬Q) ∨ (¬P ∧ Q)

We know

P ∨ Q ≡ ¬(¬P ∧ ¬Q)

Using this to replace ∨ gives

P ⊕ Q ≡ ¬(¬(P ∧ ¬Q) ∧ ¬(¬P ∧ Q))
Again, this means that we need never use ⊕. Every case where ⊕ occurs can be replaced using expressions involving ¬ and ∧. The method we have just used for replacing ⊕ using ¬ and ∧ works quite generally.
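Both replacements can be verified by brute force over all four assignments of truth-values; here is a small Python check (the helper name `xor` is my own):

```python
from itertools import product

def xor(p, q):
    return p != q  # exclusive or on booleans

for p, q in product([True, False], repeat=2):
    # or written using only not and and
    assert (p or q) == (not ((not p) and (not q)))
    # exclusive or written using only not and and
    assert xor(p, q) == (not ((not (p and (not q))) and (not ((not p) and q))))
print("both identities hold in all four cases")
```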
We can think of the logical operators that we have introduced as functions. For example, ∧ is a function that has two inputs, P and Q, and gives us one output; ¬ has one input and one output.

We could invent our own function that has a number of inputs that take on values of T and F and in each of the cases gives us a value of either T or F; such a function is called a boolean function. To make things more concrete, we will invent a function that has three inputs that we will label P, Q, and R. We call our function f. To define our function, we have to complete the third column in the following table.
There are eight values that need to be filled in. There are two choices for each value, giving us a total of 2⁸ = 256 possible functions. We will show that no matter how we choose our function, we can find an equivalent expression that uses only the functions ¬ and ∧.
We use exactly the same method that we used to show that P ⊕ Q ≡ (P ∧ ¬Q) ∨ (¬P ∧ Q). We begin by looking for values of T in the last column. To help make things easier to follow we will use the specific function given by the following table, but the method we use will work for any boolean function.
The first T occurs when P and R have values T, and Q has value F. A function that gives us a value of T for only this set of truth-values is P ∧ ¬Q ∧ R. The next T occurs when P and R have values F and Q has value T. A function that gives us a value of T for only this set of truth-values is ¬P ∧ Q ∧ ¬R. The final T occurs when P, Q, and R all have value F. A function that gives us a value of T for only this set of truth-values is ¬P ∧ ¬Q ∧ ¬R.

An expression that takes on value T in just these three cases is

(P ∧ ¬Q ∧ R) ∨ (¬P ∧ Q ∧ ¬R) ∨ (¬P ∧ ¬Q ∧ ¬R)

so

f ≡ (P ∧ ¬Q ∧ R) ∨ (¬P ∧ Q ∧ ¬R) ∨ (¬P ∧ ¬Q ∧ ¬R)

The final step is to replace ∨ using the fact that

P ∨ Q ≡ ¬(¬P ∧ ¬Q)

Replacing the first occurrence gives

f ≡ ¬(¬(P ∧ ¬Q ∧ R) ∧ ¬(¬P ∧ Q ∧ ¬R)) ∨ (¬P ∧ ¬Q ∧ ¬R)

Replacing the second occurrence tells us that f is logically equivalent to

¬(¬(¬(¬(P ∧ ¬Q ∧ R) ∧ ¬(¬P ∧ Q ∧ ¬R))) ∧ ¬(¬P ∧ ¬Q ∧ ¬R))
This method works in general. If f is a function that is defined by a truth table, then f is logically equivalent to some expression that involves only the functions ¬ and ∧. Since we can generate any boolean function whatsoever using just these two functions, we say that {¬, ∧} is a functionally complete set of boolean operators.
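The row-by-row method generalizes directly to code. Below is a sketch of the construction: `from_truth_table` (a name I have invented) turns any truth table into an equivalent function built only from not, and, and or:

```python
from itertools import product

def from_truth_table(table):
    """Return a function equivalent to `table`, built only from
    not/and/or. `table` maps tuples of booleans to a boolean."""
    true_rows = [row for row, value in table.items() if value]

    def f(*args):
        result = False
        for row in true_rows:
            # AND together each variable or its negation for this row...
            term = True
            for want, got in zip(row, args):
                term = term and (got if want else not got)
            # ...then OR the row-terms together.
            result = result or term
        return result

    return f

# The three-input example: true on exactly the three rows found above.
table = {row: row in {(True, False, True),
                      (False, True, False),
                      (False, False, False)}
         for row in product([True, False], repeat=3)}
f = from_truth_table(table)
assert all(f(*row) == table[row] for row in table)
```

The inner loop builds exactly the P ∧ ¬Q ∧ R style terms from the text, and the outer loop joins them with or.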
It might seem surprising that we can generate any function defined by a truth table using just ¬ and ∧, but incredibly, we can do even better. There is a binary operator called Nand, and any boolean function is logically equivalent to some expression that only uses the Nand operator.
Nand is a portmanteau word formed from combining not and and. It is denoted by ↑. It can be defined by

P ↑ Q ≡ ¬(P ∧ Q)
or, equivalently, by the following truth table:
We know that {¬, ∧} is a functionally complete set of operators, so to show that Nand by itself is functionally complete—that any boolean operator can be rewritten as an equivalent function that just uses Nand—we just need to show that both and and not have equivalent expressions that are written solely in terms of Nand.
Consider the following truth table, which considers just the statement P, then P ∧ P, and finally P ↑ P.

Notice that the final column has the same truth-values as ¬P, telling us

¬P ≡ P ↑ P

This shows that we can replace all occurrences of not with Nand. We now turn our attention to and.
Observe that

P ∧ Q ≡ ¬(¬(P ∧ Q)) ≡ ¬(P ↑ Q)

We can now replace not using the preceding identity to obtain

P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q)
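These two Nand identities can be checked exhaustively in a few lines of Python (with a homemade `nand` function):

```python
def nand(p, q):
    return not (p and q)

for p in [True, False]:
    assert (not p) == nand(p, p)  # not from Nand
    for q in [True, False]:
        # and from Nand, fanning out the intermediate value
        assert (p and q) == nand(nand(p, q), nand(p, q))
print("Nand generates both not and and")
```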
Henry M. Sheffer, in 1913, first published the fact that Nand by itself is functionally complete. Charles Sanders Peirce also knew this fact in the late nineteenth century, but like much of his highly original work it remained unpublished until much later. (Sheffer used the symbol | for Nand. Many authors use, or have used, Sheffer's symbol instead of ↑. It is called the Sheffer stroke.)
Boolean variables take on one of two values. We have been using T and F for these, but we can use any two symbols. In particular, we can use 0 and 1. The advantage of replacing T and F by 0 and 1 is that we can then think of boolean functions as operating on bits. This is what we will do from now on.
There are two choices for how we could do the substitution. The convention is that 0 replaces F and 1 replaces T, and this is what we shall use. Notice that conventionally we list T before F, but 0 before 1. Consequently, truth tables written in terms of 0 and 1 have their rows in the reverse order of those written in terms of T and F. This shouldn't cause any confusion, but just to hammer home the point, here are the two tables for ∧.
Various people realized that if logic could be expressed in terms of algebra, then machines could be designed to perform logical operations, but the most influential by far was Claude Shannon, who showed that all of boolean algebra could be performed using electrical switches. This is one of the fundamental ideas underlying the circuit design of all modern computers. Remarkably, he did this while still a master’s student at MIT.
At discrete time intervals either a pulse of electricity is transmitted or it is not. If at the appropriate time interval we receive a pulse of electricity, then we think of this as representing the truth-value T or, equivalently, bit value 1. If at the appropriate time interval we do not receive a pulse of electricity, then we think of this as representing the truth-value F or, equivalently, bit value 0.
The combinations of switches that correspond to our binary operators are called gates. The common gates have special diagrams associated to them. We look at some of these.
Figure 6.1 shows the symbol for the NOT gate. This can be thought of as a wire entering from the left and leaving from the right. If we input 1, we get output 0. If we input 0, we get output 1.
Figure 6.1 The NOT gate.
Figure 6.2 shows the symbol for the AND gate. Again, it is read from left to right. It has two inputs that can be either 0 or 1 and one output. Figure 6.3 shows the four cases.
Figure 6.2 The AND gate.
Figure 6.3 The four possibilities for inputs to the AND gate.
Figure 6.4 shows the symbol for the OR gate, along with the inputs and output for the four cases.
Figure 6.4 The OR gate.
Figure 6.5 shows the symbol for the NAND gate, along with the inputs and output for the four cases.
Figure 6.5 The NAND gate.
We can connect the gates together to form a circuit. Despite the name, there is nothing circular about circuits. They are linear and are read from left to right. We input our bits into the wires on the left and read the output from the wires on the right. We will look at examples that correspond to the boolean functions that we looked at earlier.
We start with the boolean expression ¬(¬P ∧ ¬Q). The corresponding circuit can be given using NOT and AND gates. This is shown in figure 6.6, where the wires entering and leaving the gates have been labeled with the appropriate expressions. Recall that P ∨ Q ≡ ¬(¬P ∧ ¬Q), so the circuit in figure 6.6 is equivalent to the OR gate.

Figure 6.6 A circuit for ¬(¬P ∧ ¬Q).
Our next example is P ↑ P. We want to enter the same value, P, into both the inputs of our NAND gate. Splitting the input signal into two by connecting an additional wire achieves this. This process of splitting a signal into multiple copies is called fan-out. Figure 6.7 shows the circuit.

Figure 6.7 A circuit for P ↑ P.

We know that ¬P ≡ P ↑ P, so the circuit in figure 6.7 is equivalent to the NOT gate.
Our final example is the binary expression (P ↑ Q) ↑ (P ↑ Q). To get the two copies of P ↑ Q, we again need to use fan-out. Figure 6.8 shows the circuit.

Figure 6.8 A circuit for (P ↑ Q) ↑ (P ↑ Q).

We know that P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q), so the circuit in figure 6.8 is equivalent to the AND gate.
Earlier we showed that the boolean function Nand was functionally complete. In this section we repeat the argument using gates.
Our argument started by showing that we could replace any occurrence of or by using the identity

P ∨ Q ≡ ¬(¬P ∧ ¬Q)
The corresponding circuit, shown in figure 6.6, shows that we need never use the OR gate.
The argument continued by showing that any boolean function could be constructed using combinations of not and and. Consequently, we can construct a circuit that computes any boolean function using just NOT and AND gates.
Then we showed that both not and and could be generated by Nand, showing that Nand by itself was functionally complete. The analogous statement is true for the NAND gate. You can implement any boolean function using a circuit that just uses NAND gates. Instead of using the term functionally complete, the standard term for gates is universal, so NAND is a universal gate. But let's look at this in a little more detail.
The circuits in figures 6.7 and 6.8 show how to get rid of NOT and AND gates, replacing them with NAND gates. But notice that we also have to use fan-out. This operation takes one bit of information and outputs two output bits that are identical to the input bit. It might seem obvious that we can do this; it just requires connecting one piece of wire to another, but we will see later that we cannot perform this operation when it comes to quantum bits.
Gates are the fundamental building blocks of the modern computer. In addition to performing logical operations we can use gates to compute. We won’t show how this can be done. (The interested reader should see the wonderful book Code by Charles Petzold, where he starts with switches and shows how to construct a computer.) But we will give an example to help illustrate how the ideas underlying addition can be implemented.
Recall the exclusive or, denoted ⊕. It's defined by:
This can be compared to adding odd and even whole numbers. We know:

even + even = even, even + odd = odd, odd + even = odd, odd + odd = even.

This addition of "oddness" and "evenness" is often called addition modulo 2. If we let 0 stand for "even" and 1 stand for "odd," addition modulo 2 is given by ⊕. This is why the symbol contains a plus sign. (It is often easier to calculate with ⊕ thinking of addition, rather than use the exclusive or.)
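On bits, Python's `^` operator is exactly this exclusive or, and a one-line check confirms that it agrees with addition modulo 2:

```python
for x in (0, 1):
    for y in (0, 1):
        assert x ^ y == (x + y) % 2  # exclusive or is addition mod 2
print("xor and addition mod 2 agree on all four input pairs")
```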
The exclusive or gate is called XOR and is denoted by the symbol shown in figure 6.9.
Figure 6.9 The XOR gate.
We will use this gate to construct what is called a half-adder. This is a circuit that adds two binary digits. To understand what is going on we will compare it to a decimal half-adder. If we have two digits that sum to less than ten, then we just add them. So, for example, 2 + 4 = 6, 3 + 5 = 8.
If the digits sum to ten or more, however, we write down the appropriate digit, but we must remember to carry one for the next step in the computation. So, for example, 7 + 5 gives a digit of 2, and we have a carry of 1.
A binary half-adder does the analogous computation. We can construct it using an XOR gate and an AND gate. The XOR gate computes the digit part, and the AND gate computes the carry.
0 + 0 = 0, with carry = 0;
0 + 1 = 1, with carry = 0;
1 + 0 = 1, with carry = 0;
1 + 1 = 0, with carry = 1.
A circuit that performs this is shown in figure 6.10. (In this picture the crossings of the wires that have dots indicate fan-out operations. The crossings without dots mean that the wires cross one another, but are not connected.)
Figure 6.10 A half-adder circuit.
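In code the half-adder is one line per output. This sketch mirrors the circuit, with `^` standing in for the XOR gate and `&` for the AND gate:

```python
def half_adder(x, y):
    # The XOR gate computes the sum digit; the AND gate computes the carry.
    return x ^ y, x & y

for x in (0, 1):
    for y in (0, 1):
        digit, carry = half_adder(x, y)
        print(f"{x} + {y} = {digit}, with carry = {carry}")
```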
The reason that this is called a half-adder, and not just an adder, is that it doesn’t take into account that we might have a carry coming in from the step before. We look at an example where we are adding standard decimal numbers. Suppose that the calculation is to add the following four-digit numbers, where the stars represent unknown digits.
To add the 6 and 5 we might get a digit of 1 and a carry of 1, but it is possible that we might have a carry of 1 from the first step of the calculation, in which case, the digit would be 2 and the carry 1. A full adder takes into account the possibility of an incoming carry from the step before.
We won’t draw the circuit for a full binary adder, but it can be done. Since all of our gates can be replaced with NAND gates, we can build an adder just using NAND gates and fan-outs. Indeed, we can build a whole computer just using these two components.
We have shown how to use gates for logic and indicated how we can use gates to do arithmetic, but to build a computer we also need to be able to store data. This can also be done using gates. It will take us too far afield to describe in detail how to do this, but the key idea is to build a flip-flop. These can be built out of gates using feedback. The outputs of the gates are fed back into inputs. An example using two NAND gates is shown in figure 6.11. We won’t describe how to implement these, but we will end by commenting that once we start using feedback it is important to get the timing of inputs and outputs exactly right. This is where the clock comes in, sending pulses of electricity at constant time intervals.
Figure 6.11 A flip-flop using two NAND gates.
Now that we have given some idea of how a computer can be built from classical gates, we begin our study of reversible gates.
Gates can be considered as boolean functions. For example, the AND gate takes two boolean inputs and gives a boolean output. Often the easiest way of representing this is through a table. (This table is exactly the same as what we have been calling a truth table.)
We can also represent the half-adder using a table. This time there are two inputs and two outputs.
In this section we will look at reversible gates. These correspond to invertible functions. Given an output, can we determine what the input was? If we can in every case, the function is invertible—the gate is reversible.
Looking at AND, if we get an output of 1, then we know that the input values must have been both 1, but if we get an output value of 0 there are three pairs of input values that have this output, and if we are not given any other information, we have no way of knowing which one of the three possibilities was actually input. Consequently, AND is not a reversible gate.
The half-adder is also not reversible. There are two pairs of input values that give a digit of 1 and a carry of 0. In both of these cases we have two bits of input, but are not getting two bits of output. We have lost some information doing the computation.
The study of reversible gates and reversible computation began by looking at the thermodynamics of computation. Shannon defined entropy for information. Entropy is also defined in thermodynamics. In fact, this is where Shannon got the idea. How closely are these two entropies related to one another? Can some of the theory of computation be expressed in terms of thermodynamics? In particular, can one talk about the minimum energy required to perform a calculation? John von Neumann conjectured that when information is lost, energy is expended—it dissipates as heat. Rolf Landauer proved the result and gave the minimum possible amount of energy needed to erase one bit of information. This amount of energy is called the Landauer limit.
If the computation is reversible, however, no information is lost and theoretically it can be performed with no energy loss.
We will look at three reversible gates: the CNOT, Toffoli, and Fredkin gates.
The controlled not gate or CNOT gate takes two inputs and gives two outputs. The first input, denoted by x, is called the control bit. This bit is not changed and becomes the first output. The second output equals the second input if the control bit is 0, but it is flipped when the control bit is 1; that is, the gate acts like the NOT gate on the second bit. This function is given by

CNOT(x, y) = (x, x ⊕ y)
Notice that this operation is invertible. Given any pair of output values, there is exactly one pair of input values that corresponds to it.
We can build a circuit that performs this operation using a fan-out and an XOR gate. This is shown in figure 6.12.
Figure 6.12 A circuit for CNOT.
This, however, is not the picture that is most commonly used. The usual picture is the simplified version shown in figure 6.13.
Figure 6.13 Usual representation of CNOT gate.
The CNOT gate is not just invertible, but it also has the nice property that it is its own inverse. This means that if you put two CNOT gates in series, where the output of the first gate becomes the input of the second gate, the output from the second gate is identical to the input to the first gate. The second gate undoes what the first gate does. To see this, we know that applying the CNOT gate once is given by

CNOT(x, y) = (x, x ⊕ y)

Using this output as the input of another CNOT gate gives

CNOT(x, x ⊕ y) = (x, x ⊕ (x ⊕ y)) = (x, (x ⊕ x) ⊕ y) = (x, 0 ⊕ y) = (x, y)

Here we have used the facts that x ⊕ x = 0 and 0 ⊕ y = y. We started with the input (x, y), and the output after going through the gate twice is (x, y), back where we started.
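A sketch of CNOT as a Python function on bits, together with a brute-force check of the self-inverse property:

```python
def cnot(x, y):
    # The control bit passes through unchanged; the second bit is
    # flipped exactly when the control bit is 1.
    return x, x ^ y

for x in (0, 1):
    for y in (0, 1):
        assert cnot(*cnot(x, y)) == (x, y)  # two applications undo each other
```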
The Toffoli gate, invented by Tommaso Toffoli, has three inputs and three outputs. The first two inputs are control bits. They flip the third bit if they are both 1, otherwise the third bit remains the same. Since this gate is like the CNOT gate, but has two control bits, it is sometimes called a CCNOT gate. The function describing what this gate does is:

T(x, y, z) = (x, y, z ⊕ (x ∧ y))
This can also be given in tabular form.
The standard diagram for this gate comes from the diagram of the CNOT gate (figure 6.14).
Figure 6.14 Toffoli gate.
We can see from the table that the Toffoli gate is invertible—each triple of output values corresponds to exactly one triple of input values. Like the CNOT gate, this gate also has the property that it is its own inverse.
We know that

T(x, y, z) = (x, y, z ⊕ (x ∧ y))

Now, using the output as the new input and applying T again gives:

T(x, y, z ⊕ (x ∧ y)) = (x, y, z ⊕ (x ∧ y) ⊕ (x ∧ y)) = (x, y, z ⊕ 0) = (x, y, z)

Here we use the facts that (x ∧ y) ⊕ (x ∧ y) = 0 and z ⊕ 0 = z.
The Toffoli gate is also universal. Recall that we can construct any boolean circuit using just NAND gates and fan-outs. To show that the Toffoli gate is universal, it is enough if we can show how to use it to compute both of these.
The NAND gate is described by x ↑ y, so we want a way of inputting x and y and getting an output of x ↑ y. Since we are using the Toffoli gate, we will be inputting three values and getting an output of three values. Now x ↑ y is logically equivalent to

¬(x ∧ y) ≡ 1 ⊕ (x ∧ y)

We can choose the third input value to always be 1, and we can ignore extra output values. We use

T(x, y, 1) = (x, y, 1 ⊕ (x ∧ y)) = (x, y, x ↑ y)

to show that we can emulate the NAND gate by inputting x and y and reading off the third entry of the output.
We can use a similar idea for fan-out. We want to input just one value x and receive two outputs that are both x. Again, the Toffoli gate has three inputs and three outputs. We can choose the two other inputs apart from x to be fixed and as long as we get xs for two of the outputs we can ignore the third. This can be done by

T(x, 1, 0) = (x, 1, 0 ⊕ (x ∧ 1)) = (x, 1, x)
Consequently, any boolean circuit can be constructed using just Toffoli gates.
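All of these Toffoli-gate claims can be checked at once by exhaustive enumeration; `toffoli` below is my own bit-level sketch of the gate:

```python
from itertools import product

def toffoli(x, y, z):
    # Flip the third bit exactly when both control bits are 1.
    return x, y, z ^ (x & y)

for x, y, z in product((0, 1), repeat=3):
    assert toffoli(*toffoli(x, y, z)) == (x, y, z)  # self-inverse

for x, y in product((0, 1), repeat=2):
    # NAND: fix the third input at 1 and read the third output.
    assert toffoli(x, y, 1)[2] == 1 - (x & y)

for x in (0, 1):
    # fan-out: inputs (x, 1, 0) give outputs (x, 1, x).
    assert toffoli(x, 1, 0) == (x, 1, x)
print("Toffoli is self-inverse and emulates NAND and fan-out")
```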
These constructions illustrate something that often arises when we use reversible gates. The number of inputs must equal the number of outputs, but often we want to compute things where the number of inputs and outputs differ. We can always do this by adding extra bits, often called ancilla bits, to the inputs, or by ignoring bits that are output. Output bits that are ignored are sometimes called garbage bits. In the example where we showed that fan-out could be done using the Toffoli gate we had T(x, 1, 0) = (x, 1, x). The 1 and 0 in the input are ancilla bits, and the 1 in the output is a garbage bit.
The Fredkin gate also has three inputs and three outputs. The first input is a control bit. If it is 0, the second and third inputs are unchanged. If the control bit is 1, it swaps the second and third inputs—the second output is the third input and the third output is the second input. It is defined by

F(0, y, z) = (0, y, z)    F(1, y, z) = (1, z, y)
Equivalently, it is given by the following table.
It is easily seen from the table that the Fredkin gate is invertible and that, like both the CNOT and Toffoli gates, it is its own inverse. The table also has the property that the number of 1s for each input is equal to the number of 1s in the corresponding output. We will make use of this fact later when we construct a Fredkin gate using billiard balls. (When constructing billiard ball gates, you want them to have the property that the number of balls entering is equal to the number of balls leaving.) Figure 6.15 shows the diagram for this gate.
Figure 6.15 The Fredkin gate.
Notice that

F(0, 0, 1) = (0, 0, 1) and F(1, 0, 1) = (1, 1, 0)

so for both possible values of x,

F(x, 0, 1) = (x, x, ¬x)

telling us that we can use the Fredkin gate for both fan-out and negation. For fan-out, we think of ¬x as a garbage bit. For negation, we think of both the xs as garbage bits.
If we put z equal to 0 we obtain:

F(0, y, 0) = (0, y, 0) and F(1, y, 0) = (1, 0, y)

We can write this more succinctly as

F(x, y, 0) = (x, ¬x ∧ y, x ∧ y)

This tells us that we can use the Fredkin gate to construct the AND gate (0 is an ancilla bit, and both x and ¬x ∧ y are garbage bits).
Since any boolean circuit can be constructed using just NOT and AND gates along with fan-out, we can construct any boolean circuit using just Fredkin gates. Like the Toffoli gate, the Fredkin gate is universal.
We defined the Fredkin gate by

F(0, y, z) = (0, y, z)    F(1, y, z) = (1, z, y)

but we will give another equivalent definition.

This gate outputs three numbers. The first number output is always equal to the first input x. The second number will be 1 if either x = 0 and y = 1 or if x = 1 and z = 1, which we can express as (¬x ∧ y) ∨ (x ∧ z). The third output will be 1 if either x = 0 and z = 1 or if x = 1 and y = 1, which we can express as (¬x ∧ z) ∨ (x ∧ y). Consequently, we can define this gate by

F(x, y, z) = (x, (¬x ∧ y) ∨ (x ∧ z), (¬x ∧ z) ∨ (x ∧ y))
This looks somewhat intimidating and seems much more complicated than just remembering that if x = 0, then both y and z are unchanged; if x = 1, then y and z get switched. However, there is one place where this complicated formula is useful, and that is in the next section, when we show how to construct this gate using billiard balls.
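The formula earns its keep when we want to check things mechanically. Here both definitions of the Fredkin gate are coded up (the function names are mine) and compared on all eight inputs, along with the self-inverse property:

```python
from itertools import product

def fredkin(x, y, z):
    # Control bit 1 swaps the second and third bits.
    return (x, z, y) if x == 1 else (x, y, z)

def fredkin_formula(x, y, z):
    # The same gate via the boolean formula, written with bit operations.
    nx = 1 - x  # not x
    return x, (nx & y) | (x & z), (nx & z) | (x & y)

for x, y, z in product((0, 1), repeat=3):
    assert fredkin(x, y, z) == fredkin_formula(x, y, z)
    assert fredkin(*fredkin(x, y, z)) == (x, y, z)  # self-inverse
print("both definitions agree, and the gate is its own inverse")
```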
We haven’t discussed how to actually build gates. They can all be built from switches and wires with electric potential or its absence representing the bits 1 and 0. Fredkin showed that they could also be built using billiard balls that bounce off one another and strategically placed mirrors. A mirror is just a solid wall that the ball bounces off. (They are called mirrors because the angle of incidence is equal to the angle of reflection.) Billiard ball gates are theoretical devices; it is assumed that all collisions are totally elastic—that no energy is lost. An example of a simple gate, called the switch gate, is shown in figure 6.16. In these pictures the solid lines represent walls; the grid lines are drawn to help keep track of the centers of the balls.
Figure 6.16 Billiard ball switch gate.
In the picture on the left a ball has just entered via Input 1. Since we haven’t entered a ball into Input 2, the ball just rolls unhindered and exits via Output 1. The picture on the right shows the analogous situation when one ball enters via Input 2 and we don’t send a ball through Input 1: It rolls unhindered out of Output 2A.
There are two other possibilities for sending balls through the two input slots. Unsurprisingly, if we don't enter any balls, then no balls exit. The final and most complicated case is when balls are sent through both inputs. The assumption is that the balls have the same size, mass, and speed, and are entered simultaneously. Figure 6.17 indicates what happens.
Figure 6.17 Two balls entering switch gate.
First the balls collide with one another, then they both bounce off the diagonal walls (or mirrors), then they collide again. Finally, they exit. One leaves via Output 1 and the other by Output 2B. (The paths of the centers of the balls are indicated by the bold arrows.)
We can denote the presence of a ball by 1 and the absence by 0, and then we can summarize what the gate does in a table.
We can construct a table with the same values using the statements x, y, ¬x ∧ y, and x ∧ y.
This enables us to depict the switch as a black box with the inputs and outputs appropriately labeled, as is depicted in figure 6.18.
Figure 6.18 Switch gate with inputs and outputs labeled.
This picture tells us where balls enter and leave the gate. If a ball enters via x, a ball must leave via x. If a ball enters via y, a ball will leave via the ¬x ∧ y exit if there is no ball entering via x and will leave via the x ∧ y exit if there is also a ball entering via x. At this point you might be slightly worried by the fact that in the case when two balls enter, the balls get switched: the ball that exits via x is the ball that entered via y, and the ball that exits from x ∧ y is the one that entered from x. But this is not a problem. We regard the balls as being indistinguishable—we just keep track of where there are balls, not where the balls originally came from.
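The switch gate's behavior, including the conservation of balls, can be summarized in a few lines of Python (the function name is mine; 1 marks the presence of a ball):

```python
def switch_gate(x, y):
    # Outputs: the x exit, the (not x and y) exit, and the (x and y) exit.
    return x, (1 - x) & y, x & y

for x in (0, 1):
    for y in (0, 1):
        outs = switch_gate(x, y)
        print(f"in: ({x}, {y})  out: {outs}")
        assert sum(outs) == x + y  # balls in equals balls out
```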
We can also reverse the gate as is depicted in figure 6.19. We have to be slightly careful interpreting this. If a ball enters via ¬x ∧ y, then there won't be a ball entering via x, and so the ball sails directly across. If a ball enters via x ∧ y, then there will be a ball entering via x, and consequently they will collide. One ball exits through the top of the gate and one exits via the output on the left. This means that a ball will exit through the left output if either ¬x ∧ y or x ∧ y is true, so this exit can be labeled (¬x ∧ y) ∨ (x ∧ y). However, (¬x ∧ y) ∨ (x ∧ y) is logically equivalent to y, which means that reversing the gate just reverses the arrows but leaves all the labels the same.
Figure 6.19 Switch gate with inputs and outputs interchanged.
We are now in a position to construct a Fredkin gate. Recall that

F(x, y, z) = (x, (¬x ∧ y) ∨ (x ∧ z), (¬x ∧ z) ∨ (x ∧ y))

We need a construction that inputs x, y, and z and outputs x, (¬x ∧ y) ∨ (x ∧ z), and (¬x ∧ z) ∨ (x ∧ y). This can be done with four switch gates and a lot of ingenuity. It is depicted in figure 6.20.
Figure 6.20 Fredkin gate constructed from switch gates.
In this picture, the right angles in the paths are obtained by bouncing off diagonally placed mirrors. The only other interactions occur in the switch gates. Paths crossing don't indicate collisions; the balls pass through the intersection points at different times. To make sure that balls don't collide where they shouldn't and do collide where they should, we can always add delays by inserting little detours into the paths using mirrors. For example, we can add a small delay by changing a straight-line path to one like that depicted in figure 6.21.
Figure 6.21 Delay added to straight-line path.
By putting mirrors in the appropriate places and adding delays, we can construct the gate so that the outputs are lined up with the inputs and when balls enter at the same time they leave at the same time. (This is depicted in figure 6.22.) We can then form circuits that contain more than one Fredkin gate.* Since the Fredkin gate is universal, it can be used to construct any boolean circuit. Consequently, any boolean circuit can be constructed using just billiard balls and mirrors.
Figure 6.22 Billiard-ball Fredkin gate to be used in circuits.
Fredkin believes that the universe is a computer. He didn’t convince Feynman of this, but the billiard ball computer did impress him. As they both realized, any slight error in the position or velocity of a ball would result in an error that would propagate and get amplified. Collisions are never perfectly elastic; there is always friction and heat is lost. The billiard ball computer is clearly just a theoretical machine, not something that can be constructed in practice. But this machine does conjure images of atoms bouncing off one another, and it led Feynman to consider gates based on quantum mechanics rather than classical mechanics. We look at this idea in the next chapter.