6 
Classical Logic, Gates, and Circuits

In this chapter we briefly study classical computation, presenting the ideas in roughly chronological order. We start with boolean functions and logic, first introduced by George Boole in the mid-nineteenth century. In the 1930s, Claude Shannon studied boolean algebra and realized that boolean functions could be described using electrical switches. The electrical components that correspond to boolean functions are called logic gates. Composing boolean functions becomes the study of circuits involving these gates. We will begin by studying boolean functions in terms of logic; then we will show how to translate everything into circuits and gates. The material, up to this point, is now considered standard and is contained in every introductory computer science text. But after this we look at some ideas that are usually not part of the standard introduction.

In the 1970s, the Nobel Prize–winning physicist Richard Feynman became interested in computing and, for a few years in the early 1980s, he gave a course on computation at the California Institute of Technology. These lectures were eventually written up as Feynman Lectures on Computation. Feynman’s interest in computation was partly due to his interaction with Edward Fredkin and Fredkin’s idiosyncratic views of physics and computation. Fredkin believes that the universe is a computer, and that since the laws of physics are reversible we ought to study reversible computation and reversible gates. But even though Fredkin’s overarching thesis is not widely accepted in the physics community, he is recognized for having some brilliant and unconventional ideas. One of these is the billiard ball computer. Feynman’s book includes a discussion of reversible gates and shows how any computation can be performed by bouncing balls off one another.

We take Feynman’s approach. It turns out that reversible gates are exactly what we need for quantum computing. The billiard ball computer led Feynman to think of particles interacting instead of balls. It was the inspiration for his work on quantum computing, but we include it here mainly because of its sheer simplicity and ingenuity.

Logic

In the mid-nineteenth century George Boole realized that certain parts of logic could be treated algebraically—that there were laws of logic that could be expressed in terms of algebra. We adopt the now standard way of introducing boolean logic by using truth tables for the three basic operations not, and, and or.

Negation

If a statement is true, then its negation is false, and conversely, if a statement is false, then its negation is true. For example, the statement 2 + 2 = 4 is true, and its negation 2 + 2 ≠ 4 is false. Instead of giving concrete examples of statements, we often let the symbols P, Q, and R stand in for them. So, for example, 2 + 2 = 4 might be represented by P. The symbol ¬ stands for not; if P represents the statement 2 + 2 = 4, then ¬P stands for 2 + 2 ≠ 4. We can then summarize the basic properties of negation using our symbols: If P is true, then ¬P is false. If P is false, then ¬P is true.

To make things even more concise we can use the symbols T and F to denote true and false respectively. We can then define the properties using a table.

P ¬P
T F
F T

And

The symbol for and is ∧. If we have two statements P and Q, we can combine them to form P ∧ Q. The statement P ∧ Q is true if and only if both of the component statements P and Q are true. We define and by the following table, where the first two columns give the possibilities for the truth-values of P and Q and the third column gives us the corresponding truth-value of P ∧ Q.

P Q P ∧ Q
T T T
T F F
F T F
F F F

Or

The symbol for or is ∨ and is defined by the following table.

P Q P ∨ Q
T T T
T F T
F T T
F F F

Notice that P ∨ Q is true if both P and Q are true, so P ∨ Q is true if either one of P or Q is true and also if both are true. This is the or that is used in mathematics and is sometimes called the inclusive or. The exclusive or is defined to be true if either one, but not both, of P and Q is true. It is false if they are both false, but is also false if they are both true. The exclusive or is denoted by ⊕. Its truth table is below.

P Q P ⊕ Q
T T F
T F T
F T T
F F F

(Later we will see why the symbol for the exclusive or resembles a plus sign—it corresponds to addition modulo two.)
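These operations are easy to experiment with directly. The following short Python sketch (the function names are our own) defines the four connectives as functions on truth-values and prints the truth table for the exclusive or:

```python
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def XOR(p, q): return p != q   # true exactly when P and Q have different truth-values

# Truth table for the exclusive or
for p in (True, False):
    for q in (True, False):
        print(p, q, XOR(p, q))
```

Running the loop reproduces the table above, with True and False playing the roles of T and F.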

Boolean Algebra

We start by showing how to construct the truth table for any binary expression. For concreteness we will construct the truth table of ¬(¬P ∧ ¬Q). This is done in several steps. First we write down the table for the possibilities for P and Q.

P Q
T T
T F
F T
F F

Then we attach columns for ¬P and ¬Q, writing in the appropriate truth-values in each case.

P Q ¬P ¬Q
T T F F
T F F T
F T T F
F F T T

Next we add a column for ¬P ∧ ¬Q. This is true only in the case when both ¬P and ¬Q are true.

P Q ¬P ¬Q ¬P ∧ ¬Q
T T F F F
T F F T F
F T T F F
F F T T T

Finally, we get to the column associated with ¬(¬P ∧ ¬Q). This statement is true if and only if ¬P ∧ ¬Q is false.

P Q ¬P ¬Q ¬P ∧ ¬Q ¬(¬P ∧ ¬Q)
T T F F F T
T F F T F T
F T T F F T
F F T T T F

Omitting the intermediate columns corresponding to the intermediate steps gives the following table.

P Q ¬(¬P ∧ ¬Q)
T T T
T F T
F T T
F F F

Logical Equivalence

Notice that the truth-values in the table for ¬(¬P ∧ ¬Q) are identical to the truth-values in the table for P ∨ Q. They have exactly the same truth-values in every case. We say that the statements P ∨ Q and ¬(¬P ∧ ¬Q) are logically equivalent. We write:

P ∨ Q ≡ ¬(¬P ∧ ¬Q)

This means that we need never use ∨. Every case where or occurs can be replaced using expressions involving ¬ and ∧.

What about the exclusive or, which we write with ⊕? Can we replace this by an expression involving only the use of ¬ and ∧? We can, and we will now show how to do this.

We consider the truth table for P ⊕ Q.

P Q P ⊕ Q
T T F
T F T
F T T
F F F

We look for entries of T in the third column. The first occurs when P has value T and Q has value F. An expression that gives us a value of T only for those particular truth-values of P and Q is P ∧ ¬Q.

The next value of T in the third column occurs when P has value F and Q has value T. An expression that gives us a value of T only for those particular truth-values of P and Q is ¬P ∧ Q.

These are the only places where T occurs in the third column. To get an expression equivalent to the one that we want, we now join all the expressions we have generated so far using ∨s, so

P ⊕ Q ≡ (P ∧ ¬Q) ∨ (¬P ∧ Q).

We know

P ∨ Q ≡ ¬(¬P ∧ ¬Q).

Using this to replace ∨ gives

P ⊕ Q ≡ ¬(¬(P ∧ ¬Q) ∧ ¬(¬P ∧ Q)).

Again, this means that we need never use ⊕. Every case where ⊕ occurs can be replaced using expressions involving ¬ and ∧. The method we have just used for replacing ⊕ using ¬ and ∧ works quite generally.
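Equivalences like this can be checked exhaustively, since there are only four combinations of truth-values. A small Python check (our own sketch) confirms that the exclusive or really is captured by the expression in ¬ and ∧ above:

```python
def xor_via_not_and(p, q):
    # not( not(P and not Q) and not(not P and Q) )
    return not ((not (p and not q)) and (not (not p and q)))

# Compare against the exclusive or on all four truth-value pairs
assert all(xor_via_not_and(p, q) == (p != q)
           for p in (True, False) for q in (True, False))
print("equivalence verified")
```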

Functional Completeness

We can think of the logical operators that we have introduced as functions. For example, ∧ is a function that has two inputs, P and Q, and gives us one output; ¬ has one input and one output.

We could invent our own function that has a number of inputs that take on values of T and F and in each of the cases gives us a value of either T or F; such a function is called a boolean function. To make things more concrete, we will invent a function that has three inputs that we will label P, Q, and R. We call our function f(P, Q, R). To define our function, we have to complete the final column in the following table.

P Q R f(P, Q, R)
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F

There are eight values that need to be filled in. There are two choices for each value, giving us a total of 2⁸ = 256 possible functions. We will show that no matter how we choose our function, we can find an equivalent expression that uses only the functions ¬ and ∧.

We use exactly the same method that we used to show that

P ⊕ Q ≡ ¬(¬(P ∧ ¬Q) ∧ ¬(¬P ∧ Q)).

We begin by looking for values of T in the last column. To help make things easier to follow we will use the specific function given by the following table, but the method we use will work for any boolean function.

P Q R f(P, Q, R)
T T T F
T T F F
T F T T
T F F F
F T T F
F T F T
F F T F
F F F T

The first T occurs when P and R have values T, and Q has value F. A function that gives us a value of T for only this set of truth-values is P ∧ ¬Q ∧ R. The next T occurs when P and R have values F and Q has value T. A function that gives us a value of T for only this set of truth-values is ¬P ∧ Q ∧ ¬R. The final T occurs when P, Q, and R all have value F. A function that gives us a value of T for only this set of truth-values is ¬P ∧ ¬Q ∧ ¬R.

An expression that takes on value T in just these three cases is

(P ∧ ¬Q ∧ R) ∨ (¬P ∧ Q ∧ ¬R) ∨ (¬P ∧ ¬Q ∧ ¬R),

so

f(P, Q, R) ≡ (P ∧ ¬Q ∧ R) ∨ (¬P ∧ Q ∧ ¬R) ∨ (¬P ∧ ¬Q ∧ ¬R).

The final step is to replace ∨ using the fact that

P ∨ Q ≡ ¬(¬P ∧ ¬Q).

Replacing the first occurrence gives

¬(¬(P ∧ ¬Q ∧ R) ∧ ¬(¬P ∧ Q ∧ ¬R)) ∨ (¬P ∧ ¬Q ∧ ¬R).

Replacing the second occurrence tells us that f(P, Q, R) is logically equivalent to

¬(¬¬(¬(P ∧ ¬Q ∧ R) ∧ ¬(¬P ∧ Q ∧ ¬R)) ∧ ¬(¬P ∧ ¬Q ∧ ¬R)).

This method works in general. If f is a function that is defined by a truth table, then f is logically equivalent to some expression that involves only the functions ¬ and ∧. Since we can generate any boolean function whatsoever using just these two functions, we say that {¬, ∧} is a functionally complete set of boolean operators.
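The method is mechanical enough to automate. Here is a Python sketch (the function name and table representation are our own) that builds an equivalent function from the true rows of any truth table, exactly as in the argument above: one and-clause per true row, joined by or:

```python
from itertools import product

def sum_of_true_rows(table):
    """table: dict mapping input tuples of booleans to a boolean output.
    Returns a function agreeing with the table, built by OR-ing together
    one AND-clause per row whose output is True."""
    true_rows = [row for row, out in table.items() if out]
    def f(*inputs):
        # a clause "fires" exactly when the inputs match its row
        return any(all(i == r for i, r in zip(inputs, row))
                   for row in true_rows)
    return f

# The three-input example from the text: true on (T,F,T), (F,T,F), (F,F,F)
rows = {(True, False, True), (False, True, False), (False, False, False)}
table = {r: (r in rows) for r in product((True, False), repeat=3)}
f = sum_of_true_rows(table)
assert all(f(*r) == table[r] for r in table)
```

The final assertion checks that the constructed expression agrees with the table on all eight rows.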

It might seem surprising that we can generate any function defined by a truth table using just ¬ and ∧, but incredibly, we can do even better. There is a binary operator called Nand, and any boolean function is logically equivalent to some expression that only uses the Nand operator.

Nand

Nand is a portmanteau word formed from combining not and and. It is denoted by ↑. It can be defined by

P ↑ Q ≡ ¬(P ∧ Q),

or, equivalently, by the following truth table:

P Q P ↑ Q
T T F
T F T
F T T
F F T

We know that {¬, ∧} is a functionally complete set of operators, so to show that Nand by itself is functionally complete—that any boolean operator can be rewritten as an equivalent function that just uses Nand—we just need to show that both and and not have equivalent expressions that are written solely in terms of Nand.

Consider the following truth table, which considers just the statement P, then P ∧ P, and finally ¬(P ∧ P).

P P ∧ P ¬(P ∧ P)
T T F
F F T

Notice that the final column has the same truth-values as ¬P, telling us

¬P ≡ ¬(P ∧ P),

but ¬(P ∧ P) is just P ↑ P, so

¬P ≡ P ↑ P.

This shows that we can replace all occurrences of not with Nand. We now turn our attention to and.

Observe that

P ∧ Q ≡ ¬(¬(P ∧ Q)).

Now, ¬(P ∧ Q) ≡ P ↑ Q, so

P ∧ Q ≡ ¬(P ↑ Q).

We can now replace not using the preceding identity to obtain

P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q).
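Both identities can be verified by running through all the truth-values. A quick Python check (our own sketch):

```python
def nand(p, q):
    return not (p and q)

for p in (True, False):
    # NOT from Nand: the negation of P equals P Nand P
    assert nand(p, p) == (not p)
    for q in (True, False):
        # AND from Nand: P and Q equals (P Nand Q) Nand (P Nand Q)
        assert nand(nand(p, q), nand(p, q)) == (p and q)
print("Nand identities verified")
```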

Henry M. Sheffer, in 1913, first published the fact that Nand by itself is functionally complete. Charles Sanders Peirce also knew this fact in the late nineteenth century, but like much of his highly original work it remained unpublished until much later. (Sheffer used the symbol | for Nand. Many authors use, or have used, Sheffer’s symbol instead of ↑. It is called the Sheffer stroke.)

Boolean variables take on one of two values. We have been using T and F for these, but we can use any two symbols. In particular, we can use 0 and 1. The advantage of replacing T and F by 0 and 1 is that we can then think of boolean functions as operating on bits. This is what we will do from now on.

There are two choices for how we could do the substitution. The convention is that 0 replaces F and 1 replaces T, and this is what we shall use. Notice that conventionally we list T before F, but 0 before 1. Consequently, truth tables written in terms of 0 and 1 list their rows in the reverse of the order used for tables written in terms of T and F. This shouldn’t cause any confusion, but just to hammer home the point, here are the two tables for P ∧ Q.

P Q P ∧ Q
T T T
T F F
F T F
F F F

P Q P ∧ Q
0 0 0
0 1 0
1 0 0
1 1 1

Gates

Various people realized that if logic could be expressed in terms of algebra, then machines could be designed to perform logical operations, but the most influential by far was Claude Shannon, who showed that all of boolean algebra could be performed using electrical switches. This is one of the fundamental ideas underlying the circuit design of all modern computers. Remarkably, he did this while still a master’s student at MIT.

At discrete time intervals either a pulse of electricity is transmitted or it is not. If at the appropriate time interval we receive a pulse of electricity, then we think of this as representing the truth-value T or, equivalently, bit value 1. If at the appropriate time interval we do not receive a pulse of electricity, then we think of this as representing the truth-value F or, equivalently, bit value 0.

The combinations of switches that correspond to our binary operators are called gates. The common gates have special diagrams associated to them. We look at some of these.

The NOT Gate

Figure 6.1 shows the symbol for the NOT gate. This can be thought of as a wire entering from the left and leaving from the right. If we input 1, we get output 0. If we input 0, we get output 1.

Figure 6.1 The NOT gate.

The AND Gate

Figure 6.2 shows the symbol for the AND gate. Again, it is read from left to right. It has two inputs that can be either 0 or 1 and one output. Figure 6.3 shows the four cases.

Figure 6.2 The AND gate.

Figure 6.3 The four possibilities for inputs to the AND gate.

The OR Gate

Figure 6.4 shows the symbol for the OR gate, along with the inputs and output for the four cases.

Figure 6.4 The OR gate.

The NAND Gate

Figure 6.5 shows the symbol for the NAND gate, along with the inputs and output for the four cases.

Figure 6.5 The NAND gate.

Circuits

We can connect the gates together to form a circuit. Despite the name, there is nothing circular about circuits. They are linear and are read from left to right. We input our bits into the wires on the left and read the output from the wires on the right. We will look at examples that correspond to the boolean functions that we looked at earlier.

We start with the boolean expression ¬(¬P ∧ ¬Q). The corresponding circuit can be given using gates. This is shown in figure 6.6, where the wires entering and leaving the gates have been labeled with the appropriate expressions. Recall that P ∨ Q ≡ ¬(¬P ∧ ¬Q), so the circuit in figure 6.6 is equivalent to the OR gate.

Figure 6.6 A circuit for ¬(¬P ∧ ¬Q).

Our next example is P ↑ P. We want to enter the same value, P, into both the inputs of our NAND gate. Splitting the input signal into two by connecting an additional wire achieves this. This process of splitting a signal into multiple copies is called fan-out. Figure 6.7 shows the circuit.

Figure 6.7 A circuit for P ↑ P.

We know that ¬P ≡ P ↑ P, so the circuit in figure 6.7 is equivalent to the NOT gate.

Our final example is the binary expression (P ↑ Q) ↑ (P ↑ Q). To get the two copies of P ↑ Q, we again need to use fan-out. Figure 6.8 shows the circuit.

Figure 6.8 A circuit for (P ↑ Q) ↑ (P ↑ Q).

We know that P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q), so the circuit in figure 6.8 is equivalent to the AND gate.

NAND Is a Universal Gate

Earlier we showed that the boolean function Nand was functionally complete. In this section we repeat the argument using gates.

Our argument started by showing that we could replace any occurrence of or by using the identity

P ∨ Q ≡ ¬(¬P ∧ ¬Q).

The corresponding circuit, shown in figure 6.6, shows that we need never use the OR gate.

The argument continued by showing that any boolean function could be constructed using combinations of not and and. Consequently, we can construct a circuit that computes any boolean function using just NOT and AND gates.

Then we showed that both not and and could be generated by Nand, showing that Nand by itself was functionally complete. The analogous statement is true for the NAND gate. You can implement any boolean function using a circuit that just uses NAND gates. Instead of using the term functionally complete, the standard term for gates is universal, so NAND is a universal gate. But let’s look at this in a little more detail.

The circuits in figures 6.7 and 6.8 show how to get rid of NOT and AND gates, replacing them with NAND gates. But notice that we also have to use fan-out. This operation takes one bit of information and outputs two bits that are identical to the input bit. It might seem obvious that we can do this; it just requires connecting one piece of wire to another, but we will see later that we cannot perform this operation when it comes to quantum bits.

Gates and Computation

Gates are the fundamental building blocks of the modern computer. In addition to performing logical operations we can use gates to compute. We won’t show how this can be done. (The interested reader should see the wonderful book Code by Charles Petzold, where he starts with switches and shows how to construct a computer.) But we will give an example to help illustrate how the ideas underlying addition can be implemented.

Recall the exclusive or, denoted ⊕. It’s defined by:

P Q P ⊕ Q
0 0 0
0 1 1
1 0 1
1 1 0

This can be compared to adding odd and even whole numbers. We know:

even + even = even
even + odd = odd
odd + even = odd
odd + odd = even

This addition of “oddness” and “evenness” is often called addition modulo 2. If we let 0 stand for “even” and 1 stand for “odd,” addition modulo 2 is given by ⊕. This is why the symbol contains a plus sign. (It is often easier to calculate with ⊕ thinking of addition, rather than use the exclusive or.)
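The correspondence is immediate to check on bits. In Python, for instance, the exclusive or of bits is the ^ operator (a small sketch of our own):

```python
for x in (0, 1):
    for y in (0, 1):
        # exclusive or of bits agrees with addition modulo 2
        assert x ^ y == (x + y) % 2
print("xor is addition modulo 2")
```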

The exclusive or gate is called XOR and is denoted by the symbol shown in figure 6.9.

Figure 6.9 The XOR gate.

We will use this gate to construct what is called a half-adder. This is a circuit that adds two binary digits. To understand what is going on we will compare it to a decimal half-adder. If we have two digits that sum to less than ten, then we just add them. So, for example, 2 + 4 = 6, 3 + 5 = 8.

If the digits sum to ten or more, however, we write down the appropriate digit, but we must remember to carry one for the next step in the computation. So, for example, 7 + 5 = 2, and we have a carry of 1.

A binary half-adder does the analogous computation. We can construct it using an XOR gate and an AND gate. The XOR gate computes the digit part, and the AND gate computes the carry.

0 + 0 = 0, with carry = 0;
0 + 1 = 1, with carry = 0;
1 + 0 = 1, with carry = 0;
1 + 1 = 0, with carry = 1.

A circuit that performs this is shown in figure 6.10. (In this picture the crossings of the wires that have dots indicate fan-out operations. The crossings without dots mean that the wires cross one another, but are not connected.)

Figure 6.10 A half-adder circuit.
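In code, the half-adder is one line per gate. This Python sketch (our own naming) returns the digit and the carry:

```python
def half_adder(x, y):
    digit = x ^ y   # the XOR gate computes the digit
    carry = x & y   # the AND gate computes the carry
    return digit, carry

for x in (0, 1):
    for y in (0, 1):
        print(x, "+", y, "->", half_adder(x, y))
```

The loop prints the four cases listed above.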

The reason that this is called a half-adder, and not just an adder, is that it doesn’t take into account that we might have a carry coming in from the step before. We look at an example where we are adding standard decimal numbers. Suppose that the calculation is to add the following four-digit numbers, where the stars represent unknown digits.

**6*
+ **5*

To add the 6 and 5 we might get a digit of 1 and a carry of 1, but it is possible that we might have a carry of 1 from the first step of the calculation, in which case, the digit would be 2 and the carry 1. A full adder takes into account the possibility of an incoming carry from the step before.

We won’t draw the circuit for a full binary adder, but it can be done. Since all of our gates can be replaced with NAND gates, we can build an adder just using NAND gates and fan-outs. Indeed, we can build a whole computer just using these two components.
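Although we don’t draw the circuit, one standard construction (sketched here in Python with our own names; it is not necessarily the construction the text has in mind) chains two half-adders and ORs their carries:

```python
def full_adder(x, y, carry_in):
    d1 = x ^ y              # first half-adder adds the two digits
    c1 = x & y
    digit = d1 ^ carry_in   # second half-adder adds the incoming carry
    c2 = d1 & carry_in
    carry_out = c1 | c2     # at most one of c1, c2 can be 1
    return digit, carry_out

# 1 + 1 + 1 = 3, which is 11 in binary: digit 1, carry 1
assert full_adder(1, 1, 1) == (1, 1)
```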

Memory

We have shown how to use gates for logic and indicated how we can use gates to do arithmetic, but to build a computer we also need to be able to store data. This can also be done using gates. It will take us too far afield to describe in detail how to do this, but the key idea is to build a flip-flop. These can be built out of gates using feedback. The outputs of the gates are fed back into inputs. An example using two NAND gates is shown in figure 6.11. We won’t describe how to implement these, but we will end by commenting that once we start using feedback it is important to get the timing of inputs and outputs exactly right. This is where the clock comes in, sending pulses of electricity at constant time intervals.

Figure 6.11 A flip-flop using two NAND gates.

Reversible Computation

Now that we have given some idea of how a computer can be built from classical gates, we begin our study of reversible gates.

Gates can be considered as boolean functions. For example, the AND gate takes two boolean inputs and gives a boolean output. Often the easiest way of representing this is through a table. (This table is exactly the same as what we have been calling a truth table.)

AND
Input Output
0 0 0
0 1 0
1 0 0
1 1 1

We can also represent the half-adder using a table. This time there are two inputs and two outputs.

Half-adder
Input Output
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

In this section we will look at reversible gates. These correspond to invertible functions. Given an output, can we determine what the input was? If we can in every case, the function is invertible—the gate is reversible.

Looking at AND, if we get an output of 1, then we know that the input values must have been both 1, but if we get an output value of 0 there are three pairs of input values that have this output, and if we are not given any other information, we have no way of knowing which one of the three possibilities was actually input. Consequently, AND is not a reversible gate.

The half-adder is also not reversible. There are two pairs of input values that give a digit of 1 and a carry of 0. In both of these cases we have two bits of input, but are not getting two bits of output. We have lost some information doing the computation.

The study of reversible gates and reversible computation began by looking at the thermodynamics of computation. Shannon defined entropy for information. Entropy is also defined in thermodynamics. In fact, this is where Shannon got the idea. How closely are these two entropies related to one another? Can some of the theory of computation be expressed in terms of thermodynamics? In particular, can one talk about the minimum energy required to perform a calculation? John von Neumann conjectured that when information is lost, energy is expended—it dissipates as heat. Rolf Landauer proved the result and gave the minimum possible amount of energy needed to erase one bit of information. This amount of energy is called the Landauer limit.

If the computation is reversible, however, no information is lost and theoretically it can be performed with no energy loss.

We will look at three reversible gates: the CNOT, Toffoli, and Fredkin gates.

Controlled Not Gate

The controlled not gate or CNOT gate takes two inputs and gives two outputs. The first input, denoted by x, is called the control bit. This bit is not changed and becomes the first output. The second output equals the second input if the control bit is 0, but it is flipped when the control bit is 1. This function is given by (x, y) → (x, x ⊕ y).

Input Output
0 0 0 0
0 1 0 1
1 0 1 1
1 1 1 0

Notice that this operation is invertible. Given any pair of output values, there is exactly one pair of input values that corresponds to it.

We can build a circuit that performs this operation using a fan-out and an XOR gate. This is shown in figure 6.12.

Figure 6.12 A circuit for CNOT.

This, however, is not the picture that is most commonly used. The usual picture is the simplified version shown in figure 6.13.

Figure 6.13 Usual representation of CNOT gate.

The CNOT gate is not just invertible, but it also has the nice property that it is its own inverse. This means that if you put two CNOT gates in series, where the output of the first gate becomes the input of the second gate, the output from the second gate is identical to the input to the first gate. The second gate undoes what the first gate does. To see this, we know that applying the CNOT gate once is given by

(x, y) → (x, x ⊕ y).

Using this output as the input of another CNOT gate gives

(x, x ⊕ y) → (x, x ⊕ (x ⊕ y)) = (x, (x ⊕ x) ⊕ y) = (x, 0 ⊕ y) = (x, y).

Here we have used the facts that x ⊕ x = 0 and 0 ⊕ y = y.

We started with the input (x, y) and the output after going through the gate twice is (x, y), back where we started.
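The self-inverse property is easy to confirm by machine. A Python sketch of CNOT (our own naming):

```python
def cnot(x, y):
    # control bit x is passed through; y is flipped exactly when x is 1
    return x, x ^ y

for x in (0, 1):
    for y in (0, 1):
        # applying CNOT twice returns the original pair
        assert cnot(*cnot(x, y)) == (x, y)
print("CNOT is self-inverse")
```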

The Toffoli Gate

The Toffoli gate, invented by Tommaso Toffoli, has three inputs and three outputs. The first two inputs are control bits. They flip the third bit if they are both 1, otherwise the third bit remains the same. Since this gate is like the CNOT gate, but has two control bits, it is sometimes called a CCNOT gate. The function describing what this gate does is: (x, y, z) → (x, y, (x ∧ y) ⊕ z).

This can also be given in tabular form.

Input Output
0 0 0 0 0 0
0 0 1 0 0 1
0 1 0 0 1 0
0 1 1 0 1 1
1 0 0 1 0 0
1 0 1 1 0 1
1 1 0 1 1 1
1 1 1 1 1 0

The standard diagram for this gate comes from the diagram of the CNOT gate (figure 6.14).

Figure 6.14 Toffoli gate.

We can see from the table that the Toffoli gate is invertible—each triple of output values corresponds to exactly one triple of input values. Like the CNOT gate, this gate also has the property that it is its own inverse.

We know that T(x, y, z) = (x, y, (x ∧ y) ⊕ z). Now, using the output as the new input and applying T again gives:

T(x, y, (x ∧ y) ⊕ z) = (x, y, (x ∧ y) ⊕ ((x ∧ y) ⊕ z)) = (x, y, z).

Here we use the facts that (x ∧ y) ⊕ (x ∧ y) = 0 and 0 ⊕ z = z.
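The same exhaustive check works for the Toffoli gate. A Python sketch (our own naming):

```python
def toffoli(x, y, z):
    # flip z exactly when both control bits are 1
    return x, y, (x & y) ^ z

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            # applying the gate twice returns the original triple
            assert toffoli(*toffoli(x, y, z)) == (x, y, z)
print("Toffoli is self-inverse")
```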

The Toffoli gate is also universal. Recall that we can construct any boolean circuit using just NAND gates and fan-outs. To show that the Toffoli gate is universal, it is enough if we can show how to use it to compute both of these.

The NAND gate is described by x ↑ y ≡ ¬(x ∧ y), so we want a way of inputting x and y and getting an output of ¬(x ∧ y). Since we are using the Toffoli gate, we will be inputting three values and getting an output of three values. Now ¬(x ∧ y) is logically equivalent to (x ∧ y) ⊕ 1. We can choose the third input value to always be 1, and we can ignore extra output values. We use

T(x, y, 1) = (x, y, (x ∧ y) ⊕ 1)

to show that we can emulate the NAND gate by inputting x and y and reading off the third entry of the output.

We can use a similar idea for fan-out. We want to input just one value x and receive two outputs that are both x. Again, the Toffoli gate has three inputs and three outputs. We can choose the two other inputs apart from x to be fixed and as long as we get xs for two of the outputs we can ignore the third. This can be done by

T(x, 1, 0) = (x, 1, x).

Consequently, any boolean circuit can be constructed using just Toffoli gates.

These constructions illustrate something that often arises when we use reversible gates. The number of inputs must equal the number of outputs, but often we want to compute things where the number of inputs and outputs differ. We can always do this by adding extra bits, often called ancilla bits, to the inputs, or by ignoring bits that are output. Output bits that are ignored are sometimes called garbage bits. In the example where we showed that fan-out could be done using the Toffoli gate we had T(x, 1, 0) = (x, 1, x). The 1 and 0 in the input are ancilla bits, and the 1 in the output is a garbage bit.
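Both constructions can be verified directly. In this Python sketch (our own naming), the third output of T(x, y, 1) is the Nand of x and y, and T(x, 1, 0) copies x into the first and third outputs:

```python
def toffoli(x, y, z):
    return x, y, (x & y) ^ z

for x in (0, 1):
    # fan-out: ancilla bits 1 and 0; the middle 1 of the output is a garbage bit
    assert toffoli(x, 1, 0) == (x, 1, x)
    for y in (0, 1):
        # NAND: ancilla bit 1; read off the third output, which is not(x and y)
        assert toffoli(x, y, 1)[2] == 1 - (x & y)
print("Toffoli computes NAND and fan-out")
```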

The Fredkin Gate

The Fredkin gate also has three inputs and three outputs. The first input is a control bit. If it is 0, the second and third inputs are unchanged. If the control bit is 1, it swaps the second and third inputs—the second output is the third input and the third output is the second input. It is defined by

F(0, y, z) = (0, y, z) and F(1, y, z) = (1, z, y).

Equivalently, it is given by the following table.

Input Output
0 0 0 0 0 0
0 0 1 0 0 1
0 1 0 0 1 0
0 1 1 0 1 1
1 0 0 1 0 0
1 0 1 1 1 0
1 1 0 1 0 1
1 1 1 1 1 1

It is easily seen from the table that the Fredkin gate is invertible and that, like both the CNOT and Toffoli gates, it is its own inverse. The table also has the property that the number of 1s for each input is equal to the number of 1s in the corresponding output. We will make use of this fact later when we construct a Fredkin gate using billiard balls. (When constructing billiard ball gates, you want them to have the property that the number of balls entering is equal to the number of balls leaving.) Figure 6.15 shows the diagram for this gate.

Figure 6.15 The Fredkin gate.

Notice that F(0, 1, 0) = (0, 1, 0) and F(1, 1, 0) = (1, 0, 1), so for both possible values of x,

F(x, 1, 0) = (x, ¬x, x),

telling us that we can use the Fredkin gate for both fan-out and negation. For fan-out, we think of ¬x as a garbage bit. For negation, we think of both the xs as garbage bits.

If we put z equal to 0 we obtain:

Input Output
0 0 0 0 0 0
0 1 0 0 1 0
1 0 0 1 0 0
1 1 0 1 0 1

We can write this more succinctly as

F(x, y, 0) = (x, ¬x ∧ y, x ∧ y).

This tells us that we can use the Fredkin gate to construct the AND gate (0 is an ancilla bit, and both x and ¬x ∧ y are garbage bits).

Since any boolean circuit can be constructed using just NOT and AND gates along with fan-out, we can construct any boolean circuit using just Fredkin gates. Like the Toffoli gate, the Fredkin gate is universal.

We defined the Fredkin gate by

F(0, y, z) = (0, y, z) and F(1, y, z) = (1, z, y),

but we will give another equivalent definition.

This gate outputs three numbers. The first number output is always equal to the first input x. The second number will be 1 if either x = 0 and y = 1 or if x = 1 and z = 1, which we can express as (¬x ∧ y) ∨ (x ∧ z). The third output will be 1 if either x = 0 and z = 1 or if x = 1 and y = 1, which we can express as (¬x ∧ z) ∨ (x ∧ y). Consequently, we can define this gate by

F(x, y, z) = (x, (¬x ∧ y) ∨ (x ∧ z), (¬x ∧ z) ∨ (x ∧ y)).

This looks somewhat intimidating and seems much more complicated than just remembering that if x = 0, then both y and z are unchanged; if x = 1, then y and z get switched. However, there is one place where this complicated formula is useful, and that is in the next section, when we show how to construct this gate using billiard balls.
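That the two definitions agree can be checked exhaustively. In this Python sketch (our own naming), fredkin_swap is the swap definition and fredkin_formula is the boolean formula:

```python
def fredkin_swap(x, y, z):
    # control bit 0: pass y and z through; control bit 1: swap them
    return (x, y, z) if x == 0 else (x, z, y)

def fredkin_formula(x, y, z):
    nx = 1 - x   # not x, on bits
    return x, (nx & y) | (x & z), (nx & z) | (x & y)

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert fredkin_swap(x, y, z) == fredkin_formula(x, y, z)
print("the two definitions of the Fredkin gate agree")
```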

Billiard Ball Computing

We haven’t discussed how to actually build gates. They can all be built from switches and wires with electric potential or its absence representing the bits 1 and 0. Fredkin showed that they could also be built using billiard balls that bounce off one another and strategically placed mirrors. A mirror is just a solid wall that the ball bounces off. (They are called mirrors because the angle of incidence is equal to the angle of reflection.) Billiard ball gates are theoretical devices; it is assumed that all collisions are totally elastic—that no energy is lost. An example of a simple gate, called the switch gate, is shown in figure 6.16. In these pictures the solid lines represent walls; the grid lines are drawn to help keep track of the centers of the balls.

Figure 6.16 Billiard ball switch gate.

In the picture on the left a ball has just entered via Input 1. Since we haven’t entered a ball into Input 2, the ball just rolls unhindered and exits via Output 1. The picture on the right shows the analogous situation when one ball enters via Input 2 and we don’t send a ball through Input 1: It rolls unhindered out of Output 2A.

There are two other possibilities for sending balls through the two input slots. Unsurprisingly, if we don't enter any balls, then no balls exit. The final and most complicated case is when balls are sent through both inputs. The assumption is that the balls have the same size, mass, and speed, and that they are entered simultaneously. Figure 6.17 indicates what happens.


Figure 6.17 Two balls entering switch gate.

First the balls collide with one another, then they both bounce off the diagonal walls (or mirrors), then they collide again. Finally, they exit: one leaves via Output 1 and the other via Output 2B. (The paths of the centers of the balls are indicated by the bold arrows.)

We can denote the presence of a ball by 1 and the absence by 0, and then we can summarize what the gate does in a table.

x (Input 1)   y (Input 2)   Output 1   Output 2A   Output 2B
     0             0            0          0           0
     0             1            0          1           0
     1             0            1          0           0
     1             1            1          0           1

We can construct a table with the same values using the statements x, y, ¬x ∧ y, and x ∧ y.

x   y      x   ¬x ∧ y   x ∧ y
0   0      0      0       0
0   1      0      1       0
1   0      1      0       0
1   1      1      0       1

This enables us to depict the switch gate as a black box with the inputs and outputs appropriately labeled, as in figure 6.18.


Figure 6.18 Switch gate with inputs and outputs labeled.

This picture tells us where balls enter and leave the gate. If a ball enters via x, a ball must leave via x. If a ball enters via y, it will leave via the ¬x ∧ y exit if there is no ball entering via x and via the x ∧ y exit if there is also a ball entering via x. At this point you might be slightly worried by the fact that in the case when two balls enter, the balls get switched: the ball that exits via x is the ball that entered via y, and the ball that exits from x ∧ y is the one that entered from x. But this is not a problem. We regard the balls as being indistinguishable; we just keep track of where there are balls, not where the balls originally came from.
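The black-box view of the switch gate can be written as a small function. This is a sketch of our own (the name `switch_gate` is not from the text), checked against the truth table above:

```python
def switch_gate(x, y):
    # Outputs: x always leaves via x; a ball entering via y leaves
    # via the ¬x ∧ y exit (Output 2A) when x = 0, and via the
    # x ∧ y exit (Output 2B) when x = 1.
    return (x, (1 - x) & y, x & y)

# The four rows of the truth table.
assert switch_gate(0, 0) == (0, 0, 0)
assert switch_gate(0, 1) == (0, 1, 0)
assert switch_gate(1, 0) == (1, 0, 0)
assert switch_gate(1, 1) == (1, 0, 1)
```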

We can also reverse the gate, as depicted in figure 6.19. We have to be slightly careful interpreting this. If a ball enters via ¬x ∧ y, then there won't be a ball entering via x, and so the ball sails directly across. If a ball enters via x ∧ y, then there will be a ball entering via x, and consequently they will collide. One ball exits through the top of the gate and one exits via the output on the left. This means that a ball will exit through the left output if either ¬x ∧ y or x ∧ y, so this exit can be labeled (¬x ∧ y) ∨ (x ∧ y). However, (¬x ∧ y) ∨ (x ∧ y) is logically equivalent to y, which means that reversing the gate just reverses the arrows but leaves all the labels the same.
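The claim that the reversed gate recovers y rests on the equivalence (¬x ∧ y) ∨ (x ∧ y) ≡ y. A short sketch (our own illustration, assuming the `switch_gate` convention just described) verifies that running the gate forward and then merging the two right-hand outputs gets the original inputs back:

```python
def switch_gate(x, y):
    # Forward gate: (x, ¬x ∧ y, x ∧ y).
    return (x, (1 - x) & y, x & y)

def reverse_switch(x, out2a, out2b):
    # Merging the two outputs recovers y, since
    # (¬x ∧ y) ∨ (x ∧ y) is logically equivalent to y.
    return (x, out2a | out2b)

# The switch gate is reversible: reverse after forward is the identity.
for x in (0, 1):
    for y in (0, 1):
        assert reverse_switch(*switch_gate(x, y)) == (x, y)
```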


Figure 6.19 Switch gate with inputs and outputs interchanged.

We are now in a position to construct a Fredkin gate. Recall that

F(x, y, z) = (x, (¬x ∧ y) ∨ (x ∧ z), (¬x ∧ z) ∨ (x ∧ y)).

We need a construction that inputs x, y, and z and outputs x, (¬x ∧ y) ∨ (x ∧ z), and (¬x ∧ z) ∨ (x ∧ y). This can be done with four switch gates and a lot of ingenuity. It is depicted in figure 6.20.


Figure 6.20 Fredkin gate constructed from switch gates.

In this picture, the right angles in the paths are obtained by bouncing off diagonally placed mirrors. The only other interactions occur in the switch gates. Paths crossing don't indicate collisions; the balls pass through the intersection points at different times. To make sure that balls don't collide where they shouldn't and do collide where they should, we can always add delays by inserting little detours using mirrors. For example, we can add a little delay by changing a straight-line path to one like that depicted in figure 6.21.


Figure 6.21 Delay added to straight-line path.

By putting mirrors in the appropriate places and adding delays, we can construct the gate so that the outputs are lined up with the inputs and when balls enter at the same time they leave at the same time. (This is depicted in figure 6.22.) We can then form circuits that contain more than one Fredkin gate.* Since the Fredkin gate is universal, it can be used to construct any boolean circuit. Consequently, any boolean circuit can be constructed using just billiard balls and mirrors.


Figure 6.22 Billiard-ball Fredkin gate to be used in circuits.
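The text states that the Fredkin gate is universal without spelling out the construction. As an illustration of our own (not from the text), fixing some of the Fredkin gate's inputs to constants already yields NOT and AND, which is a standard way to see why universality holds:

```python
def fredkin(x, y, z):
    # Swap-based definition: if x = 1, y and z are exchanged.
    return (x, y, z) if x == 0 else (x, z, y)

def NOT(x):
    # With y = 0 and z = 1 the third output is ¬x ∧ 1 ∨ x ∧ 0 = ¬x.
    return fredkin(x, 0, 1)[2]

def AND(x, y):
    # With z = 0 the third output is ¬x ∧ 0 ∨ x ∧ y = x ∧ y.
    return fredkin(x, y, 0)[2]

assert [NOT(0), NOT(1)] == [1, 0]
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
```

Since NOT and AND together suffice to build any boolean function, a supply of Fredkin gates and constant inputs can compute anything.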

Fredkin believes that the universe is a computer. He didn’t convince Feynman of this, but the billiard ball computer did impress him. As they both realized, any slight error in the position or velocity of a ball would result in an error that would propagate and get amplified. Collisions are never perfectly elastic; there is always friction and heat is lost. The billiard ball computer is clearly just a theoretical machine, not something that can be constructed in practice. But this machine does conjure images of atoms bouncing off one another, and it led Feynman to consider gates based on quantum mechanics rather than classical mechanics. We look at this idea in the next chapter.