There’s an oft-told tale, apocryphal but nonetheless instructive, of Christopher Columbus dining with several men who are unimpressed by his discovery of the New World. Columbus, not surprisingly, objects to their disdain. Certainly the way to the New World is obvious now, but if finding the way had been so easy, why hadn’t someone done it before? To illustrate what he’d achieved, he asks for an egg and challenges the men at the table to balance it on its head. Each man tries and fails. Columbus then takes the egg and offers a simple solution: he sets the egg down just hard enough to break the end, allowing it to remain upright.
Today we have an expression to describe innovative thinking that transcends unnecessary but established constraints. We say that innovative problem solvers “think outside the box.” Following Kenyon and Polanyi, I had come to see that differences in internal bonding affinity, whether between nucleotide bases or between amino acids, did not and could not solve the DNA enigma. But if the internal affinities were inside the box, both proverbially and literally, perhaps the failure of models relying on such forces merely signaled the need for researchers to journey outside the box. And that’s precisely what subsequent self-organizational researchers tried to do.
External Self-Organizational Forces: Just Add Energy?
If internal bonding affinities did not explain the information in DNA and proteins, might there be some ubiquitous external force that caused the bases in DNA (or amino acids in proteins) to align themselves into information-rich sequences? Magnetic forces cause iron filings to align themselves into orderly “lines of force” around the magnet. Gravitational forces create vortices in draining bathtubs. Perhaps some pervasive self-organizing forces external to DNA and proteins could explain the origin of the information-rich biomolecules or other forms of biological organization.
In 1977 the Russian-born Belgian physicist Ilya Prigogine wrote a book with a colleague, Grégoire Nicolis, exploring this possibility. Prigogine specialized in thermodynamics, the science of energy and heat. He became interested in how energy flowing into a system could cause order to arise spontaneously. His work documenting this phenomenon won him the Nobel Prize in Chemistry in 1977, the same year he published his book with Nicolis.
In their book Self-Organization in Nonequilibrium Systems, Prigogine and Nicolis suggested that energy flowing into primitive living systems might have played a role in the origin of biological organization. They characterized living organisms as open systems that maintain their particular form of organization by utilizing large quantities of energy and matter from the environment (and by “dissipating” large quantities of energy and matter into the environment).1 An open system is one that interacts with the environment and whose behavior or structure is altered by that interaction. Prigogine demonstrated that open systems driven far from equilibrium (i.e., driven far from the normal state they would occupy in the absence of the environmental input) often display self-ordering tendencies as they receive an input of energy. For example, thermal energy flowing through a heat sink will generate distinctive convection currents or “spiral wave activity.”
In their book, Prigogine and Nicolis suggested that the organized structures observed in living systems might have similarly “self-originated” with the aid of an energy source. They conceded the improbability of simple building blocks arranging themselves into highly ordered structures under normal equilibrium conditions. Indeed, Prigogine previously had characterized the probability of living systems arising by chance alone as “vanishingly small.”2 But now he and Nicolis suggested that under nonequilibrium conditions, where an external source of energy is supplied, biochemical systems might arrange themselves into highly ordered patterns and primitive biological structures.
Order Versus Information
When I first learned about Prigogine and Nicolis’s theory and the analogies by which they justified it, it did seem plausible. But as I considered the merits of their proposal, I discovered that it had an obvious defect, one that the prominent information theorist Hubert Yockey described to me in an interview in 1986. Yockey pointed out that Prigogine and Nicolis invoked external self-organizational forces to explain the origin of order in living systems. But, as Yockey noted, what needs explaining in biological systems is not order (in the sense of a symmetrical or repeating pattern), but information, the kind of specified digital information found in software, written languages, and DNA.
Energy flowing through a system may produce highly ordered patterns. Strong winds form swirling tornados and the “eyes” of hurricanes; Prigogine’s thermal baths develop interesting convection currents; and chemical elements coalesce to form crystals. But Yockey insisted that this kind of symmetric order has little to do with the specified complexity or information in DNA, RNA, and proteins. To say otherwise conflates two distinct types of patterns or sequences.
As was my habit, I developed a visual illustration to convey this point to my college students. Actually, I borrowed the visual aid from the children of a professor friend who lived in my neighborhood. The homemade toy his children played with was meant to entertain, but I realized that it perfectly illustrated Yockey’s distinction between order and specified complexity and his critique of Prigogine’s self-organizational model.
The toy was made of two one-liter soda bottles that were sealed and fastened together at each opening by a red plastic coupling. The two bottles together made one large hourglass shape. The device also contained a turquoise liquid (probably water with food coloring) and some silver flecks that would sparkle as the liquid swirled around. Liquid from one bottle could flow into the other bottle. The children liked to hold the bottles upright until all the liquid from the top bottle flowed into the bottom one. Then they would quickly turn the whole apparatus over and give it a sudden shake by the narrow neck. Next they would watch as the blue liquid would organize into a swirling vortex in the top bottle and begin to drain into the bottom bottle.
After convincing the children to lend me their toy, I used it in class to illustrate how an infusion of energy could spontaneously induce order in a system. This was an important point to establish with my students, because some of them had heard creationist arguments about how the second law of thermodynamics dictates that order in nature always dissipates into disorder over time. Prigogine had shown that, although disorder will ultimately increase over time in a closed system such as our universe as a whole, order may arise spontaneously from disorder when energy enters smaller open systems within it. I used the big blue vortex maker to demonstrate that order can, indeed, arise from an infusion of energy (in this case, a sudden shake and flipping of the apparatus) into an open system.
Nevertheless, I also wanted my students to understand that there was a difference between order and specified complexity and why that distinction called into question the ultimate relevance of Prigogine’s ideas. To illustrate this, I would ask them to focus on the individual flecks sparkling within the swirling blue liquid. Could they see any interesting arrangements of these flecks that performed a communication function? Did the flecks spell any messages or encode any digital information? Obviously, the answer was no. Students could see a highly random arrangement of sparkling flecks. They also could see an orderly pattern in the motion of the liquid as a whole as the swirling blue water formed the familiar funnel shape of a vortex. Nevertheless, they could not detect any specified or functional information, no interesting patterns forming sequences of alphabetic or digital characters.
My students had no trouble comprehending the point of my somewhat crude illustration. Energy flowing through an open system will readily produce order. But it does not produce much specified complexity or information.
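The order-versus-complexity distinction can even be made roughly quantitative. A highly ordered sequence is generated by a short, repeated rule and so compresses dramatically; an aperiodic sequence does not. A minimal sketch in Python illustrates this, using `zlib` compression as a crude proxy for complexity (the sample sequences and the choice of proxy are my illustrative assumptions, not part of the original argument):

```python
import random
import zlib

def compression_ratio(s: str) -> float:
    """Rough proxy for complexity: the fraction of the string that
    survives compression. Repetitive (ordered) strings compress to a
    tiny fraction of their length; aperiodic strings do not."""
    raw = s.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

# Redundant order, like a crystal or a vortex: one simple rule, repeated.
ordered = "ABAB" * 250

# Mere complexity: an aperiodic but random (and functionless) sequence.
rng = random.Random(0)
complex_random = "".join(rng.choice("ACGT") for _ in range(1000))

print(f"ordered sequence: {compression_ratio(ordered):.2f}")
print(f"random sequence:  {compression_ratio(complex_random):.2f}")
# The ordered string compresses far more than the random one. Note that
# neither measure detects *specification* (function), the further property
# that distinguishes information-bearing sequences such as DNA.
```

The point of the sketch is the same as the point of the vortex maker: order and complexity are measurably different properties, and neither measure by itself captures functional specification.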
The astrophysicist Fred Hoyle had a similar way of making the same point. He famously compared the problem of getting life to arise spontaneously from its constituent parts to the problem of getting a 747 airplane to come together from a tornado swirling through a junkyard. An undifferentiated external force is simply too blunt an instrument to accomplish such a task. Energy might scatter parts around randomly. Energy might sweep parts into an orderly structure such as a vortex or funnel cloud. But energy alone will not assemble a group of parts into a highly differentiated or functionally specified system such as an airplane or cell (or into the informational sequences necessary to build one).
Kenyon’s self-organizational model had already encountered this problem. He came to realize that, although internal chemical affinities might produce highly repetitive or ordered sequences, they certainly did not produce the information-rich sequences in DNA. Now a similar problem reemerged as scientists considered whether lawlike external forces could have produced the information in DNA. Prigogine’s work showed that energy in an open system can create patterns of symmetrical order. But it provided no evidence that energy alone can encode functionally specified information-rich sequences—whether biochemical or otherwise. Self-organizational processes explain well what doesn’t need explaining in life.
It’s actually hard to imagine how such self-organizing forces could generate or explain the specificity of arrangement that characterizes information-rich living systems. In my vortex maker, an externally induced force infused energy through the system, sweeping all the constituents of the system along basically the same path. In Prigogine’s convection baths, an energy source established a pattern of motion throughout the system that affected all the molecules in a similar way, rather than arranging them individually and specifically to accomplish a function or convey a message. Yet character-by-character variability and specificity of arrangement are hallmarks of functional information-rich sequences. Thus, as Yockey notes: “Attempts to relate the idea of order…with biological organization or specificity must be regarded as a play on words that cannot stand careful scrutiny. Informational macromolecules can code genetic messages and therefore can carry information because the sequence of bases or residues is affected very little, if at all, by [self-organizing] physicochemical factors.”3
The Limits of the Algorithm
As a result of these difficulties, few, if any, scientists now maintain that Prigogine and Nicolis solved the problem of the origin of biological information. Nevertheless, some scientists continued to hope that further research would identify a specific self-organizational process capable of producing biological information. For example, biophysicist Manfred Eigen suggested in 1992 that “Our task is to find an algorithm, a natural law that leads to the origin of information.”4
This sounded good, but I began to wonder whether any lawlike process could produce information. Laws, by definition, describe events that repeatedly and predictably recur under the same conditions. One version of the law of gravity states that “all unsuspended bodies will fall.” If I lift a ball above the earth and let it go, it will fall. Every time. Repeatedly. Another law states that “water heated to 212 degrees Fahrenheit at sea level will boil and produce steam.” Apply heat to a pan of water. Watch and wait. Bubbles and steam will appear. Predictably. Laws describe highly predictable and regular conjunctions of events—repetitive patterns, redundant order. They do not describe the kind of complexity necessary to convey information.
Here’s another way to think of it. Scientific laws often describe predictable relationships between antecedent conditions and consequent events. Many scientific laws take the form, “If A occurs, then B will follow, given conditions C.” If the conditions C are present, and an event of type A occurs, then an event of type B will follow, predictably and “of necessity.” Thus, scientific laws describe patterns in which the probability of each successive event (given the previous event) approaches one, meaning the consequent must happen if the antecedents are present. Yet, as noted previously, events that occur predictably and “of necessity” do not convey information. Instead, information arises in a context of contingency. Information mounts as improbabilities multiply. Thus, to say that scientific laws generate complex informational patterns is essentially a contradiction in terms. If a process is orderly enough to be described by a law, it does not, by definition, produce events complex enough to convey information.
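This point can be put in Shannon’s own quantitative terms. The information carried by an event of probability p is log₂(1/p) bits, so an event that follows “of necessity” (p = 1) carries zero bits, while improbable events carry many. A minimal sketch (the 1-in-20 per-position probability is an illustrative assumption, not a claim about real prebiotic chemistry):

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits."""
    return math.log2(1 / p)

# A lawlike outcome: given the antecedent conditions, the consequent is
# certain, so observing it conveys nothing.
print(self_information_bits(1.0))  # 0.0 bits -- necessity carries no information

# A contingent outcome: one particular 100-residue sequence, treating each
# position as an independent 1-in-20 choice among amino acids (illustrative).
p_sequence = (1 / 20) ** 100
print(self_information_bits(p_sequence))  # hundreds of bits -- information
                                          # mounts as improbabilities multiply
```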
Of course, lawlike processes might transmit information that already exists in some other form, but such processes do not generate specified information. To see why, imagine that a group of small radio-controlled helicopters hovers in tight formation over a football stadium, the Rose Bowl in Pasadena, California. From below, the helicopters appear to be spelling a message: “Go USC.” At halftime with the field cleared, each helicopter releases either a red or gold paint ball, one of the two University of Southern California colors. The law of gravity takes over and the paint balls fall to the earth, splattering paint on the field after they hit the turf. Now on the field below, a somewhat messier but still legible message appears: “Go USC.”
Did the law of gravity, or the force described by the law, produce this information? Clearly, it did not. The information that appeared on the field already existed in the arrangement of the helicopters above the stadium—in what physicists call the “initial conditions.” Neither the force of gravity nor the law that describes it caused the information on the field to self-organize. Instead, gravitational forces merely transmitted preexisting information from the helicopter formation—the initial conditions—to the field below.
Sometimes when I’ve used these illustrations I’ve been asked: “But couldn’t we discover a very particular configuration of initial conditions that generates biological information? If we can’t hope to find a law that produces information, isn’t it still possible to find a very particular set of initial conditions that generates information in a predictable law-like way?” But this objection just restates the basic self-organizational proposal in new words. It also again begs the question of the ultimate origin of information, since “a very particular set of initial conditions” sounds precisely like an information-rich—a highly complex and specified—state. As I would later discover, however, this wasn’t the only proposal to beg the question about the ultimate origin of information. Instead, attempts to explain the origin of information by reference to some prior set of conditions invariably shifted—or displaced—the problem someplace else. My first inkling of this problem came as I reflected on perhaps the most innovative—“outside the box”—self-organizational proposal of all.
The Kauffman Model
If neither internal bonding affinities between the constituents of DNA nor ubiquitous external forces acting upon those constituents can account for the specific sequence of the DNA bases, then what was left? It didn’t seem to me that there could be much left, since “forces external” and “forces internal” to the molecule seemed to exhaust the set of possibilities. Yet, even so, I knew that another player was about to step onto the stage, Stuart Kauffman.
Kauffman is a brilliant scientist who trained as a physician at the University of California, San Francisco, and later worked as professor of biochemistry and biophysics at the University of Pennsylvania before leaving to help head up the Santa Fe Institute, a research institute dedicated to the study of complex systems. During the early 1990s, as I was beginning to examine the claims of self-organizational theories, I learned that Kauffman was planning to publish a treatise advancing a new self-organizational approach. His new book promised to bring the progress made at Santa Fe to bear on the problem of the origin of life and indeed to make significant steps toward solving that problem within a self-organizational framework. His long-anticipated book was titled The Origins of Order: Self-Organization and Selection in Evolution.
I remember the day Kauffman’s book finally arrived at the college where I taught. I quickly opened the package in my office only to discover a rather imposing seven-hundred-page behemoth of a book. After I scanned the table of contents and read the opening chapters, it became clear that much of Kauffman’s book provided a rather generalized discussion of the mathematical properties of complex systems. His specific proposal for explaining the origin of life occupied less than eighty pages of the book and was mainly confined to one chapter. My curiosity got the better of me. I decided to read this section first and return to the rest of the book later.
Kauffman had, in keeping with his reputation as an “outside the box” thinker, made a bold and innovative proposal for explaining the origin of life. Rather than invoking either external lawlike forces or internal bonding affinities to explain how biological information had self-organized, his hypothesis sought to transcend the problem altogether. He proposed a self-organizational process that could bypass the need to generate genetic information.
Kauffman attempted to leapfrog the “specificity” (or information) problem by proposing a means by which a self-reproducing metabolic system might emerge directly from a set of “low-specificity” catalytic peptides and RNA molecules in a prebiotic soup, or what he called a “chemical minestrone.”5 A metabolic system is a system of molecules that react with each other inside a living cell in order to sustain a vital function. Some metabolic systems break down molecules to release energy; others build molecules to store energy (as occurs during ATP synthesis) or information (as occurs in DNA replication or protein synthesis). In extant forms of life, these reactions usually involve or are mediated by a number of highly specific enzyme catalysts and other proteins. As such, Kauffman’s proposal represents a kind of protein-first theory similar in some respects to Kenyon’s earlier model.
Nevertheless, Kauffman suggests, unlike Kenyon, that the first metabolic system might have arisen directly from a group of low-specificity polypeptides. He proposes that once a sufficiently diverse set of catalytic molecules had assembled (in which the different peptides performed enough different catalytic functions, albeit inefficiently), the ensemble of individual molecules spontaneously underwent a kind of phase transition (akin to crystallization) resulting in a self-reproducing metabolic system. Kauffman envisions, as the historian of biology Iris Fry puts it, “a set of catalytic polymers in which no single molecule reproduces itself but the system as a whole does.”6 In this way, Kauffman argues that metabolism (and the proteins necessary to it) could have arisen directly without genetic information encoded in DNA.7
Kauffman’s model was clearly innovative and arguably more sophisticated than many previous self-organizational models. Unlike its predecessors, Kauffman’s model claims that an ensemble of relatively short and “low-specificity” catalytic peptides and RNA molecules would together be enough to establish a metabolic system. He defends the biochemical plausibility of his scenario on the grounds that some proteins can perform enzymatic functions with low specificity and complexity. To support his claim, he cites a class of proteins known as proteases (including one in particular, called trypsin) that cleave peptide bonds at single amino-acid sites.8 But is he right?
As I thought more about Kauffman’s proposal and researched the properties of proteases, I became convinced that his proposal did not solve or successfully bypass the problem of the origin of biological information. Kauffman himself acknowledges that, as yet, there is no experimental evidence showing that such autocatalysis could occur. But, beyond that, I realized that Kauffman had either presupposed the existence of unexplained sequence specificity or transferred the need for specified information out of view. In fact, Kauffman’s model has at least three significant information-related problems.
First, it does not follow, nor is it the case biochemically, that just because some enzymes might function with low specificity, all the catalytic peptides (or enzymes) needed to establish a self-reproducing metabolic cycle could function with similarly low levels of specificity and complexity. Instead, modern biochemistry shows that most, and usually all, of the molecules in a closed, interdependent metabolic system of the type Kauffman envisions require high-complexity, high-specificity proteins. Enzymatic catalysis (which his scenario would surely require) needs molecules long enough to form tertiary structures. Tertiary structures are the three-dimensional shapes of proteins, which provide the spatial positioning of critical amino-acid residues needed to perform particular or specialized functions.9 How long a protein chain needs to be to form one of those shapes depends on its complexity. For the very simplest shapes, something like forty or fifty amino acids are needed. More complicated shapes may require several hundred amino acids. Further, these long polymers require very specific three-dimensional geometries (which in turn derive from sequence-specific arrangements of monomers) in order to catalyze necessary reactions. How do these molecules acquire their specificity of sequencing? Kauffman does not address this question, because his model incorrectly suggests that he does not need to do so.
Second, I discovered that even the allegedly low-specificity molecules (the proteases) that Kauffman cites to illustrate the plausibility of his scenario are actually very complex and highly specific in their sequencing. I also discovered that Kauffman confuses the specificity and complexity of the parts of the polypeptides upon which the proteases act with the specificity and complexity of the proteins (the proteases) that do the enzymatic acting. Though trypsin, for example, acts upon—cleaves—peptide bonds at a relatively simple target (the carboxyl end of two separate amino acids, arginine and lysine), trypsin itself is a highly complex and specifically sequenced molecule. Indeed, trypsin is a non-repeating 247-amino-acid protein that possesses significant sequence specificity as a condition of function.10
Further, trypsin has to manifest significant three-dimensional (geometric) specificity in order to recognize the specific amino acids arginine and lysine, at which sites it cleaves peptide bonds. By equivocating in his discussion of specificity, Kauffman obscures from view the considerable specificity and complexity requirements of the proteases he cites to justify his claim that low-specificity catalytic peptides will suffice to establish a metabolic cycle. Thus, Kauffman’s own illustration, properly understood (i.e., without equivocating about the relevant locus of specificity), shows that for his scenario to have biochemical plausibility, it must presuppose the existence of many high-complexity, high-specificity polypeptides and polynucleotides. Where does the information in these molecules come from? Kauffman again does not say.
Third, Kauffman acknowledges that for autocatalysis to occur, the molecules in the chemical minestrone must be held in a very specific spatial-temporal relationship to one another.11 In other words, for the direct autocatalysis of integrated metabolic complexity to occur, a system of catalytic peptide molecules must first achieve a very specific molecular configuration (or what chemists call a “low-configurational entropy state”).12 This requirement is equivalent to saying that the system must start with a large amount of specified information or specified complexity. In Shannon’s theory, information is conveyed every time one possibility is actualized and others are excluded. By admitting that the autocatalysis of a metabolic system requires a specific arrangement of polypeptides (only one or a few of the possible molecular arrangements, rather than any one of the many possibilities), Kauffman tacitly concedes that such an arrangement has a high information content. Thus, to explain the origin of specified biological complexity at the systems level, Kauffman has to presuppose a highly specific arrangement of those molecules at the molecular level as well as the existence of many highly specific and complex protein and RNA molecules (see above). In short, Kauffman merely transfers the information problem from the molecules into the soup.
In addition to these problems, Kauffman’s model encounters some of the same problems that Kenyon’s protein-first model and other metabolism-first models encounter. It does not explain (a) how the proteins in various metabolic pathways came into association with DNA and RNA or any other molecular replicator or (b) how the information in the metabolic system of proteins was transferred from the proteins into the DNA or RNA. And it gives no account of (c) how the sequence specificity of functional polypeptides arose (given that the bonding affinities that exist among amino acids don’t correlate to actual amino-acid sequences in known proteins).
Robert Shapiro, a leading chemist at New York University, has recently proposed that origin-of-life researchers begin to investigate metabolism-first models of the kind that Kauffman proposed. Shapiro argues that these models have several advantages that other popular origin-of-life scenarios (particularly RNA-first models, see Chapter 14) don’t.13 Though Shapiro favors these metabolism-first approaches, he acknowledges that researchers have not yet identified what he calls a “driver reaction” that can convert small molecules into products that increase or “mobilize” the organization of the system as a whole. He also notes that researchers on metabolism-first models “have not yet demonstrated the operation of a complete [metabolic] cycle or its ability to sustain itself and undergo further evolution.”14 In short, these approaches remain speculative and do not yet offer a way to solve the fundamental problem of the origin of biologically relevant organization (or information).
In any case, I concluded that Kauffman’s self-organization model—to the extent it had relevance to the behavior of actual molecules—presupposes or transfers, rather than explains, the ultimate origin of the specified information necessary to the origin of a self-reproducing metabolic cycle. I wasn’t the only one to find Kauffman’s self-organizational model insufficient. Other scientists and origin-of-life researchers made similar criticisms.15 Though many origin-of-life researchers have expressed their admiration for Kauffman’s innovative new approach, few, if any, think that his model actually solves the problem of the origin of information or the origin of life. Perhaps, for this reason, after 1993 Kauffman proposed some new self-organizational models for the origin of biological organization. His subsequent proposals lacked the biological specificity of his bold, if ill-fated, original proposal. Nevertheless, Kauffman’s later models did illustrate just how difficult it is to explain the origin of information without presupposing other preexisting sources of information.
Buttons and Strings
In 1995 Kauffman published another book, At Home in the Universe, in which he attempted to illustrate how self-organizational processes might work using various mechanical or electrical systems, some of which could be simulated in a computer environment.16 In one, he conceives a system of buttons connected by strings. The buttons represent novel genes or gene products, and the strings represent the lawlike forces of interaction between the gene products, namely, the proteins. Kauffman suggests that when the complexity of the system (as represented by the number of buttons and strings) reaches a critical threshold, new modes of organization can arise in the system “for free”—that is, without intelligent guidance—after the manner of a phase transition in chemistry, such as water turning to ice or the emergence of superconductivity in some metals when cooled below a certain temperature.
Another Kauffman model involves a system of interconnected lights. Each light can flash in a variety of states—on, off, twinkling, and so forth. Since there is more than one possible state for each light and many lights, there are many possible states the system can adopt. Further, in his system, rules determine how past states will influence future states. Kauffman asserts that, as a result of these rules, the system will, if properly tuned, eventually produce a kind of order in which a few basic patterns of light activity recur with greater than random frequency. Since these patterns represent a small portion of the total number of possible states for the system, Kauffman suggests that self-organizational laws might similarly produce a set of highly improbable biological outcomes within a much larger space of possibilities.
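Kauffman’s system of interconnected lights is what is now called a random Boolean network: N on/off nodes, each updated by a fixed rule over K inputs, with a bias parameter controlling how often rules output “on.” A minimal sketch of such a network follows; the particular values of N, K, and the bias are my illustrative choices, not Kauffman’s:

```python
import random

def run_boolean_network(n=10, k=2, bias=0.5, seed=1):
    """Simulate a Kauffman-style random Boolean network until its state
    repeats, and return the length of the attractor cycle it settles into."""
    rng = random.Random(seed)
    # Wiring: each node reads k randomly chosen nodes.
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    # Rules: one lookup table per node; `bias` is the probability an entry
    # is 1 -- the "tuning" parameter that shifts the network between regimes.
    tables = [[1 if rng.random() < bias else 0 for _ in range(2 ** k)]
              for _ in range(n)]
    state = tuple(rng.randrange(2) for _ in range(n))
    seen = {}
    step = 0
    while state not in seen:       # only 2**n distinct states exist, so a
        seen[state] = step         # repeat is guaranteed by the pigeonhole
        state = tuple(             # principle
            tables[i][sum(bit << b for b, bit in
                          enumerate(state[j] for j in inputs[i]))]
            for i in range(n)
        )
        step += 1
    return step - seen[state]      # cycle length of the recurring pattern

# The deterministic dynamics must revisit a state and then cycle forever:
# a few recurring patterns out of 2**10 = 1024 possible states.
print(run_boolean_network())
```

The sketch makes Kauffman’s claim concrete: the system inevitably falls into a small set of recurring states, and the character of those attractors depends on how the `bias` parameter is tuned.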
But do these simulations accurately model the origin of biological information? It’s hard to think so. Kauffman’s model systems are not constrained by functional considerations and, thus, are not analogous to biological systems. A system of interconnected lights governed by preprogrammed rules may well settle into a small number of patterns within a much larger space of possibilities. But since these patterns need not meet any functional requirements, they fail at a fundamental level to model biological organisms. A system of lights flashing “Warning: Mudslide Ahead” would model a biologically relevant self-organizational process, at least if such a message arose without agents previously programming the system with equivalent amounts of functional information. Kauffman’s arrangements of flashing lights are not of this sort. They serve no function, and certainly no function comparable to the information-rich molecules found in the biological realm.
Kauffman’s model systems differ from biological information in another striking way. The series of information-bearing symbols we find in the protein-coding regions of DNA, in sophisticated software programs, or in the sentences on this page are aperiodic. The sequences of characters do not repeat in a rigid or monotonous way. Kauffman’s model, in contrast, is characterized by large amounts of symmetrical order or internal redundancy interspersed with aperiodic sequences (mere complexity) lacking function.17 Getting a law-governed system to generate repetitive patterns of flashing lights, even with a certain amount of variation, is clearly interesting, but not biologically relevant. Since Kauffman’s models do not produce functional structures marked by specified aperiodic symbol sequences such as we find in DNA, they do not serve as promising models for explaining the origin of biological information.
But there is another fundamental problem with Kauffman’s model systems, one he tacitly acknowledges. To the extent that Kauffman’s systems do succeed in producing interesting nonrandom patterns, they do so only because of an unexplained intervention of information. For example, Kauffman notes that if his system of flashing lights is properly “tuned,”18 then it will shift from a chaotic regime to an orderly regime that produces the outcomes or patterns he regards as analogous to processes or structures within living systems. By “tuning,” Kauffman means the careful setting of a particular “bias parameter” to make his system shift from a chaotic regime into one in which order is produced. In other words, the tuning of this parameter ensures that certain kinds of outcomes are actualized and others precluded. Such an act constitutes nothing less than an infusion of information. When someone tunes a radio dial or a musical instrument, he or she selects a certain frequency and excludes many others. Yet that is precisely how Shannon characterized information: the selection of one option and the exclusion of others.
In his system of flashing lights, Kauffman briefly mentions that two of his collaborators—physicists Bernard Derrida and Gerard Weisbuch—were responsible for the “tuning” that produced the patterns he thinks analogous to order in living systems. Nevertheless, he does not think that any such agency played a role in the origin of life. Thus, even assuming for the sake of argument that his system of flashing lights manifests features analogous to those in living systems, his model still falls prey to what Dembski calls the “displacement problem”—the problem of explaining the origin of one (putatively) information-rich system only by introducing another unexplained source of information.
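Kauffman’s “flashing lights” are random Boolean networks, and the “bias parameter” Derrida and Weisbuch tuned is, roughly, the probability that a node’s randomly assigned rule outputs 1. A minimal sketch (all names and parameter values here are mine, not Kauffman’s) shows what the tuning does: it flips the network between a chaotic regime, where a one-node perturbation spreads, and an ordered regime, where it dies out:

```python
import random

def make_network(n, k, bias, rng):
    # Each node reads k randomly chosen nodes and applies a random
    # truth table whose entries are 1 with probability `bias`.
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[1 if rng.random() < bias else 0 for _ in range(2 ** k)]
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]  # pack input bits into an index
        new.append(tables[node][idx])
    return new

def damage_spread(n=200, k=4, bias=0.5, steps=50, seed=0):
    """Fraction of nodes that differ after `steps` updates of two
    copies of the network that start one bit apart."""
    rng = random.Random(seed)
    inputs, tables = make_network(n, k, bias, rng)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = list(a)
    b[0] ^= 1  # flip a single node: the initial perturbation
    for _ in range(steps):
        a = step(a, inputs, tables)
        b = step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b)) / n
```

With an unbiased rule table (`bias=0.5`) and four inputs per node, the perturbation typically spreads through the network (chaos); pushing the bias toward 0.9 typically freezes it out (order). The point at issue in the text is that someone must choose that setting: the regime the network ends up in is fixed by the experimenter’s selection of the parameter, not by the network itself.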
Conclusion: The Displacement Problem
As I examined Kauffman’s model, it occurred to me that I was beginning to see a pattern. Self-organizational models for the origin of biological organization were becoming increasingly abstract and disconnected from biological reality. Model systems such as Prigogine’s or Kauffman’s did not even claim to identify actual chemical processes that led in a life-friendly direction. Instead, these models claimed to describe processes that produced phenomena with some limited similarity to the organization found in living systems. Yet upon closer inspection these allegedly analogous phenomena actually lacked important similarities to life, in particular, the presence of specified complexity, or information.
But beyond that, I realized that self-organizational models either failed to solve the problem of the origin of specified information, or they “solved” the problem only at the expense of introducing other unexplained sources of information. Kauffman’s models provided perhaps the best illustration of this latter “displacement problem.” In addition, the earlier models such as Kenyon’s or Prigogine’s, relying as they did on processes that produced order rather than complexity, each fell prey to both empirical and conceptual difficulties. They not only failed to explain the origin of information; they failed in a highly instructive way—one that helped to clarify our understanding of the nature of information and why it stands conceptually distinct from redundant order and the lawful processes that produce such order.
Thus, despite the cachet associated with self-organizational theories as the “new wave” of thinking in evolutionary biology, I came to reject them as complete nonstarters, as theories that were unlikely ever to succeed regardless of the outcome of future empirical studies. In my view, these models either begged the question or invoked a logical contradiction. Proposals that merely transfer the information problem elsewhere necessarily fail because they assume the existence of the very entity—specified information—they are trying to explain. And new laws will never explain the origin of information, because the processes that laws describe necessarily lack the complexity that informative sequences require. To say otherwise betrays confusion about the nature of scientific laws, the nature of information, or both.
As I reflected on the failure of these models, my interest in the design hypothesis increased. But the reason for this was not just that self-organizational scenarios had failed. Instead, it was that self-organizational theories failed in a way that exposed the need for an intelligent cause to explain the relevant phenomena. Remember my magnetic chalkboard demonstration? I had used that demonstration to show that chemical forces of attraction don’t explain the specific arrangements of bases in DNA any more than magnetic forces of attraction explain the arrangement of letters on my letter board. But what did explain the arrangement of magnetic letters on that board? Obviously, an intelligent agent. Might the arrangement of bases in DNA have required such a cause as well?
Recall Kauffman’s model systems. In each case, they explained the origin of information by reference to an unexplained source of information. These scenarios too lacked just what the design hypothesis provided: a cause known to be capable of generating information in the first place. In one case, Kauffman presupposed that his system would work only once it had been “tuned.” But tuned how? Was intelligence necessary to do what self-organizational processes alone could not?
A similar thought had occurred to me earlier when reflecting on chance elimination. Many events that we would not credit to chance—in particular, highly improbable events that matched independent patterns—were actually best explained by intelligent design. The improbable match between the two college papers that led my colleagues and me to exclude the chance hypothesis also led us to conclude plagiarism, a kind of design.
Even Christian de Duve, in explaining why the origin of life could not have occurred by chance, acknowledged (if inadvertently) that the kind of events that lead us to reject chance also suggest design. Recall that de Duve pointed out that a “string of improbable events” such as someone winning the lottery twice in a row (a kind of pattern match) “do not happen naturally.”19 Of course, de Duve went on to state that the failure of the chance hypothesis implied, for him, that life must be “an obligatory manifestation of matter”—one that had self-organized when the correct conditions arose.20
Having examined the leading self-organizational theories in detail, I now doubted this. I also later learned from de Duve himself that he felt compelled to elect a self-organizational model, because he was unwilling to consider design as an alternative to chance. He thought invoking design violated the rules of science. As he explains: “Cells are so obviously programmed to develop according to certain lines…that the word design almost unavoidably comes to mind,…[but] life is increasingly explained strictly in terms of the laws of physics and chemistry. Its origin must be accounted for in similar terms.”21 As I explain later (see Chapters 18 and 19), I saw no reason to accept this prohibition against considering the design hypothesis. To me, it seemed like an unnecessary restriction on rational thought. So the failure of chance and self-organizational models—as well as the way these models failed—only made me more open to intelligent design as a possible hypothesis.
Indeed, the design hypothesis now seemed more plausible to me than when I first encountered it and certainly more plausible than it had before I had investigated the two most prominent alternative categories of explanation, chance and necessity. But I knew that there was another category of explanation that I needed to investigate more fully, one that combined chance and necessity. Some of the most creative proposals for explaining the origin of biological information relied on the interplay of lawlike processes of necessity with the randomizing effects of “chance” variations. So I needed to examine this class of theories as well. The next two chapters describe what I discovered about them.