IT

The world is the totality of facts, not of things.

—Ludwig Wittgenstein

The subject matter of research is no longer nature in itself, but nature subjected to human questioning …

—Werner Heisenberg   



I’d like to end this book as I began it, by invoking the image of an hourglass or a tree to denote the relationship between the mind and the universe. The nexus of these two domains—the throat of the hourglass, the trunk of the tree—is an active, dynamic region where energy flows in both directions. Sense data are conveyed to the brain from the wider world, but the eye and the rest of the brain, rather than passively recording images, actively select and manipulate them. Perception is an act; as the English neuroanatomist J. Z. Young notes, we “go around actively searching for things to see and … ‘see’ mainly those things that were expected.” We act, in turn, on the outer world, projecting our concepts and theories and manipulating nature in accordance with our models of her. It seems to me that many philosophers have gone astray by assuming that the traffic at the neck of the hourglass goes in one direction only. Thus realists assert that the universe is just as we perceive it (yet the book you are holding in your hands is a black vacuum, with storms of neutrinos howling through it), while idealists say that it’s all just thought (yet a falling rock you never saw coming can strike you dead). I’d prefer to set such dogmatic assertions aside, and concentrate instead on the act of observation itself. Specifically, I’m going to outline how a philosophy of science may be constructed from observational data rather than from more derivative concepts such as space, time, matter, and energy. Such a philosophy would portray the observed universe as made not of atoms or molecules, quarks or leptons, but of discrete units (“bits”) of information.

I will describe this approach as “information theory.” The term is usually employed more narrowly, to describe a theory concerned with communications and data processing, but I’m anticipating its expansion into a wider account of nature as we behold her.

Given that our observations represent, at best, only a small and distorted part of the whole, we naturally wonder to what extent we can ascertain what “really” is out there. To this vital question physics has proffered two visions of how mind and nature interact—the classical view, ascendant in the nineteenth century, and the more recent quantum view. In practice physics employs both: Classical concepts are applied to large-scale phenomena (roughly from the level of molecules on up), while quantum mechanics rules the small-scale realm of atoms and subatomic particles.

The classical outlook rests upon three commonsensical assumptions that, like many another decree of common sense, are not altogether true. The first is that there is but a single, objective reality to each event: Some one thing happens—electrical current flows through a wire, say, deflecting a compass needle—and while each observer may witness only part of the entire phenomenon, all can agree on exactly what happened. The second classical assumption is that the act of observation does not in itself influence what is observed; the classical scientist observes nature as if from behind a sheet of plate glass, recording phenomena without necessarily interfering with them. The third assumption is that nature is a continuum, which means that objects can in principle be scrutinized to any desired degree of accuracy; observational errors and uncertainties are ascribed to the limitations of the experimental apparatus.

The classical approach fared well so long as physicists concerned themselves with big, hefty things like stones, steam engines, planets, and stars. Such objects can be observed without obviously being perturbed by the act of observation. We know today that the influences are there—when a nature photographer takes a flash photo of a wasp, for instance, the light from the flashgun buffets the wasp a bit, and adds fractionally to its mass—but as these intrusions have no discernible effect on the macroscopic scale, they usually can be ignored. And ignored they long were; classical physics can be defined as the physics of objects that are not noticeably altered by observation.

The classical view started to break down, however, once physicists began investigating subatomic phenomena like the behavior of electrons in atoms or the collisions of protons in particle accelerators. Subatomic systems are perturbed by every act of observation; to try to count the number of electrons in a cloud of gas by taking a flash photograph of them is rather like counting the number of pupils attending a lecture by blasting them out of the classroom with a fire hose. We can no more comprehend the world of the very small without taking the act of observation into account than we can investigate the destruction of a china shop without paying attention to the bull that did the damage.

Thus arose quantum mechanics, in which the information obtained from observations is seen to vary according to the way the observations are conducted, so that the answers we derive from an experiment depend on the questions we choose to ask. In the quantum world, the classical pane of glass is replaced by an elastic membrane that shudders and flexes at the touch of each observation; peering at the dancing lights and shadows of this soap-bubble interface, we cannot always be certain which phenomena are properly to be ascribed to the outer world and which were stirred up by the act of interrogation.

The erosion of the classical outlook dates from the German physicist Werner Heisenberg’s enunciation, in 1927, of the “uncertainty” principle. Heisenberg found that there is an intrinsic limitation to the amount of accurate information one can obtain about any subatomic phenomenon. This limitation arises from the fact that neither we nor anyone else in the universe can observe subatomic particles without interfering with them in one way or another. If we want to determine exactly where a neutron is, we might let it slam into a target (which will stop it in its tracks) or take a photograph of it (which means clobbering it with photons that will send it flying away on a new trajectory), or elect to use some other procedure, but in every case we will have destroyed information about what the neutron might have done had we let it alone. And this situation pertains universally in the quantum realm: To learn one thing about a subatomic phenomenon means to be ignorant of something else. The Heisenberg limitation does not depend on conventional experimental error, or the inadequacies of any particular technology; it is fundamental to every act of observation, whether conducted with sealing wax and baling wire on Earth or by gleaming machines on the most technically advanced planet in the Virgo Supercluster.

The uncertainty principle makes it clear that on the small scale, at least, the only unperturbed phenomena are the ones that go unobserved! The observed universe therefore cannot rightly be regarded as having a wholly independent, verifiable existence, since its apprehension requires the intrusion of an observer, whose actions inevitably influence the data that the observation yields. (As to the unobserved universe we are well advised to heed the counsel of the philosopher Ludwig Wittgenstein, that “whereof one cannot speak, thereof one must be silent.”)

At first blush, the realization that we cannot observe the outer world without influencing it might not seem to threaten the classical assumption that there is an objectively knowable universe out there all the same. Classical physicists could (and did) take refuge in the argument that there can still be but one true reality, even if the observer cannot directly access it, just as there must be but one correct verdict in a murder trial even though the jurors can never know all the facts about the case being tried. But the better one becomes acquainted with quantum physics, the more even the simplest physical events begin to look like Rashomon, the Akira Kurosawa film (based on stories by Ryunosuke Akutagawa) in which each witness to a crime presents a plausible but incompatible version of events. In the quantum domain, every answer is tinted the color of the question that elicited it.

The famous “dual slit” experiment illustrates how quantum physics upsets the classical assumption of objective reality. The question posed by the dual slit experiment is whether subatomic particles like protons, electrons, and photons are particles or waves. All subatomic particles behave like particles under some circumstances and like waves under others; physicists use mathematically equivalent particle and wave equations in dealing with them, depending on which is more convenient in solving a specific problem. But particles and waves have mutually exclusive properties. Waves spread out as they travel across space, and interfere with one another when they intersect. Particles, in contrast, maintain their discrete, individual identities—individual particles do not spread out—and when clouds of particles intersect they mostly fly right past one another, with a few odd collisions. The dual slit experiment forces the question: If classical physics is right, there can be but one verdict, either particle or wave.

To familiarize ourselves with the issues involved, let’s first set up the experiment using macroscopic (i.e., classical) objects. We erect a wall containing two parallel, vertical slits, place a target behind it, and fire a machine gun at the wall. After a while, the bullets that pass through the slits will have inscribed two vertical stripes on the target. If we close off one slit, we will get one vertical stripe on the target. A physicist acting as referee, shown only the targets and a diagram of the experimental apparatus, will conclude that what we fired at the target were particles. Now we submerge the wall so that the slits are half under water, send a succession of waves toward the intervening wall, and make a target out of some material that can record wave impacts (beach sand will do). When each wave passes through the slits it generates two new sets of waves on the other side of the wall, one radiating from each slit. Where these new waves strike one another they create an interference pattern—the wave pattern is reinforced wherever wave peaks overlap other wave peaks and troughs overlap troughs, and is canceled out where peaks meet troughs. As a result our sandy target will be inscribed with a series of bands. Our referee, shown this interference pattern on the target, will correctly conclude that it was made by waves.
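
To see how the wave arithmetic produces those bands, here is a minimal sketch of my own (not from the text): the two slits are treated as coherent point sources, the two waves are added, and the sum is squared to give the intensity along the target. The wavelength, slit separation, and screen distance are made-up numbers chosen only to make the banding visible.

```python
import numpy as np

# Sketch: interference of waves from two slits treated as coherent point
# sources. All lengths are in arbitrary units; the figures below are
# illustrative assumptions, not values from the text.
wavelength = 1.0
slit_separation = 5.0
screen_distance = 100.0

x = np.linspace(-40, 40, 81)          # positions along the target

# Distance from each slit to each point on the target
r1 = np.sqrt(screen_distance**2 + (x - slit_separation / 2) ** 2)
r2 = np.sqrt(screen_distance**2 + (x + slit_separation / 2) ** 2)

# Add the two waves, then square the sum to get the intensity: peaks
# reinforce peaks, while a peak meeting a trough cancels out.
k = 2 * np.pi / wavelength
intensity = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2)) ** 2

# Crude text "plot" of the banded interference pattern on the target
for xi, band in zip(x, intensity):
    print(f"{xi:6.1f} " + "#" * int(round(band * 10)))
```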

So far so good. But watch what happens when we venture into the quantum realm. We replace the machine gun with a device that emits subatomic particles—electrons, say—and use as our target a phosphorescent screen that glows when struck by electrons. (That’s how a TV tube works.) If we leave both slits open and fire a lot of electrons at the wall, we find that the target displays an interference pattern. Electrons therefore look like waves—so long as we leave both slits open. But if we close one slit, suddenly we get a single stripe on the target; now the electrons are impersonating particles.

This seems strange, but stranger still is what happens if we turn down the emitter power until it fires only a single electron at a time. Now we close one slit, and record but a single impact on the target: Fair enough, the electron is a particle. But if we leave both slits open, and fire but a single electron, we get—an interference pattern!

This is exceedingly weird. If we regard the electron as a particle we must conclude, absurdly, that it somehow manages to split in two and pass through both slits, when and only when both slits are open. If we regard the electron as a wave, then we must imagine that it somehow folds up and imitates a particle, when and only when one of the slits is closed. And the experiment can be made even more mind-boggling: Let’s wait until after the electron has been fired, then quickly open or close one of the slits while the electron is on its way. Here we enter the domain of the so-called “delayed choice” experiments, and again the results are the same: The electron behaves like a particle whenever one slit is open, and like a wave whenever both slits are open.

So long as we cling to the classical assumption that electrons are “really” either particles or waves, the dual slit experiment results in paradox. This is what makes the experiment so difficult to grasp; as the Danish physicist and philosopher of science Niels Bohr remarked, when a student complained that quantum mechanics made him feel giddy, “If anybody says he can think about quantum problems without getting giddy, that only shows he has not understood the first thing about them.”

Bohr offered a way to escape the paradox, via what today is called the “Copenhagen interpretation” of quantum physics. We concede that the electron is not “really” either a particle or a wave, but assert that it assumes one or the other costume depending upon how it has been interrogated. Quantum physics thus teaches that the identity of (small) objects depends on the act of observation—that our conceptions of the foundations of physical reality result from a dialogue between the observer and the observed, between mind and nature.

True, quantum physics is confined to the realm of the very small; only recently have physicists managed to write a quantum mechanical description of something as large as a single molecule, and molecules are a billion times smaller than human beings. But this does not mean we can ignore the implications of quantum observer-dependency for the macroscopic world. For one thing, the physics of the minuscule has always commanded particular respect in the philosophy of science, inasmuch as big things are made of small things; surely we have learned something important when we discover that apples are made of atoms. For another, quantum effects do influence the macroscopic world; the sun, for instance, wouldn’t shine were it not for quantum tunneling and quantum leaps and various other manifestations of quantum uncertainty. Everything is observer-dependent to some degree.

If, then, we accept that the questions the observer asks influence what he has the right to say about what he observes, we are led to consider that we live in a participatory universe, one where the knowable behavior of subatomic systems depends on the methods we employ to study them. Might it be possible to construct our scientific conceptions of the world on the basis of this realization?

I think so. I think, specifically, that both the quantum and the classical approach can be subsumed into the broader paradigm that I am calling information theory—or IT for short. IT accepts that our knowledge of nature always devolves from a partnership between the observer and the observed; it therefore banishes from science all questions about what things “really” are, and focuses instead on the observational data themselves, restricting models of the universe to what is in fact knowable. All else is regarded as beyond the province of science: If I thump my fist on the table and declare that electrons are particles and not waves, I’m talking philosophy, not science.

At this point the philosophically astute reader, suspecting that I am dressing old philosophies in new clothes, may object along something like these lines: “Is not the position you are staking out here simply the logical positivism of the Vienna Circle, those philosophers who dismissed as meaningless all statements that cannot be empirically verified? And are you not perhaps flirting with solipsism, denying the independent existence of the universe and making it all depend on your puny observations?”

Well, maybe so, but for the purposes of this discussion I want to set aside all the “isms,” and with them the supposition that we human beings sit on some cosmic court of appeals empowered to decide what does and does not exist. My point is simply that scientific statements about the universe, to the extent that they depend on observation, cannot be employed to make statements about what nature is like independent of the act of observation. I am not arguing for what has been called “quantum solipsism,” the assertion that nothing exists except when it is observed: I assume that there are things out there, but I reject as presumptuous any scientific attempt to declare once and for all what they are. The concept of “things” is itself derived from observational data; therefore data are more fundamental than things. What we call facts about nature are inductions from the data, and it is in this spirit that I invoke Wittgenstein’s aphorism that the observable universe is made of facts, not things.

Let me first sketch the background of IT, then describe how it might be expanded into a philosophy of science.

Information theory may be said to date from the year 1929, when the Hungarian-born physicist Leo Szilard wrote a paper identifying entropy as the absence of information. Entropy is a measure of the amount of disorder in a given system. The second law of thermodynamics declares that in any “closed” system—i.e., one to which no energy is being added—entropy will tend to increase with the passage of time. A drink with an ice cube in it is in a low-entropy state. Leave the drink alone, and the entropy increases: The ice cube melts, its water dissipates through the drink, and soon the whole system consists of but one substance, a (watery) drink.

To the thermodynamicists of the nineteenth century, the important thing about low entropy was that it meant you could get work out of a system. An ice cube can do some work. It can cool a drink, for one thing, and it can do other sorts of work as well: If Martians were to dispatch a tiny probe to Earth and land it in the drink, they could, if they were clever about it, use the thermal gradient in the drink to recharge the batteries on their space probe. A drink at room temperature in which the ice has melted, however, can do no such work. We can extract some of it, freeze it, and regain the capacity to do work, but this requires that we put some energy into the system. One always must pay to decrease entropy; that’s the second law of thermodynamics.

But to Szilard, the interesting thing was that the drink in its low-entropy state starts out with more information. It contains, for instance, several distinct realms—one cold (inside the ice cube), another relatively warm (far from the cube), and other, intermediate thermal domains. As the ice cube melts, the amount of information declines, until at maximum entropy the drink has but a single story to tell: “I’m at room temperature.” More entropy, Szilard saw, means less information.

Information has a price; there’s no such thing as a free lunch, and every time we learn something about a given system we increase its entropy. The price of information, however, is wonderfully small: To extract one bit of data costs only about 10⁻¹⁶ of a degree Kelvin of heat.* That’s a minuscule number—a penny is more than 10⁻¹⁶ of the U.S. national debt—and the fact that it is minuscule is the reason we can live in an information society today. Low entropy cost means that phone lines don’t need to carry high voltages and that communications satellites can run on modest amounts of solar power. It is because information adds so little entropy to the systems we use to transmit it that we can afford to telecast soccer matches around the world, buy books, send electronic mail by computer, and make long-distance phone calls; in each case the cost in entropy per bit of data communicated is low enough to keep the bills manageable.
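
The arithmetic behind that claim can be checked against the Boltzmann relation quoted in the footnote. A minimal sketch, in which the national-debt figure is my own assumption (roughly its early-1990s value):

```python
import math

# Entropy cost of one bit, using Boltzmann's constant from the footnote:
# one bit distinguishes W = 2 equally likely states, so S = k * ln 2.
k = 1.381e-16                          # erg per kelvin (Boltzmann's constant)
entropy_per_bit = k * math.log(2)
print(f"Entropy cost of one bit: about {entropy_per_bit:.1e} erg/K")
# roughly 9.6e-17 erg/K, i.e. on the order of 10^-16, as the text says

# The penny comparison, assuming a national debt of about $4 trillion
# (an assumption of mine; the text gives no figure):
print(f"A penny as a fraction of the debt: {0.01 / 4e12:.1e}")   # ~ 2.5e-15
```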

That is also why we can afford interstellar communication. But before getting into all that, let me outline how information theory works, and offer a few examples of how it can bring fresh perspectives to scientific research.

Information theory originally was applied to practical technological problems, such as designing computers and predicting the signal-to-noise ratio of telephone lines. In a typical IT equation one begins with a data input A, traces what happens to the data when they are manipulated or communicated in a given system B (e.g., to what extent data are lost due to noise in a communications channel), and predicts the form in which they will arrive at an output stage C. This process has properties that can be quantified mathematically. Claude Shannon of Bell Labs proved in the 1940s that the accuracy of any information channel can be improved to any desired degree, without decreasing the data rate, by properly encoding the signal, provided that the rate stays below the channel’s capacity. This discovery, known as Shannon’s second theorem, is today employed in many sorts of communications; the clarity of the photographs that the Voyager spacecraft transmitted back to Earth from the remote planet Neptune owed a lot to Shannon. But the ultimate significance of Shannon’s second theorem resides in its universality: The theorem pertains to every kind of communications channel, embracing not only telephones and computers but brain circuits and perhaps even the mechanism of biological reproduction. Information theory proffers a common ground for understanding every branch of science, insofar as each involves an input stage (data from the outer universe), a communications or data-processing system (the brain), and an output stage (a scientific theory or hypothesis, which then forms a kind of communications loop when projected back onto nature).
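
The chapter gives no equations, but the flavor of Shannon’s result can be conveyed with the standard capacity formula for a noisy, bandwidth-limited channel. The telephone-line figures below are illustrative assumptions of mine, not numbers from the text:

```python
import math

def capacity_bits_per_second(bandwidth_hz: float, signal_to_noise: float) -> float:
    """Shannon-Hartley capacity of a channel with additive Gaussian noise:
    C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_to_noise)

# Illustrative assumptions: a voice telephone line of about 3 kHz bandwidth
# with a signal-to-noise ratio of 30 decibels.
bandwidth = 3_000.0
snr = 10 ** (30 / 10)                  # 30 dB as a linear ratio, i.e. 1000

c = capacity_bits_per_second(bandwidth, snr)
print(f"Channel capacity: about {c:,.0f} bits per second")
# Shannon's theorem says that at any rate below this capacity, suitable
# encoding can make the error rate as small as we please.
```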

If information theory is to unify science, however, there must be a common language, shared by all the various sciences and applicable to every field of scientific investigation. The key to this language, I submit, is digitization, the breaking down of data into bits.

The term bit is short for “binary digit,” the kind of number employed by modern digital computers. The binary system expresses all numbers in terms of only two digits, 0 and 1. That makes it much simpler than the base-ten numbering system we learn in school, which requires ten symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9). Here is how the first five numbers of the familiar ten-based system translate into binary:

1 = 1
2 = 10
3 = 11
4 = 100
5 = 101

… and so on. As the numbers grow larger, their binary translations begin to look unwieldy to our eyes—the number 4096, for instance, expressed in binary terms is 1000000000000—but computers thrive on binary numbers, because they can be expressed by on-off switches, which are among the most exquisitely simple of all mechanical devices. A computer that employed ten-based numbers would have to have ten settings at each of its millions of circuit junctures, plus a storage system with ten possible states at every point, but a binary computer requires only that each of these millions of switches and encoding points have two states—0 = off and 1 = on. These states may be represented by the presence or absence of punch holes, as was done with the cards and paper tape employed in the 1950s, or of magnetically charged dots on a disc, as in the floppy discs and hard discs of the seventies and eighties, or of dark dots on the optical discs that promise to become the data storage standard of the nineties. Whatever the medium may be, it’s all bits—zeros and ones, off and on states—to a computer.
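
As a quick check of that arithmetic, a minimal sketch (not part of the text) lets the computer do the decimal-to-binary translation itself:

```python
# Translating familiar base-ten numbers into binary digits.
for n in [1, 2, 3, 4, 5, 4096]:
    print(f"{n:>5} -> {n:b}")

# 4096 is 2 to the 12th power, so its binary form is a 1 followed by
# twelve zeros: 1000000000000, as in the text.
```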

Anything that can be quantified can be digitized, including sound (a compact disc containing nothing but zeros and ones can reproduce Mozart and Harry Partch), pictures (bits inscribed on laser discs can replicate Hollywood movies or paintings in the Louvre), and abstractions ranging from computer models of rotating galaxies to the EKG patterns of heart attack victims. It is because binary digits act as common currency for every quantifiable phenomenon that an ordinary desktop computer can be applied to an enormous variety of tasks, from calculating bank balances and designing shoes to flying spacecraft and guiding tunnel-digging machines under the English Channel. And that is why scientists, too, whether they are engaged in sequencing frog DNA or imaging distant quasars, increasingly find that their time is spent manipulating bits of data.
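
To make digitization concrete, here is a small sketch of my own: sampling a pure tone and quantizing each sample into bits, roughly as a compact disc does. The 44,100-samples-per-second and 16-bit figures are the standard CD parameters, not numbers taken from the text.

```python
import math

SAMPLE_RATE = 44_100      # samples per second (the compact-disc standard)
BIT_DEPTH = 16            # bits per sample
FREQUENCY = 440.0         # a pure tone: the A above middle C, in hertz

levels = 2 ** BIT_DEPTH   # 65,536 possible amplitude steps

for i in range(8):        # just the first eight samples, for show
    t = i / SAMPLE_RATE
    amplitude = math.sin(2 * math.pi * FREQUENCY * t)        # a value in [-1, 1]
    sample = int(round((amplitude + 1) / 2 * (levels - 1)))  # nearest of 65,536 steps
    print(f"t = {t:.6f} s  ->  {sample:016b}")               # the sample, as 16 bits

# One second of stereo sound at this rate comes to 44,100 * 16 * 2
# = 1,411,200 bits: nothing but zeros and ones, yet enough to reproduce Mozart.
```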

Information theory is still in its infancy, and has many shortcomings. One glaring limitation is that IT cannot yet be employed to assess the significance of a piece of information. Presented with two telegrams dispatched from Tokyo on September 2, 1945—one reading, “The war is over!” and the other, “The cat is dead!”—IT declares that since the bit counts of the two telegrams are approximately equal, both contain about the same amount of information, even though the first telegram obviously would have meant more to most readers than the second (unless the second message was a code; coding is an important question in information theory, but one that I’ll not go into here). By relating information to thermodynamics, IT postulates that no system can generate more than the total amount of information put into it; there is, in other words, a law of conservation of information, comparable to the conservation of energy. Fair enough, but if we regard the brain as an information-processing system, the conservation law implies that Beethoven’s string quartets, say, contain no more information than the total of everything Beethoven had learned plus the entropy bill paid by the meals he ate and the air he breathed while composing them. This may be true in a way, but it’s not very illuminating. Leon Brillouin, a physicist whose writings did much to call attention to the significance of information theory, tried to quantify the way that human creativity seems to reduce the amount of entropy in the subjects it addresses, but his effort was probably premature and in any event it failed. Information theory is hardly alone in this, however; human thought is a dark continent to every science.
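
The telegram comparison can be made explicit. A sketch, assuming each character is stored as one 8-bit ASCII byte (the text specifies no particular encoding):

```python
# Counting the bits in the two telegrams, assuming 8-bit ASCII characters.
for message in ["The war is over!", "The cat is dead!"]:
    bits = len(message.encode("ascii")) * 8
    print(f"{message!r}: {len(message)} characters, {bits} bits")

# Both come to 16 characters and 128 bits: identical amounts of "information"
# by this measure, whatever the messages meant to their readers.
```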

Yet even in its infancy, IT can contribute to explaining the dialogue between mind and nature. Consider what it has to say about questions concerning the brain, biological systems more generally, and quantum physics.

The human nervous system can be analyzed as a data-processing system, with intriguing results. Much of the current excitement about “neural networks”—artificial-intelligence computer systems set up to model the brain—derives from the fact that neurons in the brain, like microswitches in a computer, have but two fundamental states: At any given time each either fires or does not, and so is in a state equivalent to either 1 or 0. This may provide the IT basis for the proof, published by Warren S. McCulloch and Walter Pitts in the 1940s, that the brain is a “Turing machine,” meaning that it can do anything a computer can do.
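
A minimal sketch of the two-state idea, my own illustration rather than McCulloch and Pitts’s formal construction: a threshold unit that either fires (1) or stays silent (0), and that can be wired, with hand-chosen weights, to compute elementary logic.

```python
def neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (1) if the weighted sum of the
    binary inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-chosen weights and thresholds turn the unit into logic gates.
def AND(a, b): return neuron([a, b], weights=[1, 1], threshold=2)
def OR(a, b):  return neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
# Networks of such on-off units are the kind that McCulloch and Pitts argued
# can, in principle, do anything a digital computer can do.
```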

When neurologists inform us that the 125 million photosensitive receptors in the human eye have a total potential data output of over a billion bits per second, and that this exceeds both the carrying capacity of the optic nerve and the data processing rate of the brain’s higher cortical centers, we can, using information theory alone, hypothesize that the eye must somehow reduce the data it gathers before sending them through the optic nerve to the brain. And, indeed, clinical experiments with human vision indicate that the eye does resort to various data-reduction tricks. To get a sense of just how successful these tactics can be—though this particular deception is relatively trivial—cover your left eye, look at the page number atop the left page of this book, and place a coin near the gutter between the pages. Keeping your right eye fixed on the page number and your left eye covered, move the coin right and left; you will find that there is a spot where the coin vanishes. This is the blind spot in the eye. It represents the hole (rather a large hole, actually) where the optic nerve exits through the retina. Note that where the coin disappears you perceive not a black hole but white paper. Yet there is no paper there: The coin is there. What the eye is doing, evidently, is filling in the hole with whatever color—in this case, white—surrounds the hole. (Try the experiment on a red sheet of paper, and you will “see” that the blind spot is red.) The fact that the eye must employ some such data-reduction tactics can be predicted by information theory independently of the clinical case studies.
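
The data-rate argument is simple arithmetic. In the sketch below, the receptor count and the billion-bit figure come from the text; the per-receptor rate and the optic-nerve capacity are assumptions of mine, included only to show the scale of compression implied.

```python
# Rough arithmetic behind the eye's data-reduction problem.
receptors = 125_000_000            # photosensitive receptors (from the text)
bits_per_receptor_per_sec = 10     # assumed average output per receptor

raw_rate = receptors * bits_per_receptor_per_sec
print(f"Raw retinal output: about {raw_rate:,} bits per second")   # > 1 billion

optic_nerve_capacity = 10_000_000  # assumed carrying capacity, bits per second
print(f"Implied compression: roughly {raw_rate // optic_nerve_capacity}-fold")
# Whatever the true numbers, the conclusion stands: the eye must discard or
# summarize most of what its receptors report before it reaches the brain.
```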

IT offers similar insights into memory. The brain’s short-term memory can store only about seven digits in the ten-based system; that’s why people have trouble remembering telephone numbers more than seven digits long, and resist efforts by the post office to employ postal zip codes longer than seven digits. IT postulates that data overload results not in the loss of just a few extra digits, but in a general corruption of the data in memory. And that’s what we experience: When we try to remember a long telephone number we don’t normally forget just the last few digits, but are apt to scramble much of the number. Teachers, familiar with the danger of memory overload, take care to explain basic concepts before building on them, lest their students become totally confused and “turn off.”

Biological reproduction, too, can be likened to a communications channel, one that has evolved through natural selection to maximize its data capacity and minimize error. The DNA molecule, the basis of all terrestrial life, encodes bits of information in triplets drawn from four chemical substances, the nucleotide bases. (The bases are Adenine, Guanine, Cytosine, and Thymine; their triplets correspond to the twenty primary amino acids from which proteins are built.) DNA molecules use this code to synthesize proteins by stringing together the appropriate sequence of amino acids. It turns out that the error rate in DNA replication approaches the best that information theory permits; biological evolution can be viewed as an ongoing effort to minimize the amount of “noise” in the DNA-RNA communications channel that transmits genetic data down through the generations.
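
The coding arithmetic in that paragraph can be spelled out: four bases read three at a time yield 4³ = 64 possible codons, six bits apiece, which map redundantly onto the twenty amino acids plus stop signals. A sketch:

```python
from itertools import product
import math

# The coding arithmetic: four bases, read three at a time.
bases = "AGCT"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(f"Possible codons:  {len(codons)}")                    # 4**3 = 64
print(f"Bits per codon:   {math.log2(len(codons)):.0f}")     # log2(64) = 6
print(f"A few of them:    {codons[:5]}")
# Sixty-four codons specify only twenty amino acids (plus stop signals),
# so the code is redundant: a buffer against noise in the channel.
```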

Before information theory can be incorporated into quantum physics, however, we will need to identify a binary code of some sort in the subatomic world—the equivalent, in all matter and energy, of the two-based numbering system employed by digital computers and by the brain. Quantum theory implies that this may be possible. The word quantum (from the Greek for “how much”) reflects the fact that matter and energy as we observe them are not continuous, but present themselves in discrete units, the quanta. Quantum mechanics can be construed as meaning that not only matter and energy but knowledge is quantized, in that information about any system can be reduced to a set of fundamental, irreducible units. Quanta alone, however, cannot yet provide us with the two-based numbering system we’d like to have in order to interpret the entire physical world in terms of bits, because quanta as we currently understand them have not two but many different states. An ingredient is missing.

In search of a binary code through which information theory could be universally applied to physics, the physicist John Archibald Wheeler looks to yes-or-no choices made by the observer—like the choice, in the dual slit experiment, of whether we ask the electron to represent itself as a wave or a particle. Wheeler suggests that all the concepts we apply to nature, including the concept of objects, may be built up from on-off decisions made by the scientist in the way he or she chooses to set up each experimental apparatus. He encapsulates this dynamic in the slogan “It from bit”:

Every it—every particle, every field of force, even the spacetime continuum itself—derives its function, its meaning, its very existence entirely … from the apparatus-elicited answers to yes or no questions, binary choices, bits.

Quantum physics deals with the classically engendered paradox of what things “really are” by declining to assign an identity to any phenomenon until it has been observed. As Wheeler likes to say, paraphrasing Bohr, “No phenomenon is a phenomenon until it is an observed phenomenon.”

An observation, in turn, is defined as consisting of two operations. First, we “collapse the wave function.” This means that energy (and with it some potential information) is collected, as by intercepting light from a star or X-rays from a high-energy particle collision. Second, there must be an irreversible act of amplification that records the observational data, as when the starlight darkens silver grains on a photographic plate or the X-rays trigger an electronic detector. The second part of the definition clearly is necessary; otherwise, all the starlight that has ever washed over the lifeless lava plains of the moon could be said to have been observed, which would be to generalize the concept of observation into meaninglessness. But the idea of amplification is also intriguingly open-ended: It implies that for an observation to qualify as an observation, the data must not only be recorded but also be communicated somehow.

Suppose that an automatic telescope, run by a computer at an unmanned mountaintop observatory, records the light of an exploding star—a supernova—in a distant galaxy. The wave function has been collapsed but not fully amplified, for it has not yet been communicated to an intelligent being.* (To argue otherwise we should have to say that any record of a process constitutes an observation, in which case we would be obliged to contend that every time a cosmic ray etches a path in a moon rock it has been observed, and that seems absurd.) The next morning an astronomer visits the observatory and views on a computer screen the dot etched by the exploding star. Now we have an act of observation, no?

Maybe not; here things get strange. Let’s say that the astronomer goes to the telephone to call a colleague and tell her that he has discovered a supernova—but before he can do so an avalanche buries the observatory, killing him and destroying his data. Has an observation occurred? As there is now no more information about the supernova than there was before the astronomer arrived on the mountaintop (less, in fact), the only correct answer appears to be no! As Wheeler puts it, an observer is “one who operates an observing device and participates in the making of meaning” (emphasis added). If the sole observer is dead, no meaning has been adduced. There is no observation without communication—and no observation means no phenomenon in the known universe, which according to the view I have been espousing means no phenomenon, period.

Quantum physics thus confronts us with a nest of Chinese boxes. For any given observation there is a conceptual “box” within which an observation has been made. The wave function is in there, along with the apparatus that amplified it and the intelligent being who participated in giving it meaning. But this box, in turn, is enclosed within an infinite number of larger boxes wherein the news of the event has not yet been received and interpreted. For residents of these larger boxes, the phenomenon has not (yet) occurred.*

Which leads me to a final look at the question of interstellar communications.

Imagine, sad thought, that the sun were to blow up tomorrow, destroying all the world’s knowledge, and that no creature intercepted the broadcasts weakly and inadvertently leaked into space by terrestrial radio and television transmitters prior to our planet’s demise. This scenario parallels that of the unfortunate astronomer buried in the avalanche: No act of observation! All human science, then, would in the long run have added up to nothing. Having made no lasting contribution to science on the panstellar scale, we would have bequeathed nothing to the totality of the perceived universe.

How do we avoid the pointlessness of such a dismal denouement? By contributing what we know to other, alien intelligences—either by sending information to them directly or dispatching it to be stored in an interstellar communications network. That act of amplification would ensure that our observations were not held hostage to the fate of our one species, but instead had been added to the sum of galactic and intergalactic knowledge, stretching far across space and into the future. As Wheeler writes, “How far foot and ferry have carried meaning-making communication in fifty thousand years gives faint feel for how far interstellar propagation is destined to carry it in fifty billion years.”

When speculating about interstellar communication one gets the odd feeling that there is something natural and intuitive about it—that we are meant to do it, as we are meant to write poetry, love our children, fret about the future and cherish the past. Perhaps this inchoate sense of appropriateness, sustaining as it does so many SETI researchers through their long and daunting quest to make contact with life elsewhere among the stars, derives from this: That by participating in interstellar communication we would not just be exchanging facts and opinions and art and entertainment, but would be adding to the total of cosmic understanding. If we have companions in the universe, then the cosmic tree is not rooted in earthly soil alone. Wherever there is life and thought the roots may thrive, until in their grand and growing extent they begin to match the glory of the tree’s starry crown.

Why, then, are a lonely few astronomers hunched over the consoles of the radio telescopes, forever listening, seeking, hoping? Perhaps because in some sense we suspect that the known universe is being built out there, in countless minds, and that we can help it flourish. We who came down from out of the forest seek to grow a forest of knowing among the stars.

*The relevant equation is S = k log W, in which S denotes the entropy of a given system, W the number of accessible microstates, and k Boltzmann’s constant, equal to 1.381 × 10⁻¹⁶ erg/Kelvin. This formula, one of the most wide-reaching in all science, was the work of Ludwig Boltzmann, who decreed that it be inscribed on his tombstone.

*It does not matter who makes and communicates the observational data; any sensate being can qualify, whether he or she or it is a Harvard astrophysicist or a silicon network terminal embedded in an asteroid.

*Chinese boxes show up in classical, macroscopic physics, too, owing to the fact that no information may be transmitted faster than the velocity of light. Suppose that the unstable star Eta Carinae, thousands of light years from Earth, “already” has exploded, and that astronomers on planets near Eta Carinae have photographed the explosion. Nevertheless the explosion has not yet occurred so far as we are concerned, because light from the explosion has not yet reached us. (At least it had not as of 3 hours Greenwich Mean Time on the night of September 11, 1991.)