10
The Persistence of Memory
Memory as Existence
In a 2005 speech, the CEO of Google, Eric Schmidt, for the first time offered an estimate of the size of the Internet in total bytes of memory.1
He put the total at 5 million trillion bytes—or, at 8 bits to the byte, about 40,000,000,000,000,000,000 (40 quintillion) bits. All of this memory, representing a sizable portion of all human knowledge and memory, was stored virtually on more than 150 million websites, and physically on an estimated 75 million servers located around the world. None of these numbers was precise, other researchers noted, and some might be off in either direction by a factor of five.
Of this total, Schmidt estimated that Google, by far the world’s leading search-engine company, had managed after seven years to index 200 trillion bytes (200 terabytes), or just 0.004 percent of the Net. Most of the rest, he admitted, was essentially terra incognita—a vast region of unexplored data that might never be fully known. At the current rate, Google would need 300 years to index the entire Internet—and that, Schmidt added, assumed the impossible: that the Internet wouldn’t grow by a single byte in those three centuries. In fact, the Net probably grew by several trillion bytes just during the course of Schmidt’s speech.
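For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python, using only the two estimates quoted above (5 million trillion bytes for the Net, 200 trillion bytes indexed); nothing else is assumed:

```python
# Back-of-envelope check of Schmidt's 2005 estimates, as quoted above.
internet_bytes = 5e18       # "5 million trillion bytes" = 5 exabytes
internet_bits = internet_bytes * 8
indexed_bytes = 200e12      # 200 trillion bytes (200 terabytes) indexed by Google

print(f"Internet in bits: {internet_bits:.0e}")                  # ~4e+19 (40 quintillion)
print(f"Fraction indexed: {indexed_bytes / internet_bytes:.3%}")  # ~0.004%
```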
By 2010, these unimaginable contents of the Net were accessed by just short of 1 billion personal computers, nearly 700 million smartphones (a 2011 total), and several hundred million other devices large and small, in the hands of an estimated 2 billion users worldwide. Some of these users, mostly from the developed world, arrived in cyberspace using powerful computers and handheld devices, linked via wireless networks or broadband cable and carrying hundreds of gigabytes of memory of their own.
Others, newer to the Web and often from developing nations, had reached the Internet any way they could: dial-up modems, cell phones rented from corner stands, desktop computers stationed in classrooms and local libraries, Internet cafés of the kind long gone from the West. But they’d made it at last, and whether they were selling goods on eBay or following bloggers covering events their censored national media wouldn’t touch or taking online classes at distant universities they would never see, they were the first generation to have access to the world’s accumulated memory. And because of that, they inhabited a unique new reality that none of their ancestors had ever experienced. For the first time, these billions (and 2 billion more are expected to join this global conversation within the next decade) had access to almost everything every human has ever known. And it was at their fingertips. And it was as good as free.
MOVING OUT
Memory is the guardian of all things. So wrote the author of Rhetorica Ad Herennium. That author, even if he was a master of human relations like Cicero, could never have imagined a world in which the hoi polloi, even the people he considered slaves, could have access to something almost indistinguishable from omniscience. Nor could Isidore of Seville, for all of his knowledge of men’s souls. Nor Aristotle, for all of his vision of how the world works. Nor Giordano Bruno, whose memory theater, in the end, was an attempt to achieve this kind of universal knowledge. Not even Gordon Moore, as he sat with his graph paper, extrapolated out the future of technology, and watched the curve go vertical, could have guessed the revolutionary shift in mankind’s relationship to its own memory that would happen in his lifetime—and in which he would play a central role.
The old alchemists—Bruno, Paracelsus, Roger Bacon, even Isaac Newton—famously searched for the “Philosopher’s Stone” (lapis philosophorum) that in the best-known stories was a substance that could convert base metals into silver and gold. But as with almost everything else in the hermetic tradition, the story is much more complicated than that. Like the Grail legend, with the Philosopher’s Stone it can be difficult to separate the literal from the allegorical. Thus, the stone was also believed to be the “elixir of life,” capable of staving off mortality for centuries. It was also the symbol of enlightenment.
The search for the Philosopher’s Stone was part of a larger quest called the “Great Work” (which, by the way, also included the Grail). In the words of the nineteenth-century French occultist Eliphas Lévi:
The Great Work is, before all things, the creation of man by himself, that is to say, the full and entire conquest of his faculties and his future; it is especially the perfect emancipation of his will.2
Though its practitioners might not agree, beyond the obvious attraction of gaining divine knowledge and its attendant power, part of the appeal of the Great Work was the sheer impossibility of achieving its goals. The quest itself had its own cultural power—it could even get you burned at the stake … and a statue raised to your memory.
The irony of this five-thousand-year quest was that, even as each generation of occultists spent their lives in fruitless search, the path to this infinite knowledge was being forged, inch by inch, by the least likely (and least mystical) of explorers: scribes and printers, tinkerers and engineers. The difference between these two groups of questers could not have been starker, and it was never more apparent than in 1969.
To believe the media coverage that year, 1969 was a turning point in human history. At Woodstock, the counterculture had its coming-out party, celebrating the new power of youth and (it was said) ushering in a new age of love and enlightenment. And as Apollo 11 landed on the moon, the world rejoiced at man’s first giant leap into space.
Yet, with the hindsight of decades, as the baby boomer generation grew old and NASA, having stopped visiting other worlds in 1972, eventually abandoned even the Space Shuttle program, it was apparent that this new age was over almost as soon as it had begun.
Meanwhile, men with crew cuts instead of shoulder-length hair, white shirts and skinny ties instead of tie-dye, and lab coats instead of spacesuits were buried in laboratories and offices creating a real point of inflection in the story of the human race. We can see now that it was the invention of the microprocessor and the creation of the Internet that made 1969 a true year of miracles.
After millennia of continuous improvement and innovation in the gathering, preservation, organization, and presentation of memory, these two breakthroughs and all of the many inventions of the digital age that supported them—magnetic memory, computers, networks, displays, and so on—had created a wholly new and unexpected kind of Philosopher’s Stone, a vast global Aleph of memory.
In the late 1990s, as the implications of the World Wide Web became clearer, a prophecy in the form of a thought problem briefly circulated in Silicon Valley. It asked:
What if you had a small box—an Answer Box—that contained all of the world’s knowledge and memories? No matter what question you asked it, it would not only provide the answer but present it in any way you wanted it—audio, video, tactile—directly into your brain. What would you ask it?
Behind that question was another: now that this Answer Box, the dream of mankind almost since mankind could dream, was seemingly within our reach, had we prepared ourselves for it? And if not, could there be a greater tragedy than to have all the answers waiting for us … and not be able to formulate the right questions?
Had humanity at last built a machine that was beyond our capacity to use?
And would the easy availability of knowledge and memory cheapen its perceived value?
Those were not comfortable questions to ask, nor easy ones to answer, and the Answer Box paradox disappeared as quickly as it appeared. But the problem still remained, and thanks to Moore’s Law, it grew closer by the year. And the search for the answers to those questions has only been postponed. Once we take them up again, they will inevitably lead us back to where we began: the human brain and its capabilities.
After the ancient art of memory enjoyed its revival during the Renaissance—and was generally considered a failure—the study of human memory faded in importance to the occasional and anecdotal (the savant, the rare individual with a photographic memory, amnesia cases) as progress in artificial memory proceeded at breakneck speed. After all, why spend years perfecting memorization techniques when books were becoming cheap enough to fill a middle-class home library—and one could access thousands of volumes in the growing number of free, public libraries?
Not surprisingly, the memorization of key texts, which had been a centerpiece of education not only in the ancient world but in the Middle Ages and Renaissance, slowly faded from the curriculum. By the Enlightenment, rote memorization seemed not only a sign of intellectual rigidity but also a waste of time that could be better spent reading more books. Our grandparents, parents, and even many of us in our youth, in what we now often think of as repressive classroom environments, were required to know pieces of the text of a few historic documents (for example, the preamble to the U.S. Constitution), speeches (the Gettysburg Address), songs, and poems (“Paul Revere’s Ride,” “The Charge of the Light Brigade”). Today, in most schools in the developed world, even that little bit of memory work is gone, leaving only the memorization of a few mathematical and scientific equations and perhaps the lines to a part in a school play—and even that is seen as onerous.
The typical modern school test is often taken either with open notes and textbook or with a calculator. And why not? Memory is now free, ubiquitous, and almost infinite; what matters now is not one’s ownership of knowledge but one’s skill at accessing and analyzing it. The last great argument for memorization—what would you do if you found yourself in a situation without a calculator or the right manual?—became almost meaningless in a world where both were readily available online anywhere on the planet, from the Serengeti to Antarctica.
What value remained in one’s private memory no longer came from what might be called “common” knowledge; it was usually far more accurate to search the Web for episodes from old television shows, the lyrics of hit songs, and the precise chronology of past events than to consult one’s own incomplete and biased recollections. Indeed, it often seemed that the only “brain memories” that still really mattered were those that were intensely personal. By the twenty-first century, as the Web, security cameras, and social networking sites increasingly made even the most intimately personal into a shared public experience, it began to seem that the only personal, biological memories that still had value were those so quotidian, small, and inconsequential that the rest of the world simply wouldn’t be interested. Francis Bacon was still right: Knowledge—memory—was still power, but it wasn’t our knowledge or our memory. In the world of microprocessors and servers, social networks and the World Wide Web, power was now access to the most valuable caches of memory.
GRAY MATTERS
Ironically, even as the value of the individual human memory diminished, the power and complexity of the brain itself were increasingly well understood, thanks to the rise of experimental science. Physicians and scientists throughout the nineteenth century, working mostly with stroke victims, the mentally ill, and brain-damaged war veterans, had slowly begun to piece together a model of the human brain and a map of its various functions. Then, as now, these researchers were most haunted by amnesiacs—otherwise normal people who had (temporarily or permanently) lost all of their accumulated memories and found themselves in the living hell of being without a past and without an identity.
In the last decades of that century, the Austrian neurologist Sigmund Freud, working from an idea first proposed by the German philosopher Theodor Lipps, began to study the functional operation of the human brain through a process of deep conversation and dream analysis—psychotherapy—with his psychologically troubled patients. What Freud discovered, and what made him one of the most influential scientific forces in the coming century, was that the brain, whatever its underlying physiological structure, was in action an incredibly complex organ that operated at least as much below the surface of consciousness as above. And it was this “subconscious,” often containing memories so embarrassing or traumatic that the brain had repressed them into this hidden location, that continued to secretly work its damage on a person’s behavior.
Carl Jung, Freud’s erstwhile colleague, looked at this same unconscious and believed he saw hidden memories—“archetypes”—that seemed to be common to all mankind past and present, and carried by each of us from birth. Jung suggested that this “collective unconscious” might represent a very primitive sort of universal mind that might have superhuman powers—a notion largely dismissed as but the latest eruption of hermeticism. That is, until the rise of the Internet.
In the first half of the twentieth century, even as the general public was assimilating Freud’s and Jung’s theories, brain research moved from the therapist’s couch into the laboratory. There, scientists such as the Russian Ivan Pavlov in the 1920s and the American B. F. Skinner in the 1930s studied how the mind learns behavior by repeatedly accessing memories hidden in the unconscious.
By the middle of the twentieth century, thanks to a whole spectrum of new medical analytic tools made possible by the digital revolution, scientists were increasingly able not only to probe the structure of the brain through targeted X-ray and magnetic resonance imaging but, through the tracking of electrical stimulation, to actually see the brain in action. The result, beginning in the 1960s and continuing to this day, is an increasingly sophisticated and nuanced model of a brain that is anything but simple and monolithic. Here, in summary, is what we now know:
The average human brain weighs about 1.5 kilograms (3 pounds) and has a volume of about 1,200 cc. Brain size is related to body size, so male brains are typically about 100 cc larger than female brains; and while in extreme cases brain size can be indicative of severe retardation, in normal brains there is little correlation between size and intelligence.
Structurally, the human brain contains just over 200 billion nerve cells. Half of these are glial cells, which provide support for an equal number of neurons, the latter doing the work of thinking. In most of the brain, these glial cells are teamed one to one with neurons, acting as everything from insulators to transmission managers; in the upper brain, the “gray matter” of the cerebrum, that ratio is one to two. The cerebrum also contains 10 billion high-performance pyramidal neurons.
Unlike computers, where the transistors in chip memory and the magnetic bit locations in disk memory are arranged basically in a linear manner, animal brain neurons have connectors—branching dendrites and axons, meeting at synapses—arrayed like the roots and branches of a tree, connecting with the similar arrays of numerous other nearby neurons; these connections are strengthened with use. This multiplexing enables the average human brain to exhibit as many as 1,000 trillion—one quadrillion—connections. You’ll notice this means that roughly fifty thousand people have as many brain connections as there are total bits on the global Internet.
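That comparison can be checked with the same back-of-envelope arithmetic (a sketch only; the connection count and Internet size are the rough, order-of-magnitude estimates quoted in this chapter):

```python
# Rough comparison of brain connections to the size of the Net,
# using the estimates quoted in this chapter.
connections_per_brain = 1e15   # ~1,000 trillion synaptic connections per brain
internet_bits = 5e18 * 8       # ~5 exabytes of Internet, at 8 bits per byte

people_needed = internet_bits / connections_per_brain
print(f"People whose combined connections match the Net: ~{people_needed:,.0f}")
# ~40,000 -- on the order of fifty thousand people
```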
The brain itself consists of several large regions. Its main mass is the cerebrum—two mirror-image hemispheres built around the “white” (or light-gray) matter of the basic mammal brain—resting atop the brain stem, which connects to the spinal cord; at the back and bottom sits the cerebellum, whose furrowed surface resembles twisted rope. The cerebrum manages the basic mental operations of the brain; the cerebellum, the direct descendant of the brain of older animal phyla, manages the body’s motor functions; and the brain stem carries messages between the brain and the body’s muscles, organs, and glands.
The cerebrum itself is covered by a comparatively thin, but heavily convoluted (to increase surface area), cerebral cortex. Roughly speaking, the more intelligent the animal, the more convoluted its cortex, with man having the most “wrinkled” brain of all. The cerebral cortex, as noted earlier, is surprisingly large when unwrinkled and laid out flat—more than 2.5 square feet. And it directs the higher thinking found mostly in primates. In human beings that includes speech, language, logical thinking, vision, fine motor skills, metaphor, analogy, and so on.
To simplify matters, the cerebral cortex is usually divided into four general regions—“lobes”—on each hemisphere and named for the skull bones that encase them: frontal (ambition, reward, attention, planning, and short-term memory tasks); parietal in the top back (ties together sensory information relating to spatial sense and movement); occipital in the far back (vision); and temporal on the lower sides (hearing and speech).3
THE GEOGRAPHY OF MEMORY
In light of the narrative of this book, the obvious question to ask at this point is: Where does memory fit in all of this?
The answer, researchers have found, is that it fits almost everywhere. Memories appear to be stored throughout the brain in a manner, and according to rules, that have yet to be fully explained. Moreover, as anyone who has ever tried to dial a telephone number after just hearing it, or crammed for an exam, or suddenly remembered some trivial detail out of a far-distant past can attest, human memory is not a monolithic process.
In fact, neurologists have identified three primary memory activities and three primary memory types. The activities are inherent in the nature of memory itself, and thus can be found in both human and artificial memory: encoding, the capture and preparation of information for preservation; storage, the recording and archiving of that information; and retrieval, the locating and recovery of that information from storage.
But the architecture and form of the organic human brain is very different from artificial computer memory. Though there is a superficial similarity between cache, ROM, and RAM and what scientists call the brain’s sensory, short-term, and long-term memory, they have radically different purposes and causes. In the computer, cache memory is essentially a waiting room for processing, ROM is the home of operating tools that are protected from modification, and RAM is a vast warehouse of undifferentiated memory denoted only by address.
By comparison, the brain’s sensory memory—the ability to capture and hold on to an enormous amount of information taken in by the senses in what has been determined to be less than a half-second—appears to be a genetic response to the complicated natural world. That is, to “see” more than you consciously register, in case the scene hides prey … or a threat. Tests have found that human beings can capture up to twelve items at a glance … but forget most of them in less than a second. Importantly, it seems to be impossible to improve the duration of one’s sensory memory with practice.
Short-term memory, as already noted, is typically stored in the frontal lobe. It has its own limitations—as anyone knows who has tried to hang on to a name or address from the moment of hearing it until writing it down a moment later—especially when there is even the slightest interruption.
In 1956, George Miller, a cognitive scientist working at Bell Labs at the same time as William Shockley, published one of the most cited papers in the history of psychology. Entitled “The Magical Number Seven, Plus or Minus Two,” it made the case, based on studies with test subjects asked to remember lists of words, numbers, letters, and images, that the human brain was able to briefly—meaning up to a minute without rehearsing—remember about seven items on a list, plus or minus two items. Later research has put that number closer to the lower end of that range.4
There are some tricks to increasing both the size of short-term memory and the duration of its storage. The first, as noted earlier in this book, is “chunking,” which takes advantage of the brain’s ability to treat small clusters of information (usually no more than three items) as a single chunk of memory—which is why humans can often remember a phone number better by breaking it up (in the United States) into the area code and local prefix, each three digits, and then the final four digits into two-digit pairs.
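As a toy illustration of that grouping (a sketch only—the phone number shown is made up, and the 3-3-2-2 split simply follows the description above):

```python
def chunk_us_number(digits: str) -> list[str]:
    """Group a 10-digit US number the way the text describes:
    area code (3), prefix (3), then the last four digits as two pairs."""
    groups = (3, 3, 2, 2)
    chunks, i = [], 0
    for size in groups:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

# Four chunks are easier to hold in short-term memory than ten separate digits.
print(chunk_us_number("4155550123"))   # ['415', '555', '01', '23']
```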
As for duration, the solution, as every student knows, is repetition. Short-term memory appears to be a largely chemical process that fades quickly. Thus, if that memory can be quickly pumped up again to a full charge before it disappears—and this process is repeated continuously—information can be retained in short-term memory for an extended period. Better yet, the constant reinforcement of short-term memory seems to be the brain’s primary criterion for transferring that information into long-term memory.
As for long-term memory, it is a whole different creature indeed. What makes it astonishing is that, at least on a human scale, it seems both infinite and immortal. For example, there seems to be almost no limit to the number of memories that the human brain can hold—remember that massive number of connections. It is possible that every memory that ever made its way into your long-term memory is still buried somewhere in your head, and it is just the insufficiently powerful catalog and search tools in your brain that keep you from finding them. We’ve all had the experience of concentrating on remembering something, then giving up … only to have the answer pop into our minds hours, even days later, suggesting that the search took longer than we expected. By the same token, all of us have had the experience of thinking about something … only to have some completely different long-forgotten experience or memory pop into our minds, suggesting that it was accidentally captured along with an adjacent, targeted memory.
By the same token, once an item is stored in long-term memory, it seems to last forever unless it is in some way destroyed by injury, disease, or death. A memory from the crib, if strong enough to persist, can be recalled a century later, by a centenarian, as vividly as the day it was forged. Were we to suddenly live five hundred years, there is no reason that same memory from a half-millennium before wouldn’t still be fresh and bright.
OTHER MINDS, OTHER MEMORIES
It’s an extraordinary organ, the human brain, and its memory is no less remarkable. It no longer seems so absurd that when the ancients attempted to take memory to a higher level, they chose to pursue that goal internally and organically rather than externally and artificially. That they failed doesn’t diminish their attempt; rather, just imagine how different the course of human history would have been had they succeeded.
But they did fail. And for thousands of years, we have pursued a different path, one that lies outside our skulls and that, for all of its power, must forever find a way back inside, with all of the associated compromises of access and translation.
Now, after all of the intervening centuries, the two paths seem again to be converging.
In recent years, machines, especially those that have jumped aboard the rocket of Moore’s Law, are achieving a level of raw intelligence that approaches—and in some cases even exceeds—that of the human brain. At the same time, this artificial intelligence is spreading far from its traditional home in computational devices and test-and-measurement instruments to every corner of daily life. And that includes sensors, pattern-recognition devices, vision systems, nanomachines, and hundreds of other technologies that lend themselves to supporting—as the human body does with the human brain—the interconnection of computer intelligence and the natural world. Most of these peripheral devices exhibit performance well beyond that found in even the most proficient human beings.
There is another factor as well: Compared to the human brain, these digital devices are also breathtakingly fast. The basic clock of the animal world is the heartbeat, and it is a rule of thumb that most living things have within them about 10⁹ (1 billion) heartbeats. Thus, animals with rapid heartbeats (insects) have short lives; those with comparatively slow heart rates (primates, tortoises, parrots) have long ones. By comparison, as this is being written, modern state-of-the-art microprocessors have clock speeds approaching 5 gigahertz—or 5 billion cycles per second. In other words, these chips—and the devices they run—experience the equivalent of several human “lives” every second.
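The arithmetic behind that comparison is simple (a sketch using only the rule-of-thumb figures just cited):

```python
# Rule-of-thumb figures from the text: ~1 billion heartbeats per animal lifetime,
# versus a processor clock approaching 5 GHz.
heartbeats_per_lifetime = 1e9
clock_cycles_per_second = 5e9   # 5 billion cycles per second

lifetimes_per_second = clock_cycles_per_second / heartbeats_per_lifetime
print(f"Cycle count equal to about {lifetimes_per_second:.0f} lifetimes of heartbeats, every second")
```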
Finally, to this mix add the Internet, by many orders of magnitude the largest repository of memory ever created—and optimized for navigation by computer intelligence. Indeed, unlike a library, the World Wide Web is a place that can only be entered accompanied by a computer or other digital device.
Once again, memory is power. And the history of humanity can be seen as the long, long story of the increasing distribution of the ownership of memory—and thus liberty—from the few to the many, from shamans and kings to everyone, including the most wretched of mankind. If memory liberates, are our machines the next subjects of that liberation? After all, they now control most of the world’s memories.
None of these questions have been lost on humanity. On the contrary, in the two centuries since Mary Shelley’s Frankenstein, and especially since the rise of science fiction in the twentieth century, we have been increasingly obsessed with the idea of intelligent machines—at best as our loyal compatriots, at worst as our evil overlords. And if we ponder that story about the Answer Box long enough, it becomes apparent that the real problem may not be what question we would ask it, but whether it needs us to do the asking at all.
This is not to suggest that a world of humans and conscious, independent machines casually interacting on an everyday basis—or worse, a dystopian world in which humans are enslaved by far superior silicon-based life forms—is anywhere in our near (or distant) future. However, the increasing convergence of the two forms of memory, natural and artificial, suggests that some kind of reckoning lies just a generation or two ahead.
MASTER, PARTNER, SELF
What will this reckoning look like? There are three likely scenarios: living machines, assisting machines, and human machines.
Living Machines
As we’ve seen, human beings have been trying to make their creations look and act like living things ever since the ancient Greeks and the Chinese. It was de Vaucanson and the automaton makers who first created mechanical devices that could effectively mimic a wide range of animal and human behaviors. But imitation isn’t actuality, and no matter how stunningly real the Flute Player or the Digesting Duck might seem in a controlled setting, their repertoires were small; and no matter how great the skill of the builder, none of the creations ever exhibited any of the traits we think of as being alive, from reproduction to self-maintenance to adapting to their surroundings. They did the same tricks over and over until they broke, all the while looking out at the world with dead eyes.
Though the automatons failed at their immediate purpose, their influence has been almost immeasurable. Not only, as noted, were their toothed-wheel controllers an important marker on the path to modern computing, but their actual gear, axle, and pulley mechanisms were crucial to the development of twentieth-century cybernetics. When, in the 1920s, playwrights like Karel Čapek (whose humanlike androids in the play R.U.R. gave the world the word “robot”), filmmakers like Fritz Lang (with Metropolis and its beautiful Maschinenmensch), and inventors like the biologist Makoto Nishimura (with Gakutensoku, his robot that could laugh and cry and turn its head) began to create the image of the modern robot, it was that of an automaton with a tabulating machine for a brain.
The assumption was, in a kind of presaging of Moore’s Law, that it was only a matter of time until the mechanical systems governing the motion of robots were sufficiently precise and reliable that they would be, to the naked eye, all but indistinguishable from living organisms. By the same token, as the tabulating machines became computers, and the computers developed the complexity of organic brains, it was also believed that robots would begin to “think” like living things … and eventually “wake up” to a kind of servile consciousness.
In the end, we got most of the first and not much of the second. Today’s robots, especially those created in university laboratories, do a very good job of re-creating bipedal motion, carefully picking up objects, identifying unique patterns, recognizing spoken words (especially from a single speaker), and constructing verbal sentences in an intelligible voice. But as impressive as these constructions may be, they are still disconcertingly far from truly autonomous creatures. And worse, the closer they come to achieving their goals, the deeper they seem to sink into what has been called the “uncanny valley,” in which the more lifelike an artificial form becomes, the less alive it seems. Thus, Mickey Mouse still seems more real to us than the soulless creatures created by the latest computer graphics programs to look almost identical to real human beings on-screen.
This quest to re-create life in an artificial form has taken a backseat to the real business of modern robotics: the construction of mechanical slaves to take on tasks that are too dangerous or repetitive to still be done by human beings—wrapping wiring harnesses, welding truck quarter panels, picking up newly cut integrated circuit chips and welding interconnects, grabbing items from sea beds, and, increasingly, performing tasks in surgery and dentistry. These robots are, for the most part, fractional entities—giant arms, precise fingers, motorized tracks following buried wires. They are also single-minded in their purpose; most are programmed via a local-area network and have little or no contact with the Internet. If we wait for these machines to “wake up,” we may wait forever.
But what of the big multiprocessing supercomputers? They certainly match or surpass the human brain in many areas of performance, including processing speed. Will they begin to think autonomously sometime soon, establish their own identity, and achieve some kind of will and consciousness? Predictions of big computers thinking on their own—perhaps even exerting control over mere mortals—are as old as mainframes themselves, and seem to gain new speculative life with every new generation of “Big Iron.”
And yet, other than a few anecdotes—the best known being the famous 1996 and 1997 matches between world chess champion Garry Kasparov and the IBM Deep Blue supercomputer, after which Kasparov said he sensed a mind at work in his opponent—there is no indication that a computer has ever, even for a second, accomplished “thought” as we conceive of it in living things, much less achieved a consciousness of its own existence.
That could change someday—perhaps sooner than we think. The Blue Brain Project, begun in 2005 at the École Polytechnique Fédérale de Lausanne in Switzerland, is using an IBM supercomputer to replicate the mammalian brain in software, right down to the structure of its neurons. Speaking just yards from the Bodleian Library at Oxford, Blue Brain director Henry Markram announced, “It is not impossible to build a human brain, and we can do it in ten years.”5 To the BBC he added, “If we build it correctly it should speak and have an intelligence and behave very much as a human does.”6
Time will tell. And what of the Internet itself? With its wireless and dial-up links mixed in with its ultrabroadband trunk lines, it is slower than the human brain but a thousand times more powerful, and it features much of the same multiplexing that is found in animal neurons. Does the Internet think? And if so, and it becomes, as H. G. Wells predicted in 1938, “a world brain,” will it have at its command all of human memory and knowledge? Will we really, as Wells claimed, embrace it because “we do not want dictators, we don’t want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself”?7
Perhaps—and perhaps not. But it is hard to dispute the prescience of the rest of Wells’s prediction:
The whole human memory can be, and probably in a short time will be, made accessible to every individual.… This new all-human cerebrum need not be concentrated in any one single place. It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba.8
In 1997, George Dyson, son of the noted physicist Freeman Dyson, published Darwin Among the Machines. In it he looked positively upon the idea of sharing the world with intelligent machines and warmly anticipated what he thought to be their impending arrival. He approvingly quotes the essayist Garet Garrett, who wrote in 1926:
Man’s further task is Jovian. That is to learn how best to live with these powerful creatures of his mind, how to give their fecundity a law and their functions a rhythm, how not to employ them in error against himself.9
Dyson then asked: “Is the diffusion of intelligence among machines any more or less frightening? Would we rather share our world with mindless or minded machines?”10 For George Dyson, the answer was clear: Artificial intelligence, even consciousness, was inevitable, and by the right of successful evolution (even if it was by man himself), machines had to be allowed to fulfill their own destiny. As Dyson put it, “We are brothers and sisters with our machines … in the game of life and evolution there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”11
But if the Internet was ever going to awaken, like a great digital Leviathan (to use one of Dyson’s favorite analogies), and embrace that destiny, it probably should have begun stirring by now. And yet … nothing.
Or more accurately, nothing yet.
INSIDE JOB
Assisting Machines
In 2011, teacher Michael Chorost published World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet. Its subject was in its title, but the implicit message of the book was: Why wait for our machines to awaken and come to us? Instead, let’s meet them halfway.…
What if we built an electronic corpus callosum [the linkage between the two hemispheres of the brain] to bind us together? What if we eliminated the interface problem—the slow keyboards, the sore fingers, the tiny screens, the clumsiness of point-and-click—by directly linking the Internet to the human brain? It would become seamlessly part of us, as natural and simple to use as our own hands.12
In particular, what Chorost proposed was that, using a complex technique combining viruses to alter the DNA of brain neurons, optogenetics (using light to control cell functions), and the implantation of nanowiring, it should be possible to install a wireless modem directly into the human brain. The process would be difficult and time consuming, Chorost admitted, and the time needed to actually control this new part of the brain might run into months … but in the end, the owner of this modified brain would be able to communicate with other human beings with similar brains in a manner not unlike telepathy, or at least like people instant messaging each other with their cell phones.
But Chorost went one step further. He argued that if all human beings had their brains modified in this way, they would be able to link together in a vast mental network—a “hive mind” is the term applied to social insects—that would be greater than the sum of its parts, and would result in deeper human relationships, larger and more successful group endeavors, and greater mutual understanding.
Chorost came with unique credentials. Born nearly deaf from a case of rubella, he continued to lose the rest of his hearing into adulthood. Finally, in 2001, unable to get by with just hearing aids, he underwent the still-experimental surgery for a cochlear implant—a device that combines a microphone, a digital speech processor, and a transmitter to capture and filter sound and then deliver it, via the inner ear, directly to the auditory nerve and on to the brain.
From that life-transforming experience, Chorost wrote Rebuilt: How Becoming Part Computer Made Me More Human in 2005. His yearlong experience in learning how to hear again, and the transformative effect this restored sense had on his life, led Chorost to investigate what it would take to insert even more powerful technology into the human brain.
To date, about a quarter-million people have received cochlear implants. Thousands more have received deep-brain implants for brain and vagus nerve stimulation to help them fend off the effects of Parkinson’s disease and depression. Others have received brain “pacemakers” to manage epilepsy. Others are presenting themselves as test patients for miniature camera brain-implant systems to restore sight.
These brain implants seem to point the way toward even more transformative uses—not just Chorost’s dream of a World Wide Mind, but something more personal and individual: the ability to add new memories, knowledge, skills, and talents directly into the human brain from artificial sources.
Brain implants have been around for a surprisingly long time. As early as 1870, the German researchers Eduard Hitzig and Gustav Fritsch had applied electrodes to parts of a dog’s brain and stimulated them to produce certain movements—a technique later reproduced in the human brain. In fact, what we now know as the map of the human brain was largely discovered through the use of these implants.
By the mid-twentieth century this implantation mapping technique had become very sophisticated and capable of identifying and diagnosing certain forms of mental illness. But the discipline really took off with the arrival of computers, magnetic-resonance imaging, and three-dimensional imaging; now brain function could not only be statically mapped but also studied in real time as different regions lit up with electrical activity. Indeed, it eventually became possible to make fairly accurate guesses about what patients were thinking, given the unique patterns of their firing brain neurons.
In one of the most remarkable studies, undertaken in 1999 by a team at the University of California–Berkeley, researchers implanted 177 electrodes into the thalamus region of a cat’s brain (the part of the brain that translates sensory data into brain signals)—in particular the part connected to the optic nerve. They then tracked the firing neurons in the cat’s thalamus and ran the results through a computer using a process called linear decoding. They were astonished by what they saw. It was the world as seen through a cat’s eyes, including their own faces.13
The next decade saw significant—and often controversial—progress in this emerging field of thought identification. Using a new brain-scanning technology called functional magnetic resonance imaging (fMRI) to track the changes in blood flow resulting from neural activity, researchers were able to do what was heretofore considered magic: to predict human action, such as pressing a lever, before a subject knew he or she would do it—that is, reading the unconscious brain making a decision (and the decision it would make) before that choice ever reached the conscious mind.
This predicting of human intention was controversial enough, with the accompanying major ethical concerns regarding privacy and philosophical implications about free will. But the researchers had just begun. The discovery that all human brains respond to the same images in the same way meant that numerous images could be shown to subjects, the fMRI patterns tracked and cataloged in computers, and a vast encyclopedia of brain images created and then accessed in real time. And that, in turn, made it possible to “read” a patient’s thoughts and memories in real time.
By 2007, Barbara Sahakian, a professor of neuropsychology at Cambridge University, was able to say with (chilling) confidence: “A lot of neuroscientists in the field are very cautious and say we can’t talk about reading individuals’ minds, and right now that is very true, but we’re moving ahead so rapidly, it’s not going to be that long before we will be able to tell whether someone’s making up a story, or whether someone intended to do a crime with a certain degree of certainty.”14
So that’s the reading of thoughts and memories in the brain. What about the writing of experiences directly into the brain?
The idea of manipulating memory by putting thoughts (usually false) into the brains of others is at least as old as Descartes. A corollary to his process of stripping away every belief that could be doubted in order to reach the only surviving truth—that of his own thinking as proof of his existence (cogito ergo sum)—was the possibility that all of his other memories and observations might not just be untrue, but intentionally false. Descartes imagined his brain in a black box, with an evil demon controlling everything going into and out of that box. That was 1638, but it remains a notion as current today as the Matrix movie trilogy.
“Brainwashing,” the psychological technique of inserting false memories into others, burst into the public eye in the early 1950s during the Korean War, when North Korean interrogators were accused of using the technique to distort the psyches of captured U.S. soldiers. It was a process made vivid by the movie The Manchurian Candidate. It surfaced again during the sex-abuse hysteria of the 1980s, when a number of child day-care-center operators (notably the Amirault family in Malden, Massachusetts) were accused—through the “recovered memories” of young children—of performing bizarre sexual rites on the children in their care. Ultimately, the charges against these operators were dismissed when it was determined that prosecutors had systematically led the children to believe claims for which there was no evidence.
But the idea of implanting empowering, rather than destructive, memories into the human brain first really captured the public’s imagination with the publication of the first “cyberpunk” science-fiction novel, Neuromancer, by William Gibson, in 1984. The world of Neuromancer is filled with characters, mostly mercenaries, who regularly enhance their performance—memory, strength, sight, skill sets—by “jacking in” brain implants to download this knowledge from the universal “matrix.” Gibson’s literary skill, combined with his extraordinary ability to extrapolate current technology into the future (via a Moore’s Law–like view of modern life), almost instantly made the notion of brain implants not only possible but something to be anticipated in the near future.
Unfortunately (or perhaps fortunately), it hasn’t quite turned out that way. In theory, placing digital technology into the brain should be easy. In reality, it has proven to be very difficult. Note how complicated Michael Chorost’s brain-modem scheme would be to execute, requiring genetically engineered viruses (the alternative being dangerous open-skull brain surgery and the precise placement of microscopic wires and other devices). Chorost’s cochlear implant, like other sensory-restoration techniques, is comparatively easy because it mostly sits outside the brain, or even outside the skull, not inside the brain itself.
That’s why reading the brain is so much easier than writing on it. You can image the brain in slices with fMRI and watch it work, but you can’t do that continuously day after day without risking injury. By the same token, you can wrap the skull with a cap embedded with scores of electrodes—and leave it on permanently—but then you can only read the operations of the brain within, and not very precisely.
In other words, even reading the brain is tough, and it requires serious compromises between precision and permanence. Writing on the brain is far tougher. Experimental brain implants have proven to be very precise and powerful at turning different actions and memories on and off. But electronically implanting a new memory is not yet possible, and is many orders of magnitude more difficult—after all, the brain scatters memories all over the place. And even then, there are serious questions about how long the apparatus would keep working. The brain is a living organ and shares the body’s immune system, and past experience with electrodes has found that they begin to fail after a few weeks as the brain surrounds them with scar tissue. Even if we could selectively turn off the immune system in the brain to keep these implants functional, we’d then be opening the door to infection and possibly cancer.
So, as thrilling as the notion is of having a small slot in the back of one’s head into which one could swap a knowledge of French, automobile repair, or American history as easily as a memory card into a digital camera or a thumb drive into a laptop computer, that reality is a long way off. And even if there were to be a breakthrough in machine-brain interfacing, it is not self-evident just how all of that new memory would be used. Would it all be poured into the brain at once, or would the brain regularly talk to the memory card? And just what kind of translation of this data and training of the brain would be needed to make it work?
Based on his own experience of needing months to make full use of his cochlear implant—and he wasn’t fully deaf from birth but had once been able to hear—Michael Chorost suggests that learning to use his brain modem might take a year or more. That’s a lot of pain and commitment for a result that might soon fade away. The obvious solution, assuming all of the other obstacles are overcome, would be to embed the “memory plug” into a newborn—or, better yet, a fetus. But in a world where removing a baby boy’s foreskin is becoming criminalized, what is the likelihood of legally putting a plug into a newborn’s brain?
However, as unpleasant and painful as brain implants may sound, we can still assume that if the technology even approaches practical implementation, there will be a small army of volunteers willing to take on the misery and risk of being pioneers of the brain memory implant. Since the turn of the millennium, as ever greater numbers of people have received not just brain-oriented devices such as cochlear implants but also artificial limbs, bones, tools for locomotion, and so on, a cultural movement—transhumanism—has emerged that is dedicated not just to the advancement of human-machine technology but to its celebration. Transhumanists see the use of machines and computers not as a last-ditch effort to restore a failed biological system but rather as a means to enhance human existence. In their vision, at George Dyson’s table there will one day be only two players present: nature and a hybrid of man and machine … and Mother Nature will learn to love her flesh-and-metal child.
A SINGULAR TURN OF EVENTS
Human Machines
On the farthest shores of this new world of artificial/natural memory lies the strangest vision of all, one to which many of the transhumanists aspire … and one that some of the world’s most brilliant computer scientists believe is both inevitable and imminent.
It is called the Singularity.
The term “Singularity” has a lot of different definitions in mathematics, cosmology, and quantum physics, but all share a common attribute: they characterize a moment or location or event where everything undergoes a change so complete that comparing before and after is almost a meaningless exercise. The same is true for what has been predicted to be a Singularity in technology: It will transform the meaning of what it is to be human or a machine, of natural and artificial memory, of life and death, and of ignorance and knowledge so completely that, from this side of the event, it is impossible to predict what will take place on the other side.
Compared to the other two scenarios, the Singularity is a relatively new concept. Presaged in the 1960s by any number of movies, TV episodes, and science-fiction stories predicting a future in which computers and robots suddenly break through an invisible barrier and become self-aware—and self-improving at blinding speed—the idea of the Singularity was first proposed by the British statistician (and one of Alan Turing’s old Bletchley Park colleagues) Irving Good in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.15
The Singularity itself received its first formal definition in a 1993 essay entitled “The Coming Technological Singularity: How to Survive in the Post-Human Era” by the mathematics professor and science-fiction writer Vernor Vinge.16
The title says it all. For Vinge, the Singularity would be an explosion of artificial intelligence that would ultimately result in a “superintelligence” whose transformation of the future would be as complete and inexplicable as the event horizon on the edge of a cosmic black hole. Thus, the first of these superintelligent machines will be mankind’s last and greatest invention. After that, humanity will be largely superfluous.
Vinge’s most famous quote about this Singularity is: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”17
In Vinge’s scenario, at the Singularity one or more of our computers (or, pace Dyson, the Internet itself) becomes so intelligent and competent that it starts re-creating and upgrading itself or begins building other computers even more capable than itself … and in no time, mankind is left in the dust. Or, in the most dystopian case, we become slaves to our new digital masters. In fact, when you begin with Vinge’s Singularity, things can get really ugly really fast. For example, consider this nasty little example of the Law of Unintended Consequences at the Singularity, courtesy of Nick Bostrom, philosopher and director of the Future of Humanity Institute, which just happens to be across the street from the Bodleian Library:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.18
Hold on, says Ray Kurzweil, the renowned inventor and the figure currently most associated with the Singularity. Why does mankind need to be left behind on this side of the Singularity? Why can’t we go forward as part of our machines?
Kurzweil, who was raised in Queens, New York, first learned about computers from his uncle, who was an engineer at Bell Labs. In 1963, at just fifteen, Kurzweil wrote his first computer program and in short order was winning national and international science fairs with his inventions. Over the next thirty years, Kurzweil made his reputation with one invention after another, developing tools that enabled the blind to read, synthesizers that finally reproduced the sound of traditional instruments, speech-recognition computers, and virtual-reality training tools for medical professionals.
Then, beginning in 1990, Kurzweil embarked on a series of three books of predictions about the technological future. These books, and the theories they present, have dominated Kurzweil’s career ever since—and given him a reputation as one of the world’s leading futurists. At the heart of all of these works is Kurzweil’s belief in Moore’s Law as not only the defining force of our time but the most powerful tool available to predict the new world—in other words, the Singularity—that Kurzweil believes (and has convinced millions of others to believe) is likely to arrive within most of our lifetimes.
The titles of the three books—The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), and The Singularity Is Near (2005)—show both the development, and the increasing optimism, of Kurzweil’s thinking. Intelligent Machines was essentially an extrapolation, using Moore’s Law, from the existing pre-Internet technology of the late 1980s into the decade or two ahead. Though some of his later claims about his predictions are a bit far-fetched (such as having predicted in 1986, when he began the book, the impending fall of the Soviet Union), he proved as prescient as anyone in anticipating the explosion not just of the Web but also of wireless telecommunications. He also predicted the future defeat of a human chess champion by a computer program (which took place in 1997 with Kasparov’s defeat by Deep Blue).
With Spiritual Machines, Kurzweil grows much more ambitious in his predictions and his time horizons. The book is largely cast in the form of a conversation between Kurzweil and “Molly,” a fictional foil. The book begins in 1999 with Molly as an average young woman, largely uninformed about the information revolution and a little flirty. The book ends four hundred pages and a century later, with Molly now evolved into a conscious, noncorporeal self embedded within a powerful computer. She is now brilliant, curious, and ready to experience the entire world.
PLACING BETS
Part of the fun (and the courage) of Spiritual Machines comes at the end, where Kurzweil makes predictions, by the decade, for the century ahead. With the first set, for 2009, he offered 108 different predictions—the rise of portable computing; the decline of disk drives; movies, books, and music increasingly delivered digitally; telemedicine; and so on. Time has proven him impressively accurate, though more with his technological than with his cultural and economic predictions.
Looking further into the future, Kurzweil’s predictions for the end of the twenty-first century are pretty radical: thousand-dollar computers as powerful as the human brain by 2019; food commonly assembled by nanomachines by 2049. By 2099, Kurzweil sees a world in which artificial intelligence is not only superior to human intelligence but one in which AI dominates the landscape, with humans embedding implants in their brain just for the chance to be part of the AI world … and the remaining “traditional” humans protected by machines like exotic wildlife. Meanwhile, those humans who have turned themselves into artificial life forms regularly make backup copies of themselves to obtain a kind of immortality.
Kurzweil finishes his predictions by casting out into the millennia, writing as his most distant prediction: “Intelligent beings consider the fate of the Universe.” Our machines are now gods.
Six years after Spiritual Machines, in what Kurzweil considered an “update” of the first two books, he published the bestselling The Singularity Is Near. Here he adopts the concept of the Singularity and makes it his own—or, more precisely, makes it everyone’s. Kurzweil is still convinced that Moore’s Law will deliver us to this Promised Land, though the date has been moved out a couple of decades. Now, instead of giving human beings the choice of either begging for a chance to join the great global intelligence or ending up as protected zoo animals, Kurzweil is ready to let them lead the parade. At the Singularity, it won’t necessarily be the computers that become conscious; rather, we humans will become the machines.
Why the shift? It may be that Kurzweil had noticed, like everyone else, that even the world’s most powerful supercomputers had still not shown the slightest sign of stirring themselves to self-awareness. What he now proposed was a new definition of the Singularity, one that seemed aware of the promises (and limitations) of the Blue Brain Project and its virtual re-creation of the human brain.
Kurzweil began with a collection of premises: that the Singularity could be achieved by human beings; that because of Moore’s Law it was accelerating toward us from the future; that we can understand, down to the level of neurons and electrons, how the brain functions; and that thanks to medical advances, Kurzweil’s generation of baby boomers would live long enough to reach the Singularity … and then have a very good shot at immortality.
How would this occur? By using increasingly sophisticated tools to map the location and contents of every neuron in the human brain, and then loading that map into a computer, where it can operate as a duplicate, virtual brain already stocked with all of our memories. This way, when the Singularity hits, we will already be aboard, Descartes’s ghost in the machine, as those machines accelerate away toward their own destiny to control the universe. Thus, the race by baby boomers to stay alive until the mid-twenty-first century is not just a bid to add a few more years to the end of a long life but a chance to become immortal, omniscient, and increasingly omnipotent.
It would be the most astonishing finish imaginable to mankind’s million-year relationship with its own memories: to become memory itself. To reverse the equation—from our identities being defined by memory to memory (in some anonymous computer) being our identity. More than ever before, memory would truly be the guardian of all things.
BITS OF MY LIFE
There are already individuals racing to embrace this tantalizing vision. No one more than yet another computer genius, Gordon Bell, the man whose minicomputer designs at Digital Equipment served as a model for the architecture of the Intel 4004, the first microprocessor.
Beginning in 1999, not long after he became a fellow at Microsoft and reached the usual retirement age, Bell reinvented himself and embarked on a celebrated project—MyLifeBits—to use the latest miniaturized, wearable digital camera, audio recorder, computer, and communications technology to document every bit of his life as he lived it. Being Gordon Bell, he also wrote the software to tie together all of these memories and archive them—a task that soon grew to a thousand photographs, several videos, hours of audio, and scores of e-mails and recorded phone calls … per day. And even as he strained the capacity of his office computer’s disk drive, Bell also took on the task of capturing and recording every surviving record of his past—from school report cards from his childhood in Kirksville, Missouri, to his founding (with his wife, Gwen) of the Computer History Museum in 1979 to his receipt of the National Medal of Technology in 1991.
As Bell has said, “That is a shitload of stuff.”19
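A rough sketch of the arithmetic shows why. The per-item sizes below are assumptions chosen only for illustration, not Bell’s actual figures, but even conservative guesses add up to gigabytes a day and the better part of a terabyte a year:

```python
# Back-of-envelope estimate of a MyLifeBits-style capture day.
# Every per-item size here is an assumption for illustration, not Bell's data.
MB = 1_000_000
GB = 1_000 * MB

daily_items = {
    # item: (count per day, assumed average size in bytes)
    "photographs":        (1_000, 1 * MB),    # ~1 MB per compressed photo
    "video clips":        (5,   200 * MB),    # a few short clips
    "audio recordings":   (8,    30 * MB),    # ~8 hours of compressed audio
    "e-mails and calls":  (100, 100_000),     # mostly text, a few attachments
}

daily_bytes = sum(count * size for count, size in daily_items.values())
yearly_bytes = daily_bytes * 365

print(f"per day : {daily_bytes / GB:.1f} GB")
print(f"per year: {yearly_bytes / GB:.0f} GB (~{yearly_bytes / (1_000 * GB):.1f} TB)")
```

At rates like these, the office disk drives of the early 2000s, measured in tens of gigabytes, would indeed have strained within weeks.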
Gordon Bell is now the most documented human being in history. As Bell claims in his book, Total Recall, the memory wake he will leave behind is the largest ever.20 And yet to see Bell in person—he is a septuagenarian now—is to realize that he looks no more overburdened by his electronic appliances than the other Silicon Valley men and women sitting around him in the Stanford University coffee shop.
In a world where nearly 700 million people constantly record the fine details of their lives on Facebook, it is easy to see Gordon Bell as a pioneer of a new way of living, in which all of one’s experiences live forever as a swath of artificial memory. It has been claimed that he is a kind of Kurzweil scout, cutting the path for the rest of us to the Singularity.
The self as memory, and soon, memory as self … it is a perfect ending to our story of human memory.
Maybe too perfect.
It may just be that Gordon Bell’s destiny is not to live out Ray Kurzweil’s vision, but his own. As alluring as Kurzweil’s vision can be—and he has millions of ardent believers and even founded a Singularity University on the site of the old Moffett Naval Air Station in Mountain View, California (almost next door to Bell’s Computer History Museum)—it has not escaped considerable criticism as being as much wishful thinking as technological imperative. Kurzweil may simply be our Gilgamesh—a proud, accomplished man who dreams of immortality because (like billions before him) he doesn’t think he deserves to die.
As for the immortality offered by Kurzweil and his notion of a Singularity, you can be sure that, should it suddenly seem imminent, there will be no shortage of volunteers: transhumanists, the mortally ill, and the adventurous. In light of this, it is hard not to be haunted by the heroic Seattle dentist Barney Clark, the man who volunteered in 1982 to be implanted with the first artificial (Jarvik) heart and who, after 112 days of confusion, misery, and the unrelenting clicking of the heart valves in his chest and head, pleaded to be allowed to die. What if the first person to wake up in a computer’s memory begs to be erased?
Meanwhile, as Gordon Bell has always claimed for MyLifeBits and his relationship to memory, it is not a revolution but an evolution; not an earthshaking transformation in the relationship between mankind and its machines but the more personal one between an individual human being and his or her memories, the gift (or curse) of never forgetting anything.
As Bell has said of the experience, “It gives you kind of a feeling of cleanliness. I can offload my memory. I feel much freer about remembering something now. I’ve got this machine, this slave, that does it.”21
Bell’s small, tempered vision of a remembered life may not be as sweeping and apocalyptic as Kurzweil’s Singularity, but it doesn’t come without its own problems. The least of these is that, unlike Bell himself, most of us live pretty uneventful lives—and so terabytes upon terabytes of our life memories would probably be the worst imaginable “slides from our summer vacation” hell for others. Will even our descendants want to sift through all of this detritus of a boring life? And will there be search engines to dig up the few nuggets of good stuff?
Bell’s vision is also based on the assumption that we want to remember everything. But many people live happy and fulfilling lives only because they have managed to forget certain events in their past. Even a search to find and erase those memories in one’s own MyLifeBits archive might be devastatingly traumatic.
Meanwhile, as Gordon Bell knows as well as anyone, there is a certain phenomenon in the computer industry called the legacy problem. It is that as the years pass, computer lines tend to become less innovative because they have to pull along the burden of old programs and their loyal customers. That’s precisely what happened to IBM with its 360/370 mainframe computers—and why Bell’s own VAX minicomputer was so successful. Do we really want to drag along behind us all of the chains of the past like Jacob Marley? Or will the “clean” feeling of offloading that past into a computer be enough?
Still, there is something in Bell’s vision, a thread that reaches back through the history of mankind to a dream even older than that of immortality. It is for one’s brief time on this Earth to have meaning, for it to echo down through history if only as the faintest memory. It is the oldest human voice on earth whispering, Don’t forget me.
MEMORY LOSS
The Rosicrucian Egyptian Museum, constructed to resemble the ancient Temple of Amon at Karnak, has stood in San Jose, California, for seventy-five years. It is a mile from the warehouse where Rey Johnson and his team built the first disk drive, three miles in different directions from where Al Shugart led the creation of the minidisk drive and Steve Jobs and Apple prototyped the iPod. Another couple of miles and you reach the laboratory (now a retail store) where Bob Noyce and the rest of the Traitorous Eight invented the integrated circuit. Head from there a couple of miles toward San Francisco Bay and you reach the site of Fairchild, where Gordon Moore devised his Law, and a quick hop across the freeway from there and you arrive at Ray Kurzweil’s Singularity University. Like memory itself, most of these places and events are long gone, yet because we remember them, they remain in the present.
The museum is run by the Ancient and Mystical Order Rosae Crucis—the Rose and the Cross—Rosicrucians. The AMORC, though founded in 1915, claims roots dating back to ancient Egypt. Claiming among its past members Francis Bacon, René Descartes, and other figures who have appeared in this story of memory, the Rosicrucians are yet another surviving branch of the occult/secret-knowledge/syncretic belief system that we have seen wax and wane over the millennia.
These days, other than making for some entertaining conspiracy theories, the Rosicrucians, like other mystical groups, have retreated to tiny, self-nurturing communities waiting for the world to turn once again. They are a reminder that to be remembered is to endure.
In one of the exhibit cases at the museum, amidst the mummies, canopic jars, and exquisite lapis lazuli jewelry, is a small coffin bearing the large-eyed, high-cheekboned face of a young girl. This coffin is estimated to be almost 2,600 years old—and from its hieroglyphics can be deciphered a name: Ta’awa.
Other than that she came from a wealthy family, her name—and, in fact, just her nickname—is all that we know, and will likely ever know, about Ta’awa. But that is enough. She is still remembered more than 2,500 years later, in a world she could not have imagined. And her name has survived because it was written on a board inside a buried coffin that, though slightly browned, probably looks as fresh as the day it was made. So, too, do the names on the papyrus fragments in the display cases nearby.
So Ta’awa, despite her brief life, has found her own form of immortality. So has King Gilgamesh, who lives on twelve clay tablets, baked to stone, that rest in the British Museum. Fifty miles away, in Oxford, in the dark old Duke Humfrey’s Library at the Bodleian, the great bestiary, Bodley 764, slowly breathes its way through the centuries, its vellum pages still supple, the colors and gilding of its paintings bright and new.
Israeli-born Arik Paran, who lives in nearby Sunnyvale, knows the Egyptian Museum well. When his three boys were young, a tour of the Rosicrucian Museum was an annual elementary-school field trip. And like every other visitor, even though he had seen the comparable antiquities of his own country, Paran was astounded by the sheer age of the artifacts on view.
These days he appreciates their durability more than ever. An engineer by training, Paran caught the entrepreneurial bug a few years ago, quit his job, and founded his own company in San Francisco, Digital Pickle, dedicated to restoring old audio and video recordings, as well as computer files, and to converting obsolete memory media to state-of-the-art new forms. It was a nice little business: private and corporate customers would bring in old videocassettes, floppy disks, 8-mm films, professional videotapes, microcassettes, and all sorts of other once-popular memory storage media. Paran had the equipment in house to capture the stored data, “sweeten” it by adjusting the contrast, heightening the color, pulling voices up through the tape hiss, and merging scores of small files from multiple floppies … and then burning the results onto single DVDs.
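One humble piece of that workflow, sweeping the scores of small files scattered across many floppies into a single folder sized for one DVD, can be sketched in a few lines of Python. This is a hypothetical illustration, not Digital Pickle’s actual tooling; the mount points and folder names are invented:

```python
# Minimal sketch: gather the files from several mounted floppy images into one
# staging folder that will be burned to a single DVD. Paths are hypothetical.
import shutil
from pathlib import Path

FLOPPY_MOUNTS = [Path("/media/floppy01"), Path("/media/floppy02")]  # assumed mounts
STAGING = Path("dvd_master")  # folder destined for one DVD

copied = 0
total_bytes = 0
for disk in FLOPPY_MOUNTS:
    for src in disk.rglob("*"):
        if src.is_file():
            # Keep each disk's files in their own subfolder so names don't collide.
            dest = STAGING / disk.name / src.relative_to(disk)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 also preserves timestamps
            copied += 1
            total_bytes += src.stat().st_size

print(f"Staged {copied} files ({total_bytes / 1e6:.1f} MB) for a single DVD.")
```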
It didn’t take long for Paran to realize just how poor the quality of these recordings was. Videocassettes created and sold just a decade ago had already begun to bleach out; voices and songs on cassettes had begun to fade away; and the Mylar plastic in the floppy disks had begun to crack or drop bits of memory. But most distressing was working with the big one-inch and two-inch videotape, the kind used in studios to record important, professional-quality videos. Sometimes Paran or one of his staff would thread the tape into its player and see only static on the screen. They would jump to shut down the player, but inevitably some of the images would be lost forever. And opening the player, Arik would check the read-write head and find that the old magnetic surface layer on the tape had literally peeled off, like old paint attacked by a scraper. Sometimes the rest of the surface could be re-affixed to the tape, but just as often the tape couldn’t be salvaged. He tried not to think about what had been lost.
LOST SOULS
For two thousand years after parchment and rag paper became the artificial memory of choice for China, the Middle East, Europe, and then the rest of the world, scarcity, not preservation, was the biggest challenge. Once a book was written or printed and put on a library shelf, its life expectancy was measured in centuries.
But all of that ended around the time of Herman Hollerith and his punched cards. Paper tape, turned to fragile lace by thousands of punched holes, wasn’t designed to last more than a few weeks—just long enough to transfer its data. Early celluloid film stock, as we all know, was terribly volatile, even explosive. At least half of all silent films—by most estimates far more—are lost forever, which is why the discovery in South America of missing reels of Metropolis or rumors of the rediscovery of London After Midnight reverberate around the world. The few pioneering Edison films survive mostly because the company deposited paper prints of their frames for copyright.
Meanwhile, even early films from the sound era have begun to fade, especially those that used experimental color techniques. The most popular solution—creating a “master” that can be carefully preserved—merely shifts the problem to the danger of having only one of a kind, as the infamous 1960s MGM movie vault fire, which destroyed hundreds of films, shows.
Printing, traditionally the most reliably durable form of mass artificial memory, went through its own transformation, too. With the number of avid readers growing into the millions, books, like newspapers before them, shifted to a lower-cost production medium: wood pulp paper. Cheap and abundant, pulp paper fueled the middlebrow home library boom of the first half of the twentieth century and the paperback boom of the second half. But pulp paper is highly vulnerable to heat and light, as any owner of a browned, crumbling old paperback or newspaper knows.
Magnetic memory, when it appeared, was hailed not just for its breakthrough in capacity but also for its durability. After all, audiotape and videotape were made from the newest space-age materials, which seemed infinitely tougher than anything that had come before. As for hard-disk drives, they were milled out of solid metal and coated with rust—what could be more elemental? Well, actually, silicon semiconductors.
These memory devices were thought to be all but immortal, the first truly worthy replacement for the book. A few decades later, we know better. What is easily erased can usually be easily erased forever. Read-write heads skimming at breakneck speed over the surface of a disk can lose direction and auger in, gouging the oxide like a plow. Accidentally (or purposefully) create a powerful magnetic field and all of those little molecules will align in a different way and forget everything they knew. Silicon chips are tougher, but the lead frames in which they are mounted aren’t. And leave a motherboard stuffed with memory chips running for too long with insufficient cooling and those chips will burn themselves out one by one. Did I mention electrical surges?
And then, of course, there is the potential for catastrophic loss. What if the sun gets testy and the Earth undergoes a solar storm like the one it endured in 1859—the so-called Carrington Event? In that largely pre-electric world, the sun emitted enough charged particles to cover the planet with auroras bright enough to read by and to cause telegraph equipment to burst into flames.22
If the Earth were bombarded in the same way now, the storm could erase much of the memory that exists in magnetic storage—not to mention knock out the power grid and damage the batteries, electric motors, and integrated circuits that depend on it. So, even if your disk survives, it may take you a long time to find something with which to access it. And just imagine—post-Singularity—if you were living inside one of those computers …
Meanwhile, as all of your magnetic memory erases itself under this onslaught, the world’s books, quietly resting on shelves in private dens and public libraries (at least those that survived de-accessioning and pulping to make room for computers), will be undisturbed other than by the flickering lights and the angry shouts of hobos doing online gambling.
Of course, there are always lasers. CDs and DVDs were initially promoted as lasting almost forever. All of us know from experience that this isn’t true; in fact, a scratch in the wrong place on a CD is often more catastrophic than one on the LP record it was designed to replace. Still, the CD and DVD are more resistant to magnets because they operate from a spiral of up to a couple of billion pits just beneath their polished surface. A low-power laser reads those pits and converts them back into sound or video. But heat, light, and various forms of radiation still affect these various versions of the optical disk, and while manufacturers claim that these disks—read-only and read-write, standard-density or Blu-ray—should last as long as fifty years, more objective testers put their life expectancy at half that … and some CD-Rs have degraded after less than two years.
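The read-out itself can be caricatured in a few lines. On the physical layer of a disc, each edge between a pit and the surrounding land is read as a 1, and the flat stretches in between as 0s; the real formats layer modulation and heavy error correction on top, all of which this toy sketch omits:

```python
# Toy illustration of optical read-out: a track of pit/land intervals becomes a
# stream of channel bits, with each pit-to-land (or land-to-pit) edge read as a 1.
# Real CDs and DVDs add EFM-style modulation and error correction on top.

def read_track(depths: list[int]) -> list[int]:
    """depths: 1 = pit, 0 = land, one entry per clock interval."""
    bits = []
    for prev, curr in zip(depths, depths[1:]):
        bits.append(1 if curr != prev else 0)  # an edge is a 1, no edge is a 0
    return bits

# A short stretch of track: land for one interval, a pit three intervals long,
# land for three, then a pit four intervals long.
track = [0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1]
print(read_track(track))  # [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
```

A scratch or a patch of degraded dye blurs those edges, and with them the bits they carried.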
Asks Christopher Mims of Technology Review:
It’s tempting to believe that we live in a special time—this is the root of all apocalyptic thinking—but it’s hard to compare even today’s menaces to the rise of the Third Reich, the fall of the Roman Empire or the Black Death. At least not yet.
But supposing something were to happen, as it does every day in parts of war-torn sub-Saharan Africa—some cascade of environmental and political disasters leading to armed conflict or resource starvation. What happens when all those data centers, housing all that knowledge we digitized without a second thought, go dark?23
If the story of memory teaches us anything, it is that if you wait long enough, the worst will happen. The worst-case scenarios for humanity that Mims presents all happened within just the last two thousand years of mankind’s 200,000-year history. And only a fool believes they won’t happen again; only an idiot doesn’t prepare for their arrival.
It is often said that civilization depends upon each generation assuming its responsibilities and then passing them on to the next. Whether we realize it or not, memory, at least in the digital age, appears to require the same commitment. We are like the Romans, enjoying the new communications revolution wrought by papyrus—but also recognizing that we must constantly copy and update our fragile scrolls or lose them forever. Already, some of the early months of the World Wide Web are gone for good because no one made screen grabs and copies.
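In practice, that copying discipline is unglamorous: keep more than one copy, and periodically check every file against a recorded checksum so that silent decay is caught while a good copy still exists. A minimal, hypothetical sketch of such a fixity check (the archive path and manifest name are assumptions):

```python
# Minimal fixity-check sketch: record SHA-256 checksums for an archive, then
# re-verify them on later runs so silently corrupted or missing files are
# caught while another good copy still exists. Paths are assumptions.
import hashlib
import json
from pathlib import Path

ARCHIVE = Path("family_archive")        # the directory being preserved
MANIFEST = ARCHIVE / "checksums.json"   # where the known-good hashes live

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot() -> dict[str, str]:
    return {
        str(p.relative_to(ARCHIVE)): sha256(p)
        for p in ARCHIVE.rglob("*")
        if p.is_file() and p != MANIFEST
    }

ARCHIVE.mkdir(exist_ok=True)
if not MANIFEST.exists():
    MANIFEST.write_text(json.dumps(snapshot(), indent=2))  # first run: record the baseline
else:
    known = json.loads(MANIFEST.read_text())
    current = snapshot()
    for name, digest in known.items():
        if current.get(name) != digest:
            print(f"WARNING: {name} is missing or changed -- restore it from a second copy")
```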
As exciting or terrifying as the idea of memory implants, life recording, and the Singularity may be, none of them will ever take place if this fourth scenario—forgetting—arrives first. And if it does, we may get a closer look at the eighth century than we’d like.
We may never want the kind of immortality that requires becoming one with our computers. But for the first time in history, we have the chance to have the memories of all of our lives live on indefinitely after us, to leave wakes in time as great as those once made only by kings. But it will only happen if we don’t forget to remember, to protect the record of our time in this world, and most of all, to find new, more enduring ways to preserve our memories.
Memory is the guardian of all things. But in the end, we are the guardians of memory.