FIVE
THE OPPORTUNITY OF ALL TIME
Whence things have their origin,
Thence also their destruction happens,
As is the order of things;
For they execute the sentence upon one another
— The condemnation for the crime —
In conformity with the ordinance of Time.
— Anaximander81
ANAXIMANDER’S QUOTE MIGHT HAVE been meant for today: our world is changing rapidly, with our future in the balance.
Our global population has grown to seven billion and continues to rise. We are eating away at our supplies of energy, water, fertile land, minerals. We are spoiling our environment and driving species extinct every day. We are caught up in financial and political crises entirely of our own creation. Sometimes it feels as if the technological progress on which we have built our lives and our societies is just leading us towards disaster. There is an overwhelming sense that we are running out of time.
Our personal capabilities have never been greater. Many of us can now communicate instantly with collaborators, friends, and family around the globe. This ability has powered new democratic movements, like those of the Arab Spring, and has allowed the assembly of great stores of collectively curated information, like Wikipedia. It is driving global scientific collaborations and opening online access to quality educational materials and lectures to people everywhere.
But the internet, with all of its attractions, is also profoundly dehumanizing. Increasingly we are glued to our computers and smartphones, building our social and professional lives around email, social media, blogs, or tweets. Overload of digital information turns us into automata, workaholics, passive consumers. Its harsh physical form stresses us and creates a mismatch between our own human nature and the manner in which we are being forced to communicate. Our analog nature is being compressed into a digital stream. Not so surprising then that, as the comedian Louis C. K. recently put it, “Everything is amazing right now and nobody’s happy.”82
Massive economic shifts are also taking place. Past political paradigms are becoming irrelevant, with Western governments intervening to shore up their financial systems, and China overseeing the world’s greatest market-driven economic boom. Information is the new oil, and knowledge-based companies like Google, Amazon, and Facebook are replacing manufacturing industries in many developed Western countries. Instead of the old worker–owner division, Western society is developing new fractures: between an economically active elite and a marginalized remainder.
Short-term thinking is endemic, as is natural when things are moving fast. It is as if we are driving a speeding car through a fog, swerving to avoid potholes, roadblocks, or oncoming vehicles, anxiously anticipating the dangers with no power to predict them. Politicians tend to think no further than the next election, scientists no further than the next grant.
In this chapter, I want to talk about the future of this world of ours. The coming century will see our lives, and those of our children, transformed. What happens will depend on the decisions we take and on the discoveries we make. I won’t make any forecasts. Nor will I try to outline a plan for our survival. That is a pragmatic task requiring the skills and dedication of many people.
Instead, I want to try to step back from all the anxieties and the immediate issues of today and address something more basic and long-term, namely our own human character: how our ideas regarding our place in the universe may develop, and how our very nature may change. Speaking about the future makes us nervous. Einstein said, “I never think of the future; it comes soon enough.” We do not really know who we are or what we are capable of. I feel like a diver standing on the edge of a tall cliff, looking over the precipice and peering through the fog below. Is there a beautiful, cool ocean waiting for me, or only jagged rocks? I don’t know. Nevertheless, I will take the plunge.
As I will explain, scientific advances we can now envisage may bring us, as living, self-conscious beings, much closer to physical reality. The separation of our ideas from our nature, of science from society, of our intellect from our feelings, and of ourselves from the universe may diminish. Not only might we see the universe more clearly, but we may come to know it more deeply. And in time, that knowledge will change who we are. This is an extraordinary prospect, which I hope will encourage us to see a more inspirational future.
THINKING ABOUT THE UNIVERSE might seem like escapism, or a luxury: how will it solve the problem of world hunger, or carbon emissions, or the national debt? But throughout history, from Anaximander and Pythagoras to Galileo and Newton, the universe has been an endless source of wonder, inspiring us to rise above our current circumstances and see what lies beyond. That basic urge continues today, driving the creation of the most powerful microscope ever built — the Large Hadron Collider — and the most powerful telescope ever flown — the Planck satellite. It has resulted in a working mathematical model of all the forces and particles in nature, tested to precision from length scales well below the size of an atomic nucleus up to the entire visible universe. We understand the broad features of the evolution of the cosmos, from its first microseconds up to the spectacular present, where we see hundreds of billions of galaxies stretching across space. The Higgs particle — a manifestation of the mechanism through which matter particles and forces acquire their distinctive characters — has just been discovered, one of physics’ crowning achievements.
Discoveries as basic as this one can take a long time for their full impact to be felt. But the more basic they are, the more profound the impact. Quantum physics was formulated in the 1920s, but it was not until the 1960s that its implications for the nature of our reality began to be more fully appreciated. We think of and discuss the world as if it were an arena filled with definite things, whose state changes from one moment to the next. This is the picture of the classical universe, as developed by Newton, Maxwell, and Einstein, evolving according to deterministic physical laws.
Quantum theory makes predictions that are inconsistent with this picture, and experiment shows them to be right. According to quantum theory, the world is constantly exploring all of its possible classical states all of the time, and is only appearing to us as any one of them with some probability. The conceptual machinery that underlies this view of quantum reality involves strange mathematical concepts like the square root of minus one, for which we have little intuition. And only now are the technological implications of these basic discoveries becoming apparent.
At the same time, fundamental research continues to identify new avenues for expanding the boundaries of our knowledge. As successful and far-reaching as our modern picture of the universe is, our description completely fails at the critical event — the big bang singularity — from which everything we now see around us emerged. Our current understanding likewise offers little explanation for the universe’s strange future. The energy of empty space — the vacuum energy, which is itself controlled by quantum effects — has taken over as the dominant form of energy in the universe. In the coming tens of billions of years, its repulsive gravitational force will speed up the expansion of the universe and carry all the galaxies we now see out of our view. As Anaximander said, our world is transitory, and the physical forces that enabled its emergence are now in the process of taking it away.
The theories of the twentieth century are struggling to tackle these problems — of the emergence of the universe and of its ultimate fate. String theory is the leading contender for a “theory of everything,” possessing exciting mathematical properties that suggest it might include every known force and particle. But string theory comes along with tiny extra dimensions of space, so small they are invisible, whose form fixes the pattern of forces and particles we should see. Unfortunately, the theory does not make any definite prediction for the form of the extra dimensions, and with our present understanding, the number of possible configurations seems almost uncountable. For each one of these configurations, the universe consisting of the visible dimensions and the particles and forces within them would appear very different. String theory therefore seems to predict a “multiverse” rather than a universe. Instead of being a theory of everything, it is more like a theory of anything.
String theory’s lack of a definite prediction for the vacuum energy, combined with the puzzling observation that the vacuum energy takes a tiny positive value, has encouraged many scientists to embrace what seems to many of us like an unscientific explanation: that every one of these universes is possible, but the one we find ourselves in is the only one that actually allows us to exist. Sadly, this idea is at best a rationalization. It is hard to imagine a less elegant or convincing explanation of our own beautiful world than to invent a near-infinite number of unobservable worlds and to say that, for some reason we cannot understand or quantify, ours was “chosen” to exist from among them.
Most string theorists have likewise avoided the problem of the big bang singularity, although every one of their hypothesized worlds possesses such a starting point. Typically, they are content to assume the universe sprang into existence in one of the plethora of allowed forms, just after the singularity, and to discuss its evolution from there. So indeed, from the most widely accepted viewpoints, the beginning and the end of the universe seem to be brick walls beyond which physics cannot go.
The puzzles of the beginning and the future of the universe are, in my view, the critical clues which may help us rise above current paradigms and find a better vantage point. As I discussed in Chapter Three, we do have ways of conceptually taking the universe into the quantum domain, and these now suggest a very different picture, in which we may traverse the big bang singularity to a universe before it and likewise pass beyond our vacuous future into the next big bang to come. If this suggestion is correct, the implication is that there was no beginning of time nor will there be an end: the universe is eternal, into the past and into the future.
· · ·
OUR SOCIETY HAS REACHED a critical moment. Our capacity to access information has grown to the point where we are in danger of overwhelming our capabilities to process it. The exponential growth in the power of our computers and networks, while opening vast opportunities, is outpacing our human abilities and altering our forms of communication in ways that alienate us from each other. We are being deluged with information through electrical signals and radio waves, reduced to a digital, super-literal form that can be reproduced and redistributed at almost no cost. The technology makes no distinction between value and junk. The abundance and availability of free digital information is dazzling and distracting. It removes us from our own nature as complex, unpredictable, passionate people.
The “ultraviolet catastrophe” that physics encountered at the end of the nineteenth century serves as a metaphor for physics today, as I have already suggested, and also for our broader crisis. Maxwell’s theory of electromagnetic radiation and light was a triumph, the most beautiful and powerful application of mathematics to the description of reality yet seen. Yet it implied that there were waves of all wavelengths, from zero to infinity. In any realistic context, where heat and electromagnetic energy are constantly being exchanged between objects, this feature of Maxwell’s theory leads to disaster: any hot object, or any electron in orbit around an atom, can radiate electromagnetic waves at an unlimited rate, producing a catastrophic instability of the world.
Planck was forced to tackle this problem by taking a step back from a literal, classical world as envisaged by Newton, Maxwell, and Einstein. Ultimately, we had to give up the idea of a definite reality comprising a geometrical arena — spacetime — inhabited by entities in the form of particles and waves. We had to give up any notion of being able to picture things as they really are, or of being able (even in principle) to measure and predict everything there is to know. These ideas had to be replaced with a more abstract, all-encompassing theory, which reduced our capacity to “know” or “visualize” reality, while giving us a powerful new means of describing and predicting nature.
In the same way, I believe we now need to step back from the overwhelming nature of our “digital age.” One can already see a tendency among many people to “surf” across the ocean of information on the internet, a habit that seems to be replacing the desire for a deeper or more rounded understanding of any subject. Mastery seems unfeasible in a world awash with information. However, higher-level thinking is needed now more than ever. We need to develop more refined skills of awareness and judgement to help us filter, select, and identify opportunities. Collaboration will increasingly be the name of the game as people around the world work, share ideas, write books, and even construct mathematical proofs together.
Viewed in this light, our modes of education at school and university seem terribly outmoded. Young people don’t need to memorize known facts any more — they are all readily accessible on the internet. The skills they need most are to think for themselves, to choose what to learn, to develop ideas and share them with others. How to see the big picture, how to find just what they need in an ocean of knowledge, how to collaborate and how to dig deep in an entirely new direction.
It seems to me we need to create a modern version of the ancient Greek philosophers’ fora or Scotland’s educational system in the late eighteenth century, where the principles and foundations of knowledge were questioned and debated, and where creativity, originality, and humility before the truth were the most highly prized qualities in a student.
Our society has been shaped by physics’ past discoveries to an extent that is seldom appreciated. The mechanical world of Newton led to mechanical ways of learning, as well as to the modern industrial age. We are all very aware of how the digital revolution is transforming our lives: computers are filling our schools and offices, replacing factory workers, miners, and farmers. They are changing the way we work, learn, live, and think. Where did this new technology come from? It came, once again, from our capacity to understand, invent, and create: from the Universe Within.
THE STORY OF HOW physics created the information age begins at the turn of the twentieth century, when electricity was becoming the lifeblood of modern society. There was galloping demand to move current around — quickly, predictably, safely — in light bulbs, radios, telegraphs, and telephones. Joseph John (J. J.) Thomson’s discovery of the electron in 1897 had explained the nature of electricity and launched the development of vacuum tubes.
For most of the twentieth century, amplifying vacuum tubes were essential components of radios, telephone equipment, and many other electrical devices. They consist of a sealed glass tube with a metal filament inside that releases lots of electrons when it is heated up. The negatively charged electrons stream towards a positively charged metal plate at the other end of the tube, carrying the electrical current. This simple arrangement, called a “diode,” allows current to flow only one way. In more complicated arrangements, one or more electrical grids are inserted between the heated filament (the cathode) and the plate (the anode). By varying the voltage on the grids, the flow of electrons can be controlled: if things are arranged carefully, tiny changes in the grid voltage result in large changes in the current. This is an amplifier: it is like controlling the flow of water from a tap. Gently twiddling the tap back and forth leads to big changes in the flow of water.
Vacuum tubes were used everywhere — in radios, in telephone and telegraph exchanges, in televisions and the first computers. However, they have many limitations. They are large and have to be warmed up. They use lots of power, and they run hot. Made of glass, they are heavy, fragile, and expensive to manufacture. They are also noisy, creating a background “hum” of electrical noise in any device using them.
In Chapter One, I described the Scottish Enlightenment and how it led to a flowering of education, literature, and science in Scotland. James Clerk Maxwell was one of the products of this period, as were the famous engineers James Watt, William Murdoch, and Thomas Telford; the mathematical physicists Peter Guthrie Tait and William Thomson (Lord Kelvin); and the writer Sir Walter Scott. Another was Alexander Graham Bell, who followed Maxwell to Edinburgh University before emigrating to Canada, where he invented the telephone in Brantford, Ontario — and in so doing, launched global telecommunications.
Bell believed in the profound importance of scientific research, and just as his company was taking off in the 1880s, he founded a research laboratory. Eventually christened Bell Labs, this evolved into the research and development wing of the U.S. telecommunications company AT&T, becoming one of the most successful physics centres of all time, with its scientists winning no fewer than seven Nobel Prizes.83
At Bell Labs, the scientists were given enormous freedom, with no teaching duties, and were challenged to do exceptional science. They were led by a visionary, Mervin Kelly, who framed Bell Labs as an “institute for creative technology,” housing physicists, engineers, chemists, and mathematicians together and allowing them to pursue investigations “sometimes without concrete goals, for years on end.”84 Their discoveries ranged from the basic theory of information and communication and the first cellular telephones to the first detection of the radiation from the big bang; they invented lasers, computers, solar cells, CCDs, and the first quantum materials.
One of quantum theory’s successes was to explain why some materials conduct electricity while others do not. A solid material consists of atoms stacked together. Each atom consists of a cloud of negatively charged electrons orbiting a positively charged nucleus. The outermost electrons are farthest from the nucleus and the least tightly bound to it — in conducting materials like metals, they are free to wander around. Like the molecules of air in a room, the free electrons bounce around continuously inside a piece of metal. If you connect a battery across the metal, the free electrons drift through it in one direction, forming an electrical current. In insulating materials, there are no free electrons, and no electrical currents can flow.
Shortly after the Second World War, Kelly formed a research group in solid state physics, under William Shockley. Their goal was to develop a cheaper alternative to vacuum tubes, using semiconductors — materials that are poor conductors of electricity. Semiconductors were already being used, for example, in “point-contact” electrical diodes, where a thin needle of metal, called a “cat’s whisker,” was placed in contact with a piece of semiconductor crystal (usually galena, a crystal of lead sulphide). At certain special points on the surface, the contact acts like a diode, allowing current to flow only one way. Early “crystal” radio sets used these diodes to convert “amplitude modulated” AM radio signals into DC currents, which then drove a headset or earphone. In the 1930s, Bell scientists explored using crystal diodes for very high frequency telephone communications.
During the war, lots of effort had gone into purifying semiconductors like germanium and silicon, on the theory that removing impurities would reduce the electrical noise.85 But it was eventually realized that the magic spots where the crystal diode effect works best correspond to impurities in the material. This was a key insight — that controlling the impurities is the secret to the fine control of electrical current.
Just after the war, Shockley had tried to build a semiconductor transistor, but had failed. When Kelly asked Shockley to lead the Solid State Physics group, he placed the theorist John Bardeen and the experimentalist Walter Brattain under his supervision. The two then attempted to develop the “point-contact” idea, using two gold contacts on a piece of germanium which had been “doped” — seeded with a very low concentration of impurities to allow charge to flow through the crystal.
They were confounded by surface effects, which they initially overcame only through the drastic step of immersing the transistor in water, hardly ideal for an electrical device. After two years’ work, their breakthrough came in the “miracle month” of November–December 1947, when they wrapped a ribbon of gold foil around a plastic triangle and sliced the ribbon through one of the triangle’s points. They then pushed the gold-wrapped tip into the germanium to enable a flow of current through the bulk of the semiconductor. A voltage applied to one of the two gold contacts was then found to amplify the electric current flowing from the other contact into the germanium, like a tap being twiddled to control the flow of water.86
Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for their discovery of the transistor, which launched the modern electronics age. Their “point contact” transistor was quickly superseded by “junction” transistors, eventually to be made from silicon. Soon after, the team split up. Bardeen left for the University of Illinois, where he later won a second Nobel Prize. Shockley moved out to California, where he founded Shockley Semiconductor. He recruited eight talented young co-workers who, after falling out with him, left to found Fairchild Semiconductor and, later, Intel, thereby launching Silicon Valley.
Transistors can control the flow of electricity intricately, accurately, and dependably. They are cheap to manufacture and have become easier and easier to miniaturize. Indeed, to date, making computers faster and more powerful has almost entirely been a matter of packing more and more transistors onto a single microprocessor chip.
For the past forty years, the number of transistors that can be packed onto a one-square-centimetre chip has doubled every two years — an effect known as Moore’s law, which is the basis for the information and communication industry’s explosive growth. There are now billions of transistors in a typical smartphone or computer CPU. But there are also fundamental limits, set by the size of the atom and by Heisenberg’s uncertainty principle. Extrapolating Moore’s law, transistors will hit these ultimate limits one or two decades from now.
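To get a feel for how soon those limits arrive, here is a minimal sketch in Python; the starting figures are assumed, round numbers chosen only to illustrate the doubling, not measurements of any real chip.

```python
# A rough Moore's-law extrapolation. All numbers are illustrative
# assumptions: a billion transistors on today's chip, features about
# 30 nanometres across, and a practical limit of roughly 1 nanometre,
# only a few atoms wide, where quantum uncertainty takes over.
transistors = 1e9
feature_nm = 30.0
limit_nm = 1.0

years = 0
while feature_nm > limit_nm:
    years += 2                # one Moore's-law doubling every two years
    transistors *= 2
    feature_nm /= 2 ** 0.5    # area halves, so linear size shrinks by sqrt(2)

print(f"Roughly {years} years to the atomic limit")   # about two decades
```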
In modern computers, information consists of strings of 0s and 1s stored in a pattern of electrical charges or currents or magnetized states of matter, and then processed via electrical signals according to the computer program’s instructions. Typically, billions of operations are performed per second upon billions of memory elements. It is crucial to the computer’s operation that the 0s and 1s are stored and changed accurately and not in unpredictable ways.
The problem is that the moving parts of a computer’s memory — in particular, the electrons — are not easy to hold still. Heisenberg’s uncertainty principle says that if we fix an electron’s position, its velocity becomes uncertain and we cannot predict where it will move next. If we fix its velocity, and therefore the electrical current it carries, its position becomes uncertain and we don’t know where it is. This problem becomes unimportant when large numbers of electrons are involved, because to operate a device one only needs the average charge or current, and for many electrons these can be predicted with great accuracy. However, when circuits get so tiny that only a few electrons are involved in any process, then their quantum, unpredictable nature becomes the main source of error, or “noise,” in the computer’s operations. Today’s computers typically store one bit of data in about a million atoms and electrons, although scientists at IBM Labs have made a twelve-atom bit register called “atomic-scale memory.”87
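In symbols, the trade-off described above is Heisenberg’s famous inequality; a short aside in standard notation for readers who want it:

```latex
% Heisenberg's uncertainty principle: the uncertainties in a particle's
% position (\Delta x) and momentum (\Delta p) cannot both be made
% arbitrarily small; their product is bounded below by Planck's constant.
\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\qquad \hbar = \frac{h}{2\pi}.
\]
% Fixing an electron's position (small \Delta x) forces a large spread
% in its momentum, and hence in the current it carries, and vice versa.
```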
QUANTUM UNCERTAINTY IS THE modern version of the impurities in semiconductors. Initially impurities were seen as a nuisance, and large sums of money were spent trying to clean them away, before it was realized that the ability to manipulate and make use of them was the key to the development of cheap, reliable transistors. The same story is now repeating itself with “quantum uncertainty.” As far as classical computers are concerned, quantum uncertainty is an unremovable source of noise, and nothing but a nuisance. But once we understand how to use quantum uncertainty instead of trying to fight it, it opens entirely new horizons.
In 1984, I was a post-doctoral researcher at the University of California, Santa Barbara. It was announced that the great Richard Feynman was going to come and give a talk about quantum computers. Feynman was one of our heroes, and this was an opportunity to see him first-hand. Feynman’s talk focused on the question of whether there are ultimate limits to computation. Some scientists had speculated that each operation of a computer inevitably consumes a certain amount of energy, and that ultimately this would limit the size and power of any computer. Feynman’s interest was piqued by this challenge, and he came up with a design that overcame any such limit.
There were several aspects to his argument. One was the idea of a “reversible” computer that never erased (or overwrote) anything stored in its memory. It turns out that this is enough to overcome the energy limit. The other new idea was how to perform computations in truly quantum ways. I vividly remember him waving his arms (he was a great showman), explaining how the quantum processes ran forwards and backwards and gave you just what you needed and no more.
Feynman’s talk was entirely theoretical. He didn’t speak at all about building such a device. Nor did he give any specific examples of what a quantum computer would be able to do that a classical computer could not. His discussion of the theory was quite basic, and most of the ingredients could be found in any modern textbook. In fact, there was really no reason why all of this couldn’t have been said many decades ago. This is entirely characteristic of quantum theory: simply because it is so counterintuitive, new and unexpected implications are still being worked out today. Although he did not have any specific examples of the uses of a quantum computer, Feynman got people thinking just by raising the possibility. Gradually, more and more people started working on the idea.
In 1994, there came a “bolt from the blue.” U.S. mathematician Peter Shor, working at Bell Labs (perhaps unsurprisingly!), showed mathematically that a quantum computer would be able to find the prime factors of large numbers much faster than any known method on a classical computer. The result caused a shockwave, because the secure encryption of data (vital to the security systems of government, banks, and the internet) most commonly relies on the fact that it is very difficult to find the prime factors of large numbers. For example, if you write down a random 400-digit number (which might take you five minutes), then even with the best known algorithm and the most powerful conceivable classical computer, it would take longer than the age of the universe to discover the number’s prime factors. Shor’s work showed that a quantum computer could, in principle, perform the same task in a flash.
What makes a quantum computer so much more powerful than a classical one? A classical computer is an automatic information-processing machine. Information is stored in the computer’s memory and then read and manipulated according to pre-specified instructions — the program — also stored in the computer’s memory. The main difference between a classical and quantum computer is the way information is stored. In a classical computer, information is stored in a series of “bits,” each one of which can take just two values: either 0 or 1. The number of arrangements of the bits grows exponentially with the length of the string. So whereas there are only two arrangements for a single bit, there are four for two bits, eight for three, and around 10⁹⁰ ways (a one followed by ninety zeros) of arranging three hundred bits. You need five bits to encode a letter of the alphabet and about two million bits to encode all of the information in a book like this. Today, a typical laptop has a memory capacity measured in gigabytes, around ten billion bits (a byte is eight bits), with each gigabyte of memory capable of storing about four thousand books.
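The arithmetic in this paragraph is easy to check; here is a small Python sketch, with the book-length figure an assumed round number:

```python
# Checking the counting arguments above. The book length below is an
# assumed round figure, not a measurement of this particular book.
arrangements = 2 ** 300                 # ways of arranging 300 bits
print(f"2^300 has {len(str(arrangements))} digits")   # 91 digits: ~10^90

bits_per_letter = 5                     # 2^5 = 32, enough for 26 letters
letters_per_book = 400_000              # assumed length of a book like this
bits_per_book = bits_per_letter * letters_per_book    # two million bits

bits_per_gigabyte = 8e9                 # a byte is eight bits
print(f"{bits_per_gigabyte / bits_per_book:.0f} books per gigabyte")  # 4000
```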
A quantum computer works in an entirely different way. Its memory is composed of qubits, short for quantum bits. Qubits are somewhat like classical bits in that when you read them out, you get either 0 or 1. However, the resemblance ends there. According to quantum theory, the typical state for a qubit is to be in a superposition — a state consisting of 0 and 1 at the same time. The amount of 0 or 1 in the state indicates how probable it is to obtain 0 or 1 when the qubit is read.
The fact that the state of a qubit is specified by a continuous quantity — the proportion of 0 or 1 in the state — is a clue that it can store infinitely more information than a classical bit ever can.88 The situation gets even more interesting when you have more than one qubit and their states are entangled. This means that, unlike classical bits, qubits cannot be read independently: what you measure for one of them will influence what you measure for the other. For example, if two qubits are entangled, then the result you obtain when you measure one of them will completely determine the result you obtain if you measure the other. A collection of entangled qubits forms a whole that is very much greater than the sum of its parts.
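For readers who know a little notation, these two ideas, superposition and entanglement, can be written compactly (this notation appears nowhere else in the book, and is offered only as an aside):

```latex
% A single qubit is a weighted superposition of 0 and 1; the weights
% are continuous, and their squares give the probabilities of each
% readout:
\[
|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Two entangled qubits can form, for example, the state
\[
|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\,\bigl(|0\rangle|1\rangle + |1\rangle|0\rangle\bigr),
\]
% in which neither qubit has a definite value on its own, yet reading
% one of them completely determines what the other will show.
```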
Shor used these features to make prime-number factoring go fast. Classically, if you tried to find the prime factors of a large number,89 the brute force method would be to divide it by two as many times as you could, then three, then five, and so on, and keep going until no further divisions worked. However, what Shor realized, in essence, is that a quantum computer can perform all of these operations at the same time. Because the quantum state of the qubits in the computer simultaneously encodes many different classical states, the computations can all occur “in parallel,” dramatically speeding up the operation.
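The classical brute-force method is simple enough to write down in a few lines; this Python sketch shows the method the text describes (the quantum version, of course, has no such short classical rendering):

```python
def prime_factors(n: int) -> list[int]:
    """Brute-force factoring by trial division, as described above:
    divide by 2 as many times as possible, then 3, then 4, 5, ...
    (composite divisors never divide, so they pass harmlessly).
    The work grows exponentially with the number of digits in n,
    which is why 400-digit numbers are classically out of reach."""
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)     # whatever remains is itself prime
    return factors

print(prime_factors(2012))    # [2, 2, 503]
```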
Shor’s discovery launched a global race to build a quantum computer, using a wide range of quantum technologies: atomic and nuclear spins, the polarization states of light, the current-carrying states of superconducting rings, and many other incarnations of qubits. In recent years, the race has reached fever pitch. At the time of writing, researchers at IBM are claiming they are close to producing a “scalable” quantum computing technology.
What will this vast increase in our information-handling capabilities mean? It is striking to compare our situation today, with the vast libraries at our fingertips and far vaster ones to come, with that of the founders of the modern scientific age. In the Wren Library in Trinity College, Cambridge, Isaac Newton’s personal library consists of a few hundred books occupying a single bookcase. This was quite enough to allow him to found modern physics and mathematical science. A short walk away, in the main University Library, Charles Darwin’s personal library is also preserved. His entire collection of books occupies a ten-metre stretch of shelving. Again, for one of the most profound and original thinkers in the history of science, it is a minuscule collection.
Today, on your smartphone, you can access information resources vastly greater than any library. And according to Moore’s law, in a couple of decades your laptop will comfortably hold every single book that has ever been written. A laptop quantum computer will seem more like Jorge Luis Borges’s Library of Babel — a fantastical collection holding every possible ordering of letters and words in a book, and therefore every book that could ever be written. With a quantum library, one might instead be able to search for all possible interesting passages of text without anyone having had to compose them.
Some of the uses of quantum computers and quantum communication are easy to anticipate. Ensuring the security of information is one of them. The codes currently used to protect access to bank accounts, computer passwords, and credit card information rely on the fact that it is hard to find the prime factors of large numbers using a classical computer. However, as Peter Shor showed, quantum computers will be able to quickly find these factors, rendering current security protocols obsolete. Also, quantum information is inherently safer from attack than classical information, because it is protected by the fundamental laws of physics. Whereas reading out classical information does nothing to change it, according to quantum physics, the mere fact of observing a quantum system almost always changes its quantum state. Through this effect, eavesdropping or hacking into quantum information can be detected. Hence quantum information can be made invulnerable to spying in ways that would be classically impossible.
Quantum computers may also transform our capacities to process data in parallel, and this could enable systems with great social benefit. One proposal now being considered is to install highly sensitive biochemical quantum detectors in every home. In this way, the detailed medical condition of every one of us could be continuously monitored. The data would be transmitted to banks of computers which would process and screen it for any signs of risk. The results of any medical treatment or dietary change or any other intervention would be constantly gathered. With access to such vast amounts of data and information-processing power, medicine would be revolutionized. We would all be participants in medical trials, on a scale and with an accuracy and breadth greater than anything seen before.
But by far the greatest impact quantum computers will have is likely to be on ourselves.
· · ·
THE IDEA THAT OUR communication technologies change us was emphasized by the Canadian communications guru Marshall McLuhan. McLuhan’s 1964 book, Understanding Media: The Extensions of Man, kicked off a wave of interest in the uses of mass media in all forms, from pop music and television to major corporations. McLuhan’s writing is more poetic than analytical, but his basic insight was that the information content of all of these forms of mass media — from ads to games, cars, typewriters (remember, no PCs then!), books, telephones, newspapers, and so on — is less important than their physical form and their direct hold on our behaviour. He summed up this idea in his famous aphorism “The medium is the message.” Today, watching people wander around, eyes glued to smartphones, texting or emailing, in the grip of their gadgets and nearly oblivious to their surroundings, you can see what he meant.
McLuhan’s point was that media have been having this effect on us for millennia. If you think for two seconds, it is amazing, and faintly ridiculous, that the mere act of compressing, and so severely limiting, our ideas in writing — in the case of European languages, into words written in an alphabet of twenty-six letters — has proven to be such a powerful and society-dominating technology. Writing is a means of extracting ourselves from the world of our experience to focus, form, and communicate our ideas. The process of committing ourselves to texts — from the scriptures to textbooks, encyclopedias, novels, political pamphlets, laws, and contracts — and then allowing them to control our lives has had an enormous and undeniable effect on who we are. McLuhan argued that print altered our entire outlook, emphasizing our visual sense, thus influencing the fragmentation and specialization of knowledge, and fostering everything from individualism to bureaucracy to nationalistic wars, peptic ulcers, and pornography.
McLuhan saw every mass medium, whether print, photography, radio, or TV, in a similar way: as an extension of our own nervous system, dramatically altering our nature and hence our society. “We have never stopped drastically interfering with ourselves by every technology we could latch on to,” he said in “The Future of Man in the Electric Age.” “We have absolutely disrupted our lives over and over again.” 90
McLuhan accurately foresaw that electronic media would be combined with computers to spread information cheaply and instantly around the world, in a variety of forms. Thirty years before the internet was launched, he wrote: “The next medium, whatever it is — it may be the extension of consciousness — will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”91 Furthermore, McLuhan argued optimistically that we might regain the breadth of our senses which the printed word had diminished, restoring the preliterate “tribal balance” between all of our senses through a unified, “seamless web” of experience. As electronic communication connected us, the world would become a “global village” — another of McLuhan’s catchphrases.
McLuhan owed a pronounced intellectual debt to a visionary and mystic who came before him: Teilhard de Chardin. A Jesuit priest, a geologist, and a paleontologist who played a role in the discovery of Peking man, de Chardin took a very big-picture view of the universe and our place within it, a picture that encompassed and motivated some of McLuhan’s major insights. De Chardin also foresaw global communications and the internet, writing in the 1950s about “the extraordinary network of radio and television communication which already link us all in a sort of ‘etherised’ human consciousness,” and “those astonishing electronic computers which enhance the speed of thought and pave the way for a revolution in the speed of research.” This technology, he wrote, was creating a “nervous system for humanity,” a “stupendous thinking machine.” “The age of civilisation has ended,” he said, “and the age of one civilisation is beginning.”92
These ideas were an extension of de Chardin’s magnum opus, The Phenomenon of Man. He completed the manuscript in the late 1930s, but because of his heterodox views, his ecclesiastical order refused throughout his lifetime to permit him to publish any of his writings. So de Chardin’s books, and many collections of his essays, were only published after his death in 1955.
In spite of being a Catholic priest, de Chardin accepted Darwinian evolution as fact, and he built his futuristic vision around it. He saw the physical universe as in a state of constant evolution. Indeed, The Phenomenon of Man presents a “history of the universe” in terms that are surprisingly modern. De Chardin was probably influenced in this by another Jesuit priest, the founder of the hot big bang cosmology, Georges Lemaître.
De Chardin describes the emergence of complexity in the universe, from particles to atoms to molecules, to stars and planets, complex molecules, living cells, and consciousness, as a progressive “involution” of matter and energy, during which the universe becomes increasingly self-aware. Humans are self-aware and of fundamental significance to the whole. De Chardin quotes with approval Julian Huxley, who stated that “Man discovers that he is nothing else than evolution become conscious of itself.”93 Huxley was the grandson of T. H. Huxley, the biologist famously known as “Darwin’s bulldog” for his articulate defence of evolutionary theory in the nineteenth century. He was also one of the founders of the “modern evolutionary synthesis,” linking genetics to evolution. De Chardin took Huxley’s statement to a cosmic scale, envisioning that human society, confined to the Earth’s spherical surface, would become increasingly connected into what would be in effect a very large living cell. With its self-consciousness and its inventions, it would continue to evolve through non-biological means towards an ultimate state of universal awareness, which he called the “Omega Point.”
De Chardin’s arguments are vague, allusive, and (despite his claims) necessarily unscientific, since many key steps, such as the formation of cells and life, and the emergence of consciousness, are well beyond our scientific understanding, as, of course, is the future. His vision is nonetheless interesting for the way in which it sees in evolution a latent potential for progress towards increasing complexity within the physical substance of the world. This potential is becoming increasingly evident as human advancement through technology and collaboration supersedes survival of the biologically fittest as the driver of evolutionary progress. As Huxley says in his introduction to de Chardin’s book, “We, mankind, contain the possibilities of the earth’s immense future, and can realise more and more of them on condition that we increase our knowledge and our love. That, it seems to me, is the distillation of The Phenomenon of Man.”94
McLuhan and de Chardin accurately foresaw the digital age and the future impact of electronic communication on the evolution of society. As McLuhan put it, “The medium, or process, of our time — electric technology — is reshaping and restructuring patterns of our social interdependence and every aspect of our personal life . . . Everything is changing — you, your family, your neighbourhood, your job, your government, your relation to ‘the others.’ And they’re changing dramatically.” He also foresaw some of the features and dangers of the internet and social media. He described an “electrically computerized dossier bank — that one big gossip column that is unforgiving, unforgetful, and from which there is no redemption, no erasure of early ‘mistakes.’”95
These comments are insightful. They point to the clash between digital information and our analog nature. Our bodies and our senses work in smooth, continuous ways, and we most appreciate music or art or natural experiences that incorporate rich, continuous textures. We are analog beings living in a digital world, facing a quantum future.
DIGITAL INFORMATION IS THE crudest, bluntest, most brutal form of information that we know. Everything can be reduced to finite strings of 0s and 1s. It is completely unambiguous and is easily remembered. It reduces everything to black and white, yes or no, and it can be copied easily with complete accuracy. Obviously, analog information is infinitely richer. One analog number can take an infinite number of values, infinitely more values than can be taken by any finite number of digital bits.
The transition from analog to digital sound — from records and tapes to CDs and MP3s — caused a controversy, which continues to this day, about whether a digital reproduction is less rich and interesting to listen to than an analog version. By using more and more digital bits, one can mimic an analog sound to any desired accuracy. The fact remains that analog sound is inherently more subtle and less jarring than digital. Certainly, even in this digital age, analog instruments show no signs of going out of fashion.
Life’s DNA code is digital. Its messages are written in three-letter “words” formed from a four-letter alphabet. Every word codes for an amino acid, and each sentence codes for a protein, made up of a long string of amino acids. The proteins form the basic machinery of life, part of which is dedicated to reading and transcribing DNA into yet more proteins. Although it is indeed amazing that all of the extravagant diversity and beauty of life is encoded in this way, it is also important to realize that the DNA code itself is not in any way alive.
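The digital character of this code is easy to make concrete. Here is a small Python sketch; the handful of codon assignments shown are from the standard genetic code, but the table is deliberately incomplete and the DNA string is invented for illustration:

```python
# Three-letter "words" over the alphabet A, C, G, T: 4**3 = 64 possible
# codons, comfortably more than the twenty amino acids they encode.
# Only five entries of the standard genetic code are listed here.
CODON_TABLE = {
    "ATG": "methionine",     # also the "start" signal
    "TTT": "phenylalanine",
    "AAA": "lysine",
    "GGG": "glycine",
    "TAA": "STOP",           # the end of a protein "sentence"
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three letters at a time, as the cell does."""
    codons = (dna[i:i + 3] for i in range(0, len(dna) - 2, 3))
    return [CODON_TABLE.get(codon, "?") for codon in codons]

print(translate("ATGTTTAAAGGGTAA"))
# ['methionine', 'phenylalanine', 'lysine', 'glycine', 'STOP']
```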
Although the genetic basis for life is digital, living beings are analog creatures. We are made of plasmas, tissues, membranes, controlled by chemical reactions that depend continuously on concentrations of enzymes and reactants. Our DNA only comes to life when placed in an environment with the right molecules, fluids, and sources of energy and nutrients. None of these factors can be described as digital. New DNA sequences only arise as the result of mutations and reshufflings, which are partly environmental and partly quantum mechanical in origin. Two of the key processes that drive evolution — variation and selection — are therefore not digital. The main feature of the digital component of life — DNA — is its persistent, unambiguous character; it can be reproduced and translated into RNA and protein accurately and efficiently. The human body contains tens of trillions of cells, each with an identical copy of the DNA. Every time a cell divides, its DNA is copied.
It is tempting to see the digital DNA code as the fundamental basis of life, and our living bodies as merely its “servants,” with our only function being to preserve our DNA and to enable its reproduction. But it seems to me that one can equally well argue that life, being fundamentally analog, uses digital memory simply to preserve the accuracy of its reproduction. That is, life is a happy combination of mainly digital memory and mainly analog operations.
At first sight, our nerves and brains might appear to be digital, since they either fire or do not in response to stimuli, just as the basic digital storage element is either 0 or 1. However, the nerve-firing rate can be varied continuously, and nerves can fire either in synchrony or in various patterns of disarray. The concentrations and flows of biomolecules involved in key steps, such as the passage of signals across synapses, are analog quantities. In general, our brains appear to be much more nuanced and complex systems than digital processors. This disjuncture between our own analog nature and that of our computers is quite plausibly what makes them so dissatisfying as companions.
Although analog information can always be accurately mimicked by using a sufficient number of digital bits, it is nevertheless a truism that analog information is infinitely richer than digital. Quantum information is infinitely richer again. Just one qubit of quantum information is described by a continuum of values. As we increase the number of qubits, the number of continuous values required to describe them grows exponentially. The state of a 300-qubit quantum computer (which might consist of a chain of just 300 atoms in a row) would be described by more numbers than we could represent in an analog manner, even if we used the three-dimensional position of every single one of the 10⁹⁰ or so particles in the entire visible universe.
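The counting behind this claim is stark when written out:

```latex
% n qubits require 2^n continuous amplitudes to describe. For n = 300:
\[
2^{300} \;\approx\; 2 \times 10^{90},
\]
% already more numbers than there are particles (roughly 10^{90}) in
% the entire visible universe, so no analog register built from those
% particles could record the amplitudes one by one.
```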
The ability of physical particles to carry quantum information has other startling consequences, stemming from entanglement, in which the quantum state of two particles is intrinsically interlinked. In Chapter Two, I described how, in an Einstein–Podolsky–Rosen experiment, two particles fly apart with their spins “entangled,” so that if you observe both particles’ spin along some particular axis in space, then you will always find one particle’s spin pointing up while the other points down. This correlation, which Einstein referred to as “spooky action at a distance,” is maintained no matter how far apart the particles fly. It is the basis for Bell’s Theorem, also described in Chapter Two, which showed that the predictions of quantum theory can never be reproduced by classical ideas.
Since the 1980s, materials have been found in which electrons exhibit this strange entanglement property en masse. The German physicist Klaus von Klitzing discovered that if you suspend a piece of semiconductor in a strong magnetic field at a very low temperature, then the electrical conductance (a measure of how easily electric current flows through the material) is quantized. That is, it comes in whole-number multiples of a fundamental unit. This is a very strange result, like turning on a tap and finding that water will flow out of it only at some fixed rate, or twice that rate, or three times the rate, however you adjust the tap. Conductance is a property of large things: wires and big chunks of matter. No one expected that it too could be quantized. The importance of von Klitzing’s discovery was to show that in the right conditions, quantum effects can still be important, even for very large objects.
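Von Klitzing’s result can be stated in one line; the formula below is the standard one, built from the electron’s charge e and Planck’s constant h:

```latex
% Integer quantum Hall effect: the conductance G comes only in
% whole-number multiples of a fundamental unit, e^2/h.
\[
G \;=\; n\,\frac{e^2}{h}, \qquad n = 1, 2, 3, \ldots
\]
% The corresponding resistance h/e^2 \approx 25{,}813\ \Omega is so
% precisely reproducible that it now serves as a laboratory standard
% of electrical resistance.
```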
Two years later, the story took another twist. The German physicist Horst Störmer and the Chinese physicist Dan Tsui, working at Bell Labs, discovered that the conductance could also come in rational fractions of the basic unit of conductance, fractions like 1⁄3, 2⁄5, and 3⁄7. The U.S. theorist Robert Laughlin, working at Stanford, interpreted the result as being due to the collective behaviour of all the electrons in the material. When they become entangled, they can form strange new entities whose electric charge is given in fractions of the charge of an electron.
Ever since these discoveries, solid state physicists have been discovering more and more examples of systems in which quantum particles behave in ways that would be classically impossible. These developments are challenging the traditional picture of individual particles, like electrons, carrying charge through the material. This picture guided the development of the transistor, but it is now seen as far too limited a conception of the possible states of matter. Quantum matter can take an infinitely greater variety of forms. The potential uses of these entirely new states of matter, which, as far as we know, never before formed in the universe, are only starting to be explored. They are likely to open a new era of quantum electronics and quantum devices, capable of doing things we have never seen before.
IN THE EARLY TWENTIETH century, the smallest piece of matter we knew of was the atomic nucleus. The largest was our galaxy. Over the subsequent century, our most powerful microscopes and telescopes have extended our view down to a ten-thousandth the size of an atomic nucleus and up to a hundred thousand times the size of our galaxy.
In the past decade, we have mapped the whole visible universe out to a distance of nearly fourteen billion light years. As we look farther out into space, we see the universe as it was longer and longer ago. The most distant images reveal the infant universe emerging from the big bang, a hundred-thousandth of its current age. It was extremely uniform and smooth, but the density of matter varied by around one part in a hundred thousand from place to place. The primordial density variations appear to take the same form as quantum fluctuations of fields like the electromagnetic field in the vacuum, amplified and stretched to astronomical scales. The density variations were the seeds of galaxies, stars, planets, and, ultimately, life itself, so the observations seem to be telling us that quantum effects were vital to the origin of everything we can now see. The Planck satellite, currently flying and due to announce its results soon, has the capacity to tell us whether the very early universe underwent a burst of exponential expansion. Over the coming decades, yet more powerful satellite observations may be able to tell whether there was a universe before the big bang.
Very recently, the Large Hadron Collider has allowed us to probe the structure of matter on the tiniest scales ever explored. In doing so, it has confirmed the famous Higgs mechanism, responsible for determining the properties of the different types of elementary particles. Beyond the Large Hadron Collider, the proposed International Linear Collider will probe the structure of matter much more accurately on these tiniest accessible scales, perhaps revealing yet another layer of organization, such as new symmetries connecting matter particles and forces.
With experiments like the Large Hadron Collider and the Planck satellite, we are reaching for the inner and outer limits of the universe. Equally significant, with studies of quantum matter on more everyday scales, we are revealing the organization of entangled levels of reality more subtle than anything so far seen. If history is any guide, these discoveries will, over time, spawn new technologies that will come to dominate our society.
Since the 1960s, the evolution of digital computers has been inexorable. Moore’s law has allowed them to shrink and move progressively closer to our heads, from freezer-sized cabinets to desktops to laptops to smartphones held in our hands. Google has just announced Project Glass, a pair of spectacles incorporating a fully capable computer screen. With the screen right next to your eye, the power requirements are tiny and the system can be super-efficient. No doubt the trend will continue and computers will become a more and more integral part of our lives, our bodies, and our selves.
Having access to vast stores of digital information and processing power is changing our society and our nature. Our future evolution will depend less and less on our biological genes, and more and more on our abilities to interact with our computers. The future battle for survival will be to program or be programmed.
However, we are analog creatures based upon a digital code. Supplementing ourselves with more and more digital information is in this sense evolutionarily regressive. Digital information’s strongest feature is that it can be copied cheaply and accurately and translated unambiguously. It represents a reduction of analog information, the “dead” blueprint or memory of life, rather than the alive analog element.
On the other hand, quantum information is infinitely deeper, more subtle and delicate than the analog information familiar to us. Interacting with it will represent a giant leap forward. As I have already explained, a single qubit represents more information than any number of digital bits; three hundred qubits represents more information than could be encoded classically using every particle in the universe. But the flip side is that quantum information is extremely fragile. The laws of quantum physics imply that it cannot be copied, a result known as the “no cloning” theorem. Unlike classical computers, quantum computers will not be able to replicate themselves. Without us, or at least some classical partner, they will not be able to evolve.
So it seems that a relationship between ourselves, as analog beings, and quantum computers may be of great mutual benefit, and it may represent the next leap forward for evolution and for life. We shall provide the definiteness and persistence, while quantum computers embody the more flighty, exploratory, and wide-ranging component. We will ask the questions, and the quantum computer will provide the answers. Just as our digital genes encode our analog operations, we, or our evolutionary successors, shall be the “operating system” of quantum life.
In the same way that our DNA is surrounded by analog machinery bringing it to classical life, we will presumably become surrounded by quantum computers, making us even more alive. The best combinations of people and quantum computers will be the most successful, and will survive and propagate. With their vast information-processing capacities, quantum computers may be able to monitor, repair, or even renew our bodies. They will allow us to run smart systems to ensure that energy and natural resources are utilized with optimal efficiency. They will help us to design and oversee the production of new materials, like carbon fibres for space elevators and antimatter technologies for space propulsion. Quantum life would seem to have all the qualities needed to explore and understand the universe.
· · ·
WHILE THE POSSIBILITY OF a coming “Quantum Age” is exciting, nothing is guaranteed about the future: it will be what we make of it. For a sharp dose of pessimism, let us turn to a remarkable visionary whose main targets were the Romantic notions of her age, the scientific “Age of Wonder” and exploration, and the Industrial Revolution.
Mary Shelley was the daughter of one of the first feminists, Mary Wollstonecraft, a philosopher, educator, and the author, in 1792, of A Vindication of the Rights of Woman; her father was William Godwin, a radical political philosopher. Wollstonecraft contracted a bacterial infection during childbirth and died soon afterwards; throughout her life, Shelley continued to revere her mother. She was raised by her father, and at sixteen she became involved with Percy Bysshe Shelley, one of England's most famous Romantic poets. Percy was already married, and their relationship caused a great scandal. After his first wife committed suicide, Percy married Mary. They had four children (two born before they were married), although only the last survived: the first was premature and died quickly, while the second died of dysentery and the third of malaria, both during their parents' travels in Italy.
Mary started writing Frankenstein; or, The Modern Prometheus during a summer spent abroad with Percy at Lake Geneva, when she was only eighteen. It was published anonymously, though with a preface by Percy, when Mary was twenty. Now recognized as one of the earliest works of science fiction,96 Frankenstein provides a compelling warning about the seductions and dangers of science. Shelley's reference to Prometheus shows the still-persistent influence of ancient Greek civilization on the most forward thinkers of the time.
Contemporary science set the background for the novel. In November 1806, the British chemist Sir Humphry Davy gave the Bakerian Lecture at the Royal Society in London. His topic was electricity and electrochemical analysis. In his introduction he said, "It will be seen that Volta [inventor of the battery] has presented to us a key that promises to lay open some of the most mysterious recesses of nature . . . There is now before us a boundless prospect of novelty in science; a country unexplored, but noble and fertile in aspect; a land of promise in philosophy."97
Public demonstrations and experiments were very popular in London at this time. A particularly notorious example was an attempt by another Italian, Giovanni Aldini, professor of physics at Bologna, to revive the body of a murderer six hours after he had been hanged. Aldini's demonstrations were breathlessly reported in the press: "On the first application of the electrical arcs, the jaw began to quiver, the adjoining muscles were horribly contorted, and the left eye actually opened . . . Vitality might have been fully restored, if many ulterior circumstances had not rendered this — inappropriate."98
Mary Shelley was likely inspired by events like these, and the general fascination with science, to write Frankenstein. Her novel captures the intensity and focus of a young scientist — Dr. Frankenstein — on the track of solving a great mystery: “After days and nights of incredible labour and fatigue, I succeeded in discovering the cause of generation and life; nay, more, I became myself capable of bestowing animation upon lifeless matter.” Overwhelmed with the exhilaration of his finding, he states, “What had been the study and desire of the wisest men since the creation of the world was now within my grasp.” His success encourages him to press on: “My imagination was too much exalted by my first success to permit me to doubt of my ability to give life to an animal as complex and wonderful as man.”99 Without any thought of the possible dangers, Frankenstein creates a monster whose need for companionship he cannot satisfy and who eventually exacts revenge by murdering Frankenstein’s new bride.
In 1822, four years after Frankenstein appeared, Percy Shelley drowned in a sailboat accident off Italy. Four years after that, Mary published her fourth novel, The Last Man, foretelling the end of humankind in 2100 as the result of a plague. The book is a damning critique of man's romantic notions of his own power to control his destiny. Echoing Frankenstein's reference to Prometheus, The Last Man opens with the discovery of the cave of an ancient Greek oracle at Cumae in southern Italy. (In fact, the cave was only discovered more than a century after the publication of Shelley's book.) The narrator recounts finding scattered piles of leaves on which the Sibyl, or prophetess, of the oracle at Cumae recorded her detailed premonitions. After years of work organizing and deciphering the scattered fragments, the narrator presents The Last Man as a transcription of the Sibyl's predictions.
In her introduction, Shelley refers to Raphael’s last painting, The Transfiguration. It depicts a stark dichotomy between enlightenment and nobility in the upper half of the painting, and the chaotic, dark world of humanity in the lower half. This conflict, between Apollonian and Dionysian principles, has been one of the most constant themes in literature and in philosophy. Apollo and Dionysus were both sons of Zeus. Apollo was the god of the sun, dreams, and reason; Dionysus was the god of wine and pleasure. Shelley’s reference to the painting is interesting. There is a mosaic copy of Raphael’s Transfiguration in St. Peter’s Basilica in Rome — a sort of digital version of the real painting — and Shelley compares her task of reconstructing the Sibyl’s vision with that of assembling The Transfiguration if all one had were the painted tiles.
The central theme of the book is the failure of Romantic idealism. Mary’s husband, Percy, believed profoundly in the primacy of ideas. Writing about ancient Rome, for example, he stated: “The imagination beholding the beauty of this order, created it out of itself according to its own idea: the consequence was empire, and the reward ever-living fame.”100
Early on in The Last Man, the ambitious, fame-seeking Raymond is elected as Lord Protector: “The new elections were finished; parliament met, and Raymond was occupied in a thousand beneficial schemes . . . he was continually surrounded by projectors and projects which were to render England one scene of fertility and magnificence; the state of poverty was to be abolished; men were to be transported from place to place almost with the same facility as the Princes in the Arabian Nights . . .”101
Before any of these plans can be realized, war between Greece and Turkey intervenes, and Raymond is killed in Constantinople. Adrian, son of the last king of England, then becomes a leading figure. But he is a hopeless dreamer (clearly modelled after Shelley's husband, Percy). Following a brief period of peace, he states, "Let this last but twelve months . . . and earth will become a Paradise. The energies of man were before directed at the destruction of his species: they now aim at its liberation and preservation. Man cannot repose, and his restless aspirations will now bring forth good instead of evil. The favoured countries of the south will throw off the iron yoke of servitude [Shelley's reference to slavery]; poverty will quit us, and with that, sickness. What may not the forces, never before united, of liberty and peace achieve in this dwelling of man?"102
Adrian’s dreams are also soon shattered. A plague is rapidly spreading west from Constantinople, and people come flooding in from Greece, Italy, and France. Raymond’s dithering successor, Ryland, flees his position as the plague enters London. Adrian, the Romantic, assumes command, but his principal strategy is to convince people to pretend that there is no plague. Eventually the truth becomes obvious and he has no choice but to lead the population out of England, onto the Continent, where they slowly and painfully die.
Throughout, Shelley recounts the false optimism of the characters, who are always trying to see a bright future when they are in fact doomed. Their grandiose visions and their delusions about the powers of reason, right, and progress cause them to fail, again and again. Finally, the last survivor, Verney, sails off around the world: “I form no expectation of alteration for the better, but the monotonous present is intolerable to me. Neither hope nor joy are my pilots — restless despair and fierce desire of change lead me on. I long to grapple with danger, to be excited by fear, to have some task, however slight or voluntary, for each day’s fulfilment.”103 The irony is palpable. The Last Man was received poorly on its publication and was forgotten for one and a half centuries, but it has recently come to be seen as Shelley’s second-most important work.
I cannot help also referring to one of the book’s minor characters, the astronomer Merrival. His calculations have told him that in a hundred thousand years, the pole of the Earth will coincide with the pole of the Earth’s orbit around the sun and, in his words, “a universal spring will be produced, and the earth will become a paradise.”104 He pays no attention to the spreading plague, even when it affects his family, busy as he is writing his “Essay on the Pericyclical Motions of the Earth’s Axis.” I hope we scientists are not Merrivals!
NEARLY TWO CENTURIES HAVE passed since Frankenstein. The monster Shelley envisaged Dr. Frankenstein creating has not materialized, nor so far have uncontrollable diseases like the plague she imagined in The Last Man. On the contrary, advances in biology have brought vaccinations, antibiotics, antiretrovirals, clean water, and other revolutionary public health measures, and genetic engineering has yet to produce any monsters. Nevertheless, the dangers Shelley warned of are as relevant as ever, and we would do well to pay attention to her concerns. The benefits of science have been shared far too unevenly: there have been, and continue to be, vast numbers of deaths and untold suffering from preventable causes. The only guarantee of progress is a continued commitment to humane principles, and to conducting science on behalf of society.
One does not need to look far to find examples where science’s success has encouraged a certain overreach and disconnect. There is a tendency to exaggerate the significance of scientific discoveries, and to dismiss nonscientific ideas as irrelevant.
As an example from my own field of cosmology, let me cite Lawrence Krauss’s recent book, A Universe from Nothing. In it, he claims that recent observations showing that the universe has simple, flat geometry imply that it could have been created out of nothing. His argument is, in my view, based upon a technical gaffe, but that is not my point here. Through a misrepresentation of the physics, he leaps to the conclusion that a creator was not needed. The book includes an afterword by Richard Dawkins, hailing Krauss’s argument as the final nail in the coffin for religion. Dawkins closes with, “If On the Origin of Species was biology’s deadliest blow to supernaturalism [which is what Dawkins calls religion], we may come to see A Universe from Nothing as the equivalent from cosmology. The title means exactly what it says. And what it says is devastating.”
The rhetoric is impressive, but the arguments are shallow. The philosopher David Albert, one of today's deepest thinkers on quantum theory, framed his response at the right level in his recent review of Krauss's book in the New York Times, lamenting that "all that gets offered to us now, by guys like these, in books like this, is the pale, small, silly, nerdy accusation that religion is, I don't know, dumb."105 Comparing Krauss's and Dawkins's arguments with the care and respectfulness of those presented by Hume in his Dialogues Concerning Natural Religion, all the way back in the eighteenth century, one cannot help feeling the debate has gone backwards. Hume presents his skepticism through a dialogue which allows opposing views to be forcefully expressed, but which humbly reaches no definitive conclusion. After all, that is his main point: we do not know whether God exists. One of the participants is clearly closest to representing Hume's own doubts: tellingly, Hume names him Philo, meaning "love."
For another example of the disconnection between science and society, let me quote the final paragraph of U.S. theoretical physicist and Nobel Prize winner Steven Weinberg’s otherwise excellent book The First Three Minutes, describing the hot big bang. He says, “The more the universe seems comprehensible, the more it seems pointless. But if there is no solace in the fruits of our research, there is at least some consolation in the research itself . . . The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.”106
Many scientists express this viewpoint, that the universe seems pointless at a deep level, and that our situation is somehow tragic. For myself, I find this position hard to understand. Merely to be alive, to experience and to appreciate the wonder of the universe, and to be able to share it with others, is a miracle. I can only think that it is the separation of scientists from society, caused by the focus and intensity of their research, that leads them to be so dismissive of other aspects of human existence.
Of course, taking the view that the universe seems pointless is also a convenient way for scientists to eliminate, as far as possible, any prior prejudices or ulterior motives from their research. They want to figure out how things work without being biased by any thoughts of why they might work that way. It is reasonable to postpone questions of purpose when we have no scientific means of answering them. But to deny such influences is not to deal with them. Scientists are often consciously or unconsciously driven by agendas well outside science, even if they do not acknowledge them.
Many people outside science are interested in exactly the questions that scientists prefer to avoid. They want to know what scientific discoveries mean: in the case of cosmology, why the universe exists and why we are here. I think that if science is to overcome its disconnection from society, it needs to become better at conveying its greatest lesson: that for the purpose of advancing our knowledge, it is extremely important to doubt constantly and to live with uncertainty. Richard Feynman put it this way: "This attitude of mind — this attitude of uncertainty — is vital to the scientist, and it is this attitude of mind which the student must first acquire. It becomes a habit of thought. Once acquired, we cannot retreat from it anymore."107 In today's soundbite world, intellectual modesty and frankness about uncertainty are not the easiest things to promote. Nevertheless, I suspect scientists will become more, not less, credible if they practise both, and society will feel less alienated from science.
My own view is that science should ultimately be about serving society's needs. Society needs to better understand science and to see its value beyond just providing the next gadget or technical solution. Science should be part of fulfilling society's goals and creating the kind of world we would like to inhabit. Building the future is not only about meeting our material needs, important as those are. There is an inspirational aspect to science and to understanding our place in the universe, one which enriches society and art and music and literature and everything else. Science, in its turn, becomes more creative and fruitful when it is challenged to explain what it is doing and why, and when scientists better appreciate the importance of their work to wider society.
Ever since the ancient Greeks, scientists have appreciated that a free exchange of ideas, in which we constantly try out new theories while always remaining open to being proved wrong, is the best way to make progress. Within the scientific community, a new student can question the most senior professor, and authority is never acceptable as an argument. If our ideas are any good, it does not matter where they come from; they must stand on their own. Science is profoundly democratic in this sense. While its driver is often individual genius or insight, it engenders a strong sense of common cause and humility among its practitioners. These ways of thinking and behaving are valuable well beyond the borders of science.
However, as science has grown, it has also become increasingly specialized. To quote Richard Feynman again, “There are too few people who have such a deep understanding of two departments of our knowledge that they do not make fools of themselves in one or the other.”108 As science fragments, it becomes less accessible, both to other scientists and to the general public. Opportunities for cross-fertilization are missed, scientists lose their sense of wider purpose, and their science is reduced either to a self-serving academic exercise or a purely technical task, while society remains ignorant of science’s great promise and importance.
There are ways of overcoming this problem of disconnection, and they are becoming increasingly important.
I AM FORTUNATE TO live in a very unusual community in Canada with a high level of public interest in science. Every month, our institute, Perimeter, holds a public lecture on physics in the local high school, in a hall with a capacity of 650. Month in and month out, the lectures are packed, with all the tickets sold out.
How did this happen? The key, I believe, is simply respect. When scientists make a serious attempt to explain what they are doing and why, it isn't hard to get people excited. There are many benefits: for the public, it is a chance to learn first-hand from experts about cutting-edge research; for scientists, it is a great chance to share their ideas and to learn how to explain them to non-specialists. It is energizing to realize that people outside your field actually care. Finally, and most importantly, for young people, attending an exciting lecture can open the path to a future career.
In the heyday of Victorian science, many scientists engaged in public outreach. As we learned in Chapter One, Michael Faraday was recruited into science at a public lecture given by Sir Humphry Davy at the Royal Institution in London. Faraday went on to succeed Davy as the director of the institution and give many public lectures himself. While a fellow at Cambridge, James Clerk Maxwell helped to found a workingmen's college providing scientific lectures in the evenings, and he persuaded local businesses to close early so their workers could attend. When he became a professor at Aberdeen and then King's College, London, he continued to give at least one evening lecture each week at the workingmen's colleges there.109
Today, the internet provides an excellent medium for public outreach. One of the first students to attend the new master's program at our institute, Henry Reich, went on to pursue an interest in film. A year later, he launched a YouTube channel called MinutePhysics. It presents clever, low-tech but catchy explanations of basic concepts in physics, making the ideas accessible and captivating to a wide audience. Henry realized there is a treasure trove of insights, many never before explained to the public, lying buried in the scientific literature. Communicating them well requires a great deal of care, thought, and respect for your audience. When quality materials are produced, people respond. Henry's channel now has more than three hundred thousand subscribers.
At our institute, we also engage in scientific inreach. The idea is to bring people from fields outside science, from history, art, music, or literature, into our scientific community. Science shares a purpose with these other disciplines: to explore and appreciate this universe we are privileged to inhabit. All of these human activities are inspiring, stretching our senses in different and complementary ways. However much any of us has learned, there is far more that we do not know. What we have in common, in our motives and loves and aspirations, is much more important than any of our differences. Looking back on the great eras of discovery and progress, we see that this commonality of purpose was critical, and it seems to me we have to recreate it.
Throughout these chapters, we have looked at the special people, places, and times that produced profound progress. We have looked at ancient Greece, where a great flowering of science, philosophy, art, and literature went hand in hand with new ways of organizing society. The philosopher Epicurus, for example, seems in some respects to have anticipated the arguments of Hume and Galileo, arguing that nothing should be believed without being tested through direct observation and logical deduction; in other words, the scientific method.110 Epicurus is also credited with the ethic of reciprocity, according to which one should treat others as one would like to be treated by them. These two ideas laid the foundations for justice: that everyone has the same right to be fairly treated and no one should be penalized until their crime is proven. Likewise, the methods and principles of scientific discourse were foundational to the creation of our modern democracy. We all have the capacity to reason, and everyone deserves an equal hearing.
We also looked at the Italian Renaissance, when the ancient Greek ideals were recovered and enlightenment progressed once more. In the Scottish Enlightenment, people encouraged each other to see the world confidently and with fresh eyes, to form new ways of understanding and representing it, and of teaching and communicating. These periods represented great liberations for society and great advances for science.
The enlightenments of the past did not begin in the most powerful countries: Greece was a tiny country, constantly threatened from the east and the north; Scotland was a modest neighbour of England. What they had in common was a sense among their people that this was their moment. They were countries that grasped an opportunity to become centres for reason and for progress. They had the courage to shape themselves and the future, and we are all still feeling the impact.
It is tempting to draw parallels between eighteenth-century Scotland, one step removed from its far more powerful colonial neighbour, and today’s Canada, which, compared to the modern Rome to its south, feels like a haven of civilization. Canada has a great many advantages: strong public education and health care systems; a peaceful, tolerant, and diverse society; a stable economy; and phenomenal natural resources. It is internationally renowned as a friendly and peaceful nation, and widely appreciated for its collaborative spirit and for the modest, practical character of its people. There are many other countries and places in the world that hold similar promise, as centres to host the next great flowering of civilization on behalf of the planet. I can think of no better cause than for us to join together to make the twenty-first century unique as the era of the first Global Enlightenment.
· · ·
THE HISTORY OF PHYSICS traces back to the dawn of civilization. It is a story of how we have steadily realized our capacity to discover nature’s deep secrets, and to build the understanding and the technologies that lay the basis for progress. Again and again, our efforts have revealed the fundamental beauty and simplicity in the universe. There is no sign of the growth in our knowledge slowing down, and what lies on the horizon today is every bit as thrilling as anything we have discovered in the past.
Today, we have many advantages over the scientists of earlier times. There are seven billion minds on the planet, mostly those of young people in aspiring, developing countries. The internet is connecting us all, providing instant access to educational and scientific resources. We need to be more creative in the ways we organize and promote science, and we need to allow more people to get involved. The world can become a hive of education, collaboration, and discussion. The entry of new cultures into the scientific community will be a vital source of energy and creativity.
We are better placed, too, to understand our position in the cosmos. We have just mapped the universe and pieced together the story of its emergence from a tiny ball of light some fourteen billion years ago. Likewise, we have detected the vacuum energy which dominates the universe and determines the Hubble length, the largest distance on which we will ever be able to see galaxies and stars. We have just discovered the Higgs particle, a manifestation of the detailed structure of the vacuum, predicted by theory half a century ago. Today, theory is poised to understand the big bang singularity and physics on the Planck length, a scale so tiny that classical notions of space and time break down.
All the indications are that the universe is at its simplest at the smallest and largest scales: the Planck length and the Hubble length. It may be no coincidence that the size of a living cell is the geometric mean of these two fundamental lengths. This is the scale of life, the realm we inhabit, and it is the scale of maximum complexity in the universe.
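As a rough check of this striking claim (a sketch using round numbers for the two fundamental lengths; the cell-size comparison is a standard estimate, not a figure from the text):

\[
\sqrt{\ell_{\mathrm{Planck}} \times L_{\mathrm{Hubble}}} \approx \sqrt{\left(1.6\times10^{-35}\,\mathrm{m}\right)\times\left(1.4\times10^{26}\,\mathrm{m}\right)} \approx 5\times10^{-5}\,\mathrm{m},
\]

about fifty microns, which is indeed the size of a fairly large living cell.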
We live in a world with many causes of unhappiness. In these chapters, I have compared one of these, the information overload from the digital revolution, with the “ultraviolet catastrophe” that signalled classical physics’ demise at the start of the twentieth century. One can draw further parallels with the selfish, individualistic behaviours that are often the root cause of our environmental and financial crises. Within physics, I see the idea of a “multiverse” as a similarly fragmented perspective, representing a loss of confidence in the prospects for basic science. Yet, I believe all of these crises will ultimately be helpful if they force us, like the quantum physicists, to remake our world in more holistic and far-sighted ways.
Through a deeper appreciation of the universe and our ability to comprehend it, not just scientists but everyone can gain. At a minimum, the magnificent cosmos provides some perspective on our parochial, human-created problems, be they social or political. Nature is organized in better ways, from which we can learn. The love of nature can bring us together and help us to appreciate that we are part of something far greater than ourselves. This sense of belonging, responsibility, and common cause brings with it humility, compassion, and wisdom. Society has too often been content to live off the fruits of science, without understanding it. Scientists have too often been happy to be left alone to do their science without thinking about why they are doing it. It is time to connect our science to our humanity, and in so doing to raise the sights of both. If we can only link our intelligence to our hearts, the doors are wide open to a brighter future, to a more unified planet with more unified science: to quantum technologies that extend our perception, to breakthroughs allowing us to access and utilize energy more cleverly, and to travel in space that opens new worlds.
What a privilege it is to be alive. Truly, we are faced with the opportunity of all time.