In 1986 Dan Lynch, an ex-student from UCLA, started a trade fair for computer hardware and software, known as Interop. Until then the number of people linked together via computer networks was limited to a few hundred ‘hardcore’ scientists and academics. Between 1988 and 1989, however, Interop took off: hitherto a fair for specialists, it was from then on attended by many more people, all of whom suddenly seemed to realise that this new way of communicating – via remote computer terminals that gave access to very many databases, situated across the world and known as the Internet – was a phenomenon that promised intellectual satisfaction and commercial rewards in more or less equal measure. Vint Cerf, a self-confessed ‘nerd’, from California, a man who set aside several days each year to re-read The Lord of the Rings, and one of a handful of people who could be called a father of the Internet, visited Lynch’s fair, and he certainly noticed a huge change. Until that point the Internet had been, at some level, an experiment. No more.1
Different people place the origins of the Internet at different times. The earliest accounts put it in the mind of Vannevar Bush, as long ago as 1945. Bush, the man who had played such a prominent role in the building of the atomic bomb, envisaged a machine that would allow the entire compendium of human knowledge to be ‘accessed’. But it was not until the Russians surprised the world with the launch of the Sputnik in October 1957 that the first faltering steps were taken toward the Net as we now know it. The launch of a satellite, as was discussed in chapter 27, raised the spectre of associated technologies: in order to put such an object in space, Russia had developed rockets capable of reaching America with sufficient accuracy to do huge damage if fitted with nuclear warheads. This realisation galvanised America, and among the research projects introduced as a result of this change in the rules of engagement was one designed to explore how the United States’ command and control system – military and political – could be dispersed around the country, so that should she be attacked in one area, America would still be able to function elsewhere. Several new agencies were set up to consider different aspects of the situation, including the National Aeronautics and Space Administration (NASA) and the Advanced Research Projects Agency, or ARPA.2 It was this outfit which was charged with investigating the safety of command and control structures after a nuclear strike. ARPA was given a staff of about seventy, an appropriation of $520 million, and a budget plan of $2 billion.3
At that stage computers were no longer new, but they were still huge and expensive (one at Harvard at the time was fifty feet long and eight feet high). Among the specialists recruited by ARPA was Joseph Licklider, a tall, laconic psychologist from Missouri, who in 1960 had published a paper on ‘man-computer symbiosis’ in which he looked forward to an integrated arrangement of computers, which he named, jokingly, an ‘intergalactic network.’ That was some way off. The first breakthrough came in the early 1960s, with the idea of ‘packet-switching,’ developed by Paul Baran.4 An immigrant from Poland, Baran took his idea from the brain, which can sometimes recover from disease by switching the messages it sends to new routes. Baran’s idea was to divide a message into smaller packets and then send them by different routes to their destination. This, he found, could not only speed up transmission but also avoid the total loss of information where one line was faulty. In this way technology was conceived that reassembled the message packets when they arrived, and tested the network for the quickest routes. The same idea occurred almost simultaneously to Donald Davies, working at the National Physical Laboratory in Britain – in fact, packet-switching was his term. The new hardware was accompanied by new software, which drew on a branch of mathematics known as queuing theory, designed to prevent the buildup of packets at intermediate nodes by finding the most suitable alternative routes.5
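The essence of the scheme can be sketched in a few lines. This is an illustrative toy only, not Baran’s or Davies’s actual design; the function names and packet format are invented for the example. A message is cut into numbered packets, the packets may arrive out of order after travelling by different routes, and the receiver restores the original order by sequence number.

```python
import random

def to_packets(message, size=4):
    # Break the message into fixed-size pieces, each tagged with a
    # sequence number so the receiver can put them back in order.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive in any order; sorting by sequence number
    # and joining the payloads recovers the original message.
    return ''.join(payload for _, payload in sorted(packets))

packets = to_packets("THIS MESSAGE TRAVELS IN PIECES")
random.shuffle(packets)      # simulate packets taking different routes
print(reassemble(packets))   # prints the original message, intact
```

The loss-tolerance Baran prized follows from the same structure: if one packet is lost on a faulty line, only that numbered piece need be re-sent, not the whole message.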
In 1968 the first ‘network’ was set up, consisting of just four sites: UCLA, Stanford Research Institute (SRI), the University of Utah, and the University of California at Santa Barbara.6 The technological breakthrough that enabled this to proceed was the conception of the so-called interface message processor, or IMP, whose task it was to send bits of information to a specified location. In other words, instead of ‘host’ computers being interconnected, the IMPs would be instead, and each IMP would be connected to a host.7 The computers might be different pieces of hardware, using different software, but the IMPs spoke a common language and could recognise destinations. The contract to construct the IMPs was given by ARPA to a small consulting firm in Cambridge, Massachusetts, called Bolt Beranek and Newman (BBN) and they delivered the first processor in September 1969, at UCLA, and the second in October, at SRI. It was now possible, for the first time, for two disparate computers to ‘talk’ to each other. Four nodes were up and running by January 1970, all on the West Coast of America. The first on the East Coast, at BBN’s own headquarters, was installed in March. The ARPANET, as it came to be called, now crossed the continent.8 By the end of 1970 there were fifteen nodes, all at universities or think tanks.
By the end of 1972 there were three cross-country lines in operation and clusters of IMPs in four geographic areas – Boston, Washington D.C., San Francisco and Los Angeles – with, in all, more than forty nodes. By now ARPANET was usually known as just the Net, and although its role was still strictly defence-oriented, more informal uses had also been found: chess games, quizzes, the Associated Press wire service. It wasn’t far from there to personal messages, and one day in 1972, e-mail was born when Ray Tomlinson, an engineer at BBN, devised a program for computer addresses, the most salient feature of which was a device to separate the name of the user from the machine the user was on. Tomlinson needed a character that could never be found in any user’s name and, looking at the keyboard, he happened upon the ‘@’ sign.9 It was perfect: it meant ‘at’ and had no other use. This development was so natural that the practice just took off among the ARPANET community. A 1973 survey showed that there were 50 IMPs on the Net and that three-quarters of all traffic was e-mail.
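Tomlinson’s convention is still doing exactly the same job today: everything before the ‘@’ names the user, everything after it names the machine. A minimal sketch (the address itself is invented for illustration):

```python
def split_address(address):
    # The '@' cleanly separates user from machine, since it can
    # never appear in a user's name -- Tomlinson's insight.
    user, _, host = address.partition('@')
    return user, host

user, host = split_address("tomlinson@bbn-tenexa")
print(user, "is on", host)   # prints: tomlinson is on bbn-tenexa
```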
By 1975 the Net community had grown to more than a thousand, but the next real breakthrough was Vint Cerf’s idea, as he sat in the lobby of a San Francisco hotel, waiting for a conference to begin. By then, ARPANET was no longer the only computer network: other countries had their own nets, and other scientific-commercial groups in America had begun theirs. Cerf began to consider joining them all together, via a series of what he referred to as gateways, to create what some people called the Catenet, for Concatenated Network, and what others called the Internet.10 This required not more machinery but the design of TCPs, or transmission-control protocols, a universal language. In October 1977 Cerf and his colleagues demonstrated the first system to give access to more than one network. The Internet as we now know it was born.
Growth of the Net soon accelerated. It was no longer purely a defence exercise, but, in 1979, it was still largely confined to (about 120) universities and other academic/scientific institutions. The main initiatives, therefore, were now taken over from ARPA by the National Science Foundation, which set up the Computer Science Research Network, or CSNET, and in 1985 created a ‘backbone’ of five supercomputer centres scattered around the United States, and a dozen or so regional networks.11 These supercomputers were both the brains and the batteries of the network, a massive reservoir of memory designed to soak up all the information users could throw at it and prevent gridlock. Universities paid $20,000 to $50,000 a year in connection charges. More and more people could now see the potential of the Internet, and in January 1986 a grand summit was held on the West Coast and order put into the e-mail, to create seven domains or ‘Frodos.’ These were universities (edu), government (gov), companies (com), military (mil), nonprofit organisations (org), network service providers (net), and international treaty entities (int). It was this new order that, as much as anything, helped the phenomenal growth of the Internet between 1988 and 1989, and which was seen at Dan Lynch’s Interop. The final twist came in 1990 when the World Wide Web was created by researchers at CERN, the European Laboratory for Particle Physics near Geneva.12 This used a special protocol, HTTP, devised by Tim Berners-Lee, and made the Internet much easier to browse, or navigate. Mosaic, the first truly popular browser, devised at the University of Illinois, followed in 1993. It is only since then that the Internet has been commercially available and easy to use.
The Internet has its critics, such as Brian Winston, who in his 1998 history of media technology warns that ‘the Internet represents the final disastrous application of the concept of commodification of information in the second half of the twentieth century.’13 But few now doubt that the Internet is a new way of communicating, or that soon a new psychology will emerge from relationships forged in ‘cyberspace.’14
In years to come, 1988 may be revealed as a turning point so far as science is concerned. Not only did the Internet and the Human Genome Organisation get under way, bringing about the ultramodern world and setting the shape of the twenty-first century, but a book appeared that had the most commercially successful publishing history of any work of science ever printed. It set the seal on the popular acceptance of science but, as we shall see in the epilogue, in some ways marked its apogee.
A Brief History of Time from the Big Bang to Black Holes, by the Cambridge cosmologist Stephen Hawking, had been five years in the making and in some senses was just as much the work of Peter Guzzardi, a New York editor with Bantam Books.15 It was Guzzardi who had persuaded Hawking to leave Cambridge University Press. CUP had been planning to publish Hawking’s book, because they had published his others, and had offered an advance of £10,000 – their biggest ever. But Guzzardi tempted Hawking to Bantam, though it perhaps wasn’t too difficult a choice for the scientist, since the firm’s editorial board had been won over by Guzzardi’s enthusiasm, to the point of offering a $250,000 advance. In the intervening years, Guzzardi had worked hard to make Hawking’s dense prose ever more accessible for a general audience.16 The book was released in early spring 1988 – and what happened then quickly passed into publishing history. More than half a million hardback copies of the book were sold in both the United States and Britain, where the title went through twenty reprints by 1991 and remained in the best-seller lists for no fewer than 234 weeks, four and a half years. The book was an almost equally great success in Italy, Germany, Japan, and a host of other countries across the world, and Hawking quickly became the world’s most famous scientist. He was given his own television series, made cameo appearances in Hollywood films, and his public lectures filled theatres the size of the Albert Hall in London.17
There was one other unusual element in this story of success. In 1988 Hawking was aged forty-six, but in 1963, when he was twenty-one, he had been diagnosed as suffering from amyotrophic lateral sclerosis, ALS, also known (in the U.K.) as motor neurone disease and (in the United States) as Lou Gehrig’s disease, after the Yankee baseball player who died from it.18 What had begun as mere clumsiness at the end of 1962 had progressed over the intervening years so that by 1988 Hawking was confined to a wheelchair and able to communicate only by means of a special computer connected to a voice synthesiser. Despite these handicaps, in 1979 he had been appointed Lucasian Professor of Mathematics at Cambridge, a post that Isaac Newton had held before him; he had won the Einstein Medal; and he had published a number of well-received academic books on gravity, relativity, and the structure of the universe. As Hawking’s biographers say, we shall never know to what extent Stephen Hawking’s considerable disability contributed to the popularity of his ideas, but there was something triumphant, even moving, in the way he overcame his handicap (in the late 1960s he had been given two years to live). He has never allowed his disability to deflect him from what he knows are science’s central intellectual concerns. These involve black holes, the concept of a ‘singularity,’ and the light they throw on the Big Bang; the possibility of multiple universes; and new ideas about gravity and the fabric of reality, in particular ‘string theory.’
It is with black holes that Hawking’s name is most indelibly linked. This idea, as mentioned earlier, was first broached in the 1960s. Black holes were envisaged as superdense objects, the result of a certain type of stellar evolution in which a large body collapses in on itself under the force of gravity to the point where nothing, not even light, can escape. The discovery of pulsars, quasars, neutron stars, and background radiation in the 1960s considerably broadened our understanding of this process, besides making it real, rather than theoretical. Working with Roger Penrose, another brilliant physicist, then at Birkbeck College in London, Hawking first argued that at the centre of every black hole, as at the beginning of the universe, there must be a ‘singularity,’ a moment when matter is infinitely dense, infinitely small, and when the laws of physics as we know them break down. Hawking added to this the revolutionary idea that black holes could emit radiation (this became known as Hawking radiation) and, under certain conditions, explode.19 He also argued that, just as radio stars had been discovered in the 1960s, thanks to new radio-telescopes, so X rays should be detectable from space via satellites above the atmosphere, which otherwise screened out such rays. Hawking’s reasoning was based on calculations that showed that as matter was sucked into a black hole, it would get hot enough to emit X rays. Sure enough, four X-ray sources were subsequently identified in a survey of the heavens and so became the first candidates for observable black holes. Hawking’s later calculations showed that, contrary to his first ideas, black holes did not remain stable but lost energy in the form of radiation, shrank, and eventually, after billions of years, exploded, possibly accounting for occasional and otherwise unexplained bursts of energy in the universe.20
In the 1970s Hawking was invited to Caltech, where he met and conferred with the charismatic Richard Feynman.21 Feynman was an authority on quantum theory, and Hawking used this encounter to develop an explanation of how the universe began.22 It was a theory he unveiled in 1981 in, of all places, the Vatican. The source of Hawking’s theory was an attempt to conceive what would happen when a black hole shrank to the point where it disappeared, the troublesome fact being that, according to quantum theory, the smallest theoretical length is the Planck length, derived from the Planck constant, and equal to 10⁻³⁵ metres. Once something reaches this size (and though it is very small, it is not zero), it cannot shrink further but can only disappear entirely. Similarly, the Planck time is, on the same basis, 10⁻⁴³ of a second, so that when the universe came into existence, it could not do so in less time than this.23 Hawking resolved this anomaly by a process that can best be explained by an analogy. Hawking asks us to accept, as Einstein said, that space-time is curved, like the skin of a balloon, say, or the surface of the earth. Remember that these are analogies only; using another, Hawking said that the size of the universe at its birth was like a small circle drawn around, say, the North Pole. As the universe – the circle – expands, it is as if the lines of latitude are expanding around the earth, until they reach the equator, and then they begin to shrink, until they reach the South Pole in the ‘Big Crunch.’ But, and this is where the analogy still holds in a useful way, at the South Pole, wherever you go you must travel north: the geometry dictates that it cannot be otherwise. Hawking asks us to accept that at the birth of the universe an analogous process occurred – just as there is no meaning for south at the South Pole, so there is no meaning for before at the singularity of the universe: time can only go forward.
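The two Planck quantities mentioned here can be checked directly, since each combines just three constants: the reduced Planck constant, Newton’s gravitational constant, and the speed of light. A quick order-of-magnitude check (constant values rounded):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)     # ~5.4e-44 s

print(f"Planck length: {planck_length:.2e} m")
print(f"Planck time:   {planck_time:.2e} s")
```

The time comes out at about 5.4 × 10⁻⁴⁴ seconds, i.e. of the order of the 10⁻⁴³ seconds quoted in the text.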
Hawking’s theory was an attempt to explain what happened ‘before’ the Big Bang. Among the other things that troubled physicists about the Big Bang theory was that the universe as we know it appears much the same in all directions.24 Why this exquisite symmetry? Most explosions do not show such perfect balance – what made the ‘singularity’ different? Alan Guth, of MIT, and Andrei Linde, a Russian physicist who emigrated to the United States in 1990, argued that at the beginning of time – i.e., T = 10⁻⁴³ seconds, when the cosmos was smaller even than a proton – gravity was briefly a repulsive force, rather than an attractive one. Because of this, they said, the universe passed through a very rapid inflationary period, until it was about the size of a grapefruit, when it settled down to the expansion rate we see (and can measure) today. The point of this theory (some critics call it an ‘invention’) is that it is the most parsimonious explanation required to show why the universe is so uniform: the rapid inflation would have blown out any wrinkles. It also explains why the universe is not completely homogeneous: there are chunks of matter, which form galaxies and stars and planets, and other forms of radiation, which form gases. Linde went on to theorise that our universe is not the only one spawned by inflation.25 There is, he contends, a ‘megaverse,’ with many universes of different sizes, and this was something that Hawking also explored. Baby universes are, in effect, black holes, bubbles in space-time. Going back to the analogy of the balloon, imagine a blister on the skin of the balloon, marked off by a narrow isthmus, equivalent to a singularity. None of us can pass through the isthmus, and none of us is aware of the blister, which can be as big as the balloon, or bigger. In fact, any number may exist – they are a function of the curvature of space-time and of the physics of black holes. By definition we can never experience them directly: they have no meaning.
That phrase, ‘no meaning,’ introduces the latest phase of thinking in physics. Some critics call it ‘ironic science,’ speculation as much as experimentation, where there is no real evidence for the (often) outlandish ideas being put forward.26 But that is not quite fair. Much of the speculation is prompted – and supported – by mathematical calculations that point toward solutions where words, visual images, and analogies all break down. Throughout the twentieth century physicists have come up with ideas that have only found experimental support much later, so perhaps there is nothing very new here. At the moment, we are living at an in-between time, and have no way of knowing whether many of the ideas current in physics will endure and be supported by experiment. But it seems likely that some never will be.
Another theory of scientists like Hawking is that ‘in principle’ the original black hole and all subsequent universes are actually linked by what are variously known as ‘wormholes’ or ‘cosmic string.’27 Wormholes, as conceived, are minuscule tubes that link different parts of the universe, including black holes, and therefore in theory can act as links to other universes. They are so narrow, however (a single Planck length in diameter), that nothing could ever pass through them, without the help of cosmic string – which, it should be stressed, is an entirely theoretical form of matter, regarded as a relic of the original Big Bang. Cosmic string also stretches across the universe in very thin (but very dense) strips and operates ‘exotically.’ What this means is that when it is squeezed, it expands, and when it is stretched, it contracts. In theory at least, therefore, cosmic string could hold wormholes open. This, again in theory, makes time travel possible, in some future civilisation. That’s what some physicists say; others are sceptical.
Martin Rees’s ‘anthropic principle’ of the universe is somewhat easier to grasp. Rees, the British astronomer royal and another contemporary of Hawking, offers indirect evidence for ‘parallel universes.’ His argument is that for us to exist, a very great number of coincidences must have occurred, if there is only one universe. In an early paper, he showed that if just one aspect of our laws of physics were to be changed – say, gravity was increased – the universe as we know it would be very different: celestial bodies would be smaller, cooler, would have shorter lifetimes, a very different surface geography, and much else. One consequence is that life as we know it can in all probability only form in universes with the sort of physical laws we enjoy. This means, first, that other forms of life are likely elsewhere in the universe (because the same physical laws apply), but it also means that many other universes probably exist, with other physical laws, in which very different forms of life, or no forms of life, exist. Rees argues that we can observe our universe, and conjecture others, because the physical laws exist about us to allow it. He insists that this is too much of a coincidence: other universes, very different from ours, almost certainly must exist.28
Like most senior physicists, cosmologists, and mathematicians, Hawking has also devoted much energy to what some scientists call ‘the whole shebang,’ the so-called Theory of Everything. This too is an ironic phrase, referring to the attempt to describe all of fundamental physics by one set of equations: nothing more. Physicists have been saying this ‘final solution’ is just around the corner for more than a decade, but in fact the theory of everything is still elusive.29 To begin with, before the physics revolution discussed in earlier chapters of this book, two theories were required. As Steven Weinberg tells the story, there was Isaac Newton’s theory of gravity, ‘intended to explain the movements of the celestial bodies and how such things as apples fall to the ground; and there was James Clerk Maxwell’s account of electromagnetism as a way to explain light, radiation, magnetism, and the forces that operate between electrically charged particles.’ However, these two theories were compatible only up to a point: according to Maxwell, the speed of light was the same for all observers, whereas Newton’s theories predicted that the speed measured for light would depend on the motion of the observer. Einstein’s special theory of relativity overcame this problem, showing that Maxwell was right. But it was the quantum revolution that changed everything and made physics more beautiful but more complex at the same time. This linked Maxwell’s theory with new quantum rules, which viewed the universe as discontinuous, with a limit on how small the packets of electromagnetic energy can be, and how small a unit of time or distance is. At the same time, this introduced two new forces, both operating at very short range, within the nucleus of the atom. The strong force holds the particles of the nucleus together (it is this energy that is released in a nuclear weapon). The other is known as the weak force, which is responsible for radioactive decay.
And so, until the 1960s there were four forces that needed to be reconciled: gravity, electromagnetism, the strong nuclear force, and the weak radioactive force. In the 1960s a set of equations was devised by Sheldon Glashow, and built on by Abdus Salam and Steven Weinberg, at Texas, which described both the weak force and electromagnetism and posited three new particles, W⁺, W⁻ and Z⁰.30 These were experimentally observed at CERN in Geneva in 1983. Later on, physicists developed a series of equations to describe the strong force: this was related to the discovery of quarks. The quarks had been given rather whimsical names, including those of colours (though of course particles don’t have colours), and the new theory accounting for how they interact became known as quantum chromodynamics, or QCD. Therefore electromagnetism, the weak force, and the strong force have all been joined together into one set of equations. This is a remarkable achievement, but it still leaves out gravity, and it is the incorporation of gravity into this overall scheme that would mark, for physicists, the so-called Theory of Everything.
At first they moved toward a quantum theory of gravity. That is to say, physicists theorised the existence of one or more particles that account for the force and gave the name ‘graviton’ to the gravity particle, though the new theories presuppose that many more than one such particle exists. (Some physicists predict 8, others 154, which gives an idea of the task that still lies ahead.) But then, in the mid-1980s, physics was overtaken by the ‘string revolution’ and, in 1995, by a second ‘superstring revolution.’ In an uncanny replay of the excitement that gripped physics at the turn of the twentieth century, a whole new area of inquiry blossomed into view as the twenty-first century approached.31 By 1990 the shelves of major bookstores in the developed world were filled with more popular science books than ever before. And there were just as many physics, cosmology, and mathematics volumes as there were evolution and other biology titles. As part of this phenomenon, in 1999 a physics and mathematics professor who held joint appointments at Cornell and Columbia Universities entered the best-seller lists on both sides of the Atlantic with a book that was every bit as difficult as A Brief History of Time, if not more so. The Elegant Universe: Superstrings, Hidden Dimensions and the Quest for the Ultimate Theory, by Brian Greene, described the latest excitements in physics, working hard to render very difficult concepts accessible (Greene, not to put his readers off, called these difficult subjects ‘subtle’).32 He introduced a whole new set of physicists to join the pantheon that includes Einstein, Ernest Rutherford, Niels Bohr, Werner Heisenberg, Erwin Schrödinger, Wolfgang Pauli, James Chadwick, Roger Penrose, and Stephen Hawking.
Among these new names Edward Witten stands out, together with Eugenio Calabi, Theodor Kaluza, Andrew Strominger, Stein Strømme, Cumrun Vafa, Gabriele Veneziano, and Shing-Tung Yau, about as international a group of names as you could find anywhere.
The string revolution came about because of a fundamental paradox. Although each was successful on its own account, the theory of general relativity, explaining the large-scale structure of the universe, and quantum mechanics, explaining the minuscule subatomic scale, were mutually incompatible. Physicists could not believe that nature would allow such a state of affairs – one set of laws for large things, another for small things – and for some time they had been seeking ways to reconcile this incompatibility, which many felt was not unrelated to their failure to explain gravity. There were other fundamental questions, too, which the string theorists faced up to: Why are there four fundamental forces?33 Why are there the number of particles that there are, and why do they have the properties they do? The answer that string theorists propose is that the basic constituent of matter is not, in fact, a series of particles – point-shaped entities – but very tiny, one-dimensional strings, as often as not formed into loops. These strings are very small – about 10⁻³³ of a centimetre – which means that they are beyond the scope of direct observation of current measuring instruments. Notwithstanding that, according to string theory an electron is a string vibrating one way, an up quark is a string vibrating another way, and a tau particle is a string vibrating in a third way, and so on, just as the strings on a violin vibrate in different ways so as to produce different notes. As the figures show, we are dealing here with very small entities indeed – about a hundred billion billion (10²⁰) times smaller than an atomic nucleus. But, say the string theorists, at this level it is possible to reconcile relativity and quantum theory. As a by-product and a bonus, they also say that a gravity particle – the graviton – emerges naturally from the calculations.
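The size comparison here is simple arithmetic: taking a string at roughly 10⁻³³ centimetres (10⁻³⁵ metres) and an atomic nucleus at roughly 10⁻¹⁵ metres (both rough, order-of-magnitude figures) gives the factor of a hundred billion billion quoted:

```python
string_length = 1e-35   # metres, the approximate string scale
nucleus_size = 1e-15    # metres, rough diameter of an atomic nucleus

# How many times larger the nucleus is than a string
ratio = nucleus_size / string_length
print(f"A nucleus is about {ratio:.0e} times larger than a string")
```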
String theory first emerged in 1968–70, when Gabriele Veneziano, at CERN, noticed that a mathematical formula first worked out 200 years ago accidentally seemed to explain various aspects of particle physics.34 Then three other physicists, Yoichiro Nambu, Holger Nielsen and Leonard Susskind, showed that this mathematics could be better understood if particles were not point-shaped objects but small strings that vibrated. The approach was discarded later, however, after it failed to explain the strong force. But the idea refused to die, and the first string revolution, as it came to be called, took off in 1984, after a landmark paper by Michael Green and John Schwarz first showed that relativity and quantum theory could be reconciled by string theory. This breakthrough stimulated an avalanche of research, and in the next two years more than a thousand papers on string theory were published, together showing that many of the main features of particle physics emerge naturally from the theory. This fecundity of string theory, however, brought its own problems. For a while there were in fact five string theories, all equally elegant, but no one could tell which was the ‘real’ one. Once more string theory stalled, until the ‘Strings 1995’ conference, held in March at the University of Southern California, where Edward Witten introduced the ‘second superstring revolution.’35 Witten was able to convince his colleagues that the five apparently different theories were in fact five aspects of the same underlying concept, which then became known as M-theory, the M standing variously for mystery, meta, or ‘mother of all theories.’*
In dealing with such tiny entities as strings, possibilities arise that physicists had not earlier entertained, one being that there may be ‘hidden dimensions’ and to explain this another analogy is needed. Start with the idea that particles are seen as particles only because our instruments are too blunt to see that small. To use Greene’s own example, think of a hosepipe seen from a distance. It looks like a filament in one dimension, like a line drawn on a page. In fact, of course, when you are close up it has two dimensions – and always did have, only we weren’t close enough to see it. Physicists say it is (or may be) the same at string level – there are hidden dimensions curled up of which we are not at present aware. In fact, they say that there may be eleven dimensions in all, ten of space and one of time.36 This is a difficult if not impossible idea to imagine or visualise, but the scientists make their arguments for mathematical reasons (math that even mathematicians find difficult). When they do make this allowance, however, many things about the universe fall into place. For example, black holes are explained – as perhaps similar to fundamental particles, as gateways to other universes. The extra dimensions are also needed because the way they curl and bend, string theorists say, may determine the size and frequency of the vibrations of the strings, in other words explaining why the familiar ‘particles’ have the mass and energy and number that they do. In its latest configuration, string theory involves more than strings: two-, three-, and more dimensional membranes, or ‘branes,’ small packets, the understanding of which will be the main work of the twenty-first century.37
The most startling thing about string theory, other than the existence of strings themselves, is that it suggests there may be a prehistory to the universe, a period before the Big Bang. As Greene puts it, string theory ‘suggests that rather than being enormously hot and tightly curled into a tiny spatial speck, the universe started out as cold and essentially infinite in spatial extent.’38 Then, he says, an instability kicked in, there was a period of inflation, and our universe formed as we know it. This also has the merit of allowing all four forces, including gravity, to be unified.
String theory stretches everyone’s comprehension to its limits. Visual analogies break down, the math is hard even for mathematicians, but there are a few ideas we can all grasp. First, strings concern a world at and below the Planck length. This is, in a way, a logical outcome of Planck’s conception of the quantum, which he first had in 1900. Second, as yet it is 99 percent theory; physicists are beginning to find ways to test the new theories experimentally, but as of now there is no shortage of sceptics as to whether strings even exist. Third, at these very small levels, we may enter a spaceless and timeless realm. The very latest research involves structures known as zero branes, in whose realm ordinary geometry is replaced by ‘noncommutative geometry,’ conceived by the French mathematician Alain Connes. Greene believes this may be a major step forward philosophically as well as scientifically, a breakthrough ‘that is capable of giving us an answer to the question of how the universe began and why there are such things as space and time – a formalism that will take us a step closer to answering Leibniz’s question of why there is something rather than nothing.’39 Finally, in superstring theory we have the virtually complete amalgamation of physics and mathematics. The two have always been close, but never more so than now, as we approach the possibility that, in a sense, the very basis of reality is mathematical.
Many scientists believe we are living in a golden age for mathematics. Two areas in particular have attracted widespread attention among mathematicians themselves.
Chaoplexity is an amalgam of chaos and complexity. In 1987, in Chaos: Making a New Science, James Gleick introduced this new area of intellectual activity.40 Chaos research starts from the concept that there are many phenomena in the world that are, as the mathematicians say, nonlinear, meaning that tiny differences in their starting conditions grow so quickly that their behaviour is, in practice, unpredictable. The most famous illustration is the so-called butterfly effect, whereby a butterfly fluttering its wings in, say, the Midwest of America can trigger a whole raft of events that might culminate in a monsoon in the Far East. A second aspect of the theory is that of the ‘emergent’ property, which refers to the fact that there are on Earth phenomena that ‘cannot be predicted, or understood, simply by examining the system’s parts.’ Consciousness is a good example here, since even if it can be understood (a very moot point), it cannot be understood from inspection of neurons and chemicals within the brain. However, this only goes halfway to what the chaos scientists are saying. They also argue that the advent of the computer enables us to conduct much more powerful mathematics than ever before, with the result that we shall eventually be able to model – and therefore simulate – complex systems, such as large molecules, neural networks, population growth, or weather patterns. In other words, the deep order underlying this apparent chaos will be revealed.
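The butterfly effect can be seen in miniature in the logistic map, a standard textbook example of a nonlinear system (the map and the parameter chosen here are illustrative conventions, not drawn from Gleick’s book):

```python
# Sensitive dependence on initial conditions: iterate the logistic map
# x -> r*x*(1-x) from two starting points that differ by one part in a
# million, and watch the trajectories part company.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # differs by one millionth

# The gap between the two trajectories starts microscopic but soon
# grows to the full size of the interval - the 'butterfly effect'.
divergence = [abs(x - y) for x, y in zip(a, b)]
```

The point of the sketch is that nothing random is involved: the rule is simple and deterministic, yet prediction fails because no starting measurement is ever exact.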
The basic idea in chaoplexity comes from Benoit Mandelbrot, an applied mathematician at IBM, who identified what he called the ‘fractal.’ The classic fractal is a coastline, but others include snowflakes and trees. Seen from a distance, they have one shape or outline; closer up, more intricate details are revealed; closer still, and there is yet more detail. However close you go, the outline never simplifies, and the patterns are often repeated at different scales. Because these outlines never resolve themselves into smooth lines – in other words, never conform to some simple mathematical function – Mandelbrot called them the ‘most complex objects in mathematics.’41 At the same time, however, it turns out that simple mathematical rules can be fed into computer programs that, after many generations, give rise to complicated patterns, patterns that ‘never quite repeat themselves.’ From this, and from their observations of real-life fractals, mathematicians now infer that there are in nature some very powerful rules governing apparently chaotic and complex systems that have yet to be unearthed – another example of deep order.
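Mandelbrot’s most famous object shows how a tiny rule yields endless intricacy. The sketch below uses the standard ‘escape-time’ test for membership of the Mandelbrot set; the sample points are arbitrary choices for illustration:

```python
# The rule behind the Mandelbrot set: for each point c of the plane,
# iterate z -> z*z + c starting from zero and ask whether z runs off
# to infinity. Points that never escape belong to the set; the boundary
# between the two regions is infinitely intricate at every scale.

def escape_time(c, max_iter=100):
    """Iterations before |z| exceeds 2, or max_iter if it never does."""
    z = 0.0 + 0.0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

inside = escape_time(complex(0.0, 0.0))    # the origin stays bounded
outside = escape_time(complex(1.0, 1.0))   # this point escapes almost at once
```

Colouring each pixel of a grid by its escape time is what produces the familiar pictures: the entire image flows from the one-line rule in the loop.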
In the late 1980s and early 1990s chaos suddenly blossomed as one of the most popular forms of mathematics, and a new research outfit was founded, the Santa Fe Institute in New Mexico, southeast of Los Alamos, where Murray Gell-Mann, discoverer of the quark, joined the faculty.42 This new specialty has come up with several new concepts, among them ‘self-organised criticality,’ ‘catastrophe theory,’ the hierarchical structure of reality, ‘artificial life,’ and ‘self-organisation.’ Self-organised criticality is the brainchild of Per Bak, a Danish physicist who emigrated to the United States in the 1970s.43 His starting point, as he told John Horgan, is a sandpile. As one adds grains of sand and the pile grows, there comes a point – the critical state – when the addition of a single grain can cause an avalanche. Bak was struck by the apparent similarity of this process to other phenomena – stock market crashes, the extinction of species, earthquakes, and so on. He takes the view that these processes can be understood mathematically – that is, described mathematically. We may one day be able to understand why these things happen, though that doesn’t necessarily mean we shall be able to control and prevent them. It is not far from Per Bak’s theory to Frenchman René Thom’s idea of catastrophe theory, that purely mathematical calculations can explain ‘discontinuous behaviour,’ such as the emergence of life, the change from a caterpillar into a butterfly, or the collapse of civilisations. They are all aspects of the search for deep order.
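Bak’s sandpile can be simulated directly. The sketch below follows the standard Bak–Tang–Wiesenfeld rules (a cell holding four grains topples, sending one grain to each neighbour); the grid size, drop sites, and number of drops are arbitrary choices:

```python
# Self-organised criticality in a toy sandpile: grains are dropped at
# random; any cell reaching 4 grains topples, which may push its
# neighbours over the threshold in turn, producing an avalanche.

import random

def drop_grain(grid, row, col):
    """Add one grain at (row, col), relax the pile, return the avalanche size."""
    size = len(grid)
    grid[row][col] += 1
    topples = 0
    unstable = [(row, col)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue
        grid[r][c] -= 4              # the cell topples...
        topples += 1
        if grid[r][c] >= 4:          # ...and may still be over the threshold
            unstable.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size:
                grid[nr][nc] += 1    # one grain to each neighbour;
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
            # grains pushed past the edge simply fall off the pile
    return topples

random.seed(0)
grid = [[0] * 11 for _ in range(11)]
avalanches = [drop_grain(grid, random.randrange(11), random.randrange(11))
              for _ in range(2000)]
# Early drops cause nothing; once the pile reaches its critical state,
# a single grain can set off a cascade of any size.
```

The behaviour Bak emphasised is visible in the `avalanches` list: most entries are zero or tiny, with occasional large cascades, and no grain is any different from the one before it.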
Against all this the work of Philip Anderson stands out. He shared a Nobel Prize in 1977 for his work on the electronic structure of magnetic and disordered systems. Instead of arguing for underlying order, Anderson’s view is that there is a hierarchy of order – each level of organisation in the world, and in biology in particular, is independent of the order in the levels above and below. ‘At each stage, entirely new laws, concepts and generalisations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry … you mustn’t give in to the temptation that when you have a good general principle at one level that it’s going to work at all levels.’44
There is a somewhat disappointed air about the chaoplexologists at the turn of the century. What seemed so thrilling in the early 1990s has not, as yet, produced anything nearly as exciting as string theory, for example. Where math does remain exciting, however, is in its relationship to biology. The achievements here were summarised by Ian Stewart, professor of mathematics at Warwick University in Britain, in his 1998 book Life’s Other Secret.45 Stewart comes from a tradition less well known than the Hawking-Penrose-Feynman-Glashow physics/cosmology set, or the Dawkins-Gould-Dennett evolution set. He is the latest in a line that includes D’Arcy Wentworth Thompson (On Growth and Form, 1917), Stuart Kauffman (The Origins of Order, 1993), and Brian Goodwin (How the Leopard Changed Its Spots, 1994). Their collective message is that genetics is not, and never can be, a complete explanation for life. What is also needed, surprising as it may seem, is a knowledge of mathematics, because it is mathematics that governs the physical substances – the deep order – out of which, in the end, all living things are made.
Life’s Other Secret is dedicated to showing that mathematics ‘now informs our understanding of life at every level from DNA to rain forests, from viruses to flocks of birds, from the origins of the first self-copying molecule to the stately unstoppable march of evolution.’46 Some of Stewart’s examples are a mixture of the enchanting and the provocative, such as the mathematics of spiders’ webs and snowflakes, the population variations of ant colonies, and the formation of swarms of starlings; he also explores the branching systems of plants and the patterned skins of such animals as leopards and tigers. He has a whole chapter, ‘Flowers for Fibonacci,’ outlining patterns in the plant kingdom. The Fibonacci sequence of numbers –
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 …
– was first described in Europe by Leonardo of Pisa in 1202, Leonardo being the son of Bonaccio, hence ‘Fi-bonacci.’ In the sequence, each number is the sum of the two that precede it, and this simple arrangement describes so much: lilies have 3 petals, for example, buttercups have 5, delphiniums 8, marigolds 13, asters 21 and daisies 34, 55 or 89.47 But Stewart’s book, and thinking, are much more ambitious and interesting than this. He begins by showing that the division of cells in the embryo displays a remarkable similarity to the way soap bubbles form in foams, and that the way chromosomes are laid out in a dividing cell is also similar to the way mutually repelling magnets arrange themselves. In other words, whatever instructions are coded into genes, many biological entities behave as though they are constrained by the physical properties they possess, properties that can be written as mathematical equations. For Stewart this is no accident. This is life taking advantage of the mathematics/physics of nature for its own purposes. He finds that there is a ‘deep geometry’ of molecules, especially in DNA, which forms knots and coils, this architecture being all-important. For example, he quotes a remarkable experiment carried out by Heinz Fraenkel-Conrat and Robley Williams with the tobacco mosaic virus.48 This, says Stewart, is a bridge between the inorganic and organic worlds; if the components of the virus are separated in a test tube and then left to their own devices, they spontaneously reassemble into a complete virus that can replicate. In other words, it is the architecture of the molecules that automatically produces life. In theory, therefore, this form of virus – life – could be created by preparing synthetic substances and putting them together in a test tube. In the latter half of the 1990s, mathematicians came to understand the processes by which primitive forms of life – the slime mould, for example, the soil amoeba Dictyostelium discoideum – proceed.
The equations involved turn out to be not so very difficult. ‘The main point here,’ says Stewart, ‘is that a lot of properties of life are turning out to be physics, not biology.’49
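The Fibonacci rule itself – each number the sum of the two before it – takes only a few lines to state as a program (the function name and the conventional 1, 1 seed are standard choices, not anything of Stewart’s):

```python
# Generate the Fibonacci sequence from its defining rule:
# each term is the sum of the two that precede it.

def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1, 2, 3, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

petal_counts = fibonacci(12)
# -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```

The numbers Stewart cites for petals – 3, 5, 8, 13, 21, 34, 55, 89 – all appear as consecutive terms of this one recurrence.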
Perhaps most revealing are the experiments that Stewart and others call ‘artificial life.’ These are essentially games played on computers, designed to replicate in symbolic form various aspects of evolution.50 The screen will typically have a grid, say 100 squares wide and 100 squares deep. Into each of these squares is allotted a ‘bush’ or a ‘flower,’ say, or else a ‘slug’ or ‘an animal that preys on slugs.’ Various rules are programmed in: one rule might be that a predator can move five squares each turn, whereas a slug can move only one square; another might be that slugs on green flowers are less likely to be seen (and eaten) than slugs on red flowers, and so on. Then, since computers are being used, this artificial life can be turned on and run for, say, 10,000 moves, or even 50 million moves, to see what ‘A-volves’ (A = artificial). A number of these programs have been tried. The most startling was Andrew Pargellis’s ‘Amoeba,’ begun in 1996. This was seeded only with a random block of computer code, 7 percent of which was randomly replaced every 100,000 steps (to simulate mutation). Pargellis found that about every 50 million steps a self-replicating segment of code appeared, simply as a result of the math on which the program was based. As Stewart put it, ‘Replication didn’t have to be built into the rules – it just happened.’51 Other surprises included symbiosis, the appearance of parasites, and long periods of stasis punctuated by rapid change – in other words, punctuated equilibrium much as described by Niles Eldredge and Stephen Jay Gould.
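A toy version of such a grid world, using only the illustrative numbers from the description above (it is emphatically not Pargellis’s Amoeba or any other published system), might look like this:

```python
# A minimal predator-and-slug grid: slugs crawl one square per turn,
# predators range up to five squares, and a predator landing on a
# slug's square eats it. Every rule and number here is a hypothetical
# choice for illustration.

import random

SIZE = 100  # a 100 x 100 grid, as in the description

def random_cell():
    return (random.randrange(SIZE), random.randrange(SIZE))

def move(pos, max_step):
    """Step up to max_step squares in each direction, staying on the grid."""
    r, c = pos
    r = min(SIZE - 1, max(0, r + random.randint(-max_step, max_step)))
    c = min(SIZE - 1, max(0, c + random.randint(-max_step, max_step)))
    return (r, c)

def run(steps, n_slugs=200, n_predators=20, seed=1):
    random.seed(seed)
    # Slugs are kept as a set of occupied squares, so two slugs moving
    # onto the same square merge - a simplification of this toy model.
    slugs = {random_cell() for _ in range(n_slugs)}
    predators = [random_cell() for _ in range(n_predators)]
    for _ in range(steps):
        slugs = {move(s, 1) for s in slugs}          # slugs crawl one square
        predators = [move(p, 5) for p in predators]  # predators range further
        slugs -= set(predators)                      # predation
    return len(slugs)

survivors = run(1000)
```

Even a sketch this crude shows the shape of the method: fix a handful of local rules, run them for thousands of turns, and then examine what population dynamics ‘A-volve’ without any of them having been programmed in directly.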
Just as these models (they are not really experiments in the traditional sense) show how life might have begun, Stewart also quotes mathematical models which suggest that a network of neural cells, a ‘neural net,’ when hooked together, naturally acquires the ability to make computations, a phenomenon known as ‘emergent computation.’52 It means that nets with raw computational ability can arise spontaneously through the workings of ordinary physics: ‘Evolution will then select whichever nets can carry out computations that enhance the organism’s survival ability, leading to specific computation of an increasingly sophisticated kind.’53
Stewart’s fundamental point, not accepted by everyone, is that mathematics and physics are as powerful as genetics in giving form to life. ‘Life is founded on mathematical patterns of the physical world. Genetics exploits and organises those patterns, but physics makes them possible and constrains what they can be.’54 For Stewart, genetics is not the deepest secret, the deepest order of life. Instead, mathematics is, and he ends his book by predicting a new discipline for the twenty-first century, ‘morphomatics,’ which will attempt to marry math, physics, and biology and which, he clearly hopes, will reveal the deep patterns in the world around us and, eventually, help us to understand how life began.