27
FORCES OF NATURE

By insisting that science was a ‘culture’ just as much as serious literature was, C. P. Snow was emphasising both the intellectual parity of the two activities and, at the same time, their differences. Perhaps the most important difference was the scientific method — the process of empirical observation, rational deduction, and continuous modification in the light of experience. On this basis, scientists were depicted as the most rational of beings, unhindered in their activities by such personal considerations as rivalry, ambition, or ideology. Only the evidence counted. Such a view was supported by the scientific papers published in professional journals. The written style was invariably impersonal to the point of anonymity, with a near-universal formal structure: statement of the problem; review of the literature; method; results; conclusion. In the journals, science proceeded by orderly steps, one at a time.

There was only one problem with this view: it wasn’t true. It wasn’t close to true. Scientists knew this, but for a variety of reasons, one of which was the insecurity Snow highlighted, it was rarely if ever broadcast. The first person to draw attention to the real nature of science was yet another Austro-Hungarian emigré, Michael Polanyi, who had studied medicine and physical chemistry in Budapest and at the Kaiser Wilhelm Institute in Berlin before World War II. By the end of the hostilities, however, Polanyi was professor of sociology at Manchester University (his brother Karl was an economist at Columbia). In his 1946 Riddell lectures, at the University of Durham, published as Science, Faith and Society, Michael Polanyi advanced two fundamental points about science that would come to form a central plank in the late-twentieth-century sensibility.1 He first said that much of science stems from guesswork and intuition and that although, in theory, science is continually modifiable, in practice it doesn’t work out like that: ‘The part played by new observations and experiment in the process of discovery is usually over-estimated.’2 ‘It is not so much new facts that advance science but new interpretations of known facts, or the discovery of new mechanisms or systems that account for known facts.’ Moreover, advances ‘often have the character of a gestalt, as when people suddenly “see” something that had been meaningless before.’3 His point was that scientists actually behave far more intuitively than they think, and that, rather than being absolutely neutral or disengaged in their research, they start with a conscience, a scientific conscience. This conscience operates in more than one way. It guides the scientist in choosing a path of discovery, but it also guides him in accepting which results are ‘true’ and which are not, or need further study. This conscience, in both senses, is a fundamental motivating force for the scientist.

Polanyi, unlike others perhaps, saw science as a natural outgrowth of religious society, and he reminded his readers that some of the founders of the Christian church – like Saint Augustine – were very interested in science. For Polanyi, science was inextricably linked to freedom and to an atomised society; only in such an environment could men make up their own minds as true independents. But for him, this was an outgrowth of monotheistic religion, Christianity in particular, which gave the world the idea, the tradition, of ‘transcendent truth,’ beyond any one individual, truth that is ‘out there,’ waiting to be found. He examined the structure of science, observing for example that few fellows of the Royal Society ever objected that any of their colleagues were unworthy, and that few injustices were done, in that no one was left out of the society who was worthy of inclusion. Science, and fairness, are linked.

Polanyi saw the tradition of science, the search for objective, transcendent truth, as at base a Christian idea, though of course much developed – evolved – beyond the times when there was only revealed religion. The development of science, and the scientific method, he felt, had had an effect on toleration in society, and on freedom, every bit as important as its actual findings. In fact, Polanyi saw an eventual return to God; for him, the development of science, and the scientific way of thinking and working, was merely the latest stage in fulfilling God’s purpose, as man makes moral progress. The fact that scientists operate so much from intuition and according to their consciences only underlines his point.4

George Orwell disagreed. He believed science to be coldly rational, and no one detested or feared this cold rationalism more than he did. Both Animal Farm and Nineteen Eighty-Four are ostensibly political novels. When the latter was published in 1948, it was no less contentious than Orwell’s earlier book and was again interpreted by conservatives as an attack on the totalitarian nature of socialism by a former socialist who had seen the light. But this is not how the author saw it himself. As much as anything, it was a pessimistic attack on science. Orwell was pessimistic partly because he was ill with TB, and partly because the postwar world of 1948 was still very grim in Britain: the meat ration (two chops a week) was not always available, bread and potatoes were still rationed, soap was coarse, razor blades were blunt, elevators didn’t work, and according to Julian Symons, Victory gin gave you ‘the sensation of being hit on the head with a rubber club.’5 But Orwell never stopped being a socialist, and he knew that if socialism was to develop and succeed, it would have to take on the fact of Stalinism’s brutality and totalitarian nature. And so, among the ideas that Orwell attacks in Nineteen Eighty-Four, for example, is the central argument of The Managerial Revolution by James Burnham, that a ‘managerial class’ – chief among whom were scientists, technicians, administrators, and bureaucrats – was gradually taking over the running of society in all countries, and that terms like socialist and capitalist had less and less meaning.6 But the real power of the book was Orwell’s uncanny ability to evoke and predict totalitarian society, with its scientific and mock-scientific certainties.
The book opens with the now-famous line, ‘It was a bright cold day in April, and the clocks were striking thirteen.’ The clocks do not (yet) strike thirteen, but Orwell’s quasi-scientific ideas about Thought Police, Newspeak, and memory holes (a sort of shredder whereby the past is consigned to oblivion) are already chillingly familiar. Phrases like ‘Big Brother is watching you’ have passed into the language partly because the technology now exists to make this possible.

Orwell’s timing for Nineteen Eighty-Four could not have been better. The year in which the book was published, 1948, saw the beginning of the Berlin blockade, when Stalin cut off electricity to the western zones of the divided city, and all access by road and rail from West Germany. The threat of Stalinism was thus made plain for all to see. The blockade lasted nearly a year, until May 1949, but its effects were more permanent because the whole episode concentrated the minds of the Western powers, who now realised that the Cold War was here to stay. But Orwell’s timing was also good because Nineteen Eighty-Four coincided exactly with a very different set of events taking place on the intellectual front inside Russia which showed, just as much as the Berlin blockade, what Stalinism was all about. This was the Lysenko affair.

We have already seen, in chapter 17, how in the 1930s Soviet biology was split between traditional geneticists, who supported Western ideas – Darwin, Mendelian laws of inheritance, Morgan’s work on the chromosome and the gene – and those who followed the claims of Trofim Lysenko, who embraced the Lamarckian idea of the inheritance of acquired characteristics.7 During and immediately after World War II the situation inside Russia changed substantially. War concentrates the mind wonderfully, and thanks to the requirements of a highly mechanised and highly technical war, the Russian leadership needed scientists as it had never needed them before. As a result, science inside Russia was rapidly reorganised, with scientists rather than party commissars being placed in charge of key committees. Everything from geology to medicine was revamped in this way, and in several cases leading scientists were elevated to the rank of general. Brought in from the cold after the inquisition of the 1930s, scientists were given priority housing, allowed to eat in the special restaurants otherwise reserved for party apparatchiks and to use the special hospitals and sanitaria that had hitherto been the prerogative only of high party officials. The Council of Ministers even passed a resolution that provided for the building of dachas for academicians. More welcome still was the abolition of strict control over science by party philosophers that had been in place since the mid-1930s.

The war was particularly beneficial for genetics in Russia because, from 1941 on, Soviet Russia was an ally in particular of the United States and Great Britain. As a direct result of this alliance, the scientific barriers erected by Stalinism in the 1930s were dismantled. Soviet scientists were allowed to travel again, to visit American and British laboratories; foreign scientists (for example, Henry Dale, J. B. S. Haldane, and Ernest Lawrence) were again elected to Russian academies, and foreign journals were once more permitted inside the Soviet Union.8 Many of the Russian geneticists who opposed Lysenko took this opportunity to enlist the aid of Western colleagues – especially British and American biologists, and Russian emigrés in the United States, people like Theodosius Dobzhansky. They were further aided by the development of the ‘evolutionary synthesis’ (see chapter 20), which linked genetics and Darwinism and therefore put intellectual pressure on Michurin and Lysenko. Mendelian and Morgan-style experimentation and theory were reinstated, and thousands of boxes of Drosophila were imported into Russia in the immediate postwar years. As a direct result of all this activity, Lysenko found his formerly strong position under threat, and there was even an attempt to remove him from his position as a member of the praesidium of the Academy of Sciences.9 Letters of complaint were sent to Stalin, and for a while the Soviet leadership, hitherto very much in Lysenko’s camp, stood back from the debate. But only for a while.

The start of the Cold War proper was signalled in spring 1946 by Winston Churchill’s ‘Iron Curtain’ speech in Fulton, Missouri, but the confrontation really began in March 1947 with the announcement of the ‘Truman Doctrine,’ with aid to Greece and Turkey designed specifically to counteract the influence of communism. Shortly afterwards, Communists were expelled from the coalition governments in France and Italy. In Russia, one of the consequences was a new, strident ideological campaign that became known as zhdanovshchina, after Andrei Zhdanov, a member of the Politburo, who announced a series of resolutions laying down what was and was not politically correct in the media. At first writers and artists were cautioned against ‘servility and slavishness before Western culture,’ but at the end of 1946 an Academy of Social Sciences was created in Moscow under Agitprop control, and in the spring of 1947 zhdanovshchina was extended to philosophy. By the summer, science was included. At the same time, party ideologists resumed their control as authorities over science. Russian scientists who had gone abroad and not returned were now attacked publicly, the election of eminent Western scholars to Russian academies was stopped, and several academic journals were closed, especially those published in foreign languages. So far as science was concerned, Stalinist Russia had come full circle. As the pendulum swung back his way, Lysenko began to reassert his influence. His main initiative was to help organise a major public debate at VASKhNIL, the Lenin All-Union Academy of Agricultural Sciences, on the subject of ‘the struggle for existence.’ By putting Darwin centre stage, it was Lysenko’s intention not only to highlight the division between ‘Mendelian-Morganists’ and ‘Michurinists’ but to extend that division from the narrow field of genetics to the whole of biology, a naked power play.
The central issue in the debate was between those who, like Lysenko, denied that there was competition within species, holding that only competition between species existed, and those traditionalists who argued that there was competition throughout all spheres of life. Marx, it will be remembered, had admired Darwin, and had conceived history as a dialectic, a struggle. By Lysenko’s time, however, the official doctrine of Stalinism was that men are equal, that in a socialist society cooperation – and not competition – is what counts, and that differences between people (i.e., within the species) are not hereditary but solely produced by the environment. The debate was therefore designed to smoke out which scientists were in which camp.10

For some reason Stalin had always warmed to Lysenko. It seems the premier had pronounced views of his own on evolution, which were clearly Lamarckian. One reason for this may have been that Lamarck’s views were felt to accord more closely with Marxism. A more pressing reason may have been that the Michurinist/Lysenkoist approach fitted with Stalin’s rapidly developing views about the Cold War and the need to denounce everything Western. At any rate, he gave Lysenko a special consignment of ‘branching wheat’ to test his theories, and in return the ‘scientist’ kept Stalin regularly informed about the battle between the Michurinists and the Mendelians. And so, when this issue finally reached the Lenin All-Union Academy meeting in August 1948, Stalin took Lysenko’s line, even going so far as to annotate conference documents with his own comments.11

The conference itself was a carefully staged victory for Lysenko. Following his opening address, five days were devoted to a discussion. However, his opponents were not allowed to speak for the first half of the meeting, and overall only eight of the fifty-six speakers were allowed to criticise him.12 At the end, not only did the conference ratify Lysenko’s approach, but he revealed he had the support of the Central Committee, which meant, in effect, that he had Stalin’s full endorsement for total control, over not just genetics but all of Soviet biology. The VASKhNIL meeting was also followed by a sustained campaign in Pravda. Normally, the newspaper consisted of four pages; that summer for nine days the paper produced six-page editions with an inordinate amount of space devoted to biology.13 A colour film about Michurin was commissioned, with music by Shostakovich. It is difficult to exaggerate the intellectual importance of these events. Recent research, published by Nikolai Krementsov, has revealed that Stalin spent part of the first week of August 1948 editing Lysenko’s address; this was at exactly the time he was meeting with the ambassadors of France, Britain, and the United States for prolonged consultations on the Berlin crisis. After the conference, at the premier’s instigation, great efforts were made to export Michurinist biology to newborn socialist countries such as Bulgaria, Poland, Czechoslovakia, and Romania. Biology, more than any other realm of science, concerns the very stuff of human nature, for which Marx had set down certain laws. Biology was therefore more of a potential threat to Marxist thought than any other science. The Lysenko version of genetics offered the Soviet leadership the best hope for producing a science that posed no threat to Marxism, and at the same time set Soviet Russia apart from the West. 
With the Iron Curtain firmly in place and communications between Russian scientists and their Western colleagues cut to a minimum, the path was set for what has rightly been called the death of Russian genetics. For the USSR it was a disaster.

The personal rivalry, political manoeuvring, self-deception, and sheer cussedness that disfigured Soviet genetics for so long are of course the very antithesis of the way science prefers to be portrayed. It is true that the Lysenko affair may be the very worst example of political interference in an important scientific venture, and for that reason the lessons it offers are limited. In the West there was nothing strictly comparable, but even so, in the 1950s, there were other very significant advances made in science which, on examination, were shown to be the fruits of anything but calm, reflective, disinterested reason. On the contrary, these advances also resulted from bitter rivalry, overweening ambition, luck, and in some cases downright cheating.

Take first the jealous nature of William Shockley. That, as much as anything, was to account for his massive input into twentieth-century intellectual history. That input may be said to have begun on Tuesday, 23 December 1947, just after seven o’clock in the morning, when Shockley parked his MG convertible in the parking lot of Bell Telephone Laboratories in Murray Hill, New Jersey, about twenty miles from Manhattan.14 Shockley, a thin man without much hair, took the stairs to his office on the third floor of the lab. He was on edge. Later in the day, he and two colleagues were scheduled to reveal a new device they had invented to the head of Bell Labs, where they worked. Shockley was tense because although he was the nominal head of his little group of three, it had actually been the other two, John Bardeen and Walter Brattain, who had made the breakthrough. Shockley had been leapfrogged.15 During the morning it started to snow. Ralph Bown, the research director of Bell, wasn’t deterred however, and stopped by after lunch. Shockley, Bardeen, and Brattain brought out their device, a small triangle of plastic with a piece of gold foil attached, fixed in place by a small spring made from a paper clip.16 Their contraption was encased in another piece of plastic, transparent this time, and shaped like a capital C. Brattain fingered his moustache and looked out at the snow. The baseball diamond below the lab window was beginning to disappear. The tops of the trees on the Watchung Mountains in the distance were also lost as the low cloud closed in. He leaned across the lab bench and switched on the equipment. It took no time at all to warm up, and the oscilloscope to which it was connected immediately showed a luminous spot that raced across the screen.17 Brattain now wired the device to a microphone and a set of headphones, which he passed to Bown. Quietly, Brattain spoke a few words into the microphone – and Bown shot him a sharp glance.
Brattain had only whispered, but what Bown heard was anything but a whisper, and that was the point of the device. The input had been amplified. The device they had built, an arrangement of germanium, gold foil, and a paper clip, was able to boost an electrical signal almost a hundredfold.18

Six months later, on 30 June 1948, Bown faced the press at the Bell Headquarters on West Street in Manhattan, overlooking the Hudson River. He held up the small piece of new technology. ‘We have called it the Transistor,’ he explained, ‘because it is a resistor or semiconductor device which can amplify electrical signals as they are transferred through it.’19 Bown had high hopes for the new device; at that time the amplifiers used in telephones were clumsy and unreliable, and the vacuum tubes that performed the same function in radios were bulky, broke easily, and were very slow in warming up.20 The press, or at least the New York Times, did not share this enthusiasm, and its report was buried in an inside section. It was at this point that Shockley’s jealousy paid off. Anxious to make his own contribution, he kept worrying about the uses to which the transistor might be put. Looking at the world around him, the mass-society world of standardisation, he grasped that if the transistor were to be manufactured in bulk, it needed to be simpler and stronger.

The transistor was in fact a development of two inventions made much earlier in the century. In 1906 Lee de Forest had stumbled across the fact that an electrified wire mesh, placed in the path of a stream of electrons in a vacuum tube, could ‘amplify’ the flow at the outgoing end.21 This natural amplification was the most important aspect of what came to be called the electronics revolution, but de Forest’s discovery was built on by solid-state physics. This was due to a better grasp of electricity, itself the result of advances in particle physics. A solid structure will conduct electricity if the electron in its outer shell is ‘free’ – i.e., that shell isn’t ‘full’ (this goes back to Pauli’s exclusion principle and Linus Pauling’s research on the chemical bond and how it affected reactivity). Copper conducts electricity because there is only one electron in its outer shell, whereas sulphur, for example, which does not carry electricity at all, has all its electrons tightly bound to their nuclei. Sulphur, therefore, is an insulator.22 But not all elements are this simple. ‘Semiconductors’ (silicon, say, or germanium) are forms of matter in which there are a few free electrons but not many. Whereas copper has one free electron for each atom, silicon has a free electron for every thousand atoms. It was subsequently discovered that such semiconductors have unusual and very useful properties, the most important being that they can conduct (and amplify) under certain conditions, and insulate under others. It was Shockley, smarting from being beaten to the punch by Bardeen and Brattain, who put all this together and in 1950 produced the first, simple, strong, semiconductor transistor, capable of being mass-produced.23 It consisted of a sliver of silicon and germanium with three wires attached. In conversation this device was referred to as a ‘chip.’24
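The distinction drawn above, that conduction depends on how many loosely bound outer-shell electrons a material has, can be sketched as a toy classifier. The figures and thresholds below are illustrative orders of magnitude taken from the passage (one free electron per atom for copper, roughly one per thousand atoms for silicon), not measured physical values.

```python
# Toy sketch of the conductor/semiconductor/insulator distinction:
# classify materials by a rough ratio of 'free' electrons per atom.
# Ratios are illustrative, echoing the passage, not precise physics.
FREE_ELECTRONS_PER_ATOM = {
    "copper": 1.0,       # one free outer-shell electron per atom
    "silicon": 0.001,    # about one free electron per thousand atoms
    "germanium": 0.001,  # likewise a semiconductor
    "sulphur": 0.0,      # all electrons tightly bound to their nuclei
}

def classify(material: str) -> str:
    """Classify a material by its (rough) density of free electrons."""
    ratio = FREE_ELECTRONS_PER_ATOM[material]
    if ratio == 0.0:
        return "insulator"       # no free electrons: cannot conduct
    if ratio < 0.01:
        return "semiconductor"   # a few free electrons: conducts conditionally
    return "conductor"           # plentiful free electrons

for material in FREE_ELECTRONS_PER_ATOM:
    print(material, "->", classify(material))
```

The useful point for the transistor story is the middle category: a material with only a few free electrons can be nudged between conducting and insulating, which is exactly the controllability Shockley’s device exploited.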

Shockley’s timing was perfect. Long-playing records and ‘singles’ had recently been introduced to the market, with great success, and the pop music business was taking off. In 1954, the very year Alan Freed started playing R & B on his shows, a Dallas company called Texas Instruments began to manufacture chip-transistors for the new portable radios that had just gone on sale, which were cheap (less than $50) and therefore ideal for playing pop all day long. For reasons that have never been adequately explained, TI gave up this market, which was instead taken over by a Japanese firm no one had ever heard of, Sony.25 By then Shockley had fallen out with first one, then the other erstwhile colleague. Bardeen had stormed out of the lab in 1951, unable to cope with Shockley’s intense rivalry, and Brattain, likewise unable to stomach his former boss, had himself reassigned to a different section of Bell Labs. When the three of them gathered in Stockholm in 1956 to receive the Nobel Prize for Physics, the atmosphere was icy, and it was the last time they would be in the same room together.26 Shockley had himself left Bell by that time, forsaking the snow of New Jersey for the sunshine of California, in particular a pleasant valley of apricot orchards south of San Francisco. There he opened the Shockley Semiconductor Laboratory.27 To begin with, it was a small venture, but in time the apricots would be replaced by more laboratories. In conversation the area was referred to as Silicon Valley.

Shockley, Bardeen, and Brattain fought among themselves. With the discovery of DNA, the long-chain molecule that governs reproduction, the rivalry was between three separate groups of researchers, on different continents, some of whom never met. But feelings ran just as high as between Shockley and his colleagues, and this was an important factor in what happened.

The first the public knew about this episode came on 25 April 1953, in Nature, in a 900-word paper entitled ‘Molecular Structure of Nucleic Acids.’ The paper followed the familiar, ordered layout of Nature articles. But although it was the paper that created the science of molecular biology, and although it also helped kill off Lysenkoism, it was the culmination of an intense two-year drama in which, if science really were the careful, ordered world it is supposed to be, the wrong side won.

Among the personalities, Francis Crick stands out. Born in Northampton in 1916, the son of a shoemaker, Crick graduated from London University and worked at the Admiralty during World War II, designing mines. It was only in 1946, when he attended a lecture by Linus Pauling, that his interest in chemical research was kindled. He was also influenced by Erwin Schrödinger’s What Is Life? and its suggestion that quantum mechanics might be applied to genetics. In 1949 he was taken on by the Cambridge Medical Research Council Unit at the Cavendish Laboratory, where he soon became known for his loud laugh (which forced some people to leave the room) and his habit of firing off theories on this or that at the drop of a hat.28 In 1951 an American joined the lab. James Dewey Watson was a tall Chicagoan, twelve years younger than Crick but extremely self-confident, a child prodigy who had also read Schrödinger’s What Is Life? while he was a zoology student at the University of Chicago, which influenced him toward microbiology. As science historian Paul Strathern tells the story, on a visit to Europe Watson had met a New Zealander, Maurice Wilkins, at a scientific congress in Naples. Wilkins, then based at King’s College in London, had worked on the Manhattan Project in World War II but became disillusioned and turned to biology. The British Medical Research Council had a biophysics unit at King’s, which Wilkins then ran. One of his specialities was X-ray diffraction pictures of DNA, and in Naples he generously showed Watson some of the results.29 It was this coincidence that shaped Watson’s life. There and then he seems to have decided that he would devote himself to discovering the structure of DNA. He knew there was a Nobel Prize in it, that molecular biology could not move ahead without such an advance, but that once the advance was made, the way would be open for genetic engineering, a whole new era in human experience. He arranged a transfer to the Cavendish. 
A few days after his twenty-third birthday Watson arrived in Cambridge.30

What Watson didn’t know was that the Cavendish had ‘a gentleman’s agreement’ with King’s. The Cambridge laboratory was studying the structure of protein, in particular haemoglobin, while London was studying DNA. That was only one of the problems. Although Watson hit it off immediately with Crick, and both shared an amazing self-confidence, that was virtually all they had in common. Crick was weak in biology, Watson in chemistry.31 Neither had any experience at all of X-ray diffraction, the technique developed by the leader of the lab, Lawrence Bragg, to determine atomic structure.32 None of this deterred them. The structure of DNA fascinated both men so much that virtually all their waking hours were spent discussing it. As well as being self-confident, Watson and Crick were highly competitive. Their main rivals came from King’s, where Maurice Wilkins had recently hired the twenty-nine-year-old Rosalind Franklin (‘Rosy,’ though never to her face).33 Described as the ‘wilful daughter’ of a cultured banking family, she had just completed four years’ X-ray diffraction work in Paris and was one of the world’s top experts. When Franklin was hired by Wilkins she thought she was to be his equal and that she would be in charge of the X-ray diffraction work. Wilkins, on the other hand, thought that she was coming as his assistant. The misunderstanding did not make for a happy ship.34

Despite this, Franklin made good progress and in the autumn of 1951 decided to give a seminar at King’s to make known her findings. Remembering Watson’s interest in the subject, from their meeting in Naples, Wilkins invited the Cambridge man. At this seminar, Watson learned from Franklin that DNA almost certainly had a helical structure, each helix having a phosphate-sugar backbone, with attached bases: adenine, guanine, thymine, or cytosine. After the seminar, Watson took Franklin for a Chinese dinner in Soho. There the conversation turned away from DNA to how miserable she was at King’s. Wilkins, she said, was reserved, polite, but cold. In turn, this made Franklin on edge herself, a form of behaviour she couldn’t avoid but detested. At dinner Watson was outwardly sympathetic, but he returned to Cambridge convinced that the Wilkins-Franklin relationship would never deliver the goods.35

The Watson-Crick relationship meanwhile flourished, and this too was not unrelated to what happened subsequently. Because they were so different, in age, cultural, and scientific background, there was precious little rivalry. And because they were so conscious of their great ignorance on so many subjects relevant to their inquiry (they kept Pauling’s Nature of the Chemical Bond by their side, as a bible), they could slap down each other’s ideas without feelings being hurt. It was light-years away from the Wilkins-Franklin ménage, and in the long run that may have been crucial.

In the short run there was disaster. In December 1951, Watson and Crick thought they had an answer to the puzzle, and invited Wilkins and Franklin for a day in Cambridge, to show them the model they had built: a triple-helix structure with the bases on the outside. Franklin savaged them, curtly grumbling that their model didn’t fit any of her crystallography evidence, either for the helical structure or the position of the bases, which she said were on the inside. Nor did their model take any account of the fact that in nature DNA existed in association with water, which had a marked effect on its structure.36 She was genuinely appalled at their neglect of her research and complained that her day in Cambridge was a complete waste of time.37 For once, Watson and Crick’s ebullient self-confidence let them down, even more so when word of the debacle reached the ears of their boss. Bragg called Crick into his office and put him firmly in his place. Crick, and by implication Watson, was accused of breaking the gentleman’s agreement, of endangering the lab’s funding by doing so. They were expressly forbidden from continuing to work on the DNA problem.38

So far as Bragg was concerned, that was the end of the matter. But he had misjudged his men. Crick did stop work on DNA, but as he told colleagues, no one could stop him thinking about it. Watson, for his part, continued work in secret, under cover of another project on the structure of the tobacco mosaic virus, which showed certain similarities with genes.39 A new factor entered the situation when, in the autumn of 1952, Peter Pauling, Linus’s son, arrived at the Cavendish to do postgraduate research. He attracted a lot of beautiful women, much to Watson’s satisfaction, but more to the point, he was constantly in touch with his father and told his new colleagues that Linus was putting together a model for DNA.40 Watson and Crick were devastated, but when an advance copy of the paper arrived, they immediately saw that it had a fatal flaw.41 It described a triple-helix structure, with the bases on the outside – much like their own model that had been savaged by Franklin – and Pauling had left out the ionisation, meaning his structure would not hold together but fall apart.42 Watson and Crick realised it would only be a matter of time before Pauling himself realised his error, and they estimated they had six weeks to get in first.43 They took a risk, broke cover, and told Bragg what they were doing. This time he didn’t object: there was no gentleman’s agreement so far as Linus Pauling was concerned.

So began the most intense six weeks Watson or Crick had ever lived through. They now had permission to build more models (models were especially necessary in a three-dimensional world) and had developed their thinking about the way the four bases – adenine, guanine, thymine, and cytosine – were related to each other. They knew by now that adenine and thymine were attracted to each other, as were guanine and cytosine. And, from Franklin’s latest crystallography, they also had far better pictures of DNA, giving much more accurate measures of its dimensions. This made for better model building. The final breakthrough came when Watson realised they could have been making a simple error by using the wrong isomeric form of the bases. Each base came in two forms – enol and keto – and all the evidence so far had pointed to the enol form as being the correct one to use. But what if the keto form were tried?44 As soon as he followed this hunch, Watson immediately saw that the bases fitted together on the inside, to form the perfect double-helix structure. Even more important, when the two strands separated in reproduction, the mutual attraction of adenine to thymine, and of guanine to cytosine, meant that each new double helix was identical to the old one – the biological information contained in the genes was passed on unchanged, as it had to be if the structure was to explain heredity.45 They announced the new structure to their colleagues on 7 March 1953, and six weeks later their paper appeared in Nature. Wilkins, says Strathern, was charitable toward Watson and Crick, calling them a couple of ‘old rogues.’ Franklin instantly accepted their model.46 Not everyone was as emollient. They were called ‘unscrupulous’ and told they did not deserve the sole credit for what they had discovered.47 In fact, the drama was not yet over.
In 1962 the Nobel Prize for Medicine was awarded jointly to Watson, Crick, and Wilkins, and in the same year the prize for chemistry went to Max Perutz, head of the Cavendish X-ray diffraction unit, and his assistant, John Kendrew. Rosalind Franklin got nothing. She had died of cancer in 1958, at the age of thirty-seven.48

Years later Watson wrote an entertaining and revealing book about the whole saga, on which this account is partly based. Some of his success as an author lay in his openness about the scientific process, which made him and his colleagues seem far more human than had hitherto been the case. For most people up until then, science books were textbooks, thick as bricks and just as dry. Partly this was a tradition, a convention that what counted in science was the results, not how the participants achieved them. Another reason, of course, in the case of certain sciences at least, was the Cold War, which kept many crucial advances secret, at least for a while. In fact the Cold War, which succeeded in making scientists into faceless bureaucrats, along the lines Orwell had laid into in Nineteen Eighty-Four, also sparked a bitter rivalry between scientists on either side of the divide, very different from the cooperative international mood in physics in the early part of the century. The most secret discipline was in fact physics itself and its penumbra of activities. And it was here that the rivalry was keenest. Archival research carried out in Russia since perestroika has, for example, identified one great scientist who, owing to secrecy, was virtually unknown hitherto, not only in the West but in his own country, and who was almost entirely obsessed with rivalry. He was more or less single-handedly responsible for Soviet Russia’s greatest scientific success, but his strengths were also his weaknesses, and his competitiveness led to his crucial failures.49

On Friday, 4 October 1957, the world was astounded to learn that Soviet Russia had launched an orbiting satellite. Sputnik I measured only twenty-three inches across and didn’t do much as it circled the earth at three hundred miles a minute. But that wasn’t the point: its very existence up there, overflying America four times during the first day, was a symbol of the Cold War rivalry that so preoccupied the postwar world and in which, for a time at least, the Russians seemed to be ahead.50 Having received the story in the late afternoon, the New York Times next morning took the unusual step of printing a three-decker headline, in half-inch capitals, running the whole way across the front page:

SOVIET FIRES EARTH SATELLITE INTO SPACE;
IT IS CIRCLING THE GLOBE AT 18,000 MPH;
SPHERE TRACKED IN 4 CROSSINGS OVER U. S.51

Only then did Nikita Khrushchev, the Russian leader, realise what an opportunity Sputnik’s launch provided for some Cold War propaganda. The next day’s Pravda was quite different from the day before, which had recorded the launch of Sputnik in just half a column. ‘World’s First Artificial Satellite of Earth Created in Soviet Nation,’ ran the headline, and it too stretched all the way across page one. The paper also published the congratulations that poured in, not only from what would soon come to be called satellite states of the USSR, but from scientists and engineers in the West.52

Sputnik was news partly because it showed that space travel was possible, and that Russia might win the race to colonise the heavens – with all the psychological and material advantages that implied – but also because, in order to reach orbit, the satellite must have been launched at a speed of at least 8,000 metres per second and with an accuracy which meant the Russians had solved several technological problems associated with rocket technology. And it was rocket technology that lay at the heart of the Cold War arms race; both Russia and the United States were then trying their hardest to develop intercontinental ballistic missiles (ICBMs) that could carry nuclear warheads vast distances between continents. The launch of Sputnik meant the Russians had a rocket with enough power and accuracy to deliver hydrogen bombs on to American soil.53
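The 8,000-metres-per-second figure can be checked from first principles: for a circular orbit, gravitational pull must supply exactly the centripetal force, giving v = √(GM/r). A back-of-the-envelope sketch (the physical constants and Sputnik’s approximate altitude are standard modern values, assumed here rather than taken from the account above):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed required for a circular orbit at the given altitude above Earth."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

# Sputnik 1's perigee was roughly 215 km up
v = circular_orbit_speed(215e3)
print(f"{v:.0f} m/s")   # ≈ 7780 m/s, i.e. about 17,400 mph
```

A result just under 8,000 m/s is consistent both with the text’s threshold and with the 18,000 mph of the New York Times headline.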

After dropping behind in the arms race during World War II, the Soviet Union quickly caught up between 1945 and 1949, thanks to a small coterie of ‘atomic spies,’ including Julius and Ethel Rosenberg, Morton Sobell, David Greenglass, Harry Gold, and Klaus Fuchs. But the delivery of atomic weapons was a different matter, and here, since the advent of perestroika, several investigations have been made of what was going on behind the scenes in the Russian scientific community. By far the most interesting is James Harford’s biography of Sergei Pavlovich Korolev.54 Korolev, who led an extraordinary life, may fairly be described as the father of both Russia’s ICBM system and its space program.55 Born in 1907 near Kiev, in Ukraine, into an old Cossack family, Sergei Pavlovich grew up obsessed with manmade flight. This led to an interest in rocket and jet propulsion in the 1930s. (It has also become clear since perestroika that the USSR had a spy in Wernher von Braun’s team, and that Korolev and his colleagues – not to mention Stalin, Beria, and Molotov – were kept up-to-date with German progress.)
But Korolev’s smooth ride up the Soviet system came to an abrupt end in June 1938, when he was arrested in the purges and deported to the gulag, accused of ‘subversion in a new field of technology.’ He was given no trial but beaten until he ‘confessed.’56 He spent some of his time at the notorious camp in the Kolyma area of far-eastern Siberia, later made famous by Aleksandr Solzhenitsyn in The Gulag Archipelago.57 Robert Conquest, in The Great Terror, says that Kolyma ‘had a death rate of up to 30 per cent [per year],’ but Korolev survived, and because so many people interceded on his behalf, he was eventually moved to a sharashka, a penal institution not as severe as the gulag, where scientists and engineers were made to work on practical projects for the good of the state.58 Korolev was employed in a sharashka run by Andrei Tupolev, another famous aircraft designer.59 During the early 1940s the Tu-2 light bomber and the Ilyushin-2 attack aircraft were designed in the Tupolev sharashka, and had notable records later in the war. Korolev was released in the summer of 1944, but it was not until 1957 – the year Sputnik was launched – that he obtained complete exoneration for his alleged ‘subversion.’60

Photographs of Korolev show a tough, round-faced bear of a man, and do nothing to dispel the idea that he was a force of nature, with a temper that terrified even senior colleagues. After the war he adroitly picked the brains of Germany’s rocket scientists, whom Russia had captured, and it was the same story after the explosion of the first atomic bomb, and the leaking of atomic secrets to the Russians. It was Korolev who spotted that the delivery of weapons of mass destruction was every bit as important as the weapons themselves. Rockets were needed that could travel thousands of miles with great accuracy. Korolev also realised that this was an area where two birds could be killed with one stone. A rocket that could carry a nuclear warhead all the way from Moscow to Washington would need enough power to send a satellite into orbit.

There were sound scientific reasons for exploring space, but from the information recently published about Korolev, it is clear that a major ingredient in his motivation was to beat the Americans.61 This was very popular with Stalin, who met Korolev several times, especially in 1947. Here was another field, like genetics, where Soviet science could be different from, and better than, its Western counterpart.62 It was a climate where the idea of science as a cool, rational, reflective, disinterested activity went out the window. By the early 1950s Korolev was the single most important driving force behind the Russian rocket/space program, and according to James Harford his moods fluctuated wildly depending on progress. He had a German trophy car commandeered after the war, which he drove at high speeds around Moscow and the surrounding countryside to get the aggression out of his system. He took all failures of the project personally and obsessively combed the open American technical literature for clues as to how the Americans might be progressing.63 In the rush to be first, mistakes were made, and the first five tests of what was called in Russia the R-7 rocket were complete failures. But at last, on 21 August 1957, an R-7 flew the 7,000 kilometres to the Kamchatka Peninsula in eastern Siberia.64

In July 1955 the Eisenhower administration had announced that the United States intended to launch a satellite called Vanguard as part of the International Geophysical Year, which was due to run from 1957 to 1958. Following this announcement, Korolev recruited several new scientists and began to build his own satellite. Recent accounts make it clear that Korolev was intensely aware of how important the project was historically – he just had to be first – and once R-7 had proved itself, he turned up the heat. Within a month of the first R-7 reaching Kamchatka, Sputnik lifted off its launchpad in Baikonur. The launch not only made headline news in the world’s media but gave a severe jolt to aeronautical professionals in the West.65 The Americans responded almost immediately, bringing forward by several months the launch of their own satellite, to December 1957. This too was scarcely the mark of cool, rational scientists – and it showed. In the full glare of the television cameras, the American satellite got only a few feet off the ground before it fell back to earth and exploded in flames. ‘OH, WHAT A FLOPNIK!’ crowed Pravda. ‘KAPUTNIK!’ said another newspaper; ‘STAYPUTNIK,’ a third.66

Realising the coup Korolev had produced, Khrushchev called him to the Kremlin and instructed him to provide something even more spectacular to celebrate the fortieth anniversary of the revolution.67 Korolev’s response was Sputnik 2, launched a month after Sputnik 1 — with Laika, a mongrel dog, aboard. As a piece of theatre it could not be faulted, but as science it left a lot to be desired. Sputnik 2 refused to separate from its booster, its thermal control system failed, the satellite overheated – and Laika was roasted. Animal rights groups protested, but the Russians dismissed the complaints, arguing that Laika had been ‘a martyr to a noble cause.’68 And in any case, Sputnik 2 was soon followed by Sputnik 3.69 This was intended as the most sophisticated and productive of all the satellites, equipped with sensitive measuring devices to assess a whole range of atmospheric and cosmological phenomena. Korolev’s immediate motive was to heap further humiliation on the United States – but he came a cropper again. During tests for the satellite, a crucial tape recorder failed to work. To have rectified it thoroughly would have delayed the launch, and the man responsible, Alexei Bogomolov, ‘did not want to be considered a loser in the company of winners.’ He argued that the failure was due to electrical interference in the test room and that such interference wouldn’t exist in space. No one else was taken in – except the one man who counted, Korolev.70 The tape recorder duly failed in flight. Nothing sensational occurred – there was no spectacular explosion – but crucial information was not recorded.
As a result, it was the Americans, whose Explorer 3 had finally been launched on 26 March 1958, who observed a massive belt of radiation around the earth that became known as the Van Allen belts, after James Van Allen, who designed the instruments that did record the phenomenon.71 And so, after the initial space flight, with all that implied, the first major scientific discovery was made not by Korolev but by the late-arriving Americans. Korolev’s personality was responsible for both his successes and his failures.72

Nineteen fifty-eight was the first full year of the space age, with twenty-two launch attempts, though only five were successful. Korolev went on securing ‘firsts,’ including unmanned landings on the moon and Venus, and in April 1961 Yuri Gagarin became the first human being to orbit the earth. When Korolev died, in January 1966, he was buried in the wall of the Kremlin, a supreme honour. But his identity was always kept secret while he was alive; it is only recently that he has received his full due.

Character was certainly crucial to the fifth great scientific advance that took place in the 1950s. Neither can one rule out the role of luck. For the fact is that Mary and Louis Leakey, archaeologists and palaeontologists, had been excavating in Africa, in Kenya and Tanganyika (later Tanzania) since the 1930s without finding anything especially significant. In particular, they had dug at Olduvai Gorge, a 300-foot-deep, thirty-mile-long chasm cut into the Serengeti Plain, part of the so-called Rift Valley that runs north-south through the eastern half of Africa and is generally held to be the border between two massive tectonic plates.73 For scientists, the Olduvai Gorge had been of interest ever since it had first been discovered in 1911, when a German entomologist named Wilhelm Kattwinkel almost fell into it as he chased butterflies.74 Climbing down into the gorge, which cuts through many layers of sediments, he discovered innumerable fossil bones lying around, and these caused a stir when he got them back to Germany because they included parts of an extinct horse. Later expeditions found sections of a modern human skeleton, and this led some scientists to the conclusion that Olduvai was a perfect place for the study of extinct forms of life, including – perhaps – ancestors of mankind.

It says a lot for the Leakeys’ strength of character that they dug at Olduvai from the early 1930s until 1959 without making the earth-shattering discovery they always hoped for.75 Until that time, as was mentioned in earlier chapters, it was believed that early man had originated in Asia. Born in Kenya to a missionary family, Louis had found his first fossils at the age of twelve and had never stopped looking from then on. To begin with, his quixotic character showed itself in a somewhat lackadaisical approach to scientific evidence, which ensured that he was never offered a formal academic position.76 In the prewar moral climate Leakey’s career was not helped either by an acrimonious divorce from his first wife, which put paid to his chances of an academic position in straitlaced Cambridge.77 Another factor was his activity as a British spy at the time of Kenya’s independence movement in the late 1940s and early 1950s, culminating in his appearance to give evidence in court against Jomo Kenyatta, the leader of the independence party, and later the country’s first president.78 (Kenyatta never seems to have borne a grudge.) Finally, there was Leakey’s fondness for a succession of young women. There was nothing one-dimensional about Leakey, and his character was central to his discoveries and to what he made of them.

During the 1930s, until most excavation was halted because of the war, the Leakeys had dug at Olduvai more years than not. Their most notable achievement was to find a massive collection of early manmade tools. Louis and his second wife, Mary, were the first to realise that flint tools were not going to be found in that part of Africa, as they had been all over Europe, because flint is generally lacking in East Africa. They did, however, find ‘pebble tools’ – basalt and quartzite especially – in abundance.79 This convinced Leakey that he had found a ‘living floor,’ a sort of prehistoric living room where early man made tools in order to eat the carcasses of the several extinct species that by now had been discovered in or near Olduvai. After the war, neither he nor Mary revisited Olduvai until 1951, in the wake of the Kenyatta trial, but they dug there through most of the 1950s. Throughout the decade they found thousands of hand axes and, associated with them, fossilised bones of many extinct mammals: pigs, buffalos, antelopes, several of them much bigger than today’s varieties, evoking a romantic image of an Africa inhabited by huge, primitive animals. They nicknamed this living floor ‘the Slaughter House.’80 At that stage, according to Virginia Morell, the Leakeys’ biographer, they thought that the lowest bed in the gorge dated to about 400,000 years ago and that the highest bed was 15,000 years old. Louis had lost none of his enthusiasm, despite having reached middle age without finding any humans in more than twenty years of searching.
In 1953 he got so carried away by his digging that he spent too long in the African sun and suffered such a severe case of sunstroke that his hair ‘turned from brown to white, literally overnight.’81 The Leakeys were kept going by the occasional find of hominid teeth (being so hard, teeth tend to survive better than other parts of the human body), so Louis remained convinced that one day the all-important skull would turn up.

On the morning of 17 July 1959, Louis awoke with a slight fever. Mary insisted he stay in camp. They had recently discovered the skull of an extinct giraffe, so there was plenty to do.82 Mary drove off in the Land Rover, alone except for her two dogs, Sally and Victoria. That morning she searched a site in Bed I, the lowest and oldest, known as FLK (for Frieda Leakey’s Korongo, Frieda Leakey being Louis’s first wife and korongo being Swahili for gully). Around eleven o’clock, with the heat becoming uncomfortable, Mary chanced on a sliver of bone that ‘was not lying loose on the surface but projecting from beneath. It seemed to be part of a skull…. It had a hominid look, but the bones seemed enormously thick – too thick, surely,’ as she wrote later in her autobiography.83 Dusting off the topsoil, she observed ‘two large teeth set in the curve of a jaw.’ At last, after decades, there could be no doubt: it was a hominid skull.84 She jumped back into the Land Rover with the two dogs and rushed back to camp, shouting ‘I’ve got him! I’ve got him!’ as she arrived. Excitedly, she explained her find to Louis. He, as he put it later, became ‘magically well’ in moments.85

When Louis saw the skull, he could immediately see from the teeth that it wasn’t an early form of Homo but probably australopithecine, that is, more apelike. But as they cleared away the surrounding sod, the skull revealed itself as enormous, with a strong jaw, a flat face, and huge zygomatic arches – or cheekbones – to which great chewing muscles would have been attached. More important, it was the third australopithecine skull the Leakeys had found in association with a hoard of tools. Louis had always explained this by assuming that the australopithecines were the victims of Homo killers, who then feasted on the more primitive form of ancestor. But now Louis began to change his mind – and to ask himself if it wasn’t the australopithecines who had made the tools. Tool making had always been regarded as the hallmark of humanity – and now, perhaps, humanity should stretch back to the australopithecines.

Before long, however, Louis convinced himself that the new skull was actually midway between australopithecines and modern Homo sapiens and so he called the new find Zinjanthropus boisei – Zinj being the ancient Arabic word for the coast of East Africa, anthropos denoting the fossil’s humanlike qualities, and boisei after Charles Boise, the American who had funded so many of their expeditions.86 Because he was so complete, so old and so strange, Zinj made the Leakeys famous. The discovery was front-page news across the world, and Louis became the star of conferences in Europe, North America, and Africa. At these conferences, Leakey’s interpretation of Zinj met some resistance from other scholars who thought that Leakey’s new skull, despite its great size, was not all that different from other australopithecines found elsewhere. Time would prove these critics right and Leakey wrong. But while Leakey was arguing his case with others about what the huge, flat skull meant, two scientists elsewhere produced a completely unexpected twist on the whole matter. A year after the discovery of Zinj, Leakey wrote an article for the National Geographic magazine, ‘Finding the World’s Earliest Man,’ in which he put Zinjanthropus at 600,000 years old.87 As it turned out, he was way off.

Until the middle of the century, the main dating technique for fossils was the traditional archaeological device of stratigraphy, analysing sedimentation layers. Using this technique, Leakey calculated that Olduvai dated from the early Pleistocene, generally believed to be the time when giant animals such as the mammoth lived on earth alongside man, extending from 600,000 years ago until around 10,000 years ago. Since 1947, a new method of dating, the carbon-14 technique, had been introduced. C14 dating depends on the fact that plants take carbon dioxide out of the air, a small proportion of which is radioactive, having been bombarded by cosmic rays from space. Photosynthesis converts this CO2 into radioactive plant tissue, which remains a constant proportion until the plant (or the organism that has eaten the plant) dies, when the uptake of radioactive carbon stops. Radioactive carbon is known to have a half-life of roughly 5,700 years, and so, if the proportion of radioactive carbon in an ancient object is compared with the proportion in contemporary objects, it is possible to calculate how much time has elapsed since the organism’s death. With its relatively short half-life, however, C14 is only useful for artefacts up to roughly 40,000 years old. Shortly after Leakey’s National Geographic article appeared, two geophysicists from the University of California at Berkeley, Jack Evernden and Garniss Curtis, announced that they had dated some volcanic ash from Bed I of Olduvai – where Zinj had been found – using the potassium-argon (K/Ar) method. In principle, this method is analogous to C14 dating but uses the rate at which the unstable radioactive isotope potassium-40 (K40) decays to stable argon-40 (Ar40). This can be compared with the known abundance of K40 in natural potassium, and an object’s age calculated from the half-life.
Because the half-life of K40 is about 1.3 billion years, this method is much more suitable for geological material.88
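Both methods rest on the same exponential-decay arithmetic: the elapsed time is the half-life multiplied by the number of halvings the parent isotope has undergone. A minimal sketch, using the rounded half-lives quoted above (the sample fractions are invented for illustration):

```python
import math

C14_HALF_LIFE = 5_700     # years, rounded as in the text
K40_HALF_LIFE = 1.3e9     # years

def age_from_fraction(remaining_fraction: float, half_life_years: float) -> float:
    """Years elapsed, given the fraction of the parent isotope still remaining."""
    return half_life_years * math.log2(1.0 / remaining_fraction)

# A sample retaining half its original C14 is exactly one half-life old:
print(round(age_from_fraction(0.5, C14_HALF_LIFE)))    # 5700

# With only 1% of the C14 left, a date is already close to the method's
# practical ceiling of roughly 40,000 years:
print(round(age_from_fraction(0.01, C14_HALF_LIFE)))   # 37870
```

The same decay law, applied to potassium-40 with its half-life of over a billion years (measured in practice through the argon that accumulates), is what lets the K/Ar method reach dates of millions of years, far beyond C14’s range.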

Using the new method, the Berkeley geophysicists came up with the startling news that Bed I at Olduvai was not 600,000 but 1.75 million years old.89 This was a revelation, the very first clue that early man was much, much older than anyone suspected. This, as much as the actual discovery of Zinj, made Olduvai Gorge famous. In the years that followed, many more skulls and skeletons of early hominids would be found in East Africa, sparking bitter controversy about how, and when, early man developed. But the ‘bone rush’ in the Rift Valley really dates from the fantastic publicity surrounding the discovery of Zinj and its great antiquity. This eventually produced the breathtakingly audacious idea – almost exactly one hundred years after Darwin – that man originated in Africa and then spread out to populate the globe.

*

Each of these episodes was important in itself, albeit in very different ways, and transformed our understanding of the natural world. But besides the advances in knowledge that at least four of them share and to which we shall return (Lysenko was eventually overthrown in the mid-1960s), they all have in common that they show science to be an untidy, emotional, obsessive, all-too-human activity. Far from being a calm, reflective, solely rational enterprise, carried out by dispassionate scientists only interested in the truth, science is revealed as not so very different from other walks of life. If this seems an unexceptional thing to say now, at the end of the century, that is a measure of how views have changed since these advances were made, in the 1940s and 1950s. In the early 1950s, Claude Lévi-Strauss had expressed the general feeling of the time: ‘Philosophers cannot insulate themselves against science,’ he said. ‘Not only has it enlarged and transformed our vision of life and the universe enormously: it has also revolutionised the rules by which the intellect operates.’90 This mindset was underlined by Karl Popper in The Logic of Scientific Discovery, published in English in 1959, in which he set out his view that the scientist encounters the world – nature – essentially as a stranger, and that what sets the scientific enterprise apart from everything else is that it only entertains knowledge or experience that is capable of falsification. For Popper this is what distinguished science from religion, say, or metaphysics: revelation, or faith, or intuition have no part, at least no central role; rather, knowledge increases incrementally, but that knowledge is never ‘finished’ in the sense that anything is ‘knowable’ as true for all time.91 But Popper, like Lévi-Strauss, focused only on the rationalism of science, the logic by which it attempted – and often managed – to move forward.
The whole penumbra of activities – the context, the rivalry, the ambition and hidden agendas of the participants in these dramas (for dramas they often were) – were left out of the account, as somehow inappropriate and irrelevant, sideshows to the main event. At the time no one thought this odd. Michael Polanyi, as we have seen, had raised doubts back in 1946, but it was left to a historian of science rather than a philosopher to produce the book that changed for all time how science was perceived. This was Thomas Kuhn, whose Structure of Scientific Revolutions appeared in 1962.

Kuhn, a physicist turned historian of science at MIT, was interested in the way major changes in science come about. He was developing his ideas in the 1950s and so did not use the examples just given, but instead looked at much earlier episodes from history, such as the Copernican revolution, the discovery of oxygen, the discovery of X rays, and Einstein’s ideas about relativity. Kuhn’s chief argument was that science consists mainly of relatively stable periods, when nothing much of interest goes on and scientists working within a particular ‘paradigm’ conduct experiments that flesh out this or that aspect of the paradigm. In this mode, scientists are not especially sceptical people – rather, they are in a sort of mental straitjacket as laid down by the paradigm or theory they are following. Amid this set of circumstances, however, Kuhn observed, a number of anomalies will occur. To begin with, there are attempts to incorporate the anomalies into the prevailing paradigm, and these will be more or less successful. Sooner or later, however, the anomalies grow so great that a crisis looms within whatever branch of science it may be – and then one or more scientists will develop a totally new paradigm that better explains the anomalies. A scientific revolution will have taken place.92 Kuhn also noted that science is often a collaborative exercise; in the discovery of oxygen, for example, it is actually very difficult to say precisely whether Joseph Priestley or Antoine-Laurent Lavoisier was primarily responsible: without the work of either, oxygen would not have been understood in exactly the way it was. Kuhn also observed that revolutions in science are often initiated by young people or those on the edge of the discipline, not fully trained – and therefore not fully schooled – in a particular way of thought.
He therefore stressed the sociology and social psychology of science as a factor in both the advancement of knowledge and the reception of new knowledge by other scientists. Echoing an observation of Max Planck, Kuhn found that the bulk of scientists never change their minds – a new theory wins because adherents of the old theory simply die out, and the new theory is favoured by the new generation.93 In fact, Kuhn makes it clear several times that he sees scientific revolutions as a form of evolution, with the better – ‘fitter’ – ideas surviving while the less successful become extinct. The view that science is more ordered than is in fact the case, Kuhn said, is aided by the scientific textbook.94 Other disciplines use textbooks, but it is in science that they are most popular, reflecting the fact that many young scientists get their information predigested (and therefore repackaged), rather than by reading the original literature. So, very often scientists do not – or did not then – learn about discoveries at first hand, as someone interested in literature reads the original books themselves, as well as reading textbooks of literary criticism. (In this, Kuhn was echoing one of F. R. Leavis’s main criticisms of C. P. Snow.)

Much was made of Kuhn’s book, especially by nonscientists and antiscientists, so it is necessary to emphasise that he was not seeking to pull the rug out from under the feet of science. Kuhn always maintained that science produced, as Lévi-Strauss said, a special kind of knowledge, a knowledge that worked in a distinctive way and very well.95 Some of the uses to which his book was put would not have met with his approval. Kuhn’s legacy is a reconceptualisation of science: not so much a culture, as Snow had it, but rather a tradition in which many scientists serve their apprenticeship, a tradition that predetermines the types of question science finds interesting and the way it seeks answers to problems. Thus the scientific tradition is nowhere near as rational as is generally thought. Not all scientists find this view convincing, and obviously there is much scope for disagreement as to what is or is not a paradigm, and what is or is not normal science. But for historians of science, and many in the humanities, Kuhn’s work has been very liberating, allowing scientific knowledge to be regarded as somehow more tentative than before.