MYTH 25
THAT SCIENCE HAS BEEN LARGELY A SOLITARY ENTERPRISE
Kathryn M. Olesko
I know not what I may seem to the world; but to myself, I seem to have been only like a boy, playing on the sea-shore, and diverting myself, in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
—Isaac Newton, as reported by Edmond Turnor (1806)
It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964.
—Peter Higgs, as quoted in The Guardian (2013)
Nearly three hundred years separate Isaac Newton (1643–1727) and Peter Higgs (b. 1929), but they share a belief in the value of solitude for scientific work. The image of the scientist as the solitary genius who works alone for hours on end, emotionally separated from friends and family and oblivious to the humdrum needs of daily life, is deeply embedded in culture. Around 1800, William Blake’s iconic depiction of Newton as the godlike geometer, united with his natural surroundings and deep in contemplation on the shores of the sea, epitomized the solitude of scientific creativity. Popular culture later transformed the image, but the element of solitude remained intact. In contrast to Blake’s divine Newton, Mary Shelley created the lonely and obsessed Victor Frankenstein (1818); H. G. Wells, the sinister and isolated Dr. Moreau (1896); and director Roland Emmerich, the unbalanced and slightly wild Dr. Brackish Okun in the movie Independence Day (1996).
Mid-twentieth-century biographical studies of Isaac Newton provide one of the best-known examples of the myth of the solitary genius. Forced to flee Cambridge, where the plague had spread, and take refuge on his mother’s farm in Lincolnshire, Newton was reputed to have ventured into a garden where a falling apple sparked the idea of universal gravitation in a moment of divine inspiration (see Myth 6). Scholars have since dubbed 1665–1666 Newton’s “miraculous year,” the year in which he conceived universal gravitation, discovered the composition of white light, and invented the calculus. After the Lincolnshire years, Newton’s solitude, self-neglect, detachment from society, and tendency to become lost in thought became legendary. Although educated at Cambridge, he presented himself as someone who had no need for pedagogy, schooling, or the social processes by which knowledge was learned. He became the self-taught genius who had no teacher—the perfect solitary scholar. So ingrained did this image become that it did not matter whether or not Newton uttered the words in the epigraph to this essay (he probably did not). Alexander Pope’s (1688–1744) proposed epitaph for Newton’s grave expressed the spirit of this solitary genius:
Nature and Nature’s Laws lay hid in Night.
God said, Let Newton be! and All was Light.1
It is not at all surprising, then, that Newton’s legendary work habits dictated the title of a biography: Newton was “never at rest.”2
Why has the myth of science as a largely solitary enterprise endured? Like all myths, it is a story that legitimates aspects of the social, cultural, economic, or political order. A transcendent engagement with nature in solitude evokes scenes of religious revelation: St. John in the desert, St. Jerome in his study, Jesus in the garden of Gethsemane. The ideals of liberal individualism and Western rationality are embedded in solitary creativity. The myth presumes that detachment and isolation are necessary preconditions for objectivity, and so for truth. In the Western tradition of liberalism following John Stuart Mill (1806–1873), the voice in the wilderness addresses a reality others cannot see and so must be heeded. In the context of these traditions, the scientist in solitude is a secular saint: ascetic, self-denying, and, above all, self-disciplined. No wonder, then, that exceptional scientists, like saints, have “miraculous years” (Albert Einstein’s was 1905; see Myth 18). Emotionality and emotional attachments—especially to family and loved ones—are unnecessary and potentially dangerous distractions corrupting the heroic search for the secrets of nature. This myth is thus a self-validating ideology. Powerful as these associations are, though, ideology accounts for only part of its persistent appeal.3
Science itself is partly to blame. The reward systems of science celebrate the individual. Scientists are honored in perpetuity by eponymous laws, constants, and entities—such as Newton’s laws, Mendel’s laws, Planck’s constant, or the Higgs boson—which project scientific discoveries as individual achievements. Nobel Prizes in the sciences are awarded to individuals (at most three per prize), never to groups or teams. Scientific textbooks credit individual scientists with discovery and invention. Evidence for the myth is nearly everywhere in scientific culture, and it is difficult to overcome.
The myth persists, though, less because of science than because stories about it have been told in particular ways. The culprit is history: as scientists tell it, as historians write it, and as students understand it. History as rendered in scientific textbooks is mostly about individuals, not scientists working in teams or in communication with one another. So physicist David Park told the story of the wave nature of light through a series of individual discoveries, from Thomas Young (1773–1829) in 1802 to James Clerk Maxwell (1831–1879) in 1861. He left out, interestingly, the most politically contentious contributor, Augustin Fresnel (1788–1827), and his work on diffraction. The omission is telling. Fresnel’s contact with the British scientific community, together with the politics of deciding whether light was composed of waves or particles, especially in France’s Academy of Sciences, played a powerful role in shaping the fate of the wave theory of light.4 Social systems and political intrigue simply did not fit Park’s conception of history.
Textbook entries that seem to confirm science as a solitary enterprise have a ripple effect on history. Consider the history of the element yttrium and its textbook-designated discoverer, the relatively unknown Finnish chemist Johan Gadolin (1760–1852). Gadolin never claimed to have discovered a new earth in 1794—only that he could not identify part of the composition of a black stone from the Ytterby quarry in Sweden. When other chemists scrutinized his results over the following decades, they named the new earth yttria. The act of naming consolidated history: Gadolin became the most important contributor to the discovery of yttria even though other investigators did more to uncover its properties. When chemical textbooks reclassified the earths as elements in the nineteenth century, each element acquired a story of its origin. Complex stories eventually evolved into simpler ones in scientific handbooks, which in the end identified Gadolin as the only discoverer of yttrium. Popular histories of science in the early twentieth century accepted the party line.5
In each retelling of yttrium’s discovery—from textbooks to handbooks to popular expositions—Gadolin’s role expanded while that of the scientific community waned. This literary process exemplifies one way the myth of science as a solitary enterprise could be perpetuated within and beyond the scientific community. But scientists are not the only ones responsible for the myth’s grip on history.
Until well into the twentieth century, historians were also responsible for endorsing it. Conventions of historical writing help to explain why. Educational psychologists argue that precollege students tend to view history in terms of individual agency rather than as the unfolding of larger processes. So the discovery of America in 1492 is about Christopher Columbus, King Ferdinand, and Queen Isabella, rather than about the Kingdom of Castile undergoing social, religious, and economic change, and engaging in commercial competition. This approach is commonly known as “Great Man” history, and many of the early classical narratives in the history of science adopted it. Histories of the Scientific Revolution—the period from Nicolaus Copernicus’s (1473–1543) proposal in 1543 that the sun replace the earth as the center of the universe to shortly after the publication of Isaac Newton’s theory of universal gravitation in 1687—were and still are addicted to narratives guided largely by individuals and their discoveries.6
Concurrent with the social movements of the 1960s and later—civil rights, the women’s movement, and antiwar protests—scholars revolutionized the historical study of the sciences. Although varied, these new approaches shared a common belief in the inherently social nature of scientific practice. Most fall under the rubric of social constructivism. Constructivism emphasizes context, community, controversy, communication, and many other social dimensions of scientific practice, and it blurs the boundary between science and society. Scientific pedagogy, for instance, which used to be viewed as part of institutional history, became a crucial site of generational reproduction and of knowledge formation and transmission. At an earlier time, facts were discovered. From the constructivist’s perspective, facts are not born—they are made. The creation of knowledge, once regarded as an individual pursuit, became a collective enterprise. Scientific discovery, formerly the final stage of knowledge creation, now started as a local phenomenon. Only gradually and through a communal process did the results of discovery become universal. Moreover, the larger contexts of scientific practice—social, political, economic, and cultural—could insert themselves into the production of knowledge at any point. The assumption undergirding constructivism is that science and society are inseparable: to know one, you have to know the other.7
There was no room in constructivism for the myth of science as a solitary enterprise. The establishment of scientific societies in the seventeenth century, so the argument now goes, was meant partly to neutralize any chance that scientific practice would resemble religious practice, in which solitude was condoned. Those who claimed that solitude was a precondition for scientific discovery were merely speaking rhetorically. A leading practitioner of constructivism concluded: “The solitary philosopher is therefore only a man imitating God.”8
From a constructivist perspective, then, a figure such as Newton looks very different. How solitary was he as a scientific practitioner? He certainly never ventured very far from Cambridge, London, and Lincolnshire. And he never visited a beach. Yet despite his relatively stationary existence, he was wired into a vast global network of numerical data on tidal levels, the lengths of pendulums, and the positions of comets, on which he drew to support his theory of universal gravitation empirically. This information could only have reached the shores of England through connections established by trading companies, Jesuit missionaries, astronomers, and the correspondence network of scholars known as the Republic of Letters. Natural philosophers, astronomers, mariners, dockyard workers, and traders shipped their local data to Newton, or Newton sent his emissaries to distant ports and sites to gather the quantitative information he needed. This relay of information was never simply a handoff, because both the data and their producers had to be assessed for reliability—a task Newton mastered by developing the means to reconcile discordant data by taking their averages. Thus, Newton’s natural philosophy would not have been possible without Great Britain’s commercial revolution and the global trade network of which it was a part. Similarly, Charles Darwin’s (1809–1882) evidence of biological evolution came from sources embedded in Britain’s imperial network. Stories of Newton or Darwin as solitary geniuses are cultural constructions that ignore the important role of context in their lives.9
Like all myths, the myth of the solitary scientist distorts the past. It does so not only by sins of commission but also by sins of omission. Its corollary is the myth that science is a (white) masculine pursuit. Both myths render invisible the contributions of women, people of color, and technicians. Female scientists, assistants, calculators, and technicians who for a long time worked alone or nearly alone were overshadowed by their male counterparts. Notwithstanding the controversy over whether James Watson (b. 1928) and Francis Crick (1916–2004) used Rosalind Franklin’s (1920–1958) X-ray diffraction images with her explicit permission in their 1953 discovery of the structure of DNA, her contribution was crucial to the discovery—and yet she received less credit for it than Watson and Crick did. In a similar vein, when Cecilia Payne-Gaposchkin (1900–1979) discovered in 1925 that hydrogen was the most abundant element in the universe, male astronomers remained unconvinced until Henry Norris Russell (1877–1957), director of the Princeton University Observatory, published the result and cited her in his footnotes.10
The newer approaches to the history of science have spurred interest in the roles played by women and other formerly neglected groups in the scientific enterprise. The historian’s broader vision of the demographic base of practicing scientists cannot sustain a conception of the scientific persona as exclusively masculine. Closer consideration of women’s roles illuminates how compatible family life and the pursuit of science could be. Combining the two was sometimes even necessary in sciences such as astronomy, where women aided nocturnal celestial observations during certain eras. The history of women in science has thus produced some of the most important correctives to the myth of science as a solitary enterprise.11
Peter Higgs’s statement in the epigraph was an admission that the myth of science as a solitary enterprise persisted into the twentieth century. But it also acknowledged that in the twenty-first century, science cannot be practiced alone or in quietude. Thoughts produced in solitude do not become part of scientific knowledge unless they are subjected to a tightly controlled peer-review process and then offered to the scientific community for further discussion, debate, and review at meetings and conferences. Collaborative research in the laboratory has given way to international teams of researchers who combine their data in multiauthored publications. The Intergovernmental Panel on Climate Change, for instance, has hundreds of authors and dozens of editors who vote on every decision in its reports.12 The epistemological space of science operates in a democratic fashion. No more solitary than life itself, science is fundamentally and deeply a collaborative and social enterprise.