I’ve had the enriching experience of collaborating on two books with friends—physicist Stephen Hawking and spiritual leader Deepak Chopra. Their worldviews could not have been further apart if they existed in different universes. My vision of life is pretty much the same as Stephen’s—that of the scientist. But it is far different from Deepak’s, which is probably why we chose to title our book War of the Worldviews and not Isn’t It Wonderful How We Agree on Everything?
Deepak is passionate about his beliefs, and in the time we spent traveling together, he was always trying to convert me and questioning my approach to understanding the world. He called it reductionist, because I believe that the mathematical laws of physics can, ultimately, explain everything in nature—including human beings. In particular, as I’ve said, I believe, as do most scientists today, that everything—again including us—is made of atoms and the elementary particles of matter, which interact through the four fundamental forces of nature, and that if one understands how all that works, one can, in principle at least, explain everything that happens in the world. In practice, of course, we have neither enough information about our surroundings nor a powerful enough computer to use our fundamental theories to analyze phenomena like human behavior, so whether Deepak’s mind is ruled by the laws of physics must remain an open question.
I didn’t object in principle to Deepak’s characterizing me as a reductionist, but I would bristle when he said it, because the way he said it made me feel embarrassed and defensive, as if a soulful person couldn’t believe as I did. In fact, at gatherings of Deepak’s supporters, I sometimes felt like an Orthodox rabbi at a convention of pork producers. I was always being asked leading questions like “Do your equations tell you what I am experiencing when I look at a painting by Vermeer or listen to a symphony by Beethoven?” Or: “If my wife’s mind is really both particles and waves, how do you explain her love for me?” I had to admit I couldn’t explain her love for him. But then again, I couldn’t explain any love—using equations. To me, that is beside the point. For, as a tool for understanding the physical world, if not our mental experience (at least not yet), the application of mathematical equations has had unprecedented success.
We may not be able to calculate next week’s weather by tracing the motion of each atom and applying the fundamental principles of atomic and nuclear physics, but we do have a science of meteorology that uses higher-level mathematical models and isn’t too bad at predicting the weather tomorrow. Similarly, we have applied sciences that study the ocean, light and electromagnetism, the properties of materials, diseases, and dozens of other aspects of our everyday world in ways that allow us to put our knowledge to extraordinary practical uses, undreamt of until the past few hundred years. Today, at least among scientists, there is virtually universal agreement about the validity of the mathematical approach to understanding the physical world. Yet it took a very long time for that view to prevail.
Modern science is a metaphysical system built on the idea that nature behaves according to certain regularities, an outlook that originated with the Greeks; but science didn’t achieve the first of its convincing successes at making use of these laws until the seventeenth century. The jump from the ideas of philosophers like Thales, Pythagoras, and Aristotle to those of Galileo and Newton was a great leap. Still, it needn’t have taken two thousand years.
The first great stumbling block in the path to accepting and building on the Greek heritage was the Roman conquest of Greece in 146 B.C. and Mesopotamia in 64 B.C. The rise of Rome was the beginning of centuries of declining interest in philosophy, mathematics, and science—even among the Greek-speaking intellectual elite—because the practical-minded Romans placed little stock in those fields of study. A remark by Cicero expresses nicely the Roman disdain for theoretical pursuits: “The Greeks,” he said, “held the geometer in the highest honor; accordingly, nothing made more brilliant progress among them than mathematics. But we have established as the limit of this art its usefulness in measuring and counting.” And indeed, during the roughly thousand-year duration of the Roman Republic and its successor, the Roman Empire, the Romans undertook vast and impressive engineering projects that were dependent, no doubt, upon much measuring and counting; but, as far as we know, they produced not one Roman mathematician of note. That is an astounding fact, and it stands as evidence of the enormous effect of culture on developments in mathematics and science.
Though Rome did not provide an environment conducive to science, after the dissolution of the Western Roman Empire, in A.D. 476, things got even worse. Cities shrank, the feudal system arose, Christianity dominated Europe, and rural monasteries and later the cathedral schools became the centers of intellectual life, which meant that scholarship was focused on religious issues, and inquiries into nature were considered frivolous or unworthy. Eventually, the intellectual heritage of the Greeks was lost to the Western world.
Fortunately for science, in the Arab world the Muslim ruling class did find value in Greek learning. That’s not to say that they pursued knowledge for its own sake—that stance was no more endorsed by Islamic ideology than by Christianity. But wealthy Arab patrons were willing to fund translations of Greek science into Arabic, in the belief that Greek science was useful. And indeed, there was a period of hundreds of years during which medieval Islamic scientists made great progress in practical optics, astronomy, mathematics, and medicine, overtaking the Europeans, whose own intellectual tradition lay dormant.*
But by the thirteenth and fourteenth centuries, a time when the Europeans were awakening from their long slumber, science in the Islamic world had gone into serious decline. There seem to have been several factors at play. For one, conservative religious forces came to impose an ever narrower understanding of practical utility, which they considered to be the only acceptable justification for scientific investigation. Also, for science to flourish, society must be prosperous and offer possibilities for private or governmental patronage, because most scientists do not have the resources to support their work in an open market. In late medieval times, however, the Arab world came under attack by external forces ranging from Genghis Khan to the Crusaders and was torn asunder by internal factional warfare as well. Resources that might once have been devoted to the arts and sciences were now diverted to warfare—and the struggle for survival.
Another reason that the study of science began to stagnate was that the schools that came to dominate a significant segment of intellectual life in the Arab world didn’t value it. Known as madrassas, these were charitable trusts sustained by religious endowments, whose founders and benefactors were suspicious of the sciences. As a result, all instruction had to center on religion, to the exclusion of philosophy and science. Any teaching of those subjects thus had to occur outside the colleges. With no institution to support them or bring them together, scientists became isolated from one another, which created a huge barrier to specialized scientific training and research.
Scientists cannot exist in a vacuum. Even the greatest profit immensely from their interaction with others in their field. The lack of peer-to-peer contact in the Islamic world prevented the cross-fertilization of ideas necessary for progress. Also, without the salutary benefits of mutual criticism, it became hard to control the proliferation of theories that lacked empirical grounding, and difficult to find a critical mass of support for those scientists and philosophers with views that challenged the conventional wisdom.
Something comparable to this kind of intellectual suffocation occurred in China, another grand civilization that could conceivably have developed modern science before the Europeans. In fact, China had a population of more than 100 million people during the High Middle Ages (1200–1500), roughly double the population of Europe at the time. But the educational system in China proved, like that of the Islamic world, far inferior to the one that was developing in Europe, at least with respect to science. It was rigidly controlled and focused on literary and moral learning, with little attention paid to scientific innovation and creativity. That situation remained virtually unchanged from the early Ming dynasty (around 1368) until the twentieth century. As in the Arab world, only modest advances in science (as opposed to technology) were achieved, and they came despite, and not because of, the educational system. Thinkers who were critical of the intellectual status quo and who attempted to develop and systematize the intellectual tools necessary to push the life of the mind forward were strongly discouraged, as was the use of data as a means of advancing knowledge. In India, too, a Hindu establishment focused on caste structure insisted on stability at the expense of intellectual advance. As a result, though the Arab world, China, and India did produce great thinkers in other realms, they produced no scientists equivalent to those who, in the West, would create modern science.
The revival of science in Europe began toward the end of the eleventh century, when the Benedictine monk Constantinus Africanus began to translate ancient Greek medical treatises from Arabic to Latin. As had been the case in the Arab world, the motivation to study Greek wisdom lay in its utility, and these early translations whetted the appetite for the translation of other practical works in medicine and astronomy. Then, in 1085, during the Christian reconquest of Spain, whole libraries of Arabic books fell into Christian hands, and over the next decades, large numbers were translated, thanks in part to generous funding from interested local bishops.
It is hard to imagine the impact of the newly available works. It was as if a contemporary archaeologist had stumbled on and translated tablets of ancient Babylonian text and found that they presented advanced scientific theories far more sophisticated than our own. Over the next few centuries, sponsorship of the translations became a status symbol among the social and commercial elite of the Renaissance. As a result, the recovered knowledge spread beyond the Church and became a kind of currency, collected as the wealthy today might collect art—and indeed, the wealthy would display their books and maps as one might today display a sculpture or painting. Eventually, the new value placed on knowledge independent of its utilitarian value led to an appreciation of scientific inquiry. In time, this undermined the Church’s “ownership” of truth. In competition with truth as revealed in the Scriptures and in Church tradition, there was now a rival truth: the truth as revealed by nature.
But merely translating and reading ancient Greek works does not a “scientific revolution” make. It was the development of a new institution—the university—that would really transform Europe. It would become the driver of the development of science as we know it today, and it would keep Europe in the forefront of science for many centuries, producing the greatest strides in science the world had ever known.
The revolution in education was fueled by growing affluence and a multitude of career opportunities for the well educated. Cities like Bologna, Paris, Padua, and Oxford acquired reputations as centers of learning, and both students and teachers gravitated to them in large numbers. Teachers would set up shop, either independently or under the auspices of an existing school. Eventually, they organized into voluntary associations modeled after the trade guilds. But though the associations called themselves “universities,” these were at first simply alliances, owning no real estate and having no fixed location. Universities in the sense that we know them came decades later—Bologna in 1088, Paris around 1200, Padua around 1222, and Oxford by 1250. There, natural science, not religion, would become the focus, and scholars would come together to interact and stimulate one another.
That’s not to say that the university in medieval Europe was the Garden of Eden. As late as 1495, for example, German authorities saw the need for a statute explicitly forbidding anyone associated with the university from drenching freshmen with urine, a statute that no longer exists but that I still require my own students to adhere to. Professors, for their part, often had no dedicated classroom and were thus forced to lecture in rooming houses, churches, even brothels. Worse, professors were commonly paid directly by the students, who could also hire and fire them. At the University of Bologna, there was another bizarre twist on what is the norm today: students fined their professors for unexcused absence or tardiness, or for not answering difficult questions. And if a lecture was not interesting or proceeded too slowly or too quickly, they would jeer and become rowdy. Aggressive tendencies got so out of hand in Leipzig that the university had to pass a rule against throwing stones at professors.
Despite these practical hardships, the European universities were great enablers of scientific progress, in part because of the way they brought people together to share and debate ideas. Scientists can withstand distractions like jeering students and perhaps even occasional flying urine, but to go without endless academic seminars—that’s just unthinkable. Today, most scientific advances stem from university research, as they must, because that is where the lion’s share of funding for basic research goes. But just as important, historically, has been the role of the university as a gathering place of minds.
The scientific revolution that would distance us from Aristotelianism, transform our views of nature and even society, and lay the groundwork for who we are today is often said to have begun with Copernicus’s heliocentric theory and to have culminated in Newton’s physics. But that picture is oversimplified—though I use the term “scientific revolution” as a convenient shorthand, the scientists involved had a wide variety of goals and beliefs, rather than being a united bunch deliberately striving together to create a new system of thought. Even more important, the changes the “scientific revolution” refers to were actually gradual: the great scholars of 1550 to 1700 who built the grand cathedral of knowledge whose pinnacle was Newton did not emerge from nowhere. It was the medieval thinkers at the early universities of Europe who did the backbreaking work of digging the foundation for them.
The greatest of that work was accomplished by a group of mathematicians at Merton College, Oxford, between 1325 and 1359. Most people know, at least vaguely, that the Greeks invented the idea of science and that it was in the time of Galileo that modern science came into being. Medieval science, though, gets little respect. That’s a shame, because medieval scholars made surprising progress, despite living in an age in which people routinely judged the truth of statements not according to empirical evidence but by how well they fit into their preexisting system of religion-based beliefs—a culture that is inimical to science as we know it today.
Philosopher John Searle wrote about an incident that illustrates the fundamentally different terms in which we and medieval thinkers see the world. He told of a Gothic church in Venice called the Madonna dell’Orto (Madonna of the Orchard). The original plan was to call it the church of San Cristoforo, but while it was being built a statue of the Madonna mysteriously turned up in an adjoining orchard. The name change came because the statue was assumed to have fallen from heaven, an event that was considered a miracle. There was no more doubt, then, about that supernatural explanation than there would be, now, about the earthly interpretation that we would assign such an incident. “Even if the statue were found in the gardens of the Vatican,” Searle wrote, “the church authorities would not claim it had fallen out of heaven.”
I brought up the accomplishments of medieval scientists at a party once. I said I was quite impressed by their work, given their culture and the hardships they faced. We scientists today complain about the time “wasted” applying for grants, but at least we have heated offices and don’t have to hunt cats for dinner when our town’s agricultural production takes a dip. Not to mention having to escape the Black Death, which came in 1347 and killed half the population.
The library at Merton College, Oxford
The party I was at was heavy on academics, so the person I was talking to didn’t react to my musings the way most people would have—by suddenly realizing she needed to go refill her chardonnay. Instead she said, incredulously, “Medieval scientists? Come on. They operated on patients without anesthetic. They made healing potions out of lettuce juice, hemlock, and gall from a castrated boar. And didn’t even the great Thomas Aquinas himself believe in witches?” She had me there. I had no idea. But I looked it all up later, and she was right. Yet despite her apparently encyclopedic knowledge of certain aspects of medieval medical scholarship, she hadn’t heard of their more lasting ideas in the realm of physical science, which seemed to me all the more miraculous, given the state of medieval knowledge in other fields. And so, though I had to concede that one would not want to be treated by a medieval doctor who traveled to the present-day world in a time machine, I stood my ground with regard to the progress those medieval scholars had made in the physical sciences.
What did they do, these forgotten heroes of physics? To start with, among all the types of change considered by Aristotle, they singled out change of position—that is, motion—as being most fundamental. That was a deep and prescient observation, because most types of change we observe are specific to the substances involved—the rotting of meat, the evaporation of water, the falling of leaves from a tree. As a result, they wouldn’t yield much to a scientist looking for the universal. The laws of motion, on the other hand, are fundamental laws that apply to all matter. But there’s another reason the laws of motion are special: on the submicroscopic level, they are the cause of all the macroscopic changes we experience in our lives. That’s because, as we now know—and as some of the ancient Greek atomists had speculated—the many kinds of change we experience in the everyday world can ultimately be understood by analyzing the laws of motion that operate on the fundamental building blocks of materials: atoms and molecules.
Though the Merton scholars did not discover those comprehensive laws of motion, they did intuit that such laws existed, and they set the stage for others to discover them centuries later. In particular, they created a rudimentary theory of motion that had nothing to do with the science of other types of change—and nothing to do with the idea of purpose.
The task the Merton scholars assumed wasn’t an easy one, given that the mathematics required for even the simplest analysis of motion was still primitive at best. But there was another serious handicap, and to overcome it was an even greater triumph than to succeed with the limited mathematics of the era, for it was not a technical barrier but a limitation imposed by the way people thought about the world: the Merton scholars were, like Aristotle, hampered by a worldview in which time played a mostly qualitative and subjective role.
We who are steeped in the culture of the developed world experience the passage of time in a way that people of earlier eras would not recognize. For most of humankind’s existence, time was a highly elastic framework that stretched and contracted in a completely private manner. Learning to think of time as anything but inherently subjective was a difficult and far-reaching step forward, as great an advance for science as the development of language or the realization that the world could be understood through reason.
For example, to search for regularities in the timing of events—to imagine that for a rock to fall sixteen feet will always require one second—would have been a revolutionary concept in the era of the Merton scholars. For one, no one had any idea how to measure time with any precision, and the concept of minutes and seconds was virtually unheard of. In fact, the first clock to record hours of equal length wasn’t invented until the 1330s. Before that, daylight, however long, had been divided into twelve equal intervals, which meant that an “hour” might be more than twice as long in June as in December (in London, for example, it varied from 38 to 82 of today’s minutes). That this didn’t bother anyone reflects the fact that people had very little use for anything but a vague and qualitative notion of time’s passage. In light of that, the idea of speed—distance traveled per unit of time—must have seemed an oddball one indeed.
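To make the variation concrete, here is a minimal Python sketch; the daylight figures are rough modern approximations for London’s solstices, chosen only for illustration, not numbers from any medieval source.

```python
# Before uniform hours, daylight, however long, was divided into twelve equal
# "hours," so an hour's length tracked the seasons. The daylight figures below
# are rough modern approximations for London's solstices, used only to illustrate.
daylight_hours = {"June solstice": 16.4, "December solstice": 7.6}

for season, hours in daylight_hours.items():
    hour_length = hours * 60 / 12  # one medieval daytime "hour," in today's minutes
    print(f"{season}: one 'hour' lasted about {hour_length:.0f} of today's minutes")

# June solstice: one 'hour' lasted about 82 of today's minutes
# December solstice: one 'hour' lasted about 38 of today's minutes
```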
Given all the obstacles, it seems miraculous that the Merton scholars managed to create a conceptual foundation for the study of motion. And yet they went so far as to state the first-ever quantitative rule of motion, the “Merton rule”: The distance traversed by an object that accelerates from rest at a constant rate is equal to the distance traversed by an object that moves for the same amount of time at half the accelerating object’s top speed.
Admittedly, that’s a mouthful. Although I’ve long been familiar with it, looking at it now, I had to read it twice just to follow what it was saying. But the opacity of the rule’s phrasing serves a purpose, for it illustrates how much easier science would become once scientists learned to use—and invent, if necessary—the appropriate mathematics.
In today’s mathematical language, the distance traversed by an object that accelerates from rest at a constant rate can be written as ½ a × t². The second quantity, the distance traversed by an object that moves for the same amount of time at half the accelerating object’s top speed, is simply ½ (a × t) × t. Thus, the above statement of the Merton rule, translated mathematically, becomes: ½ a × t² = ½ (a × t) × t. That’s not just more compact, it also makes the truth of the statement instantly obvious, at least to anyone who has had pre-algebra.
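For anyone who would rather see the identity checked mechanically than by hand, here is a minimal sketch using Python’s sympy library; the symbol names are mine, chosen only for readability.

```python
import sympy as sp

a, t = sp.symbols("a t", positive=True)  # constant acceleration, elapsed time

# Distance covered by an object accelerating from rest at a constant rate:
dist_accelerating = sp.Rational(1, 2) * a * t**2

# Distance covered in the same time at half the accelerating object's
# top speed (that top speed being a * t):
dist_half_speed = sp.Rational(1, 2) * (a * t) * t

# The Merton rule says the two are equal; the difference simplifies to zero.
print(sp.simplify(dist_accelerating - dist_half_speed))  # prints 0
```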
If those days are far behind you, just ask a sixth grader—he or she will understand it. In fact, the average sixth grader today knows far more mathematics than even the most advanced fourteenth-century scientist. Whether an analogous statement will one day be made about twenty-eighth-century children and twenty-first-century scientists is an interesting question. Certainly human mathematical prowess has been steadily advancing for centuries.
An everyday example of what the Merton rule is saying is this: If you accelerate your car steadily from zero to one hundred miles per hour, you’ll go the same distance as if you had driven at fifty the entire time. It sounds like my mother nagging me about driving too fast, but though the Merton rule is today common sense, the Merton scholars were unable to prove it. Still, the rule made quite a splash in the intellectual world, and it quickly diffused to France, Italy, and other parts of Europe. The proof came soon after, from across the Channel, where the French counterparts of the Merton scholars worked at the University of Paris. Its author was Nicole Oresme (1320–1382), a philosopher and theologian who would eventually rise to the post of bishop of Lisieux. To achieve his proof, Oresme had to do what physicists throughout history have done repeatedly: invent new mathematics.
If mathematics is the language of physics, a lack of appropriate mathematics makes a physicist unable to speak or even reason about a topic. The complex and unfamiliar math Einstein needed to use to formulate general relativity may be why he once advised a young schoolgirl, “Do not worry about your difficulties in mathematics: I can assure you that mine are still greater.” Or, as Galileo put it, “the book [of nature] cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.”
To shine a light on that dark labyrinth, Oresme invented a type of diagram intended to represent the physics of the Merton rule. Though he didn’t understand his diagrams in the same way we do today, one might consider this to be the first geometric representation of the physics of motion—and, thus, the first graph.
I’ve always found it strange that many people know who invented calculus, though few people ever use it, while few people know who invented the graph, though everyone uses them. I suppose that’s because today the idea of graphs seems obvious. But in medieval times, the idea of representing quantities with lines and forms in space was strikingly original and revolutionary, maybe even a little nutty.
To get an idea of the difficulty in achieving even a simple change in the way people think, I like to remember the story of another nutty invention, a decidedly nonmathematical one: Post-it notes, those small pieces of paper with a strip of reusable adhesive on one side that allows them to be easily attached to things. The Post-it note was invented in 1974 by Art Fry, a chemical engineer at the 3M Company. Suppose, though, that they had not been invented back then, and that today I came to you, an investor, with the idea and a prototype pad of notes. Surely you’d recognize the invention as a gold mine and jump at the opportunity to invest, right?
Outlandish as it may seem, most people probably wouldn’t, as evidenced by the fact that when Fry presented his idea to the marketing people at 3M—a company known for both its adhesives and its innovations—they were unenthusiastic and believed they’d have a hard time selling something that would have to command a premium price compared with the scratch paper it was intended to replace. Why didn’t they rush to embrace the treasure Fry was offering them? Because in the pre-Post-it era, the notion that you might want to stick scraps of paper striped with weak adhesive on things was beyond people’s imagination. And so Art Fry’s challenge was not just to invent the product but to change the way people thought. If that was an uphill battle with regard to the Post-it note, one can only imagine the degree of difficulty one faces when trying to do the same in a context that really matters.
Luckily, Oresme didn’t need Post-its for his proof. Here is how we would interpret his arguments. To begin, place time along the horizontal axis and velocity along the vertical axis. Now suppose the object you are considering starts at time zero and moves for a while at constant speed. That motion is represented by a horizontal line. If you shade in the area below that line, you get a rectangle. Constant acceleration, on the other hand, is represented by a line that rises at some angle because as time increases, so does velocity. If you shade in the region below that line, you have a triangle.
The area under these curves—the shaded areas—represents speed multiplied by time, which is the distance the object has traveled. Employing this analysis, and knowing how to calculate the areas of rectangles and triangles, it is then easy to demonstrate the validity of the Merton rule.
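Here is a minimal numeric sketch of that argument in Python, with illustrative values of my own choosing: it adds up thin vertical slices to approximate each shaded area, and the two distances come out equal, just as Oresme’s geometry predicts.

```python
# Approximate the shaded areas under a velocity-time graph as sums of thin
# slices. Acceleration and duration are arbitrary illustrative values.
a = 2.0          # constant acceleration
t_total = 10.0   # total elapsed time
n = 100_000      # number of slices
dt = t_total / n

# Triangle: under constant acceleration, velocity at time t is a * t.
dist_accelerating = sum(a * (i * dt) * dt for i in range(n))

# Rectangle: constant speed equal to half the top speed, a * t_total.
dist_constant = 0.5 * (a * t_total) * t_total

print(dist_accelerating, dist_constant)  # both come out at (essentially) 100.0
```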
One reason Oresme doesn’t get the credit he deserves is that he did not publish many of his works. In addition, though I’ve explained how we would interpret his work today, the conceptual framework he actually used was not nearly as detailed and quantitative as I described, and was utterly different from our modern understanding of the relation between mathematics and physical quantities. That new understanding would spring from a series of innovations regarding the concepts of space, time, velocity, and acceleration that would be one of the most significant contributions of the great Galileo Galilei (1564–1642).
Though medieval scholars working in the universities of the thirteenth and fourteenth centuries had made real progress in furthering the tradition of a rational and empirical scientific method, the grand explosion of European science did not immediately follow. Instead it was the inventors and engineers who transformed European society and culture in late medieval Europe, a period concurrent with the first stirrings of the Renaissance, which spanned roughly from the fourteenth to the seventeenth centuries.
These early Renaissance innovators created the first great civilization not powered primarily by the exertion of human muscle. Waterwheels, windmills, new kinds of mechanical linkages, and other devices were developed or improved and incorporated into village life. They powered sawmills, flour mills, and a variety of clever machines. Their technological innovations had little to do with theoretical science, but they did lay the groundwork for later advances by generating new wealth that helped foster a rise in learning and literacy, and by promoting the realization that an understanding of nature can aid the human condition.
The entrepreneurial spirit of the early Renaissance also saw the invention of one technology that had a direct and major impact on later science, as well as on society in general: the printing press. Though the Chinese had developed movable type centuries earlier—around 1040—it was relatively impractical, due to the use of pictograms in Chinese writing, which created the need for thousands of different characters. In Europe, however, the appearance of mechanical movable-type printing around 1450 changed everything. In 1483, for example, the Ripoli Press charged three times as much to set up a book for printing as a scribe would charge to copy it. But with that setup, Ripoli could produce a thousand or more copies, while a scribe would produce only one. As a result, within just a few decades, more books had been printed than the scribes of Europe had produced in all the preceding centuries combined.
The printing press strengthened the emerging middle class and revolutionized the circulation of thoughts and information throughout Europe. Knowledge and information suddenly became available to a far wider group of citizens. Within a few years the first math texts were printed, and by 1600 almost a thousand had been published. In addition, there came a new wave in the recovery of ancient texts. Just as important, those with new ideas suddenly had a far larger forum for their views, and those who, like scientists, thrived on scrutinizing and furthering the ideas of others soon had far greater access to the work of their cohorts.
As a result of these changes in European society, its establishment was less fixed and uniform than that of the Islamic world, China, or India. Those societies had become rigid and focused on a narrow orthodoxy. The European elite, meanwhile, found themselves tugged and bent by the competing interests of town and country, church and state, pope and emperors, as well as the demands of a new lay intelligentsia and growing consumerism. And so, as European society evolved, its arts and its science had more freedom to change, and they did, resulting in a new and more practical interest in nature.
In both the arts and science, that new emphasis on natural reality became the soul of the Renaissance. The term itself is French for “rebirth,” and indeed the Renaissance represented a new beginning in both physical existence and culture: it began in Italy just after the Black Death killed between a third and half of Europe’s population, then spread slowly, not reaching northern Europe until the sixteenth century.
In art, Renaissance sculptors learned anatomy, and painters learned geometry, both focusing on creating more faithful representations of reality based on keen observation. Human figures were now rendered in natural surroundings, and with anatomical accuracy, and three-dimensionality was suggested through the use of light and shadow and linear perspective. Painters’ subjects also now showed realistic emotion, their faces no longer displaying the flat, otherworldly quality of earlier medieval art. Renaissance musicians, meanwhile, studied acoustics, while architects scrutinized the harmonic proportions of buildings. And scholars interested in natural philosophy—what we today call science—placed a new emphasis on gathering data and drawing conclusions from it, rather than employing pure logical analysis biased by a desire to confirm their religious worldview.
Leonardo da Vinci (1452–1519) perhaps best epitomizes the scientific and humanist ideal of that period, which did not recognize a stark separation between science and the arts. A scientist, engineer, and inventor, he was also a painter, sculptor, architect, and musician. In all these pursuits, Leonardo tried to understand the human and natural worlds through detailed observation. His notes and studies in science and engineering consume more than ten thousand pages, while as a painter he wasn’t content to merely observe posing subjects but also studied anatomy and dissected human corpses. Where prior scholars had viewed nature in terms of general qualitative features, Leonardo and his contemporaries invested enormous effort in perceiving the fine points of nature’s design—and placed less emphasis on the authority of both Aristotle and the Church.
It was into this intellectual climate, toward the end of the Renaissance, that Galileo was born, in Pisa in 1564, just two months before the birth of another titan, William Shakespeare. Galileo was the first of seven children of Vincenzo Galilei, a well-known lute player and music theorist.
Vincenzo came from a noble family—not the kind of noble family that we think of today, people who go on fox hunts and sip tea every afternoon, but the kind that has to use its name to get a job. Vincenzo probably wished he were a nobleman of the first kind, as he loved the lute and played it whenever possible—walking in town, riding a horse, standing at the window, lying in bed—a practice that brought in little in the way of cold cash.
Hoping to guide his son toward a lucrative way of life, Vincenzo sent young Galileo to study medicine at the University of Pisa. But Galileo was more interested in math than in medicine, and he began to take private lessons in the works of Euclid and Archimedes, and even Aristotle. He told friends many years later that he would have preferred to forgo university training and instead take up drawing and painting. Vincenzo, however, had pushed him toward a more practical pursuit on the age-old fatherly theory that it is worth certain compromises to avoid a life where dinner means hemp-seed soup and beef entrails.
When Vincenzo heard that Galileo had turned toward mathematics rather than medicine, it must have seemed as if he’d chosen to major in living off his inheritance, as inadequate as it would be. But it hardly mattered. In the end Galileo didn’t complete a degree in medicine, math, or anything else. He dropped out and began a life journey that would indeed find him chronically short of money and often in debt.
After quitting school, Galileo at first supported himself by giving private mathematics lessons. He eventually got wind of an opening for a junior position at the University of Bologna. Though he was twenty-three, he applied, and in a novel twist on rounding he reported his age as “around twenty-six.” The university apparently wanted someone “around” a little older and hired a thirty-two-year-old who had also actually finished his degree. Still, even centuries later, it has to be comforting to anyone who has ever been turned down for an academic job that it’s an experience you share with the great Galileo.
Two years later, Galileo did become a professor, in Pisa. There, he taught his beloved Euclid and also a course on astrology aimed at helping medical students determine when to bleed patients. Yes, the man who did so much to further the scientific revolution also advised aspiring doctors on what the position of Aquarius means for the placement of leeches. Today astrology has been discredited, but in the age before we knew very much about the laws of nature, the idea that heavenly bodies affect our lives here on earth seemed to be a reasonable one. After all, it is true of the sun and also the moon, which had long been known to have a mysterious correlation with the tides.
Galileo Galilei, as painted by Flemish artist Justus Sustermans in 1636
Galileo made astrological forecasts both out of personal interest and for profit, charging his students twelve scudi for a reading. At five per year, he could double his sixty-scudi teaching salary, a sum he could get by on, but just barely. He also liked to gamble, and in an era before anyone knew much about the mathematics of probability, Galileo was not only a pioneer in calculating odds but also a good bluffer.
In his late twenties, tall and stocky, with a fair complexion and reddish hair, Galileo was generally well liked. But his tenure at Pisa didn’t last long. Though generally respectful of authority, he was prone to sarcasm and could be scathing to both his intellectual adversaries and administrators who rubbed him the wrong way. The rubbing that inflamed him at Pisa came when the university stubbornly insisted that professors don their academic gowns around town as well as when they were teaching.
Galileo, who liked to write poetry, retaliated by writing a poem for the university authorities. Its subject was clothing—Galileo came out against it. It is, he argued, a source of deceit. For example, without clothes, his verses proclaimed, a bride could look at her prospective mate and “See if he is too small, or has French diseases [and] Thus informed, take or leave him as she pleases.” Not a poem likely to endear you to Parisians. It didn’t go over well in Pisa, either, and young Galileo was again on the job market.
As it turned out, that was all for the good. Galileo promptly received an appointment near Venice, at the university in Padua, starting at 180 scudi per year, triple his former salary—and he would later describe his sojourn there as the best eighteen years of his life.
By the time Galileo got to Padua, he was disenchanted with Aristotelian physics. For Aristotle, science consisted of observation and theorizing. For Galileo, that was missing a crucial step, the use of experiments, and in Galileo’s hands, experimental physics advanced as much as its theoretical side. Scholars had been performing experiments for centuries, but they were generally done to illustrate ideas that they already accepted. Today, on the other hand, scientists perform experiments to rigorously test ideas. Galileo’s experiments fell somewhere in between. They were explorations—more than illustrations, but not quite rigorous tests.
Two aspects of Galileo’s approach to experiments are especially important. First, when he got a result that surprised him, he didn’t reject it—he questioned his own thinking. Second, his experiments were quantitative, a rather revolutionary idea at the time.
Galileo’s experiments were much like those you might see performed in a high school science class today, though of course his lab differed from what you’d find in a high school in that it lacked electricity, gas, water, and fancy equipment—and by “fancy equipment” I mean, for example, a clock. As a result, Galileo had to be a sixteenth-century MacGyver, making complex devices from the Renaissance equivalent of duct tape and a toilet bowl plunger. For instance, to create a stopwatch, Galileo poked a small hole in the bottom of a large bucket. When he needed to time an event, he would fill the vessel with water, collect what leaked out, and weigh it—the weight of the water was proportional to the duration of the event.
Galileo employed this “water clock” to attack the controversial issue of free fall—the process by which an object falls to the earth. To Aristotle, free fall was a type of natural motion governed by certain rules of thumb, such as: “If half the weight moves the distance in a given time, its double [the whole weight] will take half the time.” In other words, objects fall at a constant speed, which is proportional to their weight.
If you think about it, that’s common sense: a rock falls faster than a leaf. And so, given the lack of measuring or recording instruments and the little that was known about the concept of acceleration, Aristotle’s description of free fall must have seemed reasonable. But if you think again, it also violates common sense. As Jesuit astronomer Giovanni Riccioli would point out, even the mythological eagle that killed Aeschylus by dropping a turtle on his head knew instinctively that an object dropped on your head will do more damage if dropped from higher up—and that implies that objects speed up as they fall. As a result of such considerations, there was a long tradition of back-and-forth about this issue, with various scholars over the centuries having expressed skepticism about Aristotle’s theory.
Galileo was familiar with the criticisms and wanted to do his own investigation into the matter. However, he knew his water clock was not precise enough to allow him to experiment with falling objects, so he had to find a process that was slower yet demonstrated the same physical principles. He settled upon measuring the time it took for highly polished bronze balls to roll down smooth planes that were inclined at various angles.
Studying free fall by taking measurements of balls rolling down ramps is something like buying an outfit according to how it looks on the Internet—there’s always the chance that the clothes will look different on the actual you than on the gorgeous model. Despite the dangers, such reasoning lies at the core of the way modern physicists think. The art of designing a good experiment lies largely in knowing which aspects of a problem are important to preserve and which you can safely ignore—and in how to interpret your experimental results.
In the case of free fall, Galileo’s genius was to design the rolling-ball experiment with two criteria in mind. First, he had to slow things down enough so that he could measure them; equally important, he sought to minimize the effects of air resistance and friction. Although friction and air resistance are part of our everyday experience, he felt that they obscure the simplicity of the fundamental laws that govern nature. Rocks might fall faster than feathers in the real world, but the underlying laws, Galileo suspected, dictated that in a vacuum they would fall at the same rate. We must “cut loose from these difficulties,” he wrote, “and having discovered and demonstrated the theorems, in the case of no resistance … use them and apply them [to the real world] … with such limitations as experience will teach.”
For small tilts, the balls in Galileo’s experiment rolled rather slowly, and the data was relatively easy to measure. He noted that with these small angles, the distance covered by the ball was always proportional to the square of the time interval. One can show mathematically that this means the ball had gained speed at a constant rate—that is, the ball was undergoing constant acceleration. Moreover, Galileo noted that the ball’s rate of fall did not depend on how heavy it was.
What was striking was that this remained true even as the plane was tilted at ever steeper angles; no matter what angle the tilt, the distance the ball covered was independent of the ball’s weight and proportional to the square of the time it took to roll. But if that is true for a tilt of forty, fifty, sixty, even seventy or eighty degrees, why not ninety? And so now comes Galileo’s very modern-sounding reasoning: he said that his observations of the ball rolling down the plane must also hold true for free fall, which one can consider as equivalent to the “limiting case” in which the plane is tilted at ninety degrees. In other words, he hypothesized that if he tilted the plane all the way—so that it was vertical and the ball was actually falling rather than rolling—it would still gain speed at a constant rate, which would mean that the law he observed for inclined planes also holds for free fall.
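A minimal sketch of that reasoning in Python, under the simplifying assumption of a frictionless plane, where the acceleration along the incline is g·sin(θ); a rolling ball’s acceleration is reduced by a constant factor, which leaves the proportionality to the square of the time untouched. The angles and times are mine, not Galileo’s.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def distance_along_incline(angle_deg, t):
    """Distance covered from rest in time t on a frictionless inclined plane."""
    a = g * math.sin(math.radians(angle_deg))  # acceleration along the plane
    return 0.5 * a * t**2

# At every tilt, doubling the time quadruples the distance (distance ~ t^2),
# and at ninety degrees the same formula describes free fall.
for angle in (40, 50, 60, 70, 80, 90):
    ratio = distance_along_incline(angle, 2.0) / distance_along_incline(angle, 1.0)
    print(f"{angle:2d} degrees: d(2s)/d(1s) = {ratio:.1f}")  # always 4.0
```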
In this way, Galileo replaced Aristotle’s law of free fall with his own. Aristotle had said that things fall at a speed that is proportional to their weight, but Galileo, postulating an idealized world in which the fundamental laws reveal themselves, came to a different conclusion: in the absence of the resistance provided by a medium such as air, all objects fall with the same constant acceleration.
If Galileo had a taste for mathematics, he also had a penchant for abstraction. It was so well developed that he sometimes liked to watch scenes play out entirely in his imagination. Nonscientists call these fantasies; scientists call them thought experiments, at least when they pertain to physics. The good thing about imagining experiments conducted purely in your mind is that you avoid the pesky issue of setting up apparatus that actually works but are nevertheless able to examine the logical consequences of certain ideas. And so, in addition to sinking Aristotle’s theory about free fall through his practical experiments with inclined planes, Galileo also employed thought experiments to join the debate regarding one of the other central criticisms of Aristotle’s physics, the criticism that concerned the motion of projectiles.
What is it that, after the initial force applied when a projectile is fired, continues to propel it forward? Aristotle had guessed it might be particles of air that rush in behind the projectile and continually push it, but even he was skeptical of that explanation, as we’ve seen.
Galileo attacked the issue by imagining a ship at sea, with men playing catch in a cabin, butterflies fluttering about, fish swimming in a bowl at rest upon a table, and water dripping from a bottle. He “noted” that these all proceed in exactly the same manner when the ship is in steady motion as when the ship is at rest. He concluded that because everything on a ship moves along with the ship, the ship’s motion must be “impressed” upon the objects, so that once the ship is moving, its motion becomes a sort of baseline for everything on it. Couldn’t, in the same way, the motion of a projectile be impressed upon the projectile? Could that be what keeps the cannonball going?
Galileo’s ruminations led him to his most profound conclusion, another radical break with Aristotelian physics. Denying Aristotle’s assertion that projectiles require a reason for their motion—a force—Galileo proclaimed that all objects that are in uniform motion tend to maintain that motion, just as objects at rest tend to stay at rest.
By “uniform motion” Galileo meant motion in a straight line and at a constant speed. The state of “rest,” then, is simply an example of uniform motion in which the velocity happens to be zero. Galileo’s observation came to be called the law of inertia. Newton later adapted it to become his first law of motion, and a few pages after stating the law he added that it was Galileo who discovered it—a rare instance of Newton’s giving credit to someone else.
The law of inertia explains the problem of the projectile that had plagued the Aristotelians. According to Galileo, once fired, a projectile will remain in motion unless some force acts to stop it. Like Galileo’s law of free fall, this law is a profound break from Aristotle: Galileo was asserting that a projectile does not require the continual application of force to keep it in motion; in Aristotle’s physics, continued motion in the absence of a force, or “cause,” was inconceivable.
On the basis of what I had told him about Galileo, my father, who liked to compare any important person you were talking about to some figure from Jewish history, called Galileo the Moses of science. He said it because Galileo led science out of the Aristotelian desert and toward a promised land. The comparison is especially apt because, like Moses, Galileo did not make it to the promised land himself: he never went so far as to identify gravity as a force or to decipher its mathematical form—that would have to wait for Newton—and he still clung to some of Aristotle’s beliefs. For example, Galileo believed in a kind of “natural motion” that isn’t uniform and yet needn’t be caused by a force: motion in circles around the center of the earth. Galileo apparently believed that it is that type of natural motion that allows objects to keep up with the earth as it rotates.
These last vestiges of Aristotle’s system would have to be abandoned before a true science of motion could arise. For reasons such as this, one historian described Galileo’s concept of nature as “an impossible amalgam of incompatible elements, born of the mutually contradictory world views between which he was poised.”
Galileo’s contributions to physics were truly revolutionary. What he is most famous for today, however, is his conflict with the Catholic Church, based on his assertion that, contrary to the view of Aristotle (and Ptolemy), the earth is not the center of the universe but just an ordinary planet, which, like the others, orbits the sun. The idea of a heliocentric universe had existed as far back as Aristarchus, in the third century B.C., but the modern version can be credited to Copernicus (1473–1543).
Copernicus was an ambivalent revolutionary whose goal was not to challenge the metaphysics of his day, but simply to fix ancient Greek astronomy: what bothered him was that in order to make the earth-centered model work, one had to introduce a great many complicated ad hoc geometric constructions. His model, on the other hand, was simpler and more refined, even artful. In the spirit of the Renaissance, he appreciated not just its scientific relevance but its aesthetic form. “I think it easier to believe this,” he wrote, “than to confuse the issue by assuming a vast number of Spheres, which those who keep Earth at the center must do.”
Copernicus first wrote about his model privately in 1514 and then spent decades making astronomical observations that supported it. But like Darwin, centuries later, he circulated his ideas discreetly, only among his most trusted friends, for fear of being scorned by the populace and the Church. And yet if Copernicus felt the danger, he also knew that with the proper politicking, the Church’s reaction could be tempered, and when Copernicus finally did publish, he dedicated the book to the pope, with a long explanation of why his ideas were not heresy.
In the end, the point was a moot one, for Copernicus didn’t publish his book until 1543, and by then he lay stricken on his deathbed—some say he didn’t even see a final printed version of his book until the very day he died. Ironically, even after his book was published, it had little immediate impact until later scientists like Galileo adopted it and began to spread the word.
Though Galileo didn’t invent the idea that the earth is not the center of the universe, he contributed something just as important—he used a telescope (which he improvised, based on a much more rudimentary version that had been invented not long before) to find startling and convincing evidence for that view.
It all started by accident. In 1597, Galileo was writing and lecturing in Padua about the Ptolemaic system, giving little indication that he had any doubts about its validity.* Meanwhile, at around the same time, an incident occurred in Holland that reminds us of the importance in science of being in the right place (Europe) at the right time (in this case, just decades after Copernicus). The incident, which would eventually cause Galileo to change his thinking, took place when two children who were playing in the shop of an obscure spectacle maker named Hans Lippershey put two lenses together and looked through them at a distant weathervane atop the town’s church. It was magnified. According to what Galileo later wrote about this event, Lippershey looked through the lenses, “one convex and the other concave … and noted the unexpected result; and thus [invented] the instrument.” He had created a spyglass.
We tend to think of the development of science as a series of discoveries, each leading to the next through the efforts of some solitary intellectual giant with a clear and extraordinary vision. But the vision of the great discoverers of intellectual history is more often muddled than clear, and their accomplishments more indebted to their friends and colleagues—and luck—than the legends show and than the discoverers themselves often wish to admit. In this instance, Lippershey’s spyglass had a magnifying power of only two or three, and when Galileo first heard about it some years later, in 1609, he was unimpressed. He became interested only because his friend Paolo Sarpi, described by historian J. L. Heilbron as a “redoubtable anti-Jesuit polymathic monk,” saw potential in the device—he thought that if the invention could be improved, it could have important military applications for Venice, an unwalled city dependent for its survival on early detection of any impending enemy attack.
Sarpi turned for help to Galileo, who, among his many and various ventures to supplement his income, had a sideline making scientific instruments. Neither Sarpi nor Galileo had any expertise in the theory of optics, but, through trial and error, in a few months Galileo developed a nine-power instrument. He gifted it to an awestruck Venetian Senate in exchange for a lifetime extension of his appointment and a doubling of his then salary, to one thousand scudi. Galileo would eventually improve his telescope to a magnifying power of thirty, the practical limit for a telescope of that design (a plano-concave eyepiece and a plano-convex objective).
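For a telescope of that design, the magnifying power is the ratio of the objective’s focal length to the eyepiece’s. Here is a minimal sketch, with hypothetical focal lengths chosen only to reproduce the powers mentioned here:

```python
# A Galilean telescope (convex objective, concave eyepiece) magnifies by the
# ratio of the focal lengths. These focal lengths are hypothetical, picked
# only to illustrate nine-, twenty-, and thirty-power instruments.
def magnification(f_objective_mm, f_eyepiece_mm):
    return f_objective_mm / abs(f_eyepiece_mm)

print(magnification(900, -100))  # 9x, like the instrument presented to the Senate
print(magnification(900, -45))   # 20x, like the one first turned on the moon
print(magnification(900, -30))   # 30x, near the practical limit of the design
```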
Around December 1609, by which time Galileo had already developed a telescope with a magnifying power of twenty, he turned it skyward and aimed it at the largest body in the night sky, the moon. That observation, and others he would make, provided the best evidence to date that Copernicus was correct about the earth’s place in the cosmos.
Aristotle had claimed that the heavens form a separate realm, made of a different substance and following different laws, which cause all heavenly bodies to move in circles around the earth. What Galileo saw, though, was a moon that was “uneven, rough, and full of cavities and prominences, being not unlike the face of the earth, relieved by chains of mountains and deep valleys.” The moon, in other words, did not seem to be of a different “realm.” Galileo saw, too, that Jupiter had its own moons. The fact that these moons orbited Jupiter and not the earth violated Aristotle’s cosmology, while supporting the idea that the earth was not the center of the universe but merely one planet among many.
I should note here that when I say Galileo “saw” something, I don’t mean that he simply put the scope to his eyes, pointed it somewhere, and feasted on a revolutionary new set of images, as if he were watching a show at the planetarium. Quite the contrary, his observations required long periods of difficult and tedious effort, for he had to squint for hours through his imperfect, poorly mounted (by today’s standards) glass, and struggle to make sense of what he saw. When he gazed at the moon, for example, he could “see” mountains only by painstakingly noting and interpreting the movement, over a period of weeks, of the shadows they cast. What’s more, he could see only one one-hundredth of the surface at a time, so to create a composite map of the whole, he had to make numerous, scrupulously coordinated observations.
Such difficulties illustrate that, with regard to the telescope, Galileo’s genius lay not so much in perfecting the instrument as in the way he applied it. For example, when he perceived what seemed to be, say, a lunar mountain, he wouldn’t simply trust appearances; he would study the light and shadows and apply the Pythagorean theorem to estimate the mountain’s height. When he observed Jupiter’s moons, he at first thought they were stars. Only after multiple careful and meticulous observations, and a calculation involving the known motion of the planet, did he realize that the positions of the “stars” relative to Jupiter were changing in a manner that suggested they were circling.
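That height estimate can be recast in a few lines. Sunlight grazes the moon tangentially at the terminator, so a peak still lit at a distance d beyond it must rise high enough to reach that tangent line, giving (R + h)² = R² + d² by the Pythagorean theorem. A minimal sketch, using a modern value for the moon’s radius and an illustrative distance of my own:

```python
import math

R = 1737.0  # radius of the moon in km (a modern value)
d = 100.0   # illustrative distance of a sunlit peak beyond the terminator, km

# Sunlight grazes the sphere at the terminator, so a lit peak at distance d
# satisfies (R + h)^2 = R^2 + d^2; solve for the mountain's height h.
h = math.sqrt(R**2 + d**2) - R
print(f"mountain height ~ {h:.1f} km")  # about 2.9 km
```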
Having made these discoveries, Galileo, though reluctant to enter the theological arena, became eager to be recognized for them. And so he began to devote much energy to publicizing his observations and crusading to replace the accepted cosmology of Aristotle with the sun-centered system of Copernicus. Toward that end, in March 1610, he published The Starry Messenger, a pamphlet describing the wonders he had seen. The book was an instant best seller, and though it was (in modern format) only about sixty pages long, it astounded the world of scholars, for it described marvelous, never-seen-before details of the moon and planets. Soon Galileo’s fame spread throughout Europe, and everyone wanted to peer through a telescope.
That September, Galileo moved to Florence to take the prestigious position of “chief mathematician of the University of Pisa and philosopher of the grand duke.” He retained his prior salary but had no obligation to teach or even reside in the city of Pisa. The grand duke in question was the Grand Duke Cosimo II de’ Medici of Tuscany, and Galileo’s appointment was as much the result of a campaign to curry favor with the Medicis as it was due to his grand accomplishments. He had even named the newly discovered moons of Jupiter the “Medicean planets.”
Soon after the appointment, Galileo fell terribly ill and remained bedridden for months. Ironically, he probably had the “French disease,” syphilis, a product of his attraction to Venetian prostitutes. But even while ill, Galileo continued striving to persuade influential thinkers of the validity of his findings. And by the following year, when he had regained his health, his star had risen so high that he was invited to Rome, where he gave lectures about his work.
In Rome, Galileo met Cardinal Maffeo Barberini and was granted an audience at the Vatican with Pope Paul V. It was a triumphant trip in every way, and Galileo seems somehow to have finessed his differences with official Church doctrine so as to cause no offense—perhaps because most of his lectures focused on the observations he had made with his telescope, without much discussion of their implications.
It was inevitable, though, that in his subsequent politicking Galileo would eventually come into conflict with the Vatican, for the Church had endorsed a version of Aristotelianism created by Saint Thomas Aquinas that was incompatible with Galileo’s observations and explanations; in addition, unlike his politic predecessor Copernicus, Galileo could be insufferably arrogant, even when consulting theologians regarding Church doctrine. And so, in 1616, Galileo was summoned to Rome to defend himself before various high-ranking officials of the Church.
The visit seemed to end in a draw—Galileo was not condemned, nor were his books banned, and he even had another audience with Pope Paul; but the authorities forbade him to teach that the sun, not the earth, is the center of the universe, and that the earth moves around it rather than vice versa. In the end, the episode would prove enormously costly, for much of the evidence used against Galileo at his Inquisition trial seventeen years later would be drawn from these very meetings, in which Church officials had explicitly forbidden him to teach Copernicanism.
For a while, however, the tensions eased, especially after Galileo's friend Cardinal Barberini became Pope Urban VIII in 1623. For unlike Pope Paul, Urban held a generally positive view of science, and in the early years of his reign he gladly granted Galileo audiences.
Encouraged by the friendlier atmosphere that came with Urban's rise, Galileo began work on a new book, which he finished in 1632, at the age of sixty-eight. The fruit of that labor was entitled Dialogo Sopra i due Massimi Sistemi del Mondo (in English, Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican). But the "dialogue" was extremely one-sided, and the Church reacted—with good reason—as if the book's title were Why Church Doctrine Is Wrong and Pope Urban Is a Moron.
Galileo’s Dialogue took the form of a conversation between friends: Simplicio, a dedicated follower of Aristotle; Sagredo, an intelligent neutral party; and Salviati, who made persuasive arguments for the Copernican view. Galileo had felt comfortable writing the book because he had told Urban about it, and Urban had seemed to approve. But Galileo had assured the pope that his purpose in writing it was to defend the Church and Italian science from the charge that the Vatican had banned heliocentrism out of ignorance—and Urban’s approval was based on the proviso that Galileo would present the intellectual arguments on both sides without judgment. If Galileo had indeed striven for that, he failed miserably. In the words of his biographer J. L. Heilbron, Galileo’s Dialogue “dismissed the fixed-earth philosophers as less than human, ridiculous, small-minded, half-witted, idiotic, and praised Copernicans as superior intellects.”
There was another insult. Urban had wanted Galileo to include in the book a disclaimer, a passage affirming the validity of Church doctrine; but instead of stating the disclaimer in his own voice, as Urban had asked, Galileo had the affirmation of religion voiced by his character Simplicio, described by Heilbron as a “nincompoop.” Pope Urban, being no nincompoop himself, was deeply offended.
When the stardust had settled, Galileo was convicted of violating the Church’s 1616 edict against teaching Copernicanism and forced to renounce his beliefs. His offense was as much about power and the control, or “ownership,” of truth as it was about the specifics of his worldview.* For most of those who formed the intellectual elite of the Church recognized that the Copernican view was probably correct; what they objected to was a renegade who would spread that word and challenge the doctrine of the Church.
On June 22, 1633, dressed in the white shirt of penitence, Galileo knelt before the tribunal that had tried him, and bowed to the demand that he affirm the authority of the Scriptures, declaring: “I, Galileo, son of the late Vincenzo Galilei, Florentine, aged seventy years … swear that I have always believed, do believe, and by God’s help will in the future believe, all that is held, preached, and taught by the Holy Catholic and Apostolic Roman Church.”
Despite proclaiming that he had always accepted Church doctrine, however, Galileo went on to confess that he had advocated the condemned Copernican theory even “after an injunction had been judicially intimated” to him by the Church, to the effect that he must, in the Church’s words, “abandon the false opinion that the sun is the center of the world and immovable, and that the earth is not the center of the world, and moves …”
What’s really interesting is the wording of Galileo’s confession: “I wrote and printed a book,” he said, “in which I discuss this new doctrine already condemned, and adduce arguments of great cogency in its favor.” So even while pledging allegiance to the Church’s version of the truth, he still defends the content of his book.
In the end, Galileo capitulates by saying that “desiring to remove from the minds of your Eminences, and all faithful Christians, this strong suspicion, justly conceived against me, with sincere heart and unfeigned faith I abjure, curse, and detest the aforesaid errors and heresies … and I swear that in the future I will never again say or assert, verbally or in writing, anything that might furnish occasion for a similar suspicion regarding me.”
Galileo would not receive as harsh a punishment as the Inquisition had leveled against Giordano Bruno, who had also declared that the earth revolved around the sun and who, for his heresy, was burned at the stake in Rome in 1600. But the trial made the position of the Church quite clear.
Two days later, Galileo was released to the custody of the Florentine ambassador. He spent his last years under a kind of house arrest in his villa at Arcetri, near Florence. While living in Padua, Galileo had fathered three children out of wedlock. Of them, the daughter to whom he had been extremely close had died in 1634, soon after his trial, and his other daughter was estranged from him; but his son, Vincenzo, lived nearby and took loving care of him. And although he was a prisoner, Galileo was allowed visitors, even heretics—provided they were not mathematicians. One of them was the young English poet John Milton (who would later refer to Galileo and his telescope in Paradise Lost).
Ironically, it was during his time in Arcetri that Galileo recorded his most fully worked-out ideas on the physics of motion, in the book he considered to be his greatest work: Discourses and Mathematical Demonstrations Relating to Two New Sciences. The book could not be published in Italy, due to the pope’s ban on his writings, so it was smuggled to Leiden and published in 1638.
By then Galileo’s health was failing. He had become blind in 1637, and the next year he began to experience debilitating digestive problems. “I find everything disgusting,” he wrote, “wine absolutely bad for my head and eyes, water for the pain in my side … my appetite is gone, nothing appeals to me and if anything should appeal [the doctors] will prohibit it.” Still, his mind remained active, and a visitor who saw him shortly before his death commented that—despite the prohibition on visitors of that profession—he had recently enjoyed listening to two mathematicians argue. He died at age seventy-seven, in 1642, the year Newton was born, in the presence of his son, Vincenzo—and, yes, a few mathematicians.
Galileo had wanted to be buried next to his father in the main Basilica of Santa Croce, in Florence. Grand Duke Cosimo’s successor, Ferdinando, had even planned to build a grand tomb for him there, across from that of Michelangelo. Pope Urban let it be known, however, that “it is not good to build mausoleums to [such men] … because the good people might be scandalized and prejudiced with regard to Holy authority.” And so Galileo’s relatives deposited his remains instead in a closet-size chamber under the church’s bell tower and held a small funeral attended only by a few friends, relatives, and followers. Still, many, even within the Church, felt the loss. Galileo’s death, the librarian at the court of Cardinal Barberini, in Rome, courageously wrote, “touches not just Florence, but the whole world and our whole century that from this divine man has received more splendor than from almost all the other ordinary philosophers.”
*The medieval period runs from A.D. 500 to 1500 (or, in some definitions, 1600). In either case it spans, with some overlap, the era between the cultural achievements of the Roman Empire and the flourishing of science and art in the Renaissance. It was a time some in the nineteenth century dismissed as “a thousand years without a bath.”
*He did, however, have some sympathy for a version of Copernicus's ideas that had been developed by German astronomer (and astrologer) Johannes Kepler, mainly because it supported his own pet theory of the tides (which he ascribed, incorrectly, to the motion of the earth). Still, when Kepler urged Galileo to speak out in support, Galileo refused.
*Indeed, while Galileo was prohibited from teaching Copernicanism, he was allowed to continue his work and use a telescope during his years of house arrest.