
Isaac Newton spent the last decades of his life at the Royal Mint, first as warden and later as master, and his struggle against the counterfeiters and his rescue of the British coinage left him hardly any leisure for new scientific research. But it seems that Newton could always find time for a proper quarrel, and in the last part of his life, he finally found a worthy opponent. His row with the German polymath Gottfried Wilhelm Leibniz is the stuff of legend and has kept science historians busy for centuries. It was a battle between giants: Leibniz was not only as fanatical a scientist as Newton, driven by the same wide-ranging thirst for knowledge, but he could also be equally relentless in imposing his will. The reason for the dispute was befitting of the adversaries: it concerned the invention of a completely new type of mathematics, the significance of which for the future development of science cannot possibly be overstated.

The question which forms the basis of this mathematics, however, is astonishingly simple and, at first sight, completely harmless: How do you describe change? It's easy to see why this question was of interest to Newton. After all, motion is nothing more than a change of location, acceleration is a change of speed, force a change of momentum, and he had thought deeply about all of these concepts in order to be able to write his Principia.

HOW DO YOU DESCRIBE MOTION?

Newton was not the first person to wrestle with this issue, however. The question about the nature of change had been discussed more than two thousand years before his birth in ancient Greece. In the fifth century BCE, the philosopher Zeno of Elea thought up his story of Achilles and the tortoise. He described a race between the famous hero and the notoriously slow animal. The tortoise gets a head start of a hundred meters, and then the race begins. Achilles sprints off and quickly runs the first hundred meters. In this time, however, the tortoise has also advanced a little bit, so Achilles is not yet level. Naturally, he easily runs this small distance, but still needs time to do so and during this time, the tortoise advances another little bit. Its lead does get constantly smaller, but—according to Zeno—Achilles can nonetheless never overtake it. In the time that he requires to cover the distance to the tortoise, it always manages to advance a little bit further and so he cannot win the race.

Zeno had a soft spot for such paradoxes. In another well-known case, he explained that an arrow shot from a bow cannot actually move in reality. If one observes the arrow during its flight at a particular point in time, it also occupies a very specific place in the air. It must be at rest in this place, otherwise it wouldn't be in a place. What is valid for one point in time must be valid for all points in time during the flight, which must mean that the arrow is constantly at rest and so motion is impossible. Or, as Zeno himself nicely put it: “What is in motion moves neither in the place it is nor in one in which it is not.”1

Now, Zeno wasn't an idiot and he must surely have known that something about his arguments wasn't quite right. He hadn't come up with them in order to actually prove the impossibility of motion, and it was clear to him that they contradicted people's experience. What he probably wanted to do was to defend his teacher Parmenides. Parmenides was of the opinion that—to put it simply—everything merely exists and there can be no “becoming” or “passing away,” since that would mean that “something” would have to arise out of “nothing.” This idea is, of course, open to criticism—but Zeno wanted to demonstrate that the arguments of his opponents, who assumed the reality of change, could also lead to contradictions. What he couldn't yet recognize were the contradictions that he created himself with his reasoning: Zeno believed that adding up an infinite series of terms must give an infinitely large result, and that dividing the running distance into infinitely many pieces therefore made the distance to run infinitely long. This was wrong, but the world had to wait for Isaac Newton and Gottfried Wilhelm Leibniz to come along in order to understand it with mathematical precision.2
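Zeno's infinitely many stages can in fact be added up, and the total stays finite. A minimal Python sketch makes this concrete; the speeds and the head start are assumed for illustration, since Zeno himself gives no numbers:

```python
# Zeno's race, stage by stage: in each stage Achilles runs to where the
# tortoise just was, while the tortoise advances a little further.
# Speeds and head start are illustrative assumptions.
head_start = 100.0   # meters
v_achilles = 10.0    # meters per second
v_tortoise = 0.1     # meters per second

gap, total_time = head_start, 0.0
for _ in range(50):                    # 50 of the infinitely many stages
    stage_time = gap / v_achilles      # time to close the current gap
    total_time += stage_time
    gap = v_tortoise * stage_time      # the tortoise's small advance meanwhile

print(total_time)                                # sum of the stage times
print(head_start / (v_achilles - v_tortoise))    # the direct, finite answer
```

Each stage time is a hundredth of the one before, so the stages form a geometric series with a finite sum: Achilles overtakes the tortoise after a little more than ten seconds, exactly as everyday experience says.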

Whatever one thinks of these arguments among the ancient Greeks—the question of the nature of change and motion, and above all of its mathematical description, remained unanswered for a long time. And it was a fundamental issue that urgently needed to be solved if you wanted to make sensible statements about moving objects.

If I set off on my morning run through Paradise Park in Jena, for instance, and cover a distance of ten kilometers in fifty minutes (thanks for your admiration; not bad, I know), it's not difficult to calculate that I run at an average speed of twelve kilometers per hour. But I certainly wasn't running at a constant twelve kilometers per hour for the whole fifty minutes. Just over a third of the way in, for example, there is a place that is a bit downhill and I generally run this part a bit more quickly, probably at about fifteen kilometers per hour. But in the last part of my jogging route, there are lots of traffic lights and crossings that slow me down, and I am certainly much slower than twelve kilometers per hour there.

It is also not difficult to calculate appropriate average speeds for each individual kilometer of my run. Or for all one-hundred-meter intervals. Or even for every individual meter, centimeter, or millimeter. Nevertheless, these are still average values that tell me how quickly I completed a certain distance. But what if I want to know how quick I am at a very specific moment? Then we are back with Zeno's paradoxical story of the flying arrow. In a single moment, I do not cover any distance; there is no line along which I have moved at a concrete speed.3

From a mathematical point of view, this leads us to the question of infinity. We can make the distance for which we want to calculate an average speed smaller and smaller and simply need to divide this distance by the time it takes us to cover it. As long as the length of the distance is not zero, that's a simple task. But at some point between “zero” and “arbitrarily small,” we meet infinity—and this was mathematically difficult to grasp in the seventeenth century. In order to calculate the speed of an object during a very specific moment, the distance has to be made infinitely small. But how?
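You can watch this shrinking process numerically. In the following Python sketch the position function is an invented example (a runner who speeds up slightly over time); the average speed over ever-shorter intervals settles on a definite value, the momentary speed:

```python
# Position along a hypothetical jogging route: kilometers covered after t minutes.
# The formula is an invented example, chosen so the runner speeds up slightly.
def position(t):
    return 0.2 * t + 0.001 * t * t

t = 20.0                                  # the moment we are interested in
for dt in (10.0, 1.0, 0.1, 0.001):        # ever-shorter time intervals
    avg = (position(t + dt) - position(t)) / dt   # average speed in km/min
    print(dt, avg * 60)                   # in km/h: the values home in on 14.4
```

The intervals never actually reach zero, yet the averages clearly converge. Differential calculus is precisely the tool that makes this limiting value rigorous instead of leaving it as a numerical observation.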

It is worth also briefly viewing the problem from a geometrical point of view. Both Newton and Leibniz were influenced in their mathematical work by René Descartes. Among the latter's great achievements was the combination of geometry and algebra, that is to say the connection between mathematical diagrams and mathematical equations. Descartes showed that geometrical problems can also be formulated as equations—and vice versa—and he contributed to the resolution of the “tangent line problem” that is a central part of the new mathematics of Newton and Leibniz.

It is simple to draw a diagram that represents a movement. In the example of my jogging route, you would plot the time on the x axis and the place where I am at each moment in time on the y axis. At the point in time zero, I have covered zero kilometers: the starting point lies at the coordinates 0 min. / 0 km. Five minutes later, I have covered one kilometer, and the next point in the diagram is at the coordinates 5 mins. / 1 km. Then perhaps I run a bit quicker and reach the two-kilometer mark after nine minutes and so enter a further point at the coordinates 9 mins. / 2 km. The next point follows perhaps at 14 mins. / 3 km, and so on and so forth: I can enter as many points with values for kilometers and minutes as I like and will end up with a graph that illustrates my movement during my run. In order to calculate the speed, you simply have to compare two selected points in time. If I want to know my average speed for the second kilometer, for example, I take the corresponding point with the coordinates 9/2 and subtract the point one kilometer before—in this case 5/1. The calculation is simple: 9–5 = 4 minutes, and 2–1 = 1 kilometer. So my average pace during the second kilometer was 4 minutes per kilometer, which works out to be 15 km/h. The problem can also be solved geometrically, by simply drawing a straight line through the points 9/2 and 5/1. The steeper the line, the quicker I was. If it is horizontal on the diagram (so parallel to the x axis which shows the time), the speed was zero. The time coordinate has changed, but the place coordinate hasn't, which means I haven't moved. The greater the slope or gradient in this line, the greater the speed.
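The subtraction described above is easy to turn into a small function. This Python sketch computes the slope of the line through two points of the time-distance diagram and converts it to a speed; the coordinates are the ones from the jogging example:

```python
def avg_speed_kmh(t1_min, d1_km, t2_min, d2_km):
    """Slope of the line through two points of the time-distance diagram."""
    dt = t2_min - t1_min    # minutes elapsed
    dd = d2_km - d1_km      # kilometers covered
    return dd / (dt / 60)   # kilometers per hour

# The second kilometer from the example: points 5 mins / 1 km and 9 mins / 2 km.
print(avg_speed_kmh(5, 1, 9, 2))   # 1 km in 4 minutes, i.e. 15 km/h
```

A horizontal line gives a slope, and hence a speed, of zero; the steeper the line, the larger the value the function returns.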

If we move from the average speed to the actual speed that we have at any given point, then we come up against the same problem as before. The two points through which we draw the line in the diagram get closer and closer to each other, as the distance covered that we are looking at gets smaller. Eventually, the two points come together and then it is no longer immediately clear how and with which gradient the line should be drawn. This line, which represents the gradient or in this case the momentary speed, is called the “tangent,” and Descartes found a way to draw it. That was a remarkable achievement, but his solution was only practical for specific curves. The general problem of the mathematical description of changing quantities was one he couldn't solve.

There were a few other mathematicians in the seventeenth century who made occasional small advances in the investigation of infinity or the drawing of tangents on curves. But these were isolated cases, which could only be applied to certain problems. What was missing was a new approach that would provide a comprehensive solution to the question, one that wasn't limited to the calculation of speeds: whenever you want to investigate how one quantity changes in response to the change of another, you face the same problem. And such processes are to be found everywhere: the change in house prices as a function of population growth; the change in the global temperature as a function of CO₂ emissions; the change in the concentration of chemical elements during a molecular reaction; the change in a computer program's running time as a function of processing power; the change in the mass of a rocket as a function of its altitude. These and other phenomena can only be understood if the appropriate mathematical tools are available.

NEWTON'S DIFFERENTIAL AND INTEGRAL CALCULUS

It was precisely these tools that Isaac Newton created. But Gottfried Wilhelm Leibniz also created the same tools, and the question as to which of the two was the true originator of this powerful new mathematics, and whether one had copied from the other, occupied not only Newton and Leibniz for years, but also science historians (almost) up until the present day.

Newton's route to a mathematical understanding of change began—like so much else in his life—in the years 1665 and 1666. Having fled from the plague in Cambridge to his home village of Woolsthorpe, he didn't only think about optics and falling apples, but also about infinity. He struggled with this term, just as he had struggled to find terms for phenomena like “force” and “mass.” For him, it wasn't only about pure mathematics, but also about the basis of matter itself: How far can one divide something? Is there anything that is indivisible? Can points with no spatial dimension still be put together in some way to form a line? What does “infinitely small” mean? Nothing—or actually something? And if something, then what?

Newton had to fight his way through a tangle of imprecise terms: “infinite,” “uncertain,” “indistinguishable.” He used a special symbol (a small “0”) in his texts to refer to this strange thing that was in a way nothing, yet was at the same time something after all. He wrote about lines that differ from one another by an infinitely small amount, and lines that do not differ from one another at all, and endeavored not to erase the almost indistinguishable difference between these descriptions.

The big breakthrough came when Newton no longer saw a curve as a rigid geometrical image, but rather as a point that moves and, so to speak, pulls the lines with it and draws them. He invented new words to refer to these flowing and flexible images: fluxions and fluents. Fluxions were Newton's way of being able to calculate a tangent after all, even if there was only a single point through which it could be drawn. Simplistically put, he just imagined a further point that was an infinitely small distance from the actual point. This distance, which didn't actually exist, was delineated by his fluxions, and from them he could then measure the gradient of the tangent. What Newton had created is learned by everybody in math classes today in the form of differential calculus. He also created its counterpart, what is known today as integral calculus, in which the area beneath a curve is calculated from an infinite number of infinitely small areas.
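The idea behind the integral half of this, summing infinitely many infinitely small areas, can be previewed with finite numbers. A minimal Python sketch (the curve y = x² over the interval from 0 to 1 is an arbitrary example) approximates the area under a curve with ever more, ever thinner rectangles:

```python
# Approximate the area under y = x*x between 0 and 1 by summing n thin rectangles.
def area_under_square(n):
    width = 1.0 / n
    return sum((i * width) ** 2 * width for i in range(n))

for n in (10, 100, 100000):
    print(n, area_under_square(n))   # the sums approach the exact area, 1/3
```

As n grows, the staircase of rectangles hugs the curve ever more closely; integral calculus is what makes the jump from "very many, very thin" to "infinitely many, infinitely thin" rigorous.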

Newton had thus found his tools. He was in a position to characterize changes mathematically and to avoid all the problems and paradoxes that had given people headaches since ancient Greek times. In one fell swoop, he was able to solve numerous mathematical problems which had previously been insoluble, and deal with questions much more quickly than had been possible to date. He used these tools to make the calculations that would form the basis of the Principia, but there was one thing that he didn't do: make his method public. The publication of these mathematical findings had been planned, but the criticism by Hooke and others that had followed his first publications put him off (see chapter 3). He did write everything up in 1671 in a paper titled Tractatus de Methodis Serierum et Fluxionum, but left this unpublished, just like his other mathematical book De Analysi per Aequationes Numero Terminorum Infinitas. The latter was, however, one of the texts he showed to his colleague John Collins, who was so enthused by it that, without Newton's knowledge, he made a copy of it (which would later lead to problems….). Newton himself continued to use his method of fluxions, but removed every clue to their use from his work. He had achieved one of the most important breakthroughs in the field of mathematics—and remained silent about it.

LEIBNIZ, THE PLAGIARIST?

While Newton was thinking about infinity in Woolsthorpe, Gottfried Wilhelm Leibniz was just in the process of trying to get his doctorate. His dissertation was finished, but the University of Leipzig had refused to confer his doctorate—aged just twenty, he was considered too young. That didn't mean that he hadn't achieved enough by then. Even as a child, he had been exceedingly thirsty for knowledge and had taught himself Latin and Greek at the tender age of eight using books from his parents’ library. He began his studies at the University of Leipzig in 1661, when he was fourteen. Leibniz wanted to be a lawyer like his father, but also studied theology and philosophy. And mathematics, too: in 1663, he switched to the University of Jena for a semester, where he met Professor Erhard Weigel. Weigel was not really known as a famous mathematician, but he was an outstanding teacher. He was an advocate of clear and comprehensible language and encouraged his colleagues and students to avoid arguments couched in specialist Latin and instead to use simple German. Weigel's influence was later to play an important role in Leibniz's bitter quarrel with Newton.4

In 1666, Leibniz would have been ready to receive his doctorate, but the university council in Leipzig was against this. So he went to Nuremberg, to the University of Altdorf (which no longer exists), and, after a few months there, received his doctorate,5 “with great applause,” as Leibniz himself reported. “In my public disputation, I expressed my thoughts so clearly and felicitously, that not only were the hearers astonished at this extraordinary and, especially in a jurist, unexpected degree of acuteness; but even my opponents publicly declared that they were extremely well satisfied.”6

It would seem that the young Leibniz could not be accused of excessive modesty. But his work must indeed have met with great enthusiasm, since he was offered a professorship in Altdorf soon after graduating. He rejected this, however; he felt he was destined for a higher calling. He wanted to know more, learn more, and have contact with as many other like-minded thinkers as possible. In order to achieve this, he entered into the service of the Mainz Archbishop Johann Philipp von Schönborn as an advisor.7 He then went to Paris in 1672 as a diplomat, where his job was to advise the French king Louis XIV (the “Sun King”) and discourage him from going to war against Holland. Leibniz's alternative suggestion was a campaign against Egypt—an idea that found no favor with the king and was simply ignored. However, Leibniz did finally find suitable company in Paris, and he began to concern himself with a number of different subjects and to think seriously about mathematics.

Christiaan Huygens, who also lived in Paris at the time, recognized the potential in Leibniz. At a meeting, he gave him a problem to solve. Leibniz was asked to calculate the sum of the so-called “reciprocal triangular numbers,”8 which he managed to do. Huygens then showed Leibniz which mathematical books to read and how he should further his education. In 1673, the young German traveled to England, where he met many members of the Royal Society. He presented to them one of his inventions: a mechanical calculating machine that could not only add and subtract like those already in existence, but also multiply and divide. In so doing, however, he also became the involuntary inventor of the demo effect—when he wanted to demonstrate his instrument, it didn't work. This didn't go down too well with the Royal Society (and Robert Hooke claimed, of course, that he could construct a much better machine). The episode with the calculating machine would later cause Leibniz more problems, as would a visit to the English mathematician John Pell's house. Leibniz apparently wanted to show off a bit there and performed a mathematical trick on the calculation of roots that he had developed himself. Pell then showed him a book in which the very same method had long since been published, a book that Leibniz could theoretically also have read. Leibniz claimed that he hadn't read it, but a slight doubt remained: had Leibniz here perhaps presented somebody else's work as his own? He quickly wrote an explanation, in which he denied any wrongdoing, and delivered it to the Royal Society. This was perhaps done with the best of intentions, but there was now a written document linking Leibniz to a potential case of plagiarism, something he would later regret.
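Huygens's challenge can be checked in a few lines. The triangular numbers are 1, 3, 6, 10, 15, … (the n-th is n(n+1)/2), and Leibniz noticed that each reciprocal 2/(n(n+1)) can be written as 2(1/n − 1/(n+1)), so almost everything in the sum cancels and the total is 2. A short Python sketch of the partial sums:

```python
# Partial sums of the reciprocals of the triangular numbers 1, 3, 6, 10, ...
# The n-th triangular number is n*(n+1)/2, so its reciprocal is 2/(n*(n+1)).
def partial_sum(n_terms):
    return sum(2.0 / (n * (n + 1)) for n in range(1, n_terms + 1))

for n in (1, 5, 1000):
    print(n, partial_sum(n))   # the sums creep up toward 2
```

The telescoping trick gives the partial sum exactly as 2(1 − 1/(N+1)), which the numerical values confirm, another case of infinitely many additions producing a perfectly finite result.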

For the time being, however, he thought no further of the matter, instead taking it as a reason to gain a judicious and comprehensive understanding of mathematics. This led him in the next few years to his big breakthrough: like Newton, he found a mathematical way to demonstrate changes. What Newton called “fluxions,” Leibniz referred to as “calculus”—but it worked in exactly the same way, only perhaps a little better. For Leibniz, shaped by his studies under Erhard Weigel, gave plenty of thought from the beginning to how his new mathematics could be described and used as simply as possible. Where Newton had been happy enough that the whole thing worked, Leibniz also came up with a new and practical language of symbols to make it as easy as possible for mathematicians to follow.

In 1676, Leibniz had more or less finished his work on his “differential calculus” (and also, shortly afterward, his work on “integral calculus”). Now a court counselor and librarian in Hannover, he fine-tuned the details a little before publishing his findings. In 1684, the first of two articles appeared in which he presented the basic rules of his new method. Newton had come up with differential calculus back in 1666, but the Royal Society first learned about this new mathematics in 1685, in a book by the Scottish mathematician John Craig, who described Leibniz's findings in it.

This did not mean, however, that Leibniz and Newton had not known of one another before then. They had indeed not met personally, but had at least corresponded with each other. To unravel and analyze the full interaction between Leibniz and Newton, including all the information that was leaked to them, or could have been, would go far beyond the limits of this book (and generations of science historians have already done that in every detail; see the bibliography). Newton wrote two letters directly to Leibniz in 1676, however, in which he praised his colleague in exceptionally polite terms. The two of them also corresponded about mathematics, though without revealing anything about their respective discoveries. It must have been clear to them both that the other had developed a similar mathematical method, but both of them assumed that it was just a similar method and not the same one.

At least on the surface, the two of them seemed to have no interest in quarreling about the original authorship. Leibniz considered himself the inventor of differential calculus, and the rest of the mathematical world seemed to as well. Newton knew that he had had the same ideas much earlier, but made no public utterances on the matter and continued to publish nothing. In the first edition of the Principia, he even specifically mentioned that he knew of Leibniz's method.

NEWTON BRINGS OUT THE HEAVY ARTILLERY

The now-famous row about the original authorship of the new mathematics was actually instigated from the second rank of scientists. Nicolas Fatio de Duillier, formerly Newton's close friend, published a letter in 1699 in which he directed at Leibniz a barely veiled accusation of plagiarizing Newton's work. He said that Leibniz had heard of Newton's work during his time in England and only then developed the idea for his own method. Leibniz immediately published a response in Acta Eruditorum, a journal that he himself had founded (and the first German scientific journal), in which he vehemently denied the accusation. He then also wrote a kind of review of his own response, which he also published in the journal, though this time anonymously. In addition, he sent a letter of complaint to the Royal Society, of which he was now also a member. Newton kept out of the whole matter to begin with. This first attack on Leibniz was therefore without success, but the battle was by no means over.

In 1703, Isaac Newton was elected president of the Royal Society. He now seemed to feel more secure and began publishing works again. In 1704, Opticks appeared, a work that was largely concerned with his optical research, but which also contained an appendix at the end in which he publicly presented his mathematical method of fluxions for the first time. Leibniz reviewed this appendix and published his opinion, again anonymously. In his text, he mostly praised himself and hinted that Newton might have copied from him.

The row escalated in 1708, when the physicist and mathematician John Keill published a text in which he directly accused Leibniz of plagiarism. Leibniz didn't really know what to make of this. From his point of view, Keill was no worthy opponent, merely a mediocre mathematician, and Leibniz didn't wish to waste time on him. He therefore did what he had done before in response to Fatio and addressed himself to the Royal Society, demanding that the "upstart" (as Leibniz wrote to the Society's secretary) Keill should publicly apologize for this defamation. This time, however, he had misjudged the situation, for now Newton was president and had in the meantime heard of Leibniz's anonymous review of his work, complete with its hint at plagiarism. For Newton, the time for holding back had passed, but for the moment, he left it to Keill to speak for him.

Keill subsequently repeated his accusations, rather than apologizing, and in 1712, the Royal Society finally set up a commission to clear up the matter. It was not a particularly fair business from the beginning. Newton was still president of the Royal Society, and Leibniz didn't get a chance to tell his side of the story to the commission. All the old stories were rolled out once more: how Leibniz had so boastfully appeared with his calculating machine, which then didn't work after all; how he had shown off to John Pell with mathematical tricks that others had discovered before him; how he had had the opportunity during his visit to England to look at private documents with Newton's work on mathematics. Hardly surprisingly, the commission soon came to the conclusion that Newton had invented the new mathematics, that he had been the first. In the concluding report, it was again put on record that Keill had said nothing wrong and that Leibniz had certainly had the opportunity to plagiarize Newton.

Now Leibniz was truly angered. He and his friend, the Swiss mathematician Johann Bernoulli, thenceforth insulted the English in their letters, calling them “men full of vanity, who have always used every opportunity to present German insights as their own” and saying they were “envious of all other nations.” In 1713, Leibniz then published a pamphlet—again anonymously—that became known under the name “Charta Volans.” In it, he praised himself and spoke of his “honest nature,” which he took as a standard by which to judge others. That is why Leibniz, wrote the anonymous author (i.e., Leibniz himself), had never thought that Newton might have copied him. The reason for Keill's attack was the “unnatural hostility towards foreigners of the English” and Newton allowed himself to be influenced by sycophants who had no idea of the true course of events. In addition, his desire for fame was a “sign of a mind that was neither respectable nor honest.” As evidence of all this, Leibniz mentioned the quarrels Newton had had with Hooke and Flamsteed.9

Now it was Newton's turn to get into a rage. He himself wrote a comment on the Royal Society commission's report, also anonymously, in which he attacked Leibniz once more, this time not only declaring his precedence with regard to differential calculus, but also discrediting Leibniz's special symbol language.

“Mr. Newton,” wrote Newton, talking about himself in the third person, “doth not place his Method in Forms of Symbols, nor confine himself to any particular Sort of Symbols for Fluents and Fluxions.”10 Then he displayed precisely the confusing notation that characterized his work: “And where he puts the Letters x, y, z for Fluents, he denotes their Fluxions, either by other Letters as p, q, r; or by the same Letters in other Forms as X, Y, Z or ẋ, ẏ, ż; or by any Lines as DE, FG, HI,” before proudly asserting: “All Mr. Newton's Symbols are the oldest in their several Kinds by many Years.” In the last point, Newton may even have been right. His predilection for geometry had already manifested itself in the Principia, where he had presented the results gained through infinitesimal calculus not as such, but rather translated back into the old mathematical language of geometry. The rest, however, was merely his own personal opinion, and his notation (for him perhaps clear, but for the rest of the world rather confusing) did not prove to be useful in the long run. Formulating the new mathematics which he and Leibniz had developed with the old symbols and conventions made the whole thing too complicated for everyone who wasn't a genius. The new symbols that Leibniz invented,11 and the clear language formulated from them, therefore played a major role in the breakthrough of calculus and in helping other mathematicians to accept it.

The mud-slinging progressed to the next round: Leibniz wrote to a friend that he would love to attack Keill physically, instead of just with words. In public, however, he resorted to criticizing Newton's physics. The theory of universal gravity meant nothing to Leibniz. He considered the idea of forces that exerted an effect over great distances across empty space to be nonsense and wrote to Bernoulli: “I have tested it and had to laugh at the idea…. This man is not very successful in metaphysics.” Leibniz considered empty space to be impossible and was an advocate of Descartes’ vortex theory. The conflict about physics and the nature of space was an attempt to shift the row with Newton from mathematics to philosophy. Here, he considered himself—quite rightly—to be superior to Newton and hoped therefore to gain an advantage.12 In the end, though, he merely did himself damage with this tactic.

Leibniz died in Hannover on November 14, 1716. Even after the death of his opponent, Newton was undeterred and continued to publish new attacks. In the second and third editions of the Principia, he deleted all mention of Leibniz. His fame increased ever more, and he was feted even during his lifetime as one of the greatest geniuses of all time; against him, the dead Leibniz had no chance. Leibniz's criticism of Newton's universal gravity was one of the few cases where he was completely off the mark. And the more successful Newton's new physics became, the more people came to accept the opinion that Leibniz might have been wrong with mathematics too, if he had misunderstood physics. Conspiracy theories arose: hadn't Henry Oldenburg been secretary of the Royal Society at the time when Leibniz was visiting London? And wasn't Oldenburg a German like Leibniz? Perhaps he had been a spy for his country and secretly passed on Newton's findings….

Newton seemed to have won the battle for differential calculus. But over the centuries, a more differentiated (how apt!!) image developed. It took a long time for the conflict to finally end. After Newton's death, there were plenty of others among the following generations of scientists who took his side and continued to fight for his fame and recognition. British scholars in particular refused to accept a bad word about their “national saint,” and it was only slowly that people came around to the notion that Newton wasn't the godlike genius who had been portrayed during his lifetime and, above all, in the decades after his death. Slowly but surely, historians began to reveal not only Newton's complicated and unpleasant character, but also the false accusations and unfair attacks on Leibniz: “The great fault, or rather misfortune, of Newton's life was one of temperament; a morbid fear of opposition from others ruled his whole life…when he became king of the world of science it made him desire to be an absolute monarch; and never did monarch find more obsequious subjects. His treatment of Leibniz, of Flamsteed…is, in each case, a stain upon his memory,” writes the English mathematician Augustus De Morgan in a biography of Newton from the year 1846.

Today, the long confrontation is merely an interesting episode in science history, and there is—with the exception of a few details—no more dispute about the true course of events. The vast majority of science historians take the view that Newton and Leibniz made their mathematical discoveries independently of each other. Newton was the first to do so, but Leibniz was the first to actually publish his findings. In any case, his language of symbols was superior to the complicated notation of Newton's fluxions, which is why it is still used today.

Imagine for a moment a world in which two such great minds as Newton and Leibniz were friends who supported and encouraged each other, instead of quarreling and holding each other back: in such a world, Newton wouldn't have kept his work hidden from the world and, together with Leibniz, could have taken mathematics to unparalleled heights. Instead, though, two of the greatest scientists of all time had nothing better to do than to dish the dirt on each other.

THE CRUX OF HAVING TO BE THE FIRST

This time, things are quite clear. Pretty much everything that Isaac Newton did in this particular episode definitely shouldn't serve as an example for modern science. The same is true for Leibniz. The problems that the two of them had to face still exist, of course. The quarrel between Newton and Leibniz revolved mainly around precedence, which is a concern that is perhaps even more important today than back then. Whether somebody has been the first to discover or find out something or not is a determining factor in scientific careers and important prizes and awards. The mistake that Newton made—keeping his research results to himself for years, or perhaps even decades—is one that people today can hardly afford to make.

The world of science is now much more interlinked than in the seventeenth century. So many people are working on big projects that hardly anything can be kept secret, and competitors would find out much more quickly than in Newton's day. When, for instance, the scientists at the Laser Interferometer Gravitational-Wave Observatory (LIGO), after decades of trying, finally provided evidence of gravitational waves in autumn 2015, it only took a few weeks for the first rumors to spread on the internet. And when the discovery was officially reported in February 2016, everyone already knew beforehand what was going to be announced. In this case, it wasn't so serious, since nobody else would have had the technical wherewithal anyway to profit from the data and thus get the drop on LIGO.

But when it comes to the search for asteroids or comets, for example, you often find today the same secretive behavior as in the past. In December 2004, a team of astronomers led by the American Mike Brown discovered a large asteroid in the outer solar system. The object has a diameter of almost 1,000 km and is classified today as a dwarf planet with the name “Haumea.” Brown and his team didn't make their discovery public right away, however. They waited until July 2005, and even then only made an announcement that pointed to a future publication. In this preview of the discovery, they gave no concrete data about Haumea, but used a codename for the asteroid that they had invented themselves. What Brown and his colleagues had overlooked was that, if you did an internet search for this codename, you got the logbooks of the telescopic observations—which had actually been intended for internal use only. This enabled people to find out Haumea's exact position in the sky. Shortly after Brown's announcement, Spanish astronomers led by José Luis Ortiz Moreno suddenly went public and declared that they had discovered a large celestial object in the outer solar system.

The whole situation proceeded a bit like the row between Newton and Leibniz. Brown wrote to Ortiz and wanted to know if he had stolen his data. Ortiz didn't answer, so Brown called the International Astronomical Union (IAU) to ask them to investigate the matter. Examination of internet records revealed that Brown's data had indeed been accessed by a computer in the region of the Spanish observatory on the day before the Spaniards’ public announcement. The Spanish then admitted that they had accessed the logbooks, but said they had only done so to verify their own observations; they had already discovered the asteroid themselves before and merely wanted to check what their colleagues had found. Brown doubted this, but it has never become completely clear what actually happened. The IAU gives the date of the Spanish team's announcement as the official discovery date. The right to name the celestial body, however, was granted to Mike Brown, although this is normally the exclusive preserve of the discoverers.

Similar disputes about precedence are constantly taking place in the world of science. They can be about major, revolutionary results or mere bagatelles that don't attract much public attention. Unlike Isaac Newton, however, everyone is fully aware that it is a dangerous business not to make your data public. The longer you wait to do so, the greater the risk that you will be overtaken by others.

The desire to produce results as quickly as possible can lead to plagiarism in extreme cases. Both Newton and Leibniz certainly didn't behave in an exemplary fashion, but there has so far been no concrete evidence that either of them did indeed copy from the other. Presenting other people's insights as your own is just as reprehensible and unethical in the world of science (and not only there!) today as it was in the seventeenth century. Nevertheless, plagiarists are constantly being found guilty—very often in cases where those involved are not so much interested in an academic career, but rather in an academic degree. When the German defense minister Karl-Theodor zu Guttenberg was forced to give up his doctoral degree in February 2011, because it was proven that large sections of his doctoral thesis were not his own work and had instead been taken from other authors without acknowledgment, this led to a whole wave of checks of academic papers by politicians in Germany. As a result, the Members of the European Parliament Silvana Koch-Mehrin and Jorgo Chatzimarkakis had to give up their doctoral degrees, as did the Christian Democratic Union politician Matthias Pröfrock and (ironically enough) the minister for education and research Annette Schavan. They certainly won't be the only politicians to have been less precise with their work than they should have been, and there are bound to be cases of plagiarism in other sections of the population, too. For such cases, at least, it would be very simple to prevent future attempts at plagiarism: it would simply be necessary to get rid of the use of the title of “doctor.”

The politicians named above were obviously not interested in an academic career, and it can be presumed that they only wrote their dissertations because of the prestige attached to the title. For scientists, on the other hand, it doesn't really matter whether they can put a “Dr.” in front of their names or not. What matters for them is their actual work, the results that they come up with. If you read the papers in the scientific journals carefully, you'll notice that the authors appear under their names alone; academic titles are nowhere to be seen. Whether a scientific piece of work is good or bad is determined by testing it against reality, not by the (supposed) authority of a title. It would therefore be absolutely no problem for the scientific community if the title of doctor were simply abolished. It isn't required to prove your education; after all, you get certificates and statements of attendance for all the lectures you attend, and the scientific value of your thesis doesn't change if it is no longer called a “doctoral thesis.” The only thing that would disappear with the abolition of the doctoral title would be the motivation to gain supposed authority through illegal plagiarism.

It will take a long time until the rather conservative world of the universities decides to take such radical steps, however. Until then, it is all the more important not to yield to the temptation of passing off other people's work as your own. And perhaps Newton and Leibniz can serve as an example here. The two of them used a whole load of dirty tricks in their row about the precedence of differential and integral calculus. Presumably, however, neither of them would have dreamt of replacing true research with ignominious copying. Both Isaac Newton and Gottfried Wilhelm Leibniz had too great a thirst for knowledge to do that. Of course they were interested in recognition and fame. Naturally they were angered by the prospect of somebody else getting the praise for their own achievements. And of course they were both keen to be viewed favorably by posterity. Newton and Leibniz may have been geniuses, but they were also human. Both of them spent almost their entire lives deciphering the mysteries of nature, consistently and with little regard for losses to themselves (or others). Both of them wanted knowledge at all costs—knowledge about everything there is to know. Neither Isaac Newton nor Gottfried Wilhelm Leibniz would have dreamt of copying something from somewhere and thus denying themselves the pleasure of finding it out for themselves. They may well have been assholes. But they were also genuine scientists.