32.

Clocks

In 1845, a curious feature was added to the clock on St. John’s Church in Exeter, western England: another minute hand, set fourteen minutes ahead of the original.1 This was, as Trewman’s Exeter Flying Post explained, “a matter of great public convenience,” for it enabled the clock to exhibit, “as well as the correct time at Exeter, the railway time.”2

The human sense of time has always been defined by planetary motion. We talked of “days” and “years” long before we knew that the Earth rotates on its axis and orbits the sun; from the waxing and waning of the moon, we got the idea of a month. The sun’s passage across the sky gives us terms such as “midday” and “high noon.” Exactly when the sun reaches its highest point depends, of course, on where you’re looking from. If you happen to be in Exeter, you’ll see it about fourteen minutes after someone in London.

Naturally, as clocks became commonplace, people set them by their local celestial observations. That was fine if you needed to coordinate only with other locals: if we both live in Exeter and say we’ll meet at 7:00 p.m., it hardly matters that in London, 200 miles away, they think it’s 7:14. But as soon as a train connects Exeter and London—stopping at multiple other towns, all with their own idea of what the time is—we face a logistical nightmare. Early train timetables valiantly informed travelers that “London time is about four minutes earlier than Reading time, seven and a half minutes before Cirencester,” and so on, but many understandably got hopelessly confused. More seriously, so did drivers and signaling staff, which raised the risk of collisions.3

So the railways adopted “railway time”; they based it on Greenwich Mean Time, set by the famous observatory in the London borough of Greenwich. Some municipal authorities quickly grasped the usefulness of standardizing time across the country and adjusted their own clocks accordingly. Others resented this high-handed metropolitan imposition, and clung to the idea that their time was—as the Flying Post put it, with charming parochialism—“the correct time.” For years, the dean of Exeter stubbornly refused to adjust the clock on the city’s cathedral.

In fact, there’s no such thing as “the correct time.” Like the value of money, it’s a convention that derives its usefulness from the widespread acceptance of others. But there is such a thing as accurate timekeeping. That dates from 1656, and a Dutchman named Christiaan Huygens.

There were clocks before Huygens, of course. Water clocks appear in civilizations from ancient Egypt to medieval Persia. Others kept time from marks on candles.4 But even the most accurate devices might wander by fifteen minutes a day.5 This didn’t matter much if you were a monk wanting to know when to pray, unless God is a stickler for punctuality. But there was one increasingly important area of life where the inability to keep accurate time was of huge economic significance: sailing.

By observing the angle of the sun, sailors could figure out their latitude: where you are from north to south. But their longitude—where you are from east to west—had to be guessed. Wrong guesses could, and frequently did, lead to ships’ making landfall hundreds of miles from where navigators thought they were, and sometimes, quite literally, hitting land and sinking.

How could accurate timekeeping help? Remember why Exeter’s clocks differed from London’s, 200 miles away: high noon happened fourteen minutes later. If you knew when it was midday at London’s Greenwich observatory—or any other reference point—you could observe the sun, calculate the time difference, and work out how far east or west you were. Huygens’s pendulum clock was sixty times more accurate than any previous device, but even fifteen seconds a day soon mounts up on long seafaring voyages, and pendulums don’t swing neatly on the deck of a lurching ship.
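To put rough numbers on this, consider that the Earth turns 360 degrees in 24 hours, or one degree every four minutes, and that at the equator a degree of longitude spans roughly 111 kilometers. The sketch below is purely illustrative; the forty-day voyage and the equatorial degree length are round-number assumptions, not figures from the historical record.

```python
# Illustrative arithmetic only: turning a clock difference into a longitude,
# and a daily clock drift into a position error.
DEGREES_PER_MINUTE = 360 / (24 * 60)   # the Earth turns 0.25 degrees per minute of time
KM_PER_DEGREE = 40_075 / 360           # ~111 km per degree of longitude at the equator

def minutes_to_longitude(minutes):
    """Degrees of longitude corresponding to a given difference in local noon."""
    return minutes * DEGREES_PER_MINUTE

# Exeter's noon arrives about fourteen minutes after Greenwich's:
print(minutes_to_longitude(14))        # ~3.5 degrees west of Greenwich

# A clock drifting fifteen seconds a day, over an assumed forty-day voyage:
drift_minutes = 15 * 40 / 60           # ten minutes of accumulated error
print(minutes_to_longitude(drift_minutes) * KM_PER_DEGREE)   # roughly 280 km off course
```

Hundreds of kilometers of uncertainty after a few weeks at sea is exactly the scale of error that wrecked ships.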

Rulers of maritime nations were acutely aware of the longitude problem: the king of Spain was offering a prize for solving it nearly a century before Huygens’s work. Famously, it was a subsequent prize offered by the British government that led to a sufficiently accurate device being painstakingly refined, in the 1700s, by an Englishman named John Harrison.* It kept time to within a couple of seconds a day.6

Since Huygens and Harrison, clocks have become much more accurate still. And since the dean of Exeter’s intransigence, the whole world has agreed on what to regard as “the correct time”—Coordinated Universal Time, or UTC, as mediated by various global time zones that maintain the convention of twelve o’clock being at least vaguely near the sun’s highest point. UTC is based on atomic clocks, which measure oscillations in the energy levels of electrons. The U.S. Naval Observatory’s Master Clock, in northwest Washington, D.C., is actually a combination of several different clocks, the most advanced of which are four atomic fountain clocks, in which frozen atoms are launched into the air and cascade down again. If something goes wrong—and even a technician entering the room will alter the temperature and possibly the timing—then there are several backup clocks, ready to take over at any nanosecond. The output of all this sophistication is accurate to within a second every three hundred million years.7

Is there a point to such accuracy? We don’t plan our morning commutes to the millisecond. In truth, an accurate wristwatch has always been more about prestige than practicality. For more than a century, before the hourly beeps of early radio broadcasts, members of the Belville family made a living in London by collecting the time from Greenwich every morning and selling it around the city, for a modest fee. Their clients were mostly tradesfolk in the horology business, for whom aligning their wares with Greenwich was a matter of professional pride.8

But there are places where milliseconds now matter. One is the stock market: fortunes can be won by exploiting an arbitrage opportunity an instant before your competitors. Some financiers recently calculated it was worth spending $300 million on drilling through mountains between Chicago and New York to lay fiber-optic cables in a slightly straighter line. That sped up communication between the two cities’ exchanges by three milliseconds. One may reasonably wonder whether that’s the most socially useful infrastructure the money could have bought, but the incentives for this kind of innovation are perfectly clear, and we can hardly be surprised if people respond to them.9
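The arithmetic behind those three milliseconds is simple enough, if we assume that light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 kilometers per second; the route lengths below are assumed round numbers, not the actual cable’s figures.

```python
# Rough sanity check, assuming signals in fiber cover ~200,000 km per second.
KM_PER_MILLISECOND = 200_000 / 1000    # ~200 km of fiber per millisecond

def round_trip_ms(route_km):
    """Round-trip latency, in milliseconds, over a fiber route of the given length."""
    return 2 * route_km / KM_PER_MILLISECOND

# Straightening an assumed 1,500 km route down to 1,200 km saves about 3 ms:
print(round_trip_ms(1500) - round_trip_ms(1200))   # ~3.0 ms
```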

The accurate keeping of universally accepted time also underpins computing and communications networks.10 But perhaps the most significant impact of the atomic clock, as with ships and then trains before it, has been on travel.

Nobody now needs to navigate by the angle of the sun; we have GPS. The most basic of smartphones can locate you by picking up signals from a network of satellites: because we know where each of those satellites should be in the sky at any given moment, triangulating their signals can tell you where you are on Earth. It’s a technology that has revolutionized everything from sailing to aviation, surveying to hiking. But it works only if those satellites agree on the time.

GPS satellites typically house four atomic clocks, made from cesium or rubidium. Huygens and Harrison could only have dreamed of such precision, yet even these clocks drift by enough to misidentify your position by a couple of yards, a fuzziness amplified by interference as signals pass through the Earth’s ionosphere.11 That’s why self-driving cars need sensors as well as GPS: on the highway, a couple of yards is the difference between lane discipline and a head-on collision.
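Some rough numbers, again purely illustrative, show why those onboard clocks have to be so good: a radio signal covers about thirty centimeters per nanosecond, so every nanosecond of clock error shifts the apparent distance to a satellite by about a foot.

```python
# Illustrative only: converting a satellite clock error into a ranging error.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def range_error_meters(clock_error_ns):
    """Distance error, in meters, implied by a clock error given in nanoseconds."""
    return SPEED_OF_LIGHT_M_PER_S * clock_error_ns * 1e-9

print(range_error_meters(1))   # ~0.3 m per nanosecond of clock error
print(range_error_meters(7))   # ~2.1 m -- roughly the "couple of yards" of fuzziness
```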

Meanwhile, clocks continue to advance: scientists have recently developed one, based on an element called ytterbium, that won’t have lost more than a hundredth of a second by the time the sun dies and swallows up the Earth, in about five billion years.12 How might this extra accuracy transform the economy between now and then? Only time will tell.