The history of empires is that of men’s misery. The history of the sciences is that of their grandeur and happiness.
—EDWARD GIBBON, 1761
There is more love for humanity in electricity and steam than in chastity and abstention from meat.
—ANTON CHEKHOV TO ALEXEI SUVORIN, 1894
From about 1820, the liberal-scientific nations accrued unprecedented amounts of power—over nature, and over their fellow human beings. This technological transformation freed billions from mind-deadening toil. It replaced the charm of tradition with the shock of the new. It also fostered political and military preeminence, and while it would be pleasing to report that the empowered nations acted solely to liberate, enlighten, and enrich the rest of humankind, most instead reverted to anachronistic campaigns of conquest. A handful of nations came, for a time, to rule most of the world. The era of imperialism and colonialism left behind an enduring legacy of cynicism about science and technology, especially among those who think of science less as a way of obtaining knowledge than as an engine of power.
Since science drives so much of today’s technological development, many assume that it played the same role in the past—as, indeed, it sometimes did. Chemists helped iron- and steelmongers improve the smelting process, thermodynamics showed how and why steam engines worked, and the young Guglielmo Marconi could not have made his first radio transmissions without the prior research of scientists like James Clerk Maxwell and Heinrich Hertz. But nineteenth-century technological breakthroughs more often arose from the efforts of industrialists, entrepreneurs, and amateur inventors, whose products inspired science at least as much as they were inspired by science. Henry Adams, writing about public attitudes toward the steam engine during Jefferson’s presidency, noted that Americans generally were “roused to feel the necessity of scientific training” by their exposure to the practical benefits of technological advancement: “Until they were satisfied that knowledge was money, they would not insist upon high education; until they saw with their own eyes stones turned into gold, and vapor into cattle and corn, they would not learn the meaning of science.”
Most inventions failed, and most of the unheralded inventors stayed unheralded. The popular mass-produced Singer sewing machines of the 1860s were preceded by a century of sewing machines that didn’t work very well. Primitive dishwashers were being patented for a century before improved models came into general household use in the 1950s. Fiber optics preoccupied dozens of inventors, starting with John Tyndall in 1854 and Alexander Graham Bell in 1880, but did not start carrying telephone conversations until 1977. How could these millions of often haphazard experiments, whether conducted in corporate laboratories or a local crackpot’s basement, have so often surpassed the accomplishments of educated specialists and government planners? Perhaps because, as the mathematician H. B. Phillips maintained, liberalism and free enterprise promote the emergence of what he called “thought centers.” “Advances will be most frequent when the number of independent thought centers is greatest, and the number of thought centers will be greatest when there is maximum individual liberty,” Phillips wrote. “Thus it appears that maximum liberty is the condition most favorable to progress.” Through free experimentation the steam engine, the clock, and the dynamo gained sufficient dominion over space, time, and energy to fulfill the optimistic predictions of inventors like William Strutt, who in 1823 suggested that although he knew that his forecast would “be laughed at,” the day would come when “time, distance, and expense shall be almost annihilated.”
Foremost among these innovations was that emblem and embodiment of the Industrial Revolution, the steam engine.
The British began mining coal in earnest after the widespread use of wood for heating, cooking, and charcoal making had driven timber prices up by a factor of ten. As the demand for coal kept rising, the miners found it necessary to sink shafts so deep that flooding became a persistent problem. Financial opportunities emerged for anyone who could develop better engines to keep the mines pumped dry. It was for this purpose that, in 1712, an itinerant ironmonger and hardware salesman named Thomas Newcomen developed the first practical steam engine. The Newcomen engine was inefficient, but this was not a major problem at the mines, where the coal it burned for fuel lay close at hand. Then one day James Watt, a self-educated instrument maker and repairman at Glasgow University, was instructed to repair a model Newcomen engine. He took it apart, experimented with ways to make it work better, most notably by adding a separate condenser, and produced a smaller, far more efficient steam engine whose high-pressure descendants would prove suitable for transportation.
Steam engines on rails soon emerged from the mines, thanks mainly to men who grew up doing colliery work and were comfortable getting their hands dirty. Few were what you would call intellectuals. Richard Trevithick, the son of an illiterate Cornish mine captain, was a bar-brawling wrestler, baffled by books but at ease taking apart every mechanical device entrusted to his care. He built locomotives while working in the tin mines of Cornwall, and on February 21, 1804, his Penydarren, the first steam locomotive to run on rails, won a wager by hauling seventy men and a ten-ton load on a tramway in South Wales. George Stephenson, the son of a colliery fireman, created a series of improved locomotives and became the chief engineer for five railroad companies. His son Robert went on to construct ever-longer rail lines. The iron rails kept breaking but steel soon fixed that. An American kettle manufacturer, William Kelly—and, more successfully, the English engineer Henry Bessemer—developed the air-blown converter techniques that have remained essential to steel production ever since. Steel mills, their showering sparks and glowing rivers of molten metal emblematic of industrial advance, became giant laboratories in their own right. Andrew Carnegie, whose steel mills made him the world’s richest man, recalled that “years after we had taken chemistry to guide us,” his competitors “said they could not afford to employ a chemist. Had they known the truth then, they would have known that they could not afford to be without one.” The mighty triumvirate of coal, steel, and steam, itself a sort of reciprocal engine, hummed into action and went to work shrinking terrestrial space and time.
The world’s first modern commercial railway opened in 1830. Operating between Liverpool and Manchester, it ran on a timetable, charged fares by the mile, offered three classes of service, and employed two sets of tracks so trains could run simultaneously in both directions. During twenty years of boom-and-bust speculation, the railroads expanded; their steel rails, cutting across European and American landscapes like a giant draftsman’s lines, became part of the aesthetics of local landscapes. Some lines curved gracefully around hills, in the manner of Joseph Locke, while others carved through the terrain, as favored by Robert Stephenson—two approaches described by the civil engineer Thomas Tredgold as to “clamber over or plough through”—but all approximated what Einstein would enshrine in relativity theory as geodesics, lines of maximal space-time efficiency. Passengers compared the experience of rail travel to dreams of riding a magic carpet—the actress Fanny Kemble reporting, after taking a 30-mph publicity ride on George Stephenson’s Rocket, that the “sensation of flying was quite delightful and strange beyond description.” As higher-pressure steam locomotives and improved tracks reduced travel times (by 1847, a London-to-Birmingham train was routinely clocking speeds of 75 mph), the lure of rapid transit grew. By 1870 the English were taking 330 million rail trips annually, up from a tenth that many in 1845. American railroads grew from 2,800 miles of track in 1840 to over 30,000 miles by 1860.
Rail travel afforded city dwellers an opportunity to take in the fresh air and beauty of the countryside, but the intrusion of thundering, moving machines disquieted many of the Victorian intellectuals who lived through it. The poet William Wordsworth, who described himself as a “sensitive being” and “creative soul” and had romanticized the working class during the French Revolution (although he later became something of a reactionary on this point), protested in 1844 against a rural rail line on grounds that “uneducated persons” lacked the capacity to appreciate the beauty of the English Lake District. (“It is not against railways but against the abuse of them that I am contending,” he added; by “abuse” he seems to have meant the selling of train tickets to passengers less sensitive and creative than himself.) In America it was hoped that the railroads would bind the disparate states together with staves of steel, as the customary metaphor had it. John C. Calhoun spoke for many when he called on Congress to “conquer space” by underwriting the construction of railroads, highways, and canals. “Railroad iron is a magician’s rod,” declared Ralph Waldo Emerson. The railroads supercharged the factory system, which expanded rapidly thanks to a reliable supply of raw materials and parts, and the outward flow of inventory to fill orders, provided by rail at steadily decreasing costs—the early stirrings of today’s JIT (“Just in Time”) inventory management techniques.
With the railroads came the electric telegraph—the first in a series of devices, from the telephone to e-mail, that made it possible for messages to travel faster than messengers. Here too the new technology developed mainly in the hands of amateurs. Samuel F. B. Morse, who championed telegraphy and developed the dot-dash code that became its native tongue, was a portrait painter who plunged into telecommunications after his young wife, Lucretia, died while he was away on a business trip and was buried before he learned of her death. (“I long to hear from you,” he had written her, three days after she died.) Morse failed to persuade Congress to buy his invention outright, but struck it rich by teaming up with private investors visionary enough to construct a network—it connected New York City with Philadelphia, Boston, and points west—before putting it on the market. Telegraph lines were soon going up along railroad rights-of-way so rapidly that no one agency could keep track of them all; by 1850, a dozen American telegraph companies managed an estimated twelve thousand miles of wires. The telegraph, as one historian writes, “severed the preexisting bond between transportation and communications…. At one stroke the life force of science—information—was freed of its leaded feet and allowed to fly at the speed of light.” “Is it not a feat sublime?” read the masthead of The Telegrapher, the journal of the National Telegraphic Union. “Intellect hath conquered time.”
The laying of transoceanic cables—a feat spurred by the efforts of the Irish physicist William Thomson (Lord Kelvin), who amused himself by developing low-voltage communications lines after having elucidated the science of thermodynamics—spawned fresh hopes that communication might promote international peace. “What can be more likely to effect [peace] than a constant and complete intercourse between all nations and individuals in the world?” asked Edward Thornton, the British ambassador to the United States, in a toast to Morse at Delmonico’s restaurant in New York in 1868. “Steam was the first olive branch offered to us by science. Then came a still more effective olive branch—this wonderful electric telegraph, which enables any man who happens to be within reach of a wire to communicate instantaneously with his fellow men all over the world.” Another toast at the same dinner party envisioned the telegraph’s “removing causes of misunderstanding, and promoting peace and harmony throughout the world.” Such rosy sentiments are easy to mock, but many today would agree that improved communications have, as another technological optimist put it long ago, helped people “know one another better…. [and] learn that they are brethren, and that it is no less their interest than their duty to cultivate goodwill and peace throughout the earth.”
As railroads and telegraph lines spread into the American West, the towns that sprang up at railheads and junctions often consisted of little more than a few clapboard buildings and unpaved streets plus a set of plat maps aimed at attracting investors. “Railroads in Europe are built to connect centers of population; but in the West the railroad itself builds cities,” observed Horace Greeley. “Pushing boldly out into the wilderness, along its iron track villages, towns, and cities spring into existence, and are strung together into a consistent whole by its lines of rails, as beads are upon a silken thread.” It sounded grand but did not always work: Midwestern boosterism was born of concern that one’s hometown had little to recommend it beyond a railroad station, a telegraph shed, and the positive outlook of its civic leaders. Those who survived risked recapitulations of the biblical saga of Cain and Abel when gunslingers, equivalent to the hunter-gatherers of old, swooped down to plunder towns protected by a single lawman or none at all: From such origins sprang an enduring tradition among rural Americans that families require firearms for their protection.
The railroad, the telegraph, and the factory transformed society’s sense of time. Clocks had been around for centuries but were mainly an enthusiasm of scientists: The first clock equipped with a minute hand was commissioned by the Danish astronomer Tycho Brahe from the Swiss mathematician Jost Burgi in 1577; the pendulum clock was invented in 1656 by the Dutch astronomer Christiaan Huygens; and the first man to wear a wristwatch was the French scientist Blaise Pascal. The rise of modern factories democratized this previously elite taste for accurate timekeeping, both by placing new temporal demands on workers—who punched a time clock and whose bosses liked to say, “Time is money”—and by producing affordable clocks and watches that became the high-tech centerpiece of many a working-class home. (Hence the gold watch upon retirement, and the cinematic trope of having commandos synchronize their watches before undertaking a mission.) Prior to the advent of the railroads, each community set its own clocks to local solar time, customarily measured by noting when the sun crossed the local meridian. Solar time depended on your longitude, and so was inherently local: A train conductor traveling 60 mph west from Pierre, South Dakota, would have to set his watch back roughly five minutes for every hour of travel in order to be on time when arriving in Livingston, Montana. This was neither practical nor safe, so in 1883 the railroads went ahead and divided the continental United States into four time zones, Congress eventually mandating the system in 1918. (A similar standardization in England was called “railway time.”) Trains became symbols of time. Much of the lasting appeal of the 1952 movie High Noon arises from its insistence on the unity of three sounds—a ticking clock, a clicking telegraph, and the chuffing of a steam locomotive—imposed like civilization itself on a recently lawless West. Urged by a judge to save his life by getting out of town, the sheriff, played by Gary Cooper, replies, “There isn’t time.” Factually, his statement makes no sense—he still has ample opportunity to run away—but we take him to mean that it is no longer a time for the West, now bound together by rails and telegraph lines, to revert to the anarchy of old.
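The arithmetic behind that adjustment, sketched here with round figures for illustration: local solar time shifts by four minutes per degree of longitude (twenty-four hours spread across 360 degrees), and Pierre and Livingston lie at roughly 100° W and 110° W, so

\[
\Delta t \approx 4\ \tfrac{\text{min}}{\text{deg}} \times \left(110^{\circ} - 100^{\circ}\right) \approx 40\ \text{minutes in all}.
\]

At that latitude (about 45° N) a degree of longitude spans roughly \(69 \cos 45^{\circ} \approx 49\) miles, so a 60-mph westbound train crosses about 1.2 degrees, or some five minutes of solar time, per hour; an actual rail route, being longer than the straight-line path, would spread the adjustment a little thinner still.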
With the rise of electrical power, dynamos became central to the new technology. The dynamo generated electricity that could be carried by wires to provide lighting or, by running another dynamo in reverse as a motor, be turned back into mechanical power on demand. Its development had been difficult. Electricity had long fascinated the public—audiences were thrilled by demonstrations in which static electricity shocked ranks of hand-holding soldiers or sparked the lips of those venturesome enough to kiss an electrified woman—but nobody had been able to get much work out of it, Ben Franklin pronouncing himself “chagrined a little that we have been hitherto able to produce nothing in this way of use to mankind.” Its transformation into an engine of industry was eventually inaugurated by the research of Michael Faraday.
Faraday grew up behind his father’s blacksmith shop in London and at age fourteen was apprenticed to a bookbindery, where he educated himself by reading the books he bound. Intrigued by an article on electricity that he encountered in the Encyclopaedia Britannica, he began conducting electrical experiments and going to public science lectures, where showman-scientists staged gasp-inducing explosions and flashes of light. In 1812 Faraday attended a lecture by the famous chemist Humphry Davy—himself an autodidact who had first learned science from James Watt’s son Gregory, a boarder in the Davy household. Faraday bound his notes on the lecture in leather and sent them to Davy with a letter expressing, as Faraday recalled it, “my desire to escape from trade, which I thought vicious and selfish, and to enter into the service of science, which I imagined made its pursuers amiable and liberal.” He landed a job in the Royal Institution laboratory and soon became a science lecturer himself, but his real love was experimenting in his basement laboratory, where he remained doggedly determined even when repeatedly injured and once temporarily blinded. In 1849, speculating about a link between electromagnetism and gravitation—a connection that eluded him, as it would Einstein a century later—he scrawled this memorable passage in his notebook:
ALL THIS IS A DREAM. Still, examine it by a few experiments. Nothing is too wonderful to be true, if it be consistent with the laws of nature, and in such things as these, experiment is the best test.
Faraday found that a wire carrying an electric current could be induced to circle round a magnet, and that, conversely, the magnet would circle the wire—the basis of the electric motor; his subsequent discovery of electromagnetic induction, the generation of current in a circuit by a changing magnetic field, supplied the basis of the generator. The first practical generators (called “dynamo-electric” machines, or “dynamos”) were employed to power arc lamps, in which an electrical spark sustained between two poles produced a dazzling, sun-white sheet of light. Arc lamps were employed in lighthouses and theaters, and as novelties in department stores, but their intense glare made them unwelcome on city streets (the elegantly dressed boulevardiers of Paris and London retreated from them in revulsion, preferring the more flattering hue of gaslight) and hopeless for lighting private residences. Fortunes clearly were to be made if a longer-lasting and less imperious electric light, suitable for the home, could be created. The inventor who proved equal to the challenge was Thomas Edison.
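In modern notation, which Faraday himself never used, the two principles can be stated compactly; this is a gloss, not anything Faraday wrote:

\[
\mathcal{E} = -\frac{d\Phi_B}{dt} \qquad \text{(generator: a changing magnetic flux through a circuit induces a voltage)}
\]
\[
\mathbf{F} = I\,\boldsymbol{\ell} \times \mathbf{B} \qquad \text{(motor: a current-carrying wire in a magnetic field feels a force)}
\]

Run one way, mechanical motion changes the flux and drives a current; run the other, a supplied current produces force and motion. That reciprocity is why essentially the same machine can serve as either dynamo or motor.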
Edison’s hardscrabble childhood was the stuff of legend. Born in Milan, Ohio, in 1847, he dropped out of school while still in the first grade, sold newspapers to passengers on the Grand Trunk Railway and, from reading them, learned to write newspaper articles of his own. One day in 1862 he snatched the young son of a telegraph operator from the path of an oncoming boxcar and was rewarded with a job as a telegrapher. During twelve-hour night shifts, Edison read books on technology and science and acquired a lifelong fascination with Faraday. By age twenty-one he had devised an improved “duplex” telegraphy system that could carry two messages simultaneously over a single wire, and by age thirty he had established a laboratory in Menlo Park, New Jersey. From it poured a flood of innovations that contributed to the development of the telephone, the phonograph, the fax machine, and the movies, but in a sense Edison’s greatest invention was the laboratory itself: Staffed by trained technicians and aimed at attaining practical, profitable results, it was America’s first R&D outfit. Edison drove his team hard, constantly reminding them that “genius is one percent inspiration and ninety-nine percent perspiration.” He worked for days and nights on end, napping on a stack of newspapers in a laboratory closet, unwavering in his conviction that any properly framed problem would eventually yield to persistent experimentation. When a financial backer complained that money was being wasted on useless experiments, Edison replied, “No experiments are useless.”
Among his fans was Henry Ford, who wrangled a meeting with Edison in 1896 and sketched his plan for a gasoline-powered automobile. Edison saw its potential, telling Ford, “You have the thing. Keep at it!” The two later became friends, Edison joining Ford (who viewed the automobile, like the railroads, as a way for working-class Americans to widen their horizons) on summer road trips around the West with Harvey Firestone and the naturalist John Burroughs. One day in northern Michigan these “Four Vagabonds,” as they styled themselves, encountered a farmer who was trying to repair an automobile. Ford helped him fix it but refused payment, saying, “I have all the money I want.” “Hell,” replied the farmer, “You can’t have that much and drive a Ford.” Edison died in 1931, after faltering at a Ford-hosted dinner in his honor at which Albert Einstein delivered a tribute by telephone. He held 1,093 U.S. patents, a total that stood as the record for decades.
Edison’s interest in electric lighting was kindled by a visit, on September 8, 1876, to a brass foundry in Ansonia, Connecticut, where another inventor, Moses G. Farmer, and his partner, William Wallace, were demonstrating a steam-powered dynamo hooked up to eight glaring white arc lamps. “Edison was enraptured,” wrote a reporter for the New York Sun. “He ran from [the dynamo] to the lights, and from the lights back to the instrument. He sprawled over a table with the simplicity of a child, and made all kinds of calculations.” At the end of the day, Edison bluntly told the hosts, “I believe I can beat you making the electric light.” Back at Menlo Park, he recalled, the “secret” to developing practical electrical lighting “suddenly came to me…. I am already positive it will be cheaper than gas.” The idea was to find a filament material that glowed steadily when electricity passed through it, and would keep glowing for many hours without burning out. That was the inspiration; now for the perspiration. Edison experimented with thousands of substances before finding a promising candidate, the leaf of a Japanese palm, and finally settling on a filament made of carbonized cardboard. “I speak without exaggeration,” he told a visitor, “when I say that I have constructed three thousand different theories in connection with the electric light, each one of them reasonable and apparently likely to be true. Yet in two cases only did my experiment prove the truth of my theory.” As with most people, when Edison said, “I speak without exaggeration,” he was exaggerating; his thousands of “theories” about electric-light filaments were mostly just trials of different materials. But his general point was correct—that useful, profitable inventions could be made through persistent trial and error, without waiting for theoretical science to show the way. As Joseph Priestley had put it a century earlier, “In this business…more is owing to…the observation of events, arising from unknown causes, than to any…preconceived theory.”
Now that he had a proper filament, all that remained for Edison was “to make the dynamos, the lamps, the conductors, and attend to a thousand details the world never hears of.” This he managed to do, and the Edison company was soon providing electric lighting to hundreds of firms and homes in lower Manhattan and beyond. Living resplendently in an electrically illuminated mansion on Fifth Avenue, Edison regaled journalists with hyperbolic tales about how, just ten years earlier, he’d “had to walk the streets of New York all night because I hadn’t the price of a bed.”
Electric lighting turned night, if not into day, at least into a new and less threatening species of night. “Sundown no longer emptied the promenade,” wrote Robert Louis Stevenson, “and the day was lengthened out to every man’s fancy. The city folk had stars of their own; biddable, domesticated stars.” The promise of the Enlightenment had come true in the form of light itself. Hulking dynamos became the centerpieces of science and technology expositions—dwarfing visitors like Henry Adams, who came away from the Paris Exposition of 1900 calling the dynamo “a symbol of infinity” and “a moral force.”
Edison promoted his direct current (DC) system as safer than the alternating current (AC) being marketed by his competitor George Westinghouse; he even made sure that the nation’s first execution by electric chair was carried out using AC, in order to impress upon the public its largely illusory dangers. AC eventually won out anyway, mainly because its voltage could be stepped up by transformers for efficient long-distance transmission, then stepped back down for safe local use.
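A rough sketch of the arithmetic behind that advantage, a standard textbook argument rather than anything from the period sources: a line delivering power \(P\) at voltage \(V\) carries current \(I = P/V\) and dissipates

\[
P_{\text{loss}} = I^{2}R = \left(\frac{P}{V}\right)^{2} R
\]

in its resistance \(R\), so stepping the voltage up tenfold cuts resistive losses a hundredfold. Transformers, which perform that stepping, work only with alternating current, hence AC’s edge over the long haul.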
The inventor of the AC motor was Nikola Tesla, a man sufficiently eccentric to remain an object of fascination to this day, although he lacked Edison’s genius for self-promotion and lived the last two decades of his life in obscurity, with only a pet pigeon for company. An ethnic Serb born in what is now Croatia, Tesla first encountered a dynamo at the polytechnic institute in Graz, Austria, where he was studying electrical engineering. The professor demonstrated that the dynamo could function as a motor when run in reverse, although it sparked badly. Tesla blurted out that he could eliminate the sparking and convert the dynamo into a proper electric motor. The teacher rebuked him in front of the class, declaring that “Mr. Tesla may accomplish great things, but he certainly will never do this.”
A sneering teacher can be as motivating as a smiling one, and Tesla obsessed over the problem for years. (Neurotic and compulsive, he was good at obsessing. Among other quirks he counted everything—every cup of coffee he drank, every step he took, every lap he swam at dawn in a public bathing house on the Seine—and claimed to be so sensitive that the sound of “a fly alighting on a table in the room would cause a dull thud in my ear.”) As Tesla told the story, he was watching the sun set from a park bench in Budapest one February afternoon and reciting lines from Goethe’s Faust to himself—lines about the rotation of the earth—when “the idea came like a flash of lightning and in an instant the truth was revealed.” He told his first biographer that he immediately drew up the design, in the dirt with a stick, resolving, “I must return to work and build the motor so I can give it to the world. No more will men be slaves to hard tasks. My motor will set them free, it will do the work of the world.” He moved to New York, patented his AC induction motor, and went to work first for Edison and then for Westinghouse. In later years Tesla fell victim to neuroses, hallucinations, and accidents—a fire in 1895 destroying his Houston Street laboratory and his research papers—but as he had foreseen, the AC motor lifted the burden of heavy toil from the shoulders of millions.
The scientists, not entirely left behind in this flurry of advancing technology, managed to discern many of the principles of physics operating in the new machines. Sadi Carnot in the 1820s and Kelvin in the 1850s discovered the principles governing steam engines, and from them derived the laws of thermodynamics, while James Clerk Maxwell brought mathematical rigor to Faraday’s experimental results, producing an elegant portrait of electromagnetic fields. Yet none was able to accurately predict the behavior of the whirling electromagnetic fields inside the dynamos. It took an exceptionally creative scientist to recognize the importance of this seeming technicality; one showed up in the person of Albert Einstein.
Instinctively iconoclastic, Einstein regarded authority based on anything other than reason and empirical proof as self-evidently illegitimate. When his parents took him to see a military parade as a little boy, he promptly burst into tears. Left behind in a Munich high school when his parents moved to Italy, he escaped by convincing a doctor that school discipline was literally driving him crazy. He infuriated his teachers by sitting in the back row of class, a smile playing across his lips, and was so unpopular with the faculty at the Federal Polytechnic Institute in Zurich that upon graduation in 1900 he was unable to find a job in science and was reduced to working as a patent-office clerk in Bern. The work suited him, however, and although the world knew Einstein as a great theorist, able to probe the depths of the universe armed with nothing but paper and pencil, he had a lifelong involvement in engineering and technology as well. He held a dozen patents, one of them an exotic refrigerator that proved impractical for household use but was later used by NASA, and his technological facility was displayed in the special theory of relativity, his revolutionary account of light.
Einstein from an early age was intrigued by electromagnetism, the form of energy that manifests itself as light, electricity, and magnetism. One of his earliest memories was of being shown a compass, while sick in bed at age four or five, and wondering how its needle could respond to the earth’s magnetic field. It seemed to him “a miracle” that this invisible field could command the behavior of the tangible needle. “I can still remember—or at least believe I can remember—that this experience made a deep and lasting impression upon me,” he recalled in his Autobiographical Notes. “Something deeply hidden had to be behind things.” A few years later Einstein’s father, Hermann, and his uncle, Jakob, went into the dynamo business, building generators in a backyard shed. The business failed but Einstein kept thinking about dynamos. At the Swiss Polytechnic Institute he took classes from the physics dean, Heinrich Friedrich Weber, whose chair was funded by the dynamo manufacturer Werner von Siemens and who regularly subjected himself to the teeth-rattling jolts of current that his laboratory dynamos generated. Einstein knew that a mystery resided inside each dynamo, and within every other electromagnetic field that whirled or eddied or sped through space: If you tried to apply Maxwell’s elegant equations to them, you got paradoxical results. Herr Weber’s response to this difficulty was to ignore it; his lectures were devoid of any reference to Maxwell’s equations. Einstein, however, never let it go. He worked on electrodynamic field equations during spare moments at the patent office, discussed them with friends, and had the answer by 1905—his annus mirabilis, the year in which, at age twenty-six, he published not only the paper that inaugurated relativity but also one that helped create quantum physics, and three others revolutionizing atomic theory and statistical mechanics.
Einstein’s special relativity reconciled Maxwell’s equations with the high-speed world that he could imagine by projecting the growing speed of locomotives and spinning dynamos to spaceships approaching the velocity of light—a reconciliation purchased by portraying space and time as elastic. In relativity, the velocity of light becomes the universal yardstick—all observers measure it to be the same, regardless of the speed they themselves are traveling—while the observers’ mass, and the rate at which time passes for them, are flexible. A space probe leaving the earth increases in mass—and the passage of time on board slows down—relative to an identical spaceship that stays home. The effect is small for real space probes but becomes dramatic when velocities approach that of light, giving rise to the so-called “twins paradox” (which has puzzled science students for years although it isn’t really a paradox at all). Let’s call the twins Stella and Terry. Stella flies to a distant star at 99 percent of the velocity of light, makes a brief stopover, and then returns at the same speed. From Terry’s point of view her trip took fourteen years, but for Stella only two years have passed. This is in no sense illusory: If Terry and Stella were both thirty years old when Stella departed, upon her return he would be forty-four years old and she only thirty-two. People took to calling it a paradox because they imagined that the situation would be reversed if you took Stella’s point of view instead of Terry’s—but that is not the case, because their experiences are not the same. Stella’s spaceship accelerates to reach its enormous velocity, turns around at the star, and decelerates upon its return; it is she, not Terry, who changes inertial frames along the way. Terry, having experienced no such changes in his inertial frame, ages at the normal rate while Stella retains her youthfulness.
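The numbers can be checked against special relativity’s time-dilation formula. Ignoring the brief periods of acceleration, a clock moving at speed \(v\) runs slow by the factor

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad v = 0.99\,c \;\Rightarrow\; \gamma \approx 7.1,
\]

so Terry’s fourteen years shrink to \(14 / 7.1 \approx 2\) years of shipboard time for Stella.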
Einstein’s discovery of a deep link between space and time—so deep that they are now regarded as two aspects of the same entity, the space-time continuum—showed how fundamentally science can upset commonsense conceptions of nature. No sooner had people adapted themselves to the conquest of space and the consistent regulation of time than they were asked to change their ways of thinking about both, the authority of precision clocks and platinum “meter bars” giving way to Salvador Dalí’s limp clocks and the paradoxical architecture of M. C. Escher’s imaginary castles.
This scientific revolution had its own technological roots. Einstein’s original relativity paper, “On the Electrodynamics of Moving Bodies,” is full of terminology that his father and uncle might have bandied about in their dynamo shop. It begins by discussing “the electromagnetic interaction between a magnet and a conductor,” which is how a dynamo works. Viewed in terms of conventional physics, Einstein notes, the interaction looks very different depending on one’s point of view. To resolve this asymmetry, Einstein postulates that “we first have to clarify what is to be understood here by ‘time.’…If, for example, I say that ‘the train arrives here at seven o’clock,’ that means, more or less, ‘the pointing of the small hand of my clock to seven and the arrival of the train are simultaneous events.’” Such simultaneity pertains, however, only to a system whose elements are at rest relative to one another. Set them in relative motion—a step that Einstein portrays, suitably for the technology of his day, in terms of trains, clocks, and “rigid measuring rods”—and the picture changes. Special relativity is a railway stretching to the stars, bringing back news of a stranger but less parochial physics than this world had previously known. It is also the key to nuclear power. It starkly illuminated the chasm that had opened up between the societies that possessed science, technology, and liberty, and the many more that did not—the inequity that had already fostered imperialism and colonialism.
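The asymmetry Einstein had in mind can be put in a line of modern notation; this is a gloss on the paper’s opening, not a quotation from it. In the frame where the magnet is at rest, a charge \(q\) in the moving conductor feels a magnetic force; in the frame where the conductor is at rest, classical theory must instead invoke an induced electric field:

\[
\mathbf{F} = q\,\mathbf{v} \times \mathbf{B} \quad \text{(magnet at rest)} \qquad \text{versus} \qquad \mathbf{F} = q\,\mathbf{E} \quad \text{(conductor at rest)},
\]

two different explanations for one and the same measurable current. Relativity dissolves the asymmetry by treating electric and magnetic fields as two aspects of a single field, seen from different states of motion.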
Traditionally, nations possessing superior power have used it to subjugate their neighbors. As Thucydides commented in the fifth century BC, directly addressing his future readers:
Of the gods we believe, and of men we know, that by a necessary law of their nature they rule whenever they can. And it is not as if we were the first to make this law, or act upon it when made; we found it existing before us, and shall leave it to exist for ever after us; all we do is to make use of it, knowing that you and everybody else, having the same power as we have, would do the same as we do.
The advent of modern technological power produced much the same result. The nations that gained it found themselves freshly empowered to navigate across the seas and to explore and exploit distant lands, aloof and imperious as “The Flying Dutchman” in Edwin Arlington Robinson’s poem:
Lord of himself at last, and all by Science,
He seeks the Vanished Land….
He steers himself away from what is haunted
By the old ghost of what has been before,—
Abandoning, as always, and undaunted,
One fog-walled island more.
The European explorers and colonists promulgated not only their novel doctrines but their affection for novelty itself. As the Stanford University political scientist David B. Abernethy suggests, “The persistent effort of Europeans to undermine and reshape the modes of production, social institutions, cultural patterns, and value systems of indigenous peoples…was the outward projection of tumultuous changes in the way Europeans themselves lived.” Geographically, Europe comprises nations with rather strong and contrasting identities located in close proximity to one another; in this situation, the rise of one nation presents immediate challenges to its many neighbors; hence one nation’s imperialism promoted imperialism among its rivals. Germany built a navy primarily out of fear and envy of the British navy. England fretted that Russia might come to dominate Mesopotamia (modern Iraq) if England didn’t get there first: Thomas Love Peacock of the East India Company, asked by a parliamentary committee, “Is it your opinion that the establishment of steam[boats] along the Euphrates would serve in any respect to counteract Russia?” replied, “I think so, by giving us a vested interest and a right to interfere.” An inherently expansionist dynamic arose from the imperialist enterprise, in that the existence of one colonial outpost promoted the creation of others in order to defend the first. The British statesman and classics scholar Evelyn Baring, the first Earl of Cromer, wryly observed in 1910 that the expansion of the British empire, like that of the ancient Roman empire, was
accompanied by misgivings, and was often taken with a reluctance which was by no means feigned…. [Rome] was impelled onwards by the imperious and irresistible necessity of acquiring defensible frontiers…. Public opinion of the world scoffed two thousand years ago, as it does now, at the alleged necessity; and…each onward move was attributed to an insatiable lust for an extended dominion.
Setbacks in one part of the world spurred military leaders to advance elsewhere. The British Parliament’s India Act of 1784 declared that any expansion of control of India was “repugnant to the wish, the honor and the policy of this nation,” but General Charles Cornwallis, smarting from having been obliged to surrender to the American revolutionaries at Yorktown in 1781, was soon embroiled in the prolonged and unpopular Mysore wars and was otherwise at work extending British control over India, until by 1815 the East India Company and the British governor-general ruled forty million Indians. Just eight European states—Britain, Portugal, Spain, France, the Netherlands, Belgium, Germany, and Italy, comprising only 1.6 percent of the earth’s territory—controlled 35 percent of the world’s land area by 1800, 67 percent by 1878, and over 84 percent by 1914; in 1909 the British Empire alone dominated 444 million people across a quarter of the world’s land surface. The costs of colonialism seemed remote and abstract but its benefits palpable to those in London and Paris who sipped Indian tea sweetened with sugar from Caribbean slave plantations—which was why British abolitionists would use only honey in their tea. Eventually the costs became unsustainable, but meanwhile European power fostered delusions of a superior “race” bringing civilization to the inhabitants of supposedly backward regions. As the British historian Norman Davies observes:
Raw power appeared to be made a virtue in itself, whether in popular views of evolution which preached “the survival of the fittest,” in the philosophy of historical materialism, which preached the triumph of the strongest class, in the cult of the Superman, or in the theory and practice of imperialism. Europeans, in fact, were made to feel not only powerful but superior.
The main motive for imperialism was that there was money in it. Substantial profits accrued to those who could import nutmeg from Pulo Run in the Banda Islands of the Indonesian archipelago, cinnamon from Ceylon, pepper from Sumatra and India’s Malabar coast, or cloves from the volcanic islands of the Moluccan triangle. This the Europeans could do efficiently, thanks to the technical advantages they enjoyed in ship design, navigational instruments, maps, and guns. It soon emerged, however, that maintaining free trade meant protecting the trade routes against piracy, then disciplining the port cities that succored the pirates, then dealing with real or imagined threats to the ports coming from farther inland. And so an enterprise that had begun in freedom devolved into freedom’s opposite.
The British and Dutch colonization of the Indies (a vague term covering India and nearby islands) was illustrative of the slippery slide to colonialism. It began with the establishment of enterprises in the spice and pepper trade. Initially independent—eight separate Dutch firms were operating in the Indies by 1599—these endeavors turned into government-sanctioned monopolies as investors, stung by the vagaries of piracy and politics, lobbied for protection. In this manner the English East India Company was established by royal charter in 1600 and the Dutch East India Company set up as a monopoly in 1602 (returning a monopolistic 18 percent a year to its investors over its two-hundred-year history). Government participation put national pride on the line, turning trade into tyranny. “How,” asked a modern historian, “did a people who thought themselves free end up subjugating so much of the world?” That was how—at least at first.
Many eighteenth- and nineteenth-century imperialists, men like John Barrow and Thomas Stamford Raffles, were scientifically inclined explorers out to learn about the natural world and bring the benefits of education, political liberty, and health care to what they took to be the benighted inhabitants of uncivilized lands. Barrow served forty-one years as second secretary to the Admiralty. Fascinated by astronomy, he became a navigator and geographer (helping to found the Royal Geographical Society in 1830), and an energetic explorer who studied whales and icebergs off Greenland, married a Boer woman in South Africa, and learned Cantonese in China. He detested slavery, championed personal freedom, and always insisted—albeit while lobbying for more British naval bases, each new one intended to protect the last—that England’s goal was not to seize land or rule people but to promote trade. Raffles was born at sea, off the coast of Jamaica. He clerked for the East India Company from age fourteen, teaching himself geography, botany, zoology, and a variety of Asian languages that served him well in Malaysia and Indonesia. He founded the free port of Singapore; established a college to teach science, law, and democracy to the sons of indigenous rulers; and conducted botanical and zoological studies in the course of which he identified thirty-four new species of birds and thirteen species of mammals, chiefly in Sumatra. Back home he wrote a two-volume History of Java and launched the London Zoological Society. Virtually immune to racial and ethnic prejudice, Raffles denounced slavery, expressed disgust at the “cold-blooded, illiberal” practices of competing imperial powers, and always maintained that his object was “not territory, but trade” in order to foster “a better state of society.” Investigating a claim by his anthropologist friend William Marsden that the Battas of Sumatra were cannibals, Raffles found that they not only feasted on their prisoners of war but ate them alive, slicing off strips of their flesh and dipping them in a savory lime-and-chili sauce. Such conduct failed to disrupt his equanimity. “The Battas are not a bad people,” Raffles wrote the Duchess of Somerset. “They write and read, and think as much or more than those brought up in our National Schools.”
Yet the inherent illegitimacy of empire soon spawned monsters. Militarism ascended, as justifiable rebellions came to be lumped in with the earlier predations of pirates and warlords. Mediocre martinets, wielding an authority in the colonies that they could not have had at home, fueled local rebellions the suppression of which led to calls for even more military power. Racism reared its head, as an imagined justification for the anomaly of a white minority ruling multitudes of black, yellow, and brown peoples. American imperialists invoked manifest destiny—the notion that, for some unknown reason, power was historically destined to flow westward from the Middle East to Greece, Rome, Europe, and on across the Americas. “Westward the course of empire takes its way,” declared George Berkeley, coining an expansionist mantra.
It all led back to profits. “The continent lay before them, like an uncovered ore-bed,” wrote Henry Adams, contemplating American attitudes toward westward expansion circa 1820.
They could see, and they could even calculate with reasonable accuracy, the wealth it could be made to yield. With almost the certainty of a mathematical formula, knowing the rate of increase of population and of wealth, they could read in advance their economical history for at least a hundred years.
The subtle interrelationship between humans and technology, in which it is often difficult to determine whether people are using tools or the tools using them, was highlighted on the imperialist stage by the advent of the steam-powered gunboat. Imperialism sans gunboats might have remained largely a matter of subjugating the ports and coastlines where seagoing frigates could project military power. Imperialism plus gunboats extended its tentacles deep upriver, tipping the balance toward the very colonialism that many early imperialists had eschewed. Here too the road was paved with good intentions. The Scottish steamboat builder Macgregor Laird, celebrating “the immortal Watt” (“By his invention every river is laid open to us, time and distance are shortened”), predicted that
if his spirit is allowed to witness the success of his invention here on earth, I can conceive no application of it that would receive his approbation more than seeing the mighty streams of the Mississippi and the Amazon, the Niger and the Nile, the Indus and the Ganges, stemmed by hundreds of steam-vessels, carrying the glad tidings of “peace and good will toward men” into the dark places of the earth which are now filled with cruelty.
The phrase “gunboat diplomacy” entered the language as steam-powered gunboats subdued inland China during the Opium War, decimated the swift-sailing praus of the Burmese, and probed the heart of Africa. Quinine, the breech-loading Prussian needle-gun, the Maxim gun, and a host of other innovations contributed to the carnage. Ten thousand Dervish cavalrymen were mowed down by Maxim guns at Omdurman in the Sudan in 1898 by a British force that incurred only forty-one casualties, in what a young Winston Churchill, covering the battle as a journalist, called “the most signal triumph ever gained by the arms of science over barbarians.” Thousands of Zulu warriors armed with spears and leather shields were Gatling-gunned in the Zulu War of 1879 (a war in which, at Isandlwana, the Zulus nonetheless won a famous victory). Millions of Congolese died under the sadistic rule of the notorious King Leopold II of Belgium, whose African campaigns incited Joseph Conrad, in his Heart of Darkness, to muse about “the fascination of the abomination.” Leopold’s army of African conscripts, the Force Publique, ordered to return with a severed right hand for every bullet they fired, filled the quota by amputating the hands of the living as well as the dead. The bloodstained list is long.
Yet this final spasm of colonialism was anachronistic, the work of modern Europeans who, unlike the Genghis Khans and Tamerlanes who preceded them, regarded themselves as being in some sense liberal. To comport their liberality with the stark fact that they ruled unfree multitudes, they were obliged to imagine that by exploiting non-Europeans they were somehow exporting the very gifts—of democracy, science, and economic freedom—that their conduct defied. For a long time they could at least take comfort in feeling that all Europe was united in the colonial enterprise—that, as King Leopold put it, each power was but imitating its neighbors. But the dream of European unity was shattered by the two world wars, in the aftermath of which the exhausted and all but bankrupt colonial powers found that they could no longer afford their colonies—which in the long run had proven unprofitable anyway, unfree trade being no bargain. Britain by 1921 was paying more to maintain just the state of Iraq than to meet all its national health care needs, while France was drained financially and emotionally by rebellions in Vietnam in the early 1950s and in Algeria from 1954 to 1962. At the same time their financial resources were diminishing; Britain’s share of world manufacturing exports fell from 25 percent in 1950 to under 10 percent in 1973. The empires folded with spectacular speed. After being occupied for over a century on average, and in some instances over two centuries, more than eighty colonies gained independence in four decades, from 1940 to 1980—among them Syria and Lebanon in 1945–46; India, the largest nation ever ruled by a foreign power, the following year; Burma and Ceylon in 1948; Indonesia in 1949; Iraq in 1958; then, in 1960, a slew of sub-Saharan African states, including Cameroon, Chad, Ivory Coast, Mali, Mauritania, Nigeria, Senegal, and Togo. By the late 1980s no substantial colonies remained anywhere in the world. The seemingly utopian visions of Joseph Priestley, who in 1791 had predicted that one day “the very idea of distant possessions will be even ridiculed [and] no part of America, Africa, or Asia, will be held in subjection to any part of Europe,” had been realized.
The end of empire may have seemed little more than a footnote for the Europeans, struggling to rebuild capitals that were soon targeted on Soviet nuclear strike lists, but it was a watershed for the inhabitants of the former colonies. Whole chapters had been torn from what would otherwise have been the ongoing narratives of their own histories. Each new nation, writes the historian John Darwin, “needed a history that placed its own progress at the heart of the story.”
Each had its own heroes whose national struggle had been waged in the face of Europe’s cultural arrogance. New “nationalist” histories portrayed European rule (or influence) as unjust and repressive. Far from bringing progress to stationary parts of the world, European interference had blocked the social and cultural advances that were already in train…. “Decolonized history” encouraged many different social, ethnic, religious, or cultural groups to emerge from the shadows. The old colonial narratives in which Europeans stood out against the dark local backcloth now seemed like cartoons; crude and incomplete sketches of a crowded reality.
The emergence of these new narratives—and, importantly, their study by social scientists—was largely responsible for the rise of the notion that there are ways of knowing preferable to that of a “Western” science now declared to have been discredited by its association with Western imperialism and colonialism. Such critiques spawned many of the antiscientific and illiberal attitudes that have bedeviled academic discourse over the past half century. Among their targets was the idea of progress, now depicted as a parochial illusion, as blinkered as racism. Was there any progress, really?