3    Energies: or growth of primary and secondary converters

The growth of any organism or of any artifact is, in fundamental physical terms, a transformation of mass made possible by conversion of energy. Solar radiation is, of course, the primary energizer of life via photosynthesis, with subsequent autotrophic and heterotrophic metabolism producing an enormous variety of organisms. Inevitably, our hominin ancestors, relying solely on somatic energies as hunters and gatherers, were subject to the same energetic limits. The boundaries of their control were expanded by the mastering of fire: burning of phytomass (whose supply was circumscribed by photosynthetic limits) added the first extrasomatic energy conversion and opened the way to better eating, better habitation, and better defense against animals. Cooking was a particularly important advance because it greatly enlarged both the range and the quality of consumed food (Wrangham 2009). Prehistoric human populations added another extrasomatic conversion when they began to use, about 10,000 years ago, domesticated animals for transport and later for field work.

The subsequent history of civilization can be seen as a quest for ever higher reliance on extrasomatic energies (Smil 2017a). The process began with the combustion of phytomass (chemical energy in wood, later also converted to charcoal, and in crop residues) to produce heat (thermal energy) and with small-scale conversions of water and wind flows into kinetic energy of mills and sails. After centuries of slow advances, these conversions became more common, more efficient, and available in more concentrated forms (larger unit capacities), but only the combustion of fossil fuels opened the way to modern high-energy societies. These fuels (coals, crude oils, and natural gases) amount to an enormous store of transformed biomass produced by photosynthesis over the span of hundreds of millions of years, and their extraction and conversion have energized the conjoined progression of urbanization and industrialization. These advances brought unprecedented levels of food supply, housing comfort, material affluence, and personal mobility, and extended expected longevity for an increasing share of the global population.

Appraising the long-term growth of energy converters (both in terms of their performance and their efficiency) is thus the necessary precursor for tracing the growth of artifacts—anthropogenic objects whose variety ranges from the simplest tools (lever, pulley) to elaborate structures (cathedrals, skyscrapers) and to astonishingly complex electronic devices—that will be examined in chapter 4. I use the term energy converter in its broadest sense, that is, as any artifact capable of transforming one form of energy into another. These converters fall into two basic categories.

The primary ones convert renewable energy flows and fossil fuels into a range of useful energies, most often into kinetic (mechanical) energy, heat (thermal energy), light (electromagnetic energy), or, increasingly, into electricity. This large category of primary converters includes the following machines and assemblies: traditional waterwheels and windmills and their modern transformations, water and wind turbines; steam engines and steam turbines; internal combustion engines (gasoline- and diesel-fueled, and gas turbines, either stationary or mobile); nuclear reactors; and photovoltaic cells. Electric lights and electric motors are now by far the most abundant secondary converters: they use electricity to produce light and kinetic energy for an enormous variety of stationary machines in industrial production, agriculture, services, and households, as well as for land-based transportation.

Even ancient civilizations relied on a variety of energy converters. During antiquity, the most common designs for heating in cold climates ranged from simple hearths (still in common use in Japanese rural households during the 19th century) to ingenious Roman hypocausts and their Asian variants, including Chinese kang and Korean ondol. Mills powered by animate labor (slaves, donkeys, oxen, horses) and also by water (using wheels to convert its energy to rotary motion) were used to grind grains and press oil seeds. Oil lamps and wax and tallow candles provided (usually inadequate) illumination. And oars and sails were the only two ways to propel premodern ships.

By the end of the medieval era, most of these converters had seen either substantial growth in size or capacity or major improvements in production quality and operational reliability. The most prominent new converters that became common during the late Middle Ages were taller windmills (used for pumping water and for a large variety of crop processing and industrial tasks), blast furnaces (used to smelt iron ores with charcoal and limestone to produce cast iron), and gunpowder-propelled projectiles (with the chemical energy of a mixture of potassium nitrate, sulfur, and charcoal instantly converted into explosive kinetic energy used to kill combatants or to destroy structures).

Premodern civilizations also developed a range of more sophisticated energy converters relying on gravity or natural kinetic energies. Falling water powered both simple and highly elaborate clepsydras and Chinese astronomical towers—but the pendulum clock dates from the early modern era: it was invented by Christiaan Huygens in 1656. And for wonderment and entertainment, rich European, Middle Eastern and East Asian owners displayed humanoid and animal automata that included musicians, birds, monkeys and tigers, and also angels that played, sang, and turned to face the sun, and were powered by water, wind, compressed air, and wound springs (Chapuis and Gélis 1928).

The construction and deployment of all traditional inanimate energy converters intensified during the early modern period (1500–1800). Waterwheels and windmills became more common and their typical capacities and conversion efficiencies were increasing. Smelting of cast iron in charcoal-fueled blast furnaces reached new highs. Sail ships broke previous records in displacement and maneuverability. Armies relied on more powerful guns, and manufacturing of assorted automata and other mechanical curiosities reached new levels of complexity. And then, at the beginning of the 18th century, came, slowly, the epochal departure in human energy use with the first commercial installations of steam engines.

The earliest versions of the first inanimate prime mover energized by the combustion of coal—fossil fuel created by photosynthetic conversion of solar radiation 10⁶–10⁸ years ago—were extremely wasteful and delivered only reciprocating motion. As a result, they were used for decades only for water pumping in coal mines, but once the efficiencies improved and once new designs could deliver rotary motion, the engines rapidly conquered many old industrial and transportation markets and created new industries and new travel options (Dickinson 1939; Jones 1973). A greater variety of new energy converters was invented and commercialized during the 19th century than at any other time in history: in chronological sequence, they include water turbines (starting in the 1830s), steam turbines, internal combustion engines (Otto cycle), and electric motors (all three during the 1880s), and diesel engines (starting in the 1890s).

The 20th century added gas turbines (first commercial applications during the 1930s), nuclear reactors (first installed in submarines in the early 1950s, and for electricity generation since the late 1950s), photovoltaic cells (first in satellites during the late 1950s), and wind turbines (modern designs starting during the 1980s). I will follow all of these advances in a thematic, rather than chronological order, dealing first with harnessing wind and water (traditional mills and modern turbines), then with steam-powered converters (engines and turbines), internal combustion engines, electric light and motors, and, finally, with nuclear reactors and photovoltaic (PV) cells.

Harnessing Water and Wind

We cannot provide any accurate timing of the earliest developments of the two traditional inanimate prime movers, waterwheels (whose origins are in Mediterranean antiquity) and windmills (first used in the early Middle Ages). Similarly, their early growth can be described only in simple qualitative terms, and we can trace their subsequent adoption and the variety of their uses but have limited information about their actual performance. We get on firmer quantitative ground only with the machines deployed during the latter half of the 18th century, and we can trace accurately the shift from waterwheels to water turbines and the growth of these hydraulic machines.

In contrast to the uninterrupted evolution of water-powered prime movers, there was no gradual shift from improved versions of traditional windmills to modern wind-powered machines. Steam-powered electricity generation ended the reliance on windmills in the early 20th century, but it was not until the 1980s that the first modern wind turbines were installed in a commercial wind farm in California. The subsequent development of these machines, aided both by subsidies and by the quest for the decarbonization of modern electricity generation, brought impressive design and performance advances as wind turbines have become a common (even dominant) choice for new generation capacity.

Waterwheels

The origins of waterwheels remain obscure but there is no doubt that the earliest use of water for grain milling was by horizontal wheels rotating around vertical axes attached directly to millstones. Their power was limited to a few kW, and larger vertical wheels (Roman hydraletae), with millstones driven by right-angle gears, became common in the Mediterranean world at the beginning of the common era (Moritz 1958; White 1978; Walton 2006; Denny 2007). Three types of vertical wheels were developed to best match the existing water flow or to take advantage of an artificially enhanced water supply delivered by stream diversions, canals, or troughs (Reynolds 2002). Undershot wheels (rotating counterclockwise) were best suited for faster-flowing streams, and the power of small machines was often less than 100 W, equivalent to the output of a steadily working strong man. Breast wheels (also rotating counterclockwise) were powered by both flowing and falling water, while gravity drove overshot wheels with water often led by troughs. Overshot wheels could deliver a few kW of useful power, with the best 19th-century designs going above 10 kW.

Waterwheels brought a radical change to grain milling. Even a small mill employing fewer than 10 workers would produce enough flour daily to feed more than 3,000 people, while manual grinding with quern stones would have required the labor of more than 200 people for the same output. Waterwheel use had expanded far beyond grain milling already during the Roman era. During the Middle Ages, common tasks relying on water power ranged from sawing wood and stone to crushing ores and actuating bellows for blast furnaces, and during the early modern era English waterwheels were often used to pump water and lift coal from underground mines (Woodall 1982; Clavering 1995).

Premodern, often crudely built, wooden wheels were not very efficient compared to modern metal machines, but they delivered fairly steady power of unprecedented magnitude and hence opened the way to incipient industrialization and large-scale production. Efficiencies of early modern wooden undershot wheels reached 35–45%, well below the performance of overshots at 52–76% (Smeaton 1759). In contrast, later all-metal designs could deliver up to 76% for undershots and as much as 85% for overshots (Müller 1939; Muller and Kauppert 2004). But even the 18th-century wheels were more efficient than contemporary steam engines, and the two very different machines developed in tandem, with wheels being the key prime movers of several important pre-1850 industries, above all textile weaving.

In 1849 the total capacity of US waterwheels was nearly 500 MW and that of steam engines reached about 920 MW (Daugherty 1927), and Schurr and Netschert (1960) calculated that American waterwheels kept supplying more useful power than all steam engines until the late 1860s. The Tenth Census showed that in 1880, just before the introduction of commercial electricity generation, the US had 55,404 waterwheels with a total installed power of 914 MW (averaging about 16.5 kW per wheel), which accounted for 36% of all power used in the country’s manufacturing, with steam supplying the rest (Swain 1885). Grain milling and wood sawing were the two leading applications and Blackstone River in Massachusetts had the highest concentration of wheels in the country, prorating to about 125 kW/ha of its watershed.

There is not enough information to trace the growth of average or typical waterwheel capacities, but enough is known to confirm many centuries of stagnation or very low growth followed by a steep ascent to new records between 1750 and 1850. The largest installations combined the power of many wheels. In 1684 the Machine de Marly, designed to pump water for the gardens of Versailles with 14 wheels on the River Seine, provided about 52 kW of useful output, but that averaged less than 4 kW per wheel (Brandstetter 2005; figure 3.1). In 1840 the largest British installation near Glasgow had a capacity of 1.5 MW in 30 wheels (an average of 50 kW per wheel) fed from a reservoir (Woodall 1982). And Lady Isabella, the world's largest waterwheel, built in 1854 on the Isle of Man to pump water from the Laxey lead and zinc mines, had a theoretical peak of 427 kW and actual sustained useful power of 200 kW (Reynolds 1970).

Figure 3.1

Machine de Marly, the largest waterwheel installation of the early modern era, was completed in 1684 to pump water from the River Seine to the gardens of Versailles. Detail from a 1723 painting by Pierre-Denis Martin also shows the aqueduct in the background. The painting’s reproduction is available at wikimedia.

Installations of new machines dropped off rapidly after 1850 as more efficient water turbines and more flexible heat engines took over the tasks that had been done for centuries by waterwheels. Capacity growth of a typical installation across the span of some two millennia was thus at least 20-fold and perhaps as much as 50-fold. Archaeological evidence points to common unit sizes of just 1–2 kW during the late Roman era; at the beginning of the 18th century most European waterwheels had capacities of 3–5 kW and only a few of them were rated above 7 kW; and by 1850 there were many waterwheels rated at 20–50 kW (Smil 2017a). This means that after a long period of stagnation (almost a millennium and a half) or barely noticeable advances, typical capacities grew by an order of magnitude in about a century, doubling roughly every 30 years. Their further development was rather rapidly truncated by the adoption of new converters, and the highly asymmetrical S-curve created by pre-1850 development first gradually and then rapidly collapsed, with only a small number of waterwheels still working by 1960.
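
A back-of-the-envelope check of the implied doubling time (my own arithmetic, assuming steady exponential growth, not a calculation from the cited sources) is sketched below.

```python
# Doubling time implied by a tenfold (order-of-magnitude) capacity gain
# spread over roughly a century of steady exponential growth.
import math

gain = 10                              # order-of-magnitude increase
span = 100                             # years
rate = math.log(gain) / span           # continuous annual growth rate
doubling = math.log(2) / rate          # corresponding doubling time
print(f"rate ≈ {rate:.1%}/year, doubling time ≈ {doubling:.0f} years")
# rate ≈ 2.3%/year, doubling time ≈ 30 years
```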

Water Turbines

Water turbines were conceptual extensions of horizontal waterwheels operating under high heads. Their history began with reaction turbine designs by Benoît Fourneyron. In 1832 his first machine, with a 2.4 m rotor operating with radial outward flow under a head of just 1.3 m, had a rated capacity of 38 kW, and in 1837 its improved version, installed at the Saint Blaisien spinning mill, delivered 45 kW under heads of more than 100 m (Smith 1980). A year later, a better design was patented in the US by Samuel B. Howd and, after additional improvements, introduced in Lowell, MA by a British-American engineer, James B. Francis, in 1849; it became widely known as the Francis turbine. This design remains the most commonly deployed large-capacity hydraulic machine suitable for medium to high generating heads (Shortridge 1989). Between 1850 and 1880 many industries located on streams replaced their waterwheels with these turbines. In the US, Massachusetts was the leading state: by 1875, turbines supplied 80% of its stationary power.

With the introduction of Edisonian electricity systems during the 1880s, water turbines began to turn generators. The first small installation (12.5 kW) in Appleton, Wisconsin, began to operate in 1882, the same year Edison's first coal-powered station was completed in Manhattan (Monaco 2011). By the end of the 1880s, the US had about 200 small hydro stations, and a second turbine design was ready for deployment: an impulse machine, suitable for high water heads and driven by water jets impacting the turbine's peripheral buckets, developed by Lester A. Pelton. The world's largest hydro station, built between 1891 and 1895 at Niagara Falls, had ten 5,000 hp (3.73 MW) turbines.

In 1912 and 1913, Viktor Kaplan filed patents for his axial flow turbine, whose adjustable propellers were best suited for low water heads. The first small Kaplan turbines were built as early as 1918, and by 1931 four 35 MW units began to operate at the German Ryburg-Schwörstadt station on the Rhine. Although more than 500 hydro stations were built before WWI, most of them had limited capacities, and the era of large projects began in the 1920s in the Soviet Union and in the 1930s in the US, in both countries led by a state policy of electrification. But the largest project of the Soviet program of electrification was made possible only by US expertise and machinery: the Dnieper station, completed in 1932, had Francis turbines rated at 63.38 MW, built at Newport News, and GE generators (Nesteruk 1963).

In the US, the principal results of government-led hydro development were the stations built by the Tennessee Valley Authority in the east, and the two record-setting projects, Hoover and Grand Coulee dams, in the west (ICOLD 2017; USDI 2017). The Hoover dam on the Colorado River was completed in 1936 and 13 of its 17 turbines were rated at 130 MW. Grand Coulee construction took place between 1933 and 1942, and each of the 18 original turbines in two power houses could deliver 125 MW (USBR 2016). Grand Coulee's total capacity, originally just short of 2 GW, was never surpassed by any American hydro station built after WWII, and it was enlarged by the addition of a third powerhouse (between 1975 and 1980) with three 600 MW and three 700 MW turbines. Just as the Grand Coulee upgrading was completed (1984–1985), the plant lost its global primacy to new stations in South America.

Tucuruí on the Tocantins in Brazil (8.37 GW) was completed in 1984. Guri on the Caroní in Venezuela (10.23 GW) followed in 1986, and the first unit of what was to become the world's largest hydro station, Itaipu on the Paraná, on the border between Brazil and Paraguay (originally 12.6 GW, now 14 GW), was installed in 1984. Itaipu now has twenty 700 MW turbines, Guri has slightly larger turbines than Grand Coulee (730 MW), and Tucuruí's units are rated at only 375 and 350 MW. The record unit size was not surpassed when Sanxia (Three Gorges) became the new world record holder with 22.5 GW in 2008: its turbines are, as at Itaipu, 700 MW units. A new record was reached only in 2013 when China's Xiangjiaba received the world's largest Francis turbines, 800 MW machines designed and made by Alstom's plant in Tianjin (Alstom 2013; Duddu 2013).

The historical trajectory of the largest water turbine capacities forms an obvious S-curve, with most of the gains taking place between the early 1930s and the early 1980s and a plateau forming as the largest unit sizes approach 1,000 MW (figure 3.2). Similarly, when the capacity of generators is expressed in megavolt-amperes (MVA), the logistic trajectory of maximum ratings rises from a few MVA in 1900 to 200 MVA by 1960 and to 855 MVA at China's Xiluodu on the Jinsha River, completed in 2013 (Voith 2017). The forecast of this trajectory points to only marginal gains by 2030, but we already know that the asymptote will be reached because of the construction of China's (and the world's) second-largest hydro project: the Baihetan dam on the Jinsha River (between Sichuan and Yunnan, under construction since 2008) will contain sixteen 1,000 MW (1 GW) units when completed in the early 2020s.

Figure 3.2

Logistic growth of maximum water turbine capacities since 1895; inflection point was in 1963. Data from Smil (2008) and ICOLD (2017).
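
For readers who want to reproduce fits of this kind, a minimal sketch of estimating a four-parameter logistic curve with SciPy is given below; the data points are illustrative placeholders rather than the series plotted in figure 3.2, and the function and variable names are mine.

```python
# Minimal sketch of fitting a four-parameter logistic curve,
# C(t) = c0 + K / (1 + exp(-r*(t - tm))), to a series of record capacities.
# The (year, MW) pairs below are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(t, c0, K, r, tm):
    """Baseline c0, total gain K, growth rate r, inflection year tm."""
    return c0 + K / (1.0 + np.exp(-r * (t - tm)))

years = np.array([1900, 1920, 1940, 1960, 1980, 2000, 2013], dtype=float)
cap_mw = np.array([5, 30, 100, 300, 700, 750, 800], dtype=float)

popt, _ = curve_fit(logistic4, years, cap_mw, p0=[0, 800, 0.1, 1960], maxfev=20000)
c0, K, r, tm = popt
print(f"asymptote ≈ {c0 + K:.0f} MW, inflection year ≈ {tm:.0f}")
```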

There is a very high probability that the unit capacity of 1 GW will remain the peak rating. This is because most of the world’s countries with large hydro generation potential have either already exploited their best sites where they could use such very large units (US, Canada) or have potential sites (in sub-Saharan Africa, Latin America, and monsoonal parts of Asia) that would be best harnessed with the 500–700 MW units that now dominate record-size projects in China, India, Brazil, and Russia. In addition, Baihetan aside, environmental concerns make the construction of another station with capacity exceeding 15 GW rather unlikely.

Windmills and Wind Turbines

The earliest Persian and Byzantine windmills were small and inefficient, but medieval Europe eventually developed larger wooden post mills which had to be turned manually into the wind. Taller and more efficient tower mills became common during the early modern era not only in the Netherlands but in other flat and windy Atlantic regions (Smil 2017a). Improvements that raised their power and efficiency eventually included canted edge boards to reduce drag on blades and, much later, true airfoils (contoured blades), metal gearings, and fantails. Windmills, much as waterwheels, were used for many tasks besides grain milling: oil extraction from seeds and water pumping from wells were common, and draining of low-lying areas was the leading Dutch use (Hill 1984). In contrast to heavy European machines, American windmills of the 19th century were lighter, more affordable, yet fairly efficient machines relying on many narrow blades fastened to wheels mounted at the top of lattice towers (Wilson 1999).

The useful power of medieval windmills was just 2–6 kW, comparable to that of early waterwheels. Dutch and English mills of the 17th and 18th centuries usually delivered no more than 6–10 kW, American mills of the late 19th century were typically rated at no more than 1 kW, while the largest contemporary European machines delivered 8–12 kW, a fraction of the power of the best waterwheels (Rankine 1866; Daugherty 1927). Between the 1890s and 1920s, small windmills were used in a number of countries to produce electricity for isolated dwellings, but cheap coal-fired electricity generation brought their demise, and wind machines were resurrected only during the 1980s, following OPEC's two rounds of oil price increases.

Altamont Pass in northern California's Diablo Range was the site of the first modern large-scale wind farm, built between 1981 and 1986: its average turbine was rated at just 94 kW and the largest one was capable of 330 kW (Smith 1987). This early experiment petered out with the post-1984 fall in world oil prices, and the center of new wind turbine designs shifted to Europe, particularly to Denmark, with Vestas pioneering larger unit designs. Their ratings rose from 55 kW in 1981 to 500 kW a decade later, to 2 MW by the year 2000, and by 2017 the largest capacity of Vestas onshore units reached 4.2 MW (Vestas 2017a). This growth is captured by a logistic growth curve that indicates only limited future gains. In contrast, the largest offshore turbine (first installed in 2014) has a capacity of 8 MW, which can reach 9 MW in specific site conditions (Vestas 2017b). But by 2018 neither of the 10 MW designs—SeaTitan and Sway Turbine, completed in 2010 (AMSC 2012)—had been installed commercially.

Average capacities have been growing more slowly. Nameplate capacities of American onshore machines doubled from an average of 710 kW in 1998–1999 to 1.43 MW in 2004–2005, but subsequent slower growth raised the mean to only 1.79 MW in 2010 and 2 MW in 2015, less than a tripling of the average in 17 years (Wiser and Bollinger 2016). Averages for European onshore machines have been slightly higher, 2.2 MW in 2010 and about 2.5 MW in 2015. Again, the growth trajectories of both US and EU mean ratings follow sigmoidal courses, but ones that appear much closer to saturation than does the trajectory of maximum turbine ratings. Average capacities of a relatively small number of European offshore turbines remained just around 500 kW during the 1990s, reached 3 MW by 2005, 4 MW by 2012, and just over 4 MW in 2015 (EWEA 2016).

Figure 3.3

Comparison of early growth stages of steam (1885–1913) and wind (1986–2014) turbines shows that the recent expansion is not unprecedented: maximum unit capacities of steam turbines were growing faster (Smil 2017b).

During the 28 years between 1986 and 2014, the maximum capacities of wind turbines were thus growing by slightly more than 11% a year, while the Vestas designs increased by about 19% a year between 1981 and 2014, doubling roughly every three years and eight months. These high growth rates have often been pointed out by the advocates of wind power as proof of admirable technical advances opening the way to an accelerated transition from fossil fuels to noncarbon energies. In reality, those gains have not been unprecedented, as other energy converters logged similar, or even higher, gains during the early stages of their development. In the 28 years between 1885 and 1913, the largest capacity of steam turbines rose from 7.5 kW to 20 MW, an average annual exponential growth of 28% (figure 3.3). And while the subsequent growth of steam turbine capacities pushed the maximum size by two orders of magnitude (to 1.75 GW by 2017), wind turbines could never see similar gains, that is, unit capacities on the order of 800 MW.
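
Both rates can be checked from the capacity endpoints given in this chapter; the short calculation below (my own arithmetic, not taken from the cited sources) reproduces them.

```python
# Average annual exponential growth rates implied by the endpoints cited in
# the text: wind turbines, 330 kW (1986) to 8 MW (2014); steam turbines,
# 7.5 kW (1885) to 20 MW (1913).
import math

def annual_rate(p0_kw, p1_kw, years):
    """Continuous average annual growth rate between two capacities."""
    return math.log(p1_kw / p0_kw) / years

wind = annual_rate(330, 8_000, 2014 - 1986)
steam = annual_rate(7.5, 20_000, 1913 - 1885)
print(f"wind turbines:  {wind:.1%}/year")    # ≈ 11.4%/year
print(f"steam turbines: {steam:.1%}/year")   # ≈ 28.2%/year
```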

Even another two consecutive doublings in less than eight years will be impossible: they would result in a 32 MW turbine before 2025. The Upwind project published a predesign of a 20 MW offshore turbine based on similarity scaling in 2011 (Peeringa et al. 2011). The three-bladed machine would have a rotor diameter of 252 m (more than three times the wingspan of the world's largest jetliner, the Airbus A380), a hub diameter of 6 m, and cut-in and cut-out wind speeds of 3 and 25 m/s. But doubling turbine power is not a simple scaling problem: while a turbine's power goes up with the square of its radius, its mass (that is, its cost) goes up with the cube of the radius (Hameed and Vatn 2012). Even so, there are some conceptual designs for 50 MW turbines with 200 m long flexible (and stowable) blades and with towers taller than the Eiffel Tower.
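
The square-cube argument can be made concrete with a few lines of code (a sketch of the scaling relations just cited, not a design calculation).

```python
# Square-cube scaling of wind turbine rotors: power rises with the square of
# the rotor radius, mass (and hence, roughly, cost) with its cube, so the
# mass needed per unit of power grows linearly with radius.
def scale(radius_ratio):
    """Return (power, mass, mass-per-power) multipliers for a radius ratio."""
    power = radius_ratio ** 2
    mass = radius_ratio ** 3
    return power, mass, mass / power

power, mass, specific = scale(2.0)   # doubling the rotor radius
print(f"power x{power:g}, mass x{mass:g}, mass per kW x{specific:g}")
# power x4, mass x8, mass per kW x2
```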

Of course, to argue that such a structure is technically possible because the Eiffel Tower reached 300 m already in 1889 and because giant oil tankers and container ships are nearly 400 m long (Hendriks 2008) is to commit a gross categorical mistake, as none of those structures combines a tall vertical tower with massive moving parts at its top. And it is an enormous challenge to design actual blades that would withstand winds of up to 235 km/h. Consequently, it is certain that the capacity growth of wind turbines will not follow the exponential trajectory established by the 1991–2014 developments: another S-curve is forming as the rates of annual increase have just begun, inexorably, to decline.

And there are other limits in play. Even as maximum capacities have been doubling in less than four years, the best conversion efficiencies of larger wind turbines have remained stagnant at about 35%, and their further gains are fundamentally limited. Unlike large electric motors (whose efficiency exceeds 99%) or the best natural gas-fired furnaces (with efficiencies in excess of 97%), no wind turbine can operate with similarly high efficiency. The maximum share of the wind’s kinetic energy that can be harnessed by a turbine is 16/27 (59%) of the total flow, a limit known for more than 90 years (Betz 1926).
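
The 16/27 share is the maximum of the ideal power coefficient of momentum (actuator-disk) theory; a brief numerical check (my sketch, with the coefficient written in terms of the axial induction factor a) confirms the limit.

```python
# Numerical check of the Betz limit: the ideal power coefficient is
# Cp(a) = 4a(1 - a)^2, where a is the axial induction factor; its maximum,
# reached at a = 1/3, equals 16/27 ≈ 0.593 of the wind's kinetic energy flux.
import numpy as np

a = np.linspace(0.0, 0.5, 500_001)
cp = 4 * a * (1 - a) ** 2
i = int(cp.argmax())
print(f"max Cp ≈ {cp[i]:.4f} at a ≈ {a[i]:.4f} (16/27 = {16/27:.4f})")
# max Cp ≈ 0.5926 at a ≈ 0.3333
```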

Steam: Boilers, Engines, and Turbines

Harnessing steam generated by the combustion of fossil fuel was a revolutionary shift. Steam provided the first source of inanimate kinetic energy that could be produced at will, scaled up at a chosen site, and adapted to a growing variety of stationary and mobile uses. The evolution began with simple, inefficient steam engines that provided mechanical energy for nearly two centuries of industrialization, and it has reached its performance plateaus with large, highly efficient steam turbines whose operation now supplies most of the world’s electricity. Both converters must be supplied by steam generated in boilers, devices in which combustion converts the chemical energy of fuels to the thermal and kinetic energy of hot (and now also highly pressurized) working fluid.

Boilers

The earliest boilers of the 18th century were simple riveted copper shells in which steam was raised at atmospheric pressure. James Watt was reluctant to work with anything but steam at atmospheric pressure (101.3 kPa), and hence his engines had limited efficiency. During the early 19th century, operating pressures began to rise as boilers were designed for mobile use: they had to be built from iron sheets and assume a horizontal cylindrical shape suitable for placement on vessels or wheeled carriages. Oliver Evans and Richard Trevithick, the two pioneers of mobile steam, used such high-pressure boilers (with water filling the space between two cylindrical shells and a fire grate placed inside the inner cylinder), and by 1841 steam pressure in Cornish engines had surpassed 0.4 MPa (Warburton 1981; Teir 2002).

Better designs were introduced as railroad transportation expanded and as steam power conquered shipping. In 1845, William Fairbairn patented a boiler that circulated hot gases through tubes submerged in the water container, and in 1856 Stephen Wilcox patented a design with inclined water tubes placed over the fire. In 1867, Wilcox and George Herman Babcock established Babcock, Wilcox & Company to make and to market water tube boilers, and the company’s design (with numerous modifications aimed at improving safety and increasing efficiency) remained the dominant choice for high-pressure boilers during the remainder of the 19th century (Babcock & Wilcox 2017). In 1882 Edison’s first electricity-generating station in New York relied on four coal-fired Babcock & Wilcox boilers (each capable of about 180 kW) producing steam for six Porter-Allen steam engines (94 kW) that were directly connected to Jumbo dynamos (Martin 1922). By the century’s end, boilers supplying large compound engines worked with pressures of 1.2–1.5 MPa.

Growth of coal-fired electricity generation required larger furnace volumes and higher combustion efficiencies. This dual need was eventually solved by the introduction of pulverized coal-fired boilers and tube-walled furnaces. Before the early 1920s, all power plants burned crushed coal (pieces of 0.5–1.5 cm) delivered by mechanical stokers onto moving grates at the furnace’s bottom. In 1918, the Milwaukee Electric Railway and Light Company made the first tests of burning pulverized coal. The fuel is now fine-milled (with most particles of less than 75 μm in diameter, similar to flour), blown into a burner, and burns at flame temperatures of 1600–1800°C. Tube-walled furnaces (with steel tubes completely covering the furnace’s interior walls and heated by radiation from hot combustion gases) made it easier to increase the steam supply demanded by larger steam turbines.

The large boilers of the late 19th century supplied steam at pressures of no more than 1.7 MPa (standard for the ships of the Royal Navy) and temperatures of 300°C; by 1925, pressures had risen to 2–4 MPa and temperatures to 425°C, and by 1955 the maxima were 12.5 MPa and 525°C (Teir 2002). The next improvement came with the introduction of supercritical boilers. At the critical point of 22.064 MPa and 374°C, steam's latent heat is zero and the specific volumes of the liquid and gas phases are identical; supercritical boilers operate above that point, where there is no boiling and water turns instantly into steam (a supercritical fluid). This process was patented by Mark Benson in 1922 and the first small boiler was built five years later, but large-scale adoption of the design came only with the introduction of commercial supercritical units during the 1950s (Franke 2002).

The first supercritical boiler (31 MPa and 621°C) was built by Babcock & Wilcox and GE in 1957 for the Philo 6 unit in Ohio, and the design diffused rapidly during the 1960s and 1970s (ASME 2017; Franke and Kral 2003). Large power plants are now supplied by boilers producing up to 3,300 t of steam per hour, with pressures mostly between 25 and 29 MPa and steam temperatures up to 605°C, and 623°C for reheat (Siemens 2017a). During the 20th century, the trajectories of large boilers were as follows: typical operating pressures rose 17-fold (from 1.7 to 29 MPa), steam temperatures doubled, maximum steam output (kg/s, t/h) rose by three orders of magnitude, and the size of turbogenerators served by a single boiler increased from 2 MW to 1,750 MW, an 875-fold gain.

Stationary Steam Engines

Simple devices demonstrating the power of steam have a long history, but the first commercially deployed machine using steam to pump water was patented by Thomas Savery in England only in 1699 (Savery 1702). The machine had no piston, a limited work range, and dismal efficiency (Thurston 1886). The first useful, albeit still highly inefficient, steam engine was invented by Thomas Newcomen in 1712, and after 1715 it was used for pumping water from coal mines. Typical Newcomen engines had capacities of 4–6 kW, and their simple design (operating at atmospheric pressure and condensing steam on the piston's underside) limited their conversion efficiency to no more than 0.5% and hence restricted their early use to coal mines with a ready on-site supply of fuel (Thurston 1886; Rolt and Allen 1997). Eventually John Smeaton's improvements doubled that low efficiency and raised power ratings to as much as 15 kW, and the engines were also used for water pumping in some metal mines, but only James Watt's separate condenser opened the way to better performance and widespread adoption.

The key to Watt’s achievement is described in the opening sentences of his 1769 patent application:

My method of lessening the consumption of steam, and consequently fuel, in fire engines consists of the following principles: First, that vessell in which the powers of steam are to be employed to work the engine, which is called the cylinder in common fire engines, and which I call the steam vessell, must during the whole time the engine is at work be kept as hot as the steam that enters it. Secondly, in engines that are to be worked wholly or partially by condensation of steam, the steam is to be condensed in vessells distinct from the steam vessells or cylinders, although occasionally communicating with them. These vessells I call condensers, and whilst the engines are working, these condensers ought at least to be kept as cold as the air in the neighbourhood of the engines by application of water or other cold bodies. (Watt 1769, 2)

When the extension of the original patent expired in 1800, Watt's company (a partnership with Matthew Boulton) had produced about 500 engines whose average capacity was about 20 kW (more than five times that of typical contemporary English watermills, nearly three times that of late 18th-century windmills) and whose efficiency did not surpass 2.0%. Watt's largest engine was rated at just over 100 kW, but that power was rapidly raised by post-1800 developments that resulted in much larger stationary engines deployed not only in mining but in all sectors of manufacturing, from food processing to metal forging. During the last two decades of the 19th century, large steam engines were also used to rotate dynamos in the first coal-fired electricity-generating stations (Thurston 1886; Dalby 1920; von Tunzelmann 1978; Smil 2005).

The development of stationary steam engines was marked by increases of unit capacities, operating pressures, and thermal efficiencies. The most important innovation enabling these advances was the compound steam engine, which expanded high-pressure steam first in two, then commonly in three, and eventually even in four stages in order to maximize energy extraction (Richardson 1886). The design was pioneered by Arthur Woolf in 1803, and the best compound engines of the late 1820s approached a thermal efficiency of 10% and slightly surpassed it a decade later. In 1876 a massive triple-expansion two-cylinder steam engine (14 m tall, with a 3 m stroke and a 10 m flywheel) designed by George Henry Corliss was the centerpiece of America's Centennial Exposition in Philadelphia: its maximum power was just above 1 MW and its thermal efficiency reached 8.5% (Thompson 2010; figure 3.4).

Figure 3.4

Corliss steam engine at America’s Centennial Exposition in Philadelphia in 1876. Photograph from the Library of Congress.

Stationary steam engines became the leading and truly ubiquitous prime movers of industrialization and modernization, and their widespread deployment was instrumental in transforming every traditional segment of the newly industrializing economies and in creating new industries, new opportunities, and new spatial arrangements which went far beyond stationary applications. During the 19th century, their maximum rated (nameplate) capacities increased more than tenfold, from 100 kW to the range of 1–2 MW, and the largest machines, both in the US and the UK, were built during the first years of the 20th century, just at the time when many engineers concluded that low conversion efficiencies made these machines an inferior choice compared to rapidly improving steam turbines (their growth will be addressed next).

In 1902, America’s largest coal-fired power plant, located on the East River between 74th and 75th Streets, was equipped with eight massive Allis-Corliss reciprocating steam engines, each rated at 7.45 MW and driving directly a Westinghouse alternator. Britain’s largest steam engines came three years later. First, London’s County Council Tramway power station in Greenwich installed the first of its 3.5 MW compound engines, nearly as high as it was wide (14.5 m), leading to Dickinson’s (1939, 152) label of “a megatherium of the engine world.” And even larger steam engines were built by Davy Brothers in Sheffield. In 1905, they installed the first of their four 8.9 MW machines in the city’s Park Iron Works, where it was used for hot rolling of steel armor plates for nearly 50 years. Between 1781 and 1905, maximum ratings of stationary steam engines thus rose from 745 W to 8.9 MW, nearly a 12,000-fold gain.

The trajectory of this growth fits a logistic curve almost perfectly, with the inflection point in 1911, indicating further capacity doubling by the early 1920s—but even in 1905 it would have been clear to steam engineers that this was not going to happen, that the machine, massive and relatively inefficient, had reached its performance peak. Concurrently, operating pressures increased from just above the atmospheric pressure of Watt's time (101.3 kPa) to as much as 2.5 MPa in quadruple-expansion machines, almost exactly a 25-fold gain. The highest efficiencies improved by an order of magnitude, from about 2% for Watt's machines to 20–22% for the best quadruple-expansion designs, with reliably attested early 20th-century records of 26.8% for a binary vapor engine in Berlin and 24% for a cross-compound engine in London (Croft 1922). But common best efficiencies were only 15–16%, opening the way for a rapid adoption of steam turbines in electricity generation and in other industrial uses, while in transportation steam engines continued to make important contributions until the 1950s.

Steam Engines in Transportation

Commercialization of steam-powered shipping began in 1802 in Scotland (William Symington's Charlotte Dundas) and in 1807 in the US (Robert Fulton's Clermont), the former with a 7.5 kW modified Watt engine. In 1838, Great Western, powered by a 335 kW steam engine, crossed the Atlantic in 15 days. Brunel's screw-propelled Great Britain was rated at 745 kW in 1845, and by the 1880s steel-hulled Atlantic steamers had engines of mostly 2.5–6 MW and up to 7.5 MW, the latter reducing the crossing to seven days (Thurston 1886). The famous pre-WWI Atlantic liners had engines with total capacities of more than 20 MW: Titanic (1912) had two 11.2 MW engines (and a steam turbine), and Britannic (1914) had two 12 MW engines (and also a steam turbine). Between 1802 and 1914, the maximum ratings of ship engines rose 1,600-fold, from 7.5 kW to 12 MW.

Steam engines made it possible to build ships of unprecedented capacity (Adams 1993). In 1852 Duke of Wellington, a three-deck 131-gun ship of the line originally designed and launched as the sail ship Windsor Castle and converted to steam power, displaced about 5,800 t. By 1863 Minotaur, an iron-clad frigate, was the first naval ship to surpass 10,000 t (10,690 t), and in 1906 Dreadnought, the first battleship of that type, displaced 18,400 t, implying post-1852 exponential growth of 2.1%/year. Packet ships that dominated transatlantic passenger transport between 1820 and 1860 remained relatively small: the displacement of Donald McKay's packet ship designs grew from the 2,150 t of Washington Irving in 1845 to the 5,858 t of Star of Empire in 1853 (McKay 1928).

By that time metal hulls were ascendant. Lloyd's Register approved their use in 1833, and in 1845 Isambard Kingdom Brunel's Great Britain became the first iron vessel to cross the Atlantic (Dumpleton and Miller 1974). Inexpensive Bessemer steel was used in the first steel hulls for a decade before the Lloyd's Register of Shipping accepted the metal as an insurable material for ship construction in 1877. In 1881 Cunard Line's Servia, the first large transatlantic steel-hull liner, was 157 m long with a 15.9 m beam, a 9.8:1 ratio unattainable by a wooden sail ship. The dimensions of future steel liners clustered close to that ratio: Titanic's (1912) was 9.6. In 1907 Cunard Line's Lusitania and Mauretania each displaced nearly 45,000 t, and just before WWI the White Star Line's Olympic, Titanic, and Britannic had displacements of about 53,000 t (Newall 2012).

The increase of the maximum displacement by an order of magnitude during roughly five decades (from around 5,000 to about 50,000 t) implies an annual exponential growth rate of about 4.5% as passenger shipping was transformed by the transition from sails to steam engines and from wooden to steel hulls. Only two liners that were launched between the two world wars were much larger than the largest pre-WWI ships (Adams 1993). The Queen Mary in 1934 and Normandie in 1935 displaced, respectively, nearly 82,000 t and almost 69,000 t, while Germany's Bremen (1929) rated 55,600 t and Italy's Rex 45,800 t. After WWII, the United States came in at 45,400 t in 1952 and France at 57,000 t in 1961, confirming the modal upper limit of great transatlantic liners at between 45,000 t and 57,000 t and forming another sigmoid growth curve (figure 3.5), and also ending, abruptly, more than 150 years of steam-powered Atlantic crossings. I will return to ships in chapter 4, where I will review the growth of transportation speeds.

Figure 3.5

Logistic curve of the maximum displacement of transatlantic commercial liners, 1849–1961 (Smil 2017a).

Larger ships required engines combining high power with the highest achievable efficiency (in order to limit the amount of coal carried), but inherent limits on steam engine performance opened the way for better prime movers. Between 1904 and 1908 the best efficiencies recorded during the British Marine Engine Trials for the best triple- and quadruple-expansion steam engines ranged between 11% and 17%, inferior to both steam turbines and diesels (Dalby 1920). Steam turbines (which could work with much higher temperatures) and diesel engines began to take over marine propulsion even before WWI. For diesel engines, this was the first niche they conquered before they started to power trucks and locomotives (during the 1920s) and automobiles (during the 1930s).

But decades after they began their retreat from shipping, steam engines made a belated and very helpful appearance: they were chosen to power 2,710 WWII Liberty (EC2) class ships built in the US and Canada that were used to carry cargo and troops to Asia, Africa, and Europe (Elphick 2001). These triple-expansion machines were based on an 1881 English design, worked with inlet pressure of 1.5 MPa (steam came from two oil-fired boilers), and could deliver 1.86 MW at 76 rpm (Bourneuf 2008). This important development illustrates a key lesson applicable to many other growth phenomena: the best and the latest may not be the best in specific circumstances. Relatively low-capacity, inefficient, and outdated steam engines were the best choice to win the delivery war. Their building did not strain the country’s limited capacity to produce modern steam turbines and diesel engines for the Navy, and a large number of proven units could be made by many manufacturers (eventually by 18 different companies) inexpensively and rapidly.

Experiments with high-pressure steam engines suitable to power railroad locomotives began at the same time as the first installations of steam engines in vessels (Watkins 1967). Richard Trevithick's simple 1803 locomotive was mechanically sound, and many designs were tested during the next 25 years before the commercial success of Robert Stephenson's Rocket in the UK (winner of the 1829 Rainhill locomotive trials held to select the best machine for the Liverpool and Manchester Railway) and The Best Friend of Charleston in the US in 1830 (Ludy 1909). In order to satisfy the requirements of the Rainhill trials, a locomotive weighing 4.5 tons needed to pull three times its weight at a speed of 10 mph (16 km/h) using a boiler operating at a pressure of 50 lbs/in² (340 kPa). Stephenson's Rocket was the only contestant to meet (and beat) these specifications, pulling 20 t at an average speed of 16 mph.

The approximate power (in hp) of early locomotives can be calculated by multiplying tractive effort (gross train weight in tons multiplied by train resistance, equal to 8 lbs/ton on steel rails) by speed (in miles/h) and dividing by 375. Rocket thus developed nearly 7 hp (about 5 kW) and a maximum of about 12 hp (just above 9 kW); The Best Friend of Charleston performed with the identical power at about 30 km/h. More powerful locomotives were needed almost immediately for faster passenger trains, for heavier freight trains on the longer runs of the 1840s and 1850s, and for the first US transcontinental line that was completed in May 1869; these engines also had to cope with greater slopes on mountain routes.
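
The rule of thumb is easily coded; the short function below (my sketch, using the formula exactly as stated) reproduces Rocket's roughly 7 hp.

```python
# Approximate power of an early locomotive: tractive effort (gross train
# weight in tons times a rolling resistance of 8 lbs/ton on steel rails)
# multiplied by speed in mph and divided by 375 gives horsepower.
def locomotive_hp(train_weight_tons, speed_mph, resistance_lb_per_ton=8.0):
    tractive_effort_lb = train_weight_tons * resistance_lb_per_ton
    return tractive_effort_lb * speed_mph / 375.0

hp = locomotive_hp(20, 16)                    # Rocket pulling 20 t at 16 mph
print(f"{hp:.1f} hp ≈ {hp * 0.7457:.1f} kW")  # ≈ 6.8 hp ≈ 5.1 kW
```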

Many improvements—including the uniflow design (Jacob Perkins in 1827), regulating valve gear (George H. Corliss in 1849), and compound engines (introduced during the 1880s and expanding steam in two or more stages)—made steam locomotives heavier, more powerful, more reliable, and more efficient (Thurston 1886; Ludy 1909; Ellis 1977). By the 1850s the most powerful locomotive engines had boiler pressures close to 1 MPa and power above 1 MW, an exponential growth of close to 20%/year spanning two orders of magnitude in about 25 years. Much slower expansion continued for another 90 years, until steam locomotive innovation ceased by the mid-1940s. By the 1880s the best locomotives had power on the order of 2 MW and by 1945 the maximum ratings reached 4–6 MW. Union Pacific's Big Boy, the heaviest steam locomotive ever built (548 t), could develop 4.69 MW, Chesapeake & Ohio Railway's Allegheny (only marginally lighter at 544 t) was rated at 5.59 MW, and Pennsylvania Railroad's PRR Q2 (built in Altoona in 1944 and 1945, and weighing 456 t) had a peak power of 5.956 MW (E. Harley 1982; SteamLocomotive.com 2017).

Consequently, the growth of maximum power ratings for locomotive steam engines—from 9 kW in 1829 to 6 MW in 1944, a 667-fold gain—was considerably slower than the growth of steam engines in shipping, an expected outcome given the weight limits imposed by the locomotive undercarriage and the bearing capacity of rail beds. Steam boiler pressure increased from 340 kPa in Stephenson's 1829 Rocket to more than 1 MPa by the 1870s, and peak levels in high-pressure boilers of the first half of the 20th century were commonly above 1.5 MPa (Mallard, record-breaking in its speed at 203 km/h in July 1938, worked with 1.72 MPa) and reached 2.1 MPa in the PRR Q2 in 1945, roughly a 6.2-fold gain in 116 years. That makes for a good linear fit with average growth of 0.15 MPa per decade. Thermal efficiencies improved from less than 1% during the late 1820s to 6–7% for the best late 19th- and early 20th-century machines. The best American pre-WWI test results were about 10%, and the locomotives of the Paris-Orleans Railway of France, rebuilt by André Chapelon, reached more than 12% thermal efficiency starting in 1932 (Rhodes 2017). Efficiency growth was thus linear, averaging about 1% per decade.
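
The pressure comparison can be verified with a few lines (my own arithmetic, using the endpoints given in the text).

```python
# Boiler-pressure growth from Rocket (340 kPa, 1829) to the PRR Q2
# (2.1 MPa, 1945), expressed as a fold gain and as a linear rate per decade.
p0, p1 = 0.34, 2.1          # MPa
y0, y1 = 1829, 1945
print(f"{p1 / p0:.1f}-fold gain")                        # 6.2-fold
print(f"{(p1 - p0) / ((y1 - y0) / 10):.2f} MPa/decade")  # ≈ 0.15 MPa/decade
```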

Steam Turbines

Charles Algernon Parsons patented the first practical turbine design in 1884 and immediately built the first small prototype machine with a capacity of just 7.5 kW and a low efficiency of 1.6% (Parsons 1936). That performance was worse than that of the 1882 steam engine that powered Edison's first power plant (its efficiency was nearly 2.5%), but improvements followed rapidly. The first commercial orders came in 1888, and in 1890 the first two 75 kW machines (with an efficiency of about 5%) began to generate electricity in Newcastle. In 1891, a 100 kW, 11% efficient machine for Cambridge Electric Lighting was the first condensing turbine that also used superheated steam (all previous models exhausted steam against atmospheric pressure, resulting in very low efficiencies).

The subsequent two decades saw exponential growth of turbine capacities. The first 1 MW unit was installed in 1899 at Germany's Elberfeld station; in 1903 came the first 2 MW machine for the Neptune Bank station near Newcastle; in 1907, a 5 MW, 22% efficient turbine in Newcastle-on-Tyne; and in 1912, a 25 MW and roughly 25% efficient machine for the Fisk Street station in Chicago (Parsons 1911). Maximum capacities thus grew from 75 kW to 25 MW in 24 years (a 333-fold increase), while efficiencies improved by an order of magnitude in less than three decades. For comparison, at the beginning of the 20th century the maximum thermal efficiencies of steam engines were 11–17% (Dalby 1920). And power/mass ratios of steam turbines rose from 25 W/kg in 1891 (five times higher than the ratio of contemporary steam engines) to 100 W/kg before WWI. This resulted in compact sizes (and hence easier installations), large savings of materials (mostly metals), and lower construction costs.

The last steam engine-powered electricity-generating plant was built in 1905 in London, but the obvious promise of further capacity and efficiency gains for steam turbines was interrupted by WWI and, after the postwar recovery, once again by the economic crisis of the 1930s and by WWII. Steam turbogenerators became more common as post-1918 electrification proceeded rapidly in both North America and Europe, and US electricity demand rose further in order to supply the post-1941 war economy, but unit capacities and efficiencies of turbines grew slowly. In the US, the first 110 MW unit was installed in 1928, but typical capacities remained overwhelmingly below 100 MW and the first 220 MW unit began generating only in 1953.

But during the 1960s average capacities of new American steam turbogenerators more than tripled, from 175 MW to 575 MW, and by 1965 the largest US steam turbogenerators, at New York's Ravenswood station, were rated at 1 GW (1,000 MW) with a power/mass ratio above 1,000 W/kg (Driscoll et al. 1964). Capacity forecasts anticipated 2 GW machines by 1980, but reduced growth of electricity demand prevented such gains, and by the century's end the largest turbogenerators (in nuclear power plants) were the 1.5 GW Siemens unit at the Isar 2 nuclear station and the 1.55 GW Alstom unit at the Chooz B1 reactor.

The world’s largest unit, Alstom’s 1.75 GW turbogenerator, was scheduled to begin operation in 2019 at France’s Flamanville station, where two 1,382 MW units have been generating electricity since the late 1980s (Anglaret 2013; GE 2017a). The complete trajectory of maximum steam turbine capacities—starting with Parsons’s 7.5 kW machine in 1884 and ending in 2017 shows an increase by five orders of magnitude (and by three orders of magnitude, from 1 MW to 1.5 GW during the 20th century) and a near-perfect fit for a four-parameter logistic curve with the inflection year in 1963 and with unlikely prospects for any further increase (figure 3.6).

Figure 3.6

Growth of maximum steam turbine capacities since 1884. Five-parameter logistic curve, inflection year in 1954, asymptote has been already reached. Data from Smil (2003, 2017a).

As already noted in the section describing the growth of boilers, working steam pressure rose about 30-fold, from just around 1 MPa for the first commercial units to as high as 31 MPa for supercritical turbines introduced in the 1960s (Leyzerovich 2008). Steam temperatures rose from 180°C for the first units to more than 600°C for the first supercritical units around 1960. Plans for power plants with ultra-supercritical steam conditions (pressure of 35 MPa and temperatures of 700/720°C) and with efficiency of up to 50% were underway in 2017 (Tumanovskii et al. 2017). Coal-fired units (boiler-turbine-generator) used to dominate the largest power plants built after WWII, and at the beginning of the 21st century they still generated about 40% of the world's electricity. But after a temporary increase, caused by China's extraordinarily rapid construction of new coal-fired capacities after the year 2000, coal is now in retreat. Most notably, the share of US coal-fired generation declined from 50% in the year 2000 to 30% in 2017 (USEIA 2017a).

US historical statistics allow for a reliable reconstruction of average conversion efficiencies (heat rates) at thermal power plants (Schurr and Netschert 1960; USEIA 2016). Average efficiencies rose from less than 4% in 1900 to nearly 14% in 1925 and to 24% by 1950, surpassed 30% by 1960 but soon leveled off, and in 2015 the mean was about 35%, a trajectory closely conforming to a logistic curve with the inflection point in 1931 (figure 3.7). Flamanville's 1.75 GW Arabelle unit has a design efficiency of 38% and its power/mass ratio is 1,590 W/kg (Modern Power Systems 2010). Further substantial growth of the highest steam turbine capacities is unlikely in any Western economy where electricity demand is, at best, barely increasing or even declining, and while the modernizing countries of Asia, Latin America, and Africa need to expand their large-scale generating capacities, they will increasingly rely on gas turbines as well as on new PV and wind capacities.

Figure 3.7

Logistic growth (inflection year in 1933, asymptote at 36.9%) of average efficiency of US thermal electricity-generating plants. Data from Schurr and Netschert (1960) and USEIA (2016).

Although diesel engines have dominated shipping for decades, steam turbines, made famous by powering record-breaking transatlantic liners during the early decades of the 20th century, have made some contributions to post-1950 waterborne transport. In 1972 Sea-Land began to use the first (SL-7 class) newly built container ships that were powered by GE 45 MW steam turbines. Diesels soon took over that sector, leaving tankers transporting liquefied natural gas (LNG) as the most important category of shipping relying on steam turbines. LNG tankers use boil-off gas (0.1–0.2% of the carrier’s capacity a day; they also use bunker fuel) to generate steam. America’s large aircraft carriers are also powered by turbines supplied by steam from nuclear reactors (ABS 2014).

Internal Combustion Engines

Despite their enormous commercial success and epoch-making roles in creating modern, high-energy societies, steam engines, inherently massive and with low power/mass ratios (a combination that could be tolerated for stationary designs with low fuel costs), could not be used in any applications that required relatively high conversion efficiency and high power/mass ratio, limiting their suitability for mobile uses to rails and shipping and excluding their adoption for flight. High efficiency and high power/mass ratio were eventually supplied by steam turbines, but the requirements of mechanized road (and off-road) transport were met by two new kinds of 19th-century machines: internal combustion engines. Gasoline-fueled Otto-cycle engines power most of the world's passenger cars and other light-duty vehicles, and diesel engines are used for trucks and other heavy machinery as well as for many European automobiles.

Reciprocating gasoline-fueled engines were also light enough to power propeller planes and diesels eventually displaced steam in railroad freight and marine transport. Gas turbines are the only 20th-century addition to internal combustion machines. These high power/mass designs provide the only practical way to power long-distance mass-scale global aviation, and they have become indispensable prime movers for important industrial and transportation systems (chemical syntheses, pipelines) as well as the most efficient, and flexible, generators of electricity.

Gasoline Engines

Steam engines rely on external combustion (generating steam in boilers before introducing it into cylinders), while internal combustion engines (fueled by gasoline or diesel fuel) combine the generation of hot gases and conversion of their kinetic energy into reciprocating motion inside pressurized cylinders. Developing such machines presented a greater challenge than did the commercialization of steam engines and hence it was only in 1860, after several decades of failed experiments and unsuccessful designs, that Jean Joseph Étienne Lenoir patented the first viable internal combustion engine. This heavy horizontal machine, powered by an uncompressed mixture of illuminating gas and air, had a very low efficiency (only about 4%) and was suitable only for stationary use (Smil 2005).

Seventeen years later, in 1877, Nicolaus August Otto patented a relatively light, low-power (6 kW), low-compression (2.6:1) four-stroke engine that was also fueled by coal gas, and nearly 50,000 units were eventually bought by small workshops (Clerk 1909). The first light gasoline-fueled engine suitable for mobile use was designed in Stuttgart in 1883 by Gottlieb Daimler and Wilhelm Maybach, both former employees of Otto’s company. In 1885 they tested a version of it on a bicycle (the prototype of a motorcycle) and in 1886 they mounted a larger (820 W) engine on a wooden coach (Walz and Niemann 1997). In one of the most remarkable instances of independent technical innovation, Karl Benz was concurrently developing his gasoline engine in Mannheim, just two hours by train from Stuttgart. By 1882 Benz had a reliable, small, horizontal gasoline-fueled engine, and he then proceeded to develop a four-stroke machine (500 W, 250 rpm, mass of 96 kg) that powered the first public demonstration of a three-wheeled carriage in July 1886 (figure 3.8).

Figure 3.8

Carl Benz (with Josef Brecht) at the wheel of his patent motor car in 1887. Photograph courtesy of Daimler AG, Stuttgart.

And it was Benz’s wife, Bertha, who made, without her husband’s knowledge, the first intercity trip with the three-wheeler in August 1888, when she took their two sons to visit her mother, driving some 104 km to Pforzheim. Daimler’s high-rpm engine, Benz’s electrical ignition, and Maybach’s carburetor provided the functional foundations of automotive engines, but the early motorized wooden carriages which they powered were just expensive curiosities. Prospects began to change with the introduction of better engines and better overall designs. In 1890 Daimler and Maybach produced their first four-cylinder engine, and their machines kept improving through the 1890s, winning the newly popular car races. In 1891 Emile Levassor, a French engineer, combined the best Daimler-Maybach engine with his newly designed, car-like rather than carriage-like, chassis. Most notably, he moved the engine from under the seats to the front of the chassis, ahead of the driver (with the crankshaft parallel to the vehicle’s long axis), a layout that later made more aerodynamic body shapes possible (Smil 2005).

During the last decade of the 19th century, cars became faster and more convenient to operate, but they were still very expensive. By 1900 cars benefited from such technical improvements as Robert Bosch’s magneto (1897), air cooling, and front-wheel drive. In 1900 Maybach designed a vehicle that was called the “first modern car in all essentials” (Flink 1988, 33): Mercedes 35, named after the daughter of Emil Jellinek, who owned a Daimler dealership, had a large 5.9 L, 26 kW engine whose aluminum block and honeycomb radiator lowered its weight to 230 kg. Other modern-looking designs followed in the early years of the 20th century, but the mass-market breakthrough came only in 1908 with Henry Ford’s Model T, the first truly affordable vehicle (see figure 1.2). Model T had a 2.9 L, 15 kW (20 hp) engine weighing 230 kg and working with a compression ratio of 4.5:1.

Many cumulative improvements were made during the 20th century to every part of the engine. Beginning in 1912, dangerous cranks were replaced by electric starters. Ignition was improved by Gottlob Honold’s high-voltage magneto with a new spark plug in 1902, and more durable Ni-Cr spark plugs were eventually replaced by copper and then by platinum spark plugs. Band brakes were replaced by curved drum brakes and later by disc brakes. The violent engine knocking that accompanied higher compression was eliminated first in 1923 by adding tetraethyl lead to gasoline, a choice that later proved damaging both to human health and to the environment (Wescott 1936).

Two variables are perhaps most revealing in capturing the long-term advances of automotive gasoline engines: their growing power (when comparing typical or best-selling designs, not the maxima for high-performance race cars) and their improving power/mass ratio (whose rise has been the result of numerous cumulative design improvements). Power ratings rose from 0.5 kW for the engine powering Benz’s three-wheeler to 6 kW for Ford’s Model A introduced in 1903, to 11 kW for Model N in 1906, 15 kW for Model T in 1908, and 30 kW for its successor, a new Model A that enjoyed record sales between 1927 and 1931. Ford engines powering the models introduced a decade later, just before WWII, ranged from 45 to 67 kW.

Larger post-1950 American cars had engines capable mostly of more than 80 kW, and by 1965 Ford’s best-selling fourth-generation Fairlane was offered with engines of up to 122 kW. Data on the average power of newly sold US cars are available from 1975, when the mean was 106 kW; it declined (due to a spike in oil prices and a sudden preference for smaller vehicles) to 89 kW by 1981, but then (as oil prices retreated) it kept on rising, passing 150 kW in 2003 and reaching a record level of about 207 kW in 2013, with the 2015 mean only slightly lower at 202 kW (Smil 2014b; USEPA 2016b). During the 112 years between 1903 and 2015 the average power of light-duty vehicles sold in the US thus rose roughly 34 times. The growth trajectory was linear, with an average gain of 1.75 kW/year, and notable departures from the trend in the early 1980s and after 2010 were due, respectively, to high oil prices and to more powerful (heavier) SUVs (figure 3.9).

Figure 3.9

Linear growth of average power of US passenger vehicles, 1903–2020. Data from Smil (2014b) and USEPA (2016b).
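
The linear trend shown in figure 3.9 follows directly from the endpoint values quoted above; a minimal arithmetic check, using the 1903 Model A rating and the 2015 mean as endpoints:

    # Average annual gain and overall multiple of mean US light-duty vehicle power.
    start_year, start_kw = 1903, 6     # Ford Model A engine
    end_year, end_kw = 2015, 202       # mean power of newly sold US light-duty vehicles

    gain_per_year = (end_kw - start_kw) / (end_year - start_year)
    multiple = end_kw / start_kw
    print(f"average gain: {gain_per_year:.2f} kW/year")      # about 1.75 kW/year
    print(f"overall increase: roughly {multiple:.0f}-fold")  # about 34 times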

The principal reason for the growing power of gasoline engines in passenger cars has been the increasing vehicle mass that has taken place despite the use of such lighter construction materials as aluminum, magnesium, and plastics. Higher performance (faster acceleration, higher maximum speeds) was a secondary factor. With few exceptions (German Autobahnen being the best example), speed limits obviate any need for machines capable of cruising faster than the posted maxima—but even small cars (such as the Honda Civic) can reach top speeds at or even above 200 km/h, far in excess of what is needed to drive lawfully.

Weight gains are easily illustrated by the long-term trend of American cars (Smil 2010b). In 1908 the curb weight of Ford’s revolutionary Model T was just 540 kg and three decades later the company’s best-selling Model 74 weighed almost exactly twice as much (1,090 kg). Post-WWII weights rose with the general adoption of roomier designs, automatic transmissions, air conditioning, audio systems, better insulation, and numerous small servomotors. These are assemblies made of a small DC motor, a gear-reduction unit, a position-sensing device (often just a potentiometer), and a control circuit: servomotors are now used to power windows, mirrors, seats, and doors.

As a result, by 1975 (when the US Environmental Protection Agency began to monitor average specifications of newly sold vehicles) the mean inertia weight of US cars and light trucks (curb weight plus 300 lbs, or 136 kg) reached 1.84 t. Higher oil prices helped to lower the mean to 1.45 t by 1981, but once they collapsed in 1985 cars began to gain weight, and the overall trend was made much worse by the introduction of SUVs and by the increasing habit of using pickups as passenger cars. By 2004 the average mass of newly sold passenger vehicles reached a new record of 1.86 t, which was slightly surpassed (1.87 t) in 2011, and by 2016 the mean had declined only marginally, to 1.81 t (Davis et al. 2016; USEPA 2016b). The curb weight of the average American light-duty vehicle had thus increased roughly three times in a century.

Until the 1960s both European and Japanese cars weighed much less than US vehicles but since the 1970s their average masses have shown a similarly increasing trend. In 1973 the first Honda Civic imported to North America weighed just 697 kg, while the 2017 Civic (LX model, with automatic transmission and standard air conditioning) weighs 1,247 kg, that is about half a tonne (nearly 80%) more than 44 years ago. Europe’s popular small post-WWII cars weighed just over half a tonne (Citroen 2 CV 510 kg, Fiat Topolino 550 kg), but the average curb weight of European compacts reached about 800 kg in 1970 and about 1.2 t in the year 2000 (WBCSD 2004). Subsequently, average mass has grown by roughly 100 kg every five years and the EU car makers now have many models weighing in excess of 1.5 t (Cuenot 2009; Smil 2010b). And the increasing adoption of hybrid drives and electric vehicles will not lower typical curb weights because these designs have to accommodate either more complicated power trains or heavy battery assemblies: the Chevrolet Volt hybrid weighs 1.72 t, the electric Tesla S as much as 2.23 t.

The power/mass ratio of Otto’s heavy stationary horizontal internal combustion engine was less than 4 W/kg; by 1890 the best four-stroke Daimler-Maybach automotive gasoline engine reached 25 W/kg, and in 1908 Ford’s Model T delivered 65 W/kg. The ratio continued to rise to more than 300 W/kg during the 1930s, and by the early 1950s many engines (including those in the small Fiat 8V) were above 400 W/kg. By the mid-1950s Chrysler’s powerful Hemi engines delivered more than 600 W/kg, and the ratio reached a broad plateau in the range of 700–1,000 W/kg during the 1960s. For example, in 1999 Ford’s Taunus engine (high-performance variant) delivered 830 W/kg, while in 2016 Ford’s best-selling car in North America, the Escape (a small SUV), was powered by a 2.5 L, 125 kW Duratec engine whose mass of 163 kg resulted in a power/mass ratio of about 770 W/kg (Smil 2010b). This means that the power/mass ratio of the automotive gasoline engine has increased about 12-fold since the introduction of the Model T, and that more than half of that gain took place after WWII.
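
The ratios in this paragraph are simple quotients of rated power and engine mass; a minimal sketch using the Model T and Duratec figures quoted above:

    # Power/mass ratio (W/kg) = rated power / engine mass.
    def power_mass_ratio(power_kw: float, mass_kg: float) -> float:
        return power_kw * 1000.0 / mass_kg

    print(round(power_mass_ratio(15, 230)))    # Ford Model T engine: about 65 W/kg
    print(round(power_mass_ratio(125, 163)))   # 2.5 L Duratec: about 770 W/kg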

Before leaving gasoline engines I should note the recent extreme ratings of high-performance models and compare them to those of the car that stood at the beginning of their evolution, Maybach’s 1901 Mercedes 35 (26 kW engine, power/mass ratio of 113 W/kg). In 2017 the world’s most powerful car was the Swedish limited-edition megacar Koenigsegg Regera, with a 5 L V8 engine rated at 830 kW and an electric drive rated at 525 kW, giving it a total combined propulsion of 1.11 MW—while the top Mercedes (AMG E63 S) was rated at “only” 450 kW. This means that the maximum power of high-performance cars has risen about 42-fold since 1901 and that the Regera has more than eight times the power of the Honda Civic.

Finally, a brief note about the growth of gasoline-fueled reciprocating engines in flight. Their development began with a machine designed by Orville and Wilbur Wright and built by their mechanic Charles Taylor in 1903 in the brothers’ bicycle workshop in Dayton, Ohio (Taylor 2017). Their four-cylinder 91 kg horizontal engine was to deliver 6 kW but eventually it produced 12 kW, corresponding to a power/mass ratio of 132 W/kg. Subsequent improvement of aeroengines was very rapid. Léon Levavasseur’s 37 kW Antoinette, the most popular pre-WWI eight-cylinder engine, had a power/mass ratio of 714 W/kg, and the American Liberty engine, a 300 kW mass-produced machine for fighter aircraft during WWI, delivered about 900 W/kg (Dickey 1968). The most powerful pre-WWII engines were needed to power Boeing’s 1936 Clipper, a hydroplane that made it possible to fly, in stages, from the western coast of the United States to East Asia. Each of its four radial Wright Twin Cyclone engines was rated at 1.2 MW and delivered 1,290 W/kg (Gunston 1986).

High-performance aeroengines reached their apogee during WWII. American B-29 (Superfortress) bombers were powered by four Wright R-3350 Duplex Cyclone radial 18-cylinder engines whose versions were rated from 1.64 to 2.76 MW and had a high power/mass ratio in excess of 1,300 W/kg, an order of magnitude higher than the Wrights’ pioneering design (Gunston 1986). During the 1950s Lockheed’s L-1049 Super Constellation, the largest airplane used by airlines for intercontinental travel before the introduction of jetliners, used the same Wright engines. After WWII spark-ignited gasoline engines in heavy-duty road (and off-road) transport were almost completely displaced by diesel engines, which also dominate shipping and railroad freight.

Diesel Engines

Several advantageous differences set Diesel’s engines apart from Otto-cycle gasoline engines. Diesel fuel has nearly 12% higher energy density than gasoline, which means that, everything else being equal, a car can go farther on a full tank. But diesels are also inherently more efficient: the self-ignition of the heavier fuel (no sparking needed) requires much higher compression ratios (commonly twice as high as in gasoline engines), and that results in more complete combustion (and hence a cooler exhaust gas). Longer strokes and lower rpm reduce frictional losses, and diesels can operate with a wide range of very lean mixtures, two to four times leaner than those in a gasoline engine (Smil 2010b).

Rudolf Diesel began to develop a new internal combustion engine during the early 1890s with two explicit goals: to make a light, small (no larger than a contemporary sewing machine), and inexpensive engine whose use by independent entrepreneurs (machinists, watchmakers, repairmen) would help to decentralize industrial production and achieve unprecedented efficiency of fuel conversion (R. Diesel 1913; E. Diesel 1937; Smil 2010b). Diesel envisaged the engine as the key enabler of industrial decentralization, a shift of production from crowded large cities where, he felt strongly, it was based on inappropriate economic, political, humanitarian, and hygienic grounds (R. Diesel 1893).

And he went further, claiming that such decentralized production would solve the social question as it would engender workers’ cooperatives and usher in an age of justice and compassion. But his book summarizing these ideas (and ideals)—Solidarismus: Natürliche wirtschaftliche Erlösung des Menschen (R. Diesel 1903)—sold only 300 of 10,000 printed copies and the eventual outcome of the widespread commercialization of Diesel’s engines was the very opposite of his early social goals. Rather than staying small and serving decentralized enterprises, massive diesel engines became one of the principal enablers of unprecedented industrial centralization, mainly because they reduced transportation costs, previously decisive determinants of industrial location, to such an extent that an efficient mass-scale producer located on any continent could serve the new, truly global, market.

Diesels remain indispensable prime movers of globalization, powering crude oil and LNG tankers, bulk carriers transporting ores, coal, cement, and lumber, container ships (the largest ones now capable of moving more than 20,000 standard steel containers), freight trains, and trucks. They move fuels, raw materials, and food among five continents and they helped to make Asia in general, and China in particular, the center of manufacturing serving the worldwide demand (Smil 2010b). If you were to trace everything you wear, and every manufactured object you use, you would find that all of them were moved multiple times by diesel-powered machines.

Diesel’s actual accomplishments also fell short of his (impossibly ambitious) efficiency goal, but he still succeeded in designing and commercializing the internal combustion engine with the highest conversion efficiency. He did so starting with the engine’s prototype, which was constructed with a great deal of support from Heinrich von Buz, general director of the Maschinenfabrik Augsburg, and Friedrich Alfred Krupp, Germany’s leading steelmaker. On February 17, 1897, Moritz Schröter, a professor of theoretical engineering at the Technische Universität in Munich, was in charge of the official certification test that was to set the foundation for the engine’s commercial development. While working at its full power of 13.5 kW (at 154 rpm and pressure of 3.4 MPa), the engine’s thermal efficiency was 34.7% and its mechanical efficiency reached 75.5% (R. Diesel 1913).

As a result, the net efficiency was 26.2%, about twice that of contemporary Otto-cycle machines. Diesel was justified when he wrote to his wife that nobody’s engine had achieved what his design did. Before the end of 1897 the engine’s net efficiency reached 30.2%—but Diesel was wrong when he claimed that he had a marketable machine whose development would unfold smoothly. The efficient prototype required a great deal of further development, and the conquest of commercial markets began only in 1903, not on land but on water, with a small diesel engine (19 kW) powering a French canal boat. Soon afterwards came Vandal, an oil tanker operating on the Caspian Sea and on the Volga with three-cylinder engines rated at 89 kW (Koehler and Ohlers 1998), and in 1904 the world’s first diesel-powered station began to generate electricity in Kiev.
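
The net efficiency reported for the 1897 certification test is simply the product of the thermal and mechanical efficiencies; a one-line check:

    # Net efficiency of Diesel's 1897 prototype = thermal x mechanical efficiency.
    thermal, mechanical = 0.347, 0.755
    print(f"net efficiency: {thermal * mechanical:.1%}")   # about 26.2%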

The first oceangoing ship equipped with diesel engines (two eight-cylinder four-stroke 783 kW machines) was the Danish Selandia, a freight and passenger carrier that made its maiden voyage to Tokyo and back to Copenhagen in 1911 and 1912 (Marine Log 2017). In 1912 Fionia became the first diesel-powered vessel of the Hamburg-American Line. Diesel adoption proceeded steadily after WWI, got fully underway during the 1930s, and accelerated after WWII with the launching of large crude oil tankers during the 1950s. A decade later came large container ships, as marine diesel engines increased in both capacity and efficiency (Smil 2010b).

In 1897 Diesel’s third prototype engine had a capacity of 19.8 bhp (14.5 kW); in 1912 Selandia’s two engines together rated 2,100 bhp. In 1924 Sulzer built a 3,250 bhp engine for ocean liners, and a 4,650 bhp machine followed in 1929 (Brown 1998). The largest pre-WWII marine diesels were around 6,000 bhp, and by the late 1950s they rated 15,000 bhp. In 1967 a 12-cylinder engine was capable of 48,000 bhp, in 2001 MAN B&W-Hyundai’s machine reached 93,360 bhp, and in 2006 Hyundai Heavy Industries built the first diesel rated at more than 100,000 bhp: the 12K98MC, at 101,640 bhp, that is, 74.76 MW (MAN Diesel 2007).
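
The quoted megawatt equivalent implies that these brake horsepower ratings use the metric horsepower (735.5 W) rather than the imperial unit (745.7 W); a small conversion sketch under that assumption:

    # Convert brake horsepower to MW, assuming metric horsepower (735.5 W).
    HP_METRIC_W = 735.5

    def bhp_to_mw(bhp: float) -> float:
        return bhp * HP_METRIC_W / 1e6

    print(f"{bhp_to_mw(101_640):.2f} MW")   # 12K98MC: matches the quoted 74.76 MW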

That machine held the record as the largest diesel for just six months, until September 2006, when Wärtsilä introduced a new 14-cylinder engine rated at 80.1 MW (Wärtsilä 2009). Two years later modifications of that Wärtsilä engine increased the maximum rating to 84.42 MW, and MAN Diesel now offers an 87.22 MW engine with 14 cylinders running at 97 rpm (MAN Diesel 2018). These massive engines power the world’s largest container ships, with OOCL Hong Kong, operating since 2017, the current record holder (figure 3.10). Large-scale deployment of new machines saw efficiencies approaching 40% after WWI; in 1950 a MAN engine reached 45% efficiency, and the best two-stroke designs now have efficiencies of 52%, surpassing that of gas turbines in simple cycle (about 40%, although in combined cycle, using exhaust gas to raise steam for a steam turbine, they now reach up to 61%), while four-stroke diesels are a bit behind at 48%.

Figure 3.10

OOCL Hong Kong, the world’s largest container ship in 2019, carries an equivalent of 21,413 twenty-foot standard units. Diesels and container vessels have been the key prime movers of globalization. Photo available at Wikimedia.

On land, diesels were first deployed in heavy locomotives and trucks. The first diesel locomotive entered regular service in 1913 in Germany. Powered by a Sulzer four-cylinder two-stroke V-engine, it could sustain a speed of 100 km/h. After WWI, yard switching locomotives were the first railroad market dominated by diesels (steam continued to power most passenger trains), but by the 1930s the fastest trains were diesel-powered. In 1934 the streamlined stainless-steel Pioneer Zephyr had a 447 kW, eight-cylinder, two-stroke diesel-electric drive that made it possible to average 124 km/h on the Denver-Chicago run (Ellis 1977; ASME 1980).

By the 1960s steam engines in freight transport (except in China and India) had been replaced by diesels. Ratings of American locomotive diesel engines increased from 225 kW in 1924 for the first US-made machine to 2 MW in 1939, and the latest GE Evolution Series engines have power ranging from 2.98 to 4.62 MW (GE 2017b). But modern locomotive diesels are hybrids using diesel-electric drive: the engine’s reciprocating motion is not transmitted to the wheels but generates electricity for the motors driving the train (Lamb 2007). Some countries have converted all of their train traffic to electricity, and all countries use electric drive for high-speed passenger trains (Smil 2006b).

After WWII diesels also completely conquered the markets for off-road vehicles, including agricultural tractors and combines, and for construction machinery (bulldozers, excavators, cranes). In contrast, diesels have never conquered the global passenger car market: they account for large shares of all cars in much of Europe, but remain rare in North America, Japan, and China. In 1933 Citroën offered a diesel engine option for its Rosalie, and in 1936 Mercedes-Benz, with its 260D model, began the world’s most enduring series of diesel-powered passenger cars (Davis 2011). Automotive diesels eventually became quite common in post-WWII Europe thanks to their cheaper fuel and better economy.

In 2015 about 40% of the EU’s passenger cars were diesel-powered (with national shares as high as about 65% in France and 67% in Belgium), compared to only 3% of all light-duty vehicles in the US (Cames and Helmers 2013; ICCT 2016). The largest recent SUV diesels (in the Audi Q7 offered between 2008 and 2012) rate 320 kW, an order of magnitude more powerful than the engine (33 kW) in the pioneering 1936 Mercedes design. The power of diesels in the most popular sedans (Audi A4, BMW 3 Series, Mercedes E-Class, VW Golf) ranges mostly between 70 and 170 kW. The power of diesel cars produced by Mercedes-Benz rose from 33 kW in 1936 (260D) to 40 kW in 1959 (180D), 86 kW in 1978 (turbo-charged 300SD), 107 kW in 2000 (C220 CDI), and 155 kW in 2006 (E320 BlueTec), a fivefold gain for the company’s designs in seven decades.

Diesel’s engines are here to stay because there are no readily available mass-scale alternatives that could keep integrating the global economy as conveniently and as affordably as do diesels powering ships, trains, and trucks. But the engine’s advantages in passenger vehicles have been declining as prices of diesel fuel have risen, as the efficiency gap between diesel and gasoline engines has narrowed (the best gasoline engines are now only about 15% behind), and as new environmental regulations require much cleaner automotive fuels. Stricter regulations for marine shipping will also have an impact on the future growth of heavy diesel engines.

Gas Turbines

Gas turbines have a long history of conceptual and technical development, with the idea patented first before the end of the 18th century and with the first impractical designs (using more energy than they produced) dating to the beginning of the 20th century (Smil 2010b). Their first working versions were developed for military use, concurrently and entirely independently, during the late 1930s and the early 1940s in the UK by Frank Whittle and in Nazi Germany by Hans Pabst von Ohain (Golley and Whittle 1987; Conner 2001; Smil 2010b). Serial production of jet engines began in 1944 and the first British and German fighter jets began to fly in August of that year, too late to affect the war’s outcome. Continuous improvements of both military and civilian versions have continued ever since (Gunston 2006; Smil 2010b).

The performance of jet engines is best compared in terms of their maximum thrust and thrust-to-weight (T/W) ratio (the ideal being, obviously, the lightest possible engine developing the highest possible thrust). HeS 3, the von Ohain engine that powered the first experimental jet aircraft (the Heinkel He 178) in August 1939, had a thrust of just 4.9 kN. Whittle’s W.1A engine that powered the first Gloster E.28/39 flight in April 1941 developed 4.6 kN and had a T/W ratio of 1.47:1 (Golley and Whittle 1987). Because of its development of early British military jet engines, Rolls-Royce was able to introduce in 1950 its first axial-flow jet engine, the 29 kN (5.66:1 T/W) Avon, which was used first in a bomber and later in various military and commercial airplanes. But the British Comet, the world’s first jetliner, was powered by low-thrust (22.3 kN) and very low thrust/weight ratio de Havilland Ghost Mk1 engines. In 1954, just 20 months after it began, Comet service was suspended after two major fatal accidents following takeoffs from Rome, later attributed to stress cracks around the plane’s square windows leading to catastrophic decompression of the fuselage.

By the time a redesigned version of the Comet was ready in October 1958 it had two competitors. The Soviet Tupolev Tu-104 (with 66.2 kN Mikulin engines) began commercial service in September 1956, and Boeing 707 (with four Pratt & Whitney JT3 turbojets rated at 80 kN, thrust/weight ratio of 3.5–4.0) launched Pan Am’s transatlantic service in October 1958; a year later the airline deployed the Boeing for its round-the-world series of flights.

All of the early engines were turbojets derived from military designs, not the most suitable choice for commercial aviation due to their relatively low propulsion efficiencies. Whittle realized this limitation of turbojets during the earliest days of his work on jet propulsion, when he talked about replacing a low-mass high-velocity jet by a high-mass low-velocity (understood in relative terms) jet (Golley and Whittle 1987). This was eventually accomplished by turbofan engines, which use an additional turbine to tap part of the shaft power and rotate a large fan placed in front of the main compressor; the fan forces additional air (compressed to only about twice the inlet pressure) to bypass the engine core and exit at a speed only about half as fast (450 vs. 900 m/s) as the hot gas forced through the engine’s combustor.
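
Whittle’s preference for a high-mass, low-velocity jet can be illustrated with the idealized Froude relation for propulsive efficiency, eta = 2/(1 + v_jet/v_flight); the sketch below compares the two exhaust velocities quoted above at an assumed cruise speed of 250 m/s (the cruise speed and the idealization itself are simplifying assumptions, not figures from the text).

    # Idealized (Froude) propulsive efficiency: eta = 2 / (1 + v_jet / v_flight).
    def propulsive_efficiency(v_jet: float, v_flight: float) -> float:
        return 2.0 / (1.0 + v_jet / v_flight)

    v_cruise = 250.0   # m/s, assumed jetliner cruise speed
    for label, v_jet in [("hot core exhaust", 900.0), ("bypass stream", 450.0)]:
        print(f"{label}: {propulsive_efficiency(v_jet, v_cruise):.0%}")
    # The slower, more massive bypass stream converts a larger share of the jet's
    # kinetic energy into useful propulsive work, which is why bypass pays off.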

Unlike turbojets, whose thrust peaks at high speeds, turbofans deliver their peak thrust at low speeds, a most desirable property for takeoffs of heavy passenger jets. Moreover, turbofans are also much quieter (as the bypass air envelopes the fast-moving hot exhaust), and higher bypass ratios lower specific fuel consumption, but this improvement is limited by the diameter of the fans and by the mounting of the engines (very large engines would have to be mounted on elevated wings). Frank Whittle patented the bypass idea as early as 1936, and in 1952 Rolls-Royce built the first jet engine with a bypass, albeit just 0.3:1. By 1959, P&W’s JT3D (80.1 kN) had a bypass ratio of 1.4:1, and in 1970 the company introduced the first high-bypass engine in commercial service, the JT9D (initially 210 kN, bypass ratio 4.8:1, T/W 5.4–5.8), designed to power the Boeing 747 and other wide-body jetliners (Pratt & Whitney 2017).

GE demonstrated an engine with an 8:1 bypass ratio as early as 1964, and it became available as the TF39 in 1968 for the military C-5 Galaxy planes. The first engines of the GE90 family, designed for the long-range Boeing 777, entered service in 1996: they had thrust of 404 kN and a bypass ratio of 8.4. A larger variant followed, and the world’s most powerful turbofan engine—the GE90–115B, rated initially at 512 kN, with a bypass ratio of 9:1 and a thrust-to-weight ratio of 5.98—was deployed commercially for the first time in 2004. The engine with the highest bypass ratio in 2017 (12.5:1) was P&W’s PW1127G, a geared turbofan powering Bombardier’s CSeries and the Airbus A320neo, while the Rolls-Royce Trent 1000 (used by the Boeing 787) has a 10:1 ratio.

Technical advances of gas turbines in flight have been well documented and hence we can follow the growth trajectories of jet engines in detail (Gunston 2006; Smil 2010a). Maximum engine thrust went from 4.4 kN (von Ohain’s HeS 3B in 1939) to 513.9 kN (and in tests up to 568 kN) for GE’s GE90–115B, certified in 2003 and in service a year later. This has resulted in an almost perfectly linear growth trajectory with an average annual gain of nearly 8 kN (figure 3.11). Obviously, other critical specifications rose in tandem. Overall pressure ratio (compression ratio) was 2.8 for the HeS 3B and 4.4 for Whittle’s W.2. The British Avon reached 7.5 by 1950, P&W’s JT9D surpassed 20 in 1970, and in 2003 the GE90–115B set a new record of 42, an order of magnitude larger than Whittle’s engine. Similarly, total air mass flow rose by two orders of magnitude, from about 12 kg/s for the earliest Whittle and von Ohain designs to 1,360 kg/s for the GE90–115B.

Figure 3.11

Linear fit of the maximum thrust of jet engines. Data from Smil (2010b).
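
The average annual gain behind the linear fit in figure 3.11 follows from the two endpoints quoted above:

    # Average annual thrust gain between the earliest and the most powerful engines.
    thrust_1939_kn, thrust_2003_kn = 4.4, 513.9   # HeS 3B vs. GE90-115B
    gain = (thrust_2003_kn - thrust_1939_kn) / (2003 - 1939)
    print(f"average gain: {gain:.1f} kN/year")    # close to 8 kN per year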

The dry mass of jet engines rose from 360 kg for the HeS 3B to nearly 8.3 t for the GE90–115B (that is, almost as much as the DC-3, the most popular propeller-driven commercial aircraft, introduced in 1936). The thrust-to-weight ratio increased from just 1.38 for the HeS 3B and 1.6 for the W.2 to about 4.0 by the late 1950s and 5.5 during the late 1980s, with the GE90–115B at about 6, while maximum bypass ratios went from 0.3:1 in 1952 to 12.5:1 in 2016 (figure 3.12). Larger fans have been required to produce high bypass ratios, with their diameters growing from 1.25 m for the Conway in 1952 to 2.35 m for the JT9D in 1970 and 3.25 m for the GE90–115B in 2004. Increases in operating efficiency (declines in specific fuel consumption per unit of thrust, per km or per passenger-km flown) have been impressive: early turbofans were 20–25% more efficient than the best turbojets, by the 1980s the efficiency of high-bypass turbofans was up to 35% better, and the best designs of the early 21st century consume only about 30% as much fuel per seat-kilometer as did the engines of the late 1950s (figure 3.13).

Figure 3.12

Evolution of the bypass ratio in commercial jetliners. Data from specifications for GE, P&W, and Rolls-Royce engines and from Ballal and Zelina (2003). Maximum ratios have seen linear growth that averaged about 2.2 units per decade.

Figure 3.13

Evolution of jetliner efficiency in terms of relative fuel consumption from Boeing 707 (1958) to Boeing 787–10 (2017). Data from Ballal and Zelina (2003) and from www.compositesworld.com.

Much as in the case of diesel engines in shipping and heavy land transport, gas turbines in flight are here to stay, as there is no alternative for powering jetliners flying hundreds of people on routes ranging from commuter hops to intercontinental journeys (the longest scheduled flight now lasting more than 17 hours). At the same time, turbofans are not expected to grow substantially in terms of their thrust, thrust-to-weight ratio, or bypass ratio. Capacities are also not expected to rise above some 800 people (the maximum for the Airbus A380 double-decker, although in existing configurations it typically boards between 500 and 600). Moreover, the preference for point-to-point flights (rather than channeling traffic through a limited number of hubs using the largest-capacity planes) creates higher demand for efficient jetliners capable of transporting 200–400 people on long flights (a niche now exploited by Boeing’s 787).

The development of the gas turbine achieved its first great commercial applications in flight, but stationary units eventually became highly successful as flexible, highly efficient, and affordable means of generating electricity or delivering rotary power (especially for compressors). Commercial uses of stationary gas turbines began with electricity generation by the Swiss Brown Boveri Corporation at the municipal power station in Neuchâtel in 1939 (ASME 1988). The world’s first operating gas turbine had a nameplate capacity of 15.4 MW (3,000 rpm, inlet temperature 550°C), but because its compressor consumed 75% of that output and there was no heat recovery, its efficiency was merely 17.4% and it generated just 4 MWe. Yet the design worked so reliably that the machine failed only after 63 years, in 2002!

A post-WWII preference for large central stations slowed down the expansion of relatively small gas turbine units (their total capacity grew to just 240 MW by 1960, less than a single typical steam turbogenerator installed in a coal-fired station). A strong takeoff came only in reaction to the November 1965 blackout that left about 30 million people in the northeastern US without electricity for as long as 13 hours (US FPC 1965). Because small gas-fired units could be rapidly deployed in such emergencies, US utilities ordered 4 GW of new capacity in 1968 and 7 GW in 1971. As a result, the stationary gas turbine capacity owned by the utilities increased from 1 GW in 1963 to nearly 45 GW by 1975, the largest unit size rose from 20 MW in 1960 to 50 MW in 1970, and GE produced its first 100 MW machine in 1976 (Hunt 2011).

The subsequent slowdown (caused by a combination of rising natural gas prices and falling electricity demand following OPEC’s two rounds of oil price increases) was reversed by the late 1980s, and by 1990 almost half of all new US generating capacity was installed in gas turbines (Smock 1991). By the 1990s it also became less common to use gas turbines alone, and in most new installations they are now combined with a steam turbine: hot exhaust leaving the gas turbine produces steam (in a heat recovery steam generator) that is used to power a steam turbine, and power plants generating electricity with these combined-cycle gas turbine arrangements have recently set new records by surpassing 60% efficiency (Siemens 2017a; Larson 2017).

By 2017 the world’s largest gas turbines were offered by GE and Siemens. The SGT5–8000H is the most powerful 50-Hz gas turbine produced by Siemens; its gross output is 425 MW in simple cycle and 630 MW in combined cycle operation, with overall efficiency of 61% (Siemens 2017b). The largest machines are now made by GE: the 9HA.2 delivers 544 MW (with efficiency of 43.9%) in simple cycle, and in combined cycle it rates 804 MW, with overall efficiency of up to 63.5% and a start-up time of less than 30 minutes (GE 2017c). Gross output of stationary gas turbines has thus progressed from 15.4 MW in 1939 to 544 MW in 2015, or roughly a 35-fold increase in 76 years. But there is no doubt that the growth of stationary gas turbines will not follow the course suggested by the continuation of the best logistic fit, because that would bring enormously powerful machines with capacities in excess of 2 GW by 2050.
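
The relation between simple-cycle and combined-cycle ratings can be sketched with the standard combination formula eta_cc ≈ eta_gt + (1 − eta_gt) × eta_bottom, where eta_bottom is the effective efficiency of the heat-recovery steam cycle. Solving it backward for the 9HA.2 figures quoted above (an illustrative calculation, not the vendor’s heat balance) implies that the steam bottoming cycle converts roughly a third of the exhaust heat.

    # Implied bottoming-cycle efficiency from simple- and combined-cycle ratings:
    # eta_cc ~ eta_gt + (1 - eta_gt) * eta_bottom   (idealized combination formula)
    eta_gt, eta_cc = 0.439, 0.635   # GE 9HA.2 values quoted in the text
    eta_bottom = (eta_cc - eta_gt) / (1.0 - eta_gt)
    print(f"implied bottoming-cycle efficiency: {eta_bottom:.0%}")   # about 35%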

The growing capacity of the largest stationary units has met the needs of utilities as they shift from coal toward more efficient, and more flexible, ways of generation, but since the 1960s smaller gas turbines have also become indispensable in many industrial applications. Most notably, they power compressors used to move natural gas in pipelines. Compressor stations are sited at regular intervals (typically on the order of 100 km on major trunk lines) and they use machines of 15–40 MW capacity. Gas turbine-driven centrifugal compressors are common in the oil and gas industry, in refineries and in chemical syntheses, most notably in Haber-Bosch ammonia plants.

Nuclear Reactors and PV Cells

I have put these two completely different modes of modern energy conversion into the same section simply as a matter of convenience: after investigating the growth of external (steam) and internal (gasoline, diesel) combustion engines and of water, wind, steam, and gas turbines, these are the only remaining converters of major economic consequence in the early 21st century. Nuclear fission, proceeding in several types of reactors, has made a significant contribution to the global primary energy supply since its commercial beginnings during the late 1950s, while land-based conversion of sunlight by photovoltaic modules feeding electricity into national and international grids is the latest addition to large-scale generation. Besides the differences in their longevity and modes of operation, the two conversions also have very different prospects.

Regardless of its actual rates of future expansion, it is obvious that direct conversion of solar radiation to electricity—a natural energy flow that could be harnessed with a much higher power density than any other renewable resource—has an assured and expansive future. In contrast, fission-based electricity generation has been in retreat in all Western economies (and in Japan), and although new reactors are being built in a number of countries (above all in China, India, and Russia), the pending retirement of reactors built during the decades of rapid nuclear expansion (1970–1990) means that nuclear generation will not become relatively more important than in the past. In 1996 fission supplied nearly 18% of the world’s electricity; by 2016 the share was down to 11%, and even the best scenario for 2040 does not see it rising above 12% (WNA 2017).

Nuclear Reactors

Nuclear reactors could be put into the same energy converter category as boilers: their role is to produce heat that is used to generate steam whose expansion rotates turbogenerators to produce electricity. Of course, the heat production rests on an entirely different conversion: instead of combustion (rapid oxidation) of carbon in fossil fuels, which can be done in simple metallic vessels, nuclear reactors rely on the controlled fission of an isotope of uranium, the heaviest naturally occurring element, and they are among the most complex and technically demanding engineering assemblies. But their growth has always been constrained by economic and technical imperatives (Mahaffey 2011).

In order to operate as economically as possible and to meet the new capacity requirements during the decades of rising electricity demand, the minimum unit capacities of nuclear reactors had to be above 100 MW. Only the first British Magnox reactors at Calder Hall (construction started in 1953) and Chapelcross (construction started in 1955) stations were smaller, with gross capacity of 60 MW, while the subsequent early units installed at Bradwell, Berkeley, and Dungeness ranged from 146 to 230 MW (Taylor 2016). British reactors commissioned during the 1970s had gross unit capacities between 540 and 655 MW and during the 1980s the largest unit rated 682 MW.

The French decision to develop large nuclear capacity as the best way to reduce dependence on imported crude oil was based on an economically optimal repetition of standard, and relatively large, reactor sizes (Hecht 2009). Most French reactors have capacities of 900 MWe (951–956 MW gross), the second most common class is rated at 1,300 MWe (1,363 MW gross), and there are also four reactors capable of 1,450 MWe (1,561 MW gross). Commissioning of the first reactor of the 1,650 MWe class at Flamanville has been repeatedly delayed. Many US reactors built during the 1970s and 1980s have capacities in excess of 1 GW, with the largest units between 1,215 and 1,447 MWe.

Development of nuclear reactor capacities has obviously been constrained by overall electricity demand and by the evolution of steam turbine capacities (covered earlier in this chapter). Operational reliability, minimized cost, and the need to conform to regional or national requirements for base-load generation (capacity factors of nuclear reactors are commonly above 90%) have been more important considerations than the quest for higher unit capacities. But the average size of reactors has grown: only two reactors commissioned during the 1960s had capacities in excess of 1 GW and the mean rating was just 270 MW—while the 60 reactors under construction in 2017 ranged from 315 MW in Pakistan to 1.66 GW in China, with a modal rating (17 reactors) of 1 GWe, the capacity of the most commonly built Chinese unit (WNA 2017).

PV Modules

Photovoltaics has been—together with nuclear reactors and gas and wind turbines—one of the four post-WWII additions to commercial means of large-scale electricity generation. The photovoltaic principle (generation of electric current in a material exposed to sunlight) was discovered by Antoine Henri Becquerel in 1839, the PV effect in selenium was identified in 1876 and in cadmium sulfide in 1932, but the first practical applications came only after researchers at Bell Laboratories invented a silicon PV cell in 1954 (Fraas 2014). Their cells initially converted just 4% of incoming radiation to electricity but subsequent improvements during the remainder of the decade, mainly thanks to the work of Hoffman Electronics, pushed efficiency to 8% by 1957 and 10% by 1959 (USDOE 2017).

The first installation of very small PV cells (~1 W) came in 1958 on the Vanguard satellite, and other space applications soon followed: the very high price of the cells was only a small portion of the overall cost of a satellite and its launch. In 1962 Telstar, the world’s first telecommunications satellite, carried a 14 W array, and progressively larger modules have followed; for example, Landsat 8 (an Earth-observation satellite) carries an array of triple-junction cells with a capacity of 4.3 kW. Land-based PV generation got its first stimulus from the two rounds of rapid increases in world oil prices during the 1970s: the world’s first 1 MW PV facility was installed in Lugo, California, in 1982, followed two years later by a 6 MW plant. The return to low oil prices postponed any new large commercial uses, but development of more efficient monocrystalline silicon cells, and the introduction of new cell types, continued.

The best research cell efficiencies have unfolded as follows (NREL 2018; figure 3.14). Single-crystal silicon cells without a concentrator reached 20% in 1986 and 26.1% by 2018, while the Shockley–Queisser limit restricts the maximum theoretical efficiency of a single-junction solar cell to 33.7% (Shockley and Queisser 1961). The cheapest amorphous silicon cells began during the late 1970s with efficiencies of just 1–2%, reached 10% by 1993, and 14% by 2018. In contrast, the efficiency of copper indium gallium selenide thin-film cells rose from about 5% in 1976 to 22.9% by 2018, nearly matching that of monocrystalline silicon. Currently the most efficient research cells are single-junction gallium arsenide and multijunction cells (figure 3.14). Multijunction cells are made of two to four layers of different semiconducting materials that absorb solar radiation at different wavelengths and hence boost the conversion efficiency: three-junction cells with a concentrator were the first to surpass the 40% efficiency mark (in 2007), and four-junction cells with a concentrator reached 46% in 2015 (NREL 2018).

Figure 3.14

Record conversion efficiencies of research PV cells. Simplified from NREL (2018).

The modular nature of photovoltaics makes it possible to install units ranging from a few square centimeters to large power plants whose peak power is now in hundreds of MW. As a result, the growth of large PV installations has been limited primarily by the cost of modules (which has been steadily declining) and by the expense of providing the requisite high-voltage transmission from those sunny places that do not have an existing connection to a national grid. As already noted, a California solar farm aggregated 1 MW in 1982, but the first installation reaching 10 MW followed only in 2006 (Germany’s Erlasee Park with 11.4 MWp), and in 2010 Ontario’s Sarnia plant (97 MWp) came close to 100 MW. In 2011 China’s Golmud Park reached 200 MW, and in 2014 California’s Topaz Farm was the first to aggregate 500 MW.

The largest PV solar park on the grid in 2017 was Tengger Desert in Ningxia, with 1.547 GWp, while China’s Datong in Shanxi province and India’s Kurnool Ultra Mega Solar Park in Andhra Pradesh had 1 GW each; the Datong plant is to be eventually expanded to 3 GWp (SolarInsure 2017). This means that between 1982 and 2017 the capacity of the largest solar park grew by three orders of magnitude (about 1,500 times), with the recent rapid increases made possible by the declining costs of PV panels. Naturally, the location of these plants (latitude, average cloud cover) determines their capacity factors: for fixed panels they range from only about 11% for German plants to about 25% for facilities in the US Southwest.
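
Capacity factor is the ratio of actual annual generation to the output a plant would deliver running at its peak rating around the clock; a minimal sketch applying the quoted range (the annual outputs are back-calculated illustrations, not reported statistics):

    # Capacity factor = actual annual output / (peak capacity x hours in a year).
    HOURS_PER_YEAR = 8760

    def annual_output_gwh(capacity_gwp: float, capacity_factor: float) -> float:
        return capacity_gwp * capacity_factor * HOURS_PER_YEAR

    for cf in (0.11, 0.25):   # quoted range: German plants vs. US Southwest
        print(f"1 GWp at CF {cf:.0%}: ~{annual_output_gwh(1.0, cf):,.0f} GWh/year")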

Electric Lights and Motors

All electric lights and motors are, obviously, secondary energy converters, turning electricity produced by steam turbogenerators or by water, wind, and gas turbines and PV cells into illumination or into mechanical energy (motors being, in essence, generators run in reverse). Electric lights are now by far the most numerous energy converters, and more than 130 years after their introduction we do not even notice their truly revolutionary consequences for human development and modern civilization. But electric motors, too, have become ubiquitous; however, as they are nearly always hidden—built into an enormous variety of electrical and electronic devices, ranging from washing machines and dough mixers to laptop computers, mounted behind metal and plastic panels in cars, and rotating incessantly behind the walls of industrial enterprises—most people are not even aware of the multitude of their now indispensable services.

And while there are some needs for extraordinarily powerful lights and very large electric motors, the growth of these energy converters should be traced primarily in terms of their improving efficiency, durability, and reliability rather than in terms of increasing unit capacity. Indeed, the two markets with rapidly growing demand for electric motors—electronic devices and cars—need, respectively, devices of tiny or small-to-moderate capacities. For example, a brushless DC 5V/12V electric motor energizing a CD-ROM in a laptop operates mostly with power of just 2–7 W, while the power of DC window lift motors (now installed in every car, together with other motors needed to lock doors and adjust seats) ranges mostly from 12 to 50 W (NXP Semiconductors 2016).

Electric Lights

In order to appreciate the revolutionary nature of even the first, highly inefficient, electric lights it is necessary to compare their performance with that of their common predecessors. Burning candles converted as little as 0.01% and no more than 0.04% of the chemical energy in wax, tallow or paraffin into dim, unstable, and uncontrollable light. Even Edison’s first light bulbs, with filaments of carbonized paper, were 10 times more efficient than candles. But by the early 1880s less convenient gas lights (introduced just after 1800 and burning coal gas made by distillation of coal) had a slight edge with efficiencies up to 0.3%. That changed soon when osmium filaments were introduced in 1898 and raised efficiency to 0.6%; that was doubled by 1905 with tungsten filaments in a vacuum, and doubled again when light bulbs were filled with inert gas (Smil 2005). Fluorescent lights, introduced during the 1930s, raised efficiency above 7%, above 10% after 1950, and close to 15% by the year 2000.

The best comparison of light sources is in terms of their luminous efficacy, the ratio expressing the generation of visible light per unit of power, in lumens per watt (lm/W), with a theoretical maximum of 683 lm/W. The history of lighting has seen luminous efficacies (all data in lm/W) rising from 0.2 for candles to one to two for coal-gas light, less than five for early incandescent light bulbs, 10–15 for the best incandescent lights, and up to about 100 for fluorescent lights (Rea 2000). High-intensity discharge lights were the most efficacious source of indoor lighting at the beginning of the 21st century, with maxima slightly above 100 lm/W. Efficacy improvements of nearly all of these light sources have seen long periods of stagnation or slow linear growth, but the future belongs to light-emitting diodes (LEDs), whose light spectrum is suitable for indoor or outdoor applications (Bain 2015; figure 3.15). Their use began as small lights in electronics and cars; by 2017 their efficacies had surpassed 100 lm/W, and by 2030 they are expected to save 40% of US electricity used for lighting (Navigant 2015).

Figure 3.15

Light efficacy (lm/W) since 1930. Based on Osram Sylvania (2009) and subsequent efficacy reports.
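
Because 683 lm/W is the theoretical maximum, luminous efficacy translates into an approximate conversion efficiency when divided by that ceiling; the sketch below applies this to the sources quoted above (treating 683 lm/W as the limit for every source is itself a simplification, since the true ceiling depends on the light’s spectrum).

    # Approximate lighting efficiency = luminous efficacy / theoretical maximum.
    MAX_EFFICACY = 683.0   # lm/W, theoretical maximum quoted in the text

    sources_lm_per_w = {"candle": 0.2, "early incandescent": 5,
                        "best incandescent": 15, "fluorescent": 100}
    for name, efficacy in sources_lm_per_w.items():
        print(f"{name}: ~{efficacy / MAX_EFFICACY:.2%}")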

Gains in the efficiency of electricity generation and of its conversion to light have combined to deliver a unit of light at a tiny fraction of its traditional cost. By the end of the 20th century the average cost of US lighting was just 0.03% of its 1800 rate; inversely, this meant that for the same price consumers received about 3,300 times more illumination (Nordhaus 1998), definitely one of the highest overall performance gains for an energy converter. Fouquet (2008) ended up with a similar long-term fraction: his calculations show that in the year 2000 a lumen of British light cost just 0.01% of its value in 1500, a 10,000-fold gain in 500 years (with virtually all of it taking place during the last 150 years of the period), and 1% of its value in 1900, a 100-fold gain in a century.

Electric Motors

Electric motors had a protracted evolution. Michael Faraday discovered electromagnetic induction in 1831 (Faraday 1839), but it took more than four decades before the first small DC motors became commercialized and nearly 60 years before the introduction of AC motors (Pope 1891). As long as batteries remained the only means of reliable electricity supply, direct current motors remained small and uncommon. A stencil-making pen powered by Edison’s small DC motor and used for duplicating documents sold several thousand units during the late 1870s (Burns 2018), but the real beginnings of commercial diffusion (in industry, for streetcars) came only with the first central electricity-generating plants (starting in 1882) and with the production of polyphase motors in the late 1880s.

Tesla dated his original electric motor idea to 1882 but his design for a polyphase motor was first patented only in 1888 after he had spent several years in the US (Tesla 1888). Tesla patented a two-phase machine, and the first three-phase motor, a design that quickly became dominant, was built by a Russian engineer, Mikhail Osipovich Dolivo-Dobrowolsky, working in Germany for AEG. Tesla sold all of his motor patents to Westinghouse, whose small fan-powering motor (125 W) sold almost 10,000 units during the 1890s (Hunter and Bryant 1991). Rapid adoption of electric motors in industry also began during the last decade of the 19th century and accelerated after 1900. As already noted in the introduction to this section, variables that are commonly used to trace the growth of many energy converters and other artifacts—their capacity, size, mass, or efficiency—are not the most appropriate measures for appraising the advances in design, production, and deployment of electric motors.

In the first place, this is because this category of secondary energy converters (all motors are essentially generators running backward) includes different operating modes: the basic divides are between DC and AC motors, and, within the latter category, between induction and synchronous motors (Hughes 2006). In addition, electric motors are inherently highly efficient, particularly when they work at or near their full capacity. Although several pathways lower the overall performance (friction, windage, hysteresis, eddy-current, and ohmic losses), full-load efficiencies of electric motors were around 70% even for the early commercial models made at the end of the 19th century; that left a limited range for further improvement, and by the beginning of the 21st century we had come close to practical performance limits. The latest standards adopted by the National Electrical Manufacturers Association in the US specify minimum full-load efficiencies of 95% for motors rated at more than 186.4 kW (250 hp) and 74–88.5% for small motors of 750 W to 7.5 kW (Boteler and Malinowski 2015).

But the most important reason why unit capacity or mass are not the most revealing variables to trace is that the physical attributes of electric motors are determined by the specific requirements of their deployment, and that the miniaturization and mass production of small-capacity motors for uses in electronics and mechatronics have been as important as (if not more important than) the growth of capacities for industrial or transportation applications demanding higher power. Improving the ability to meet a specific functionality—be it extended durability in a dusty environment or underwater, long-term delivery of steady power, or the ability to meet sudden high-torque requirements—as well as the rapid expansion of global demand for electric motors used in new applications (especially in household and electronic consumer products and in cars, where motor size is inherently limited) are thus more revealing attributes than a secular increase in power ratings.

As a result, the two most notable growth phenomena marking the post-1890 history of electric motors have been their unusually rapid conquest of the industrial market during the early decades of the 20th century and their mass-scale nonindustrial deployment during recent decades. In North America and in Western Europe the electrification of industrial processes in general, and of manufacturing in particular, was accomplished in a matter of three to four decades (in the US the process was essentially over by the late 1920s), while nonindustrial uses of electric motors expanded slowly for many decades before their deployment accelerated during the closing decades of the 20th century and then, after the year 2000, reached unprecedented heights, with worldwide annual additions now exceeding 10 billion units.

Data derived from censuses of manufacturing illustrate the rapid diffusion of electric motors and the retreat of steam (Daugherty 1927; Schurr et al. 1990). In 1899, 77% of all power installed in the country’s manufacturing enterprises was in steam engines and 21% in waterwheels and turbines. During the next three decades the total mechanical power installed in US manufacturing increased 3.7-fold, but the aggregate capacity of electric motors grew nearly 70-fold, and by the end of the 1920s these converters, whose share of manufacturing power had been less than 5% in 1900, supplied 82% of all power in US factories. The share rose further during the 1930s to reach 90% in 1940, with the growth trajectory closely following a sigmoid course (figure 3.16). But a long-term plateau formed at a slightly lower level: by 1954 the share had declined to 85% and it stayed close to that level for the remainder of the 20th century, as other efficient converters (gas turbines and diesel engines) claimed the rest.

Figure 3.16

Logistic fit (inflection point in 1916, asymptote of 89.9%) of the share of power in US manufacturing supplied by electric motors, 1909–1950. Data from Daugherty (1927) and Schurr et al. (1990).

But this rapid shift to electric motors was far from being just a transition to a new kind of energy converter. Moving from shafts to wires had profound consequences for factory organization, the quality and safety of the working environment, and labor productivity (Devine 1983). Rotary motion produced by the central converter (waterwheel, water turbine, or steam engine) was transferred first by mainline shafts, running the length of a building under factory ceilings, and then by parallel countershafts and belts. These arrangements were inconvenient and dangerous: inefficient (due to frictional losses), unsafe (slipping belts), and noisy, and they did not allow any individual precision adjustment of rotation or torque. Moreover, any problem with the main converter or with any part of an elaborate transmission setup (easily caused by a slipped belt) necessitated a temporary outage of the entire system.

Electric motors changed all that. They were first used to power shorter shafts serving a small number of machines, but soon the unit drive became the norm. As a result, ceilings could be opened to natural light or to the installation of electric lighting, adequate heating, and (later) air conditioning; highly efficient motors allowed individualized precision control; machines and workstations could be switched on and off without affecting the entire system; and expansion or machine upgrading could be handled easily by the requisite rewiring. Electric drive was a key factor in the near-doubling of US manufacturing productivity during the first 30 years of the 20th century, as well as in another doubling accomplished by the late 1960s (Schurr 1984). Small and efficient servomotors are now deployed not only in such common industrial tasks as metal cutting and forming, woodworking, spinning, and weaving but also to power conveyor belts (now also indispensable for preparing the rising volume of e-commerce orders), to position PV panels (solar tracking for maximum efficiency), and to open doors automatically.

The second notable growth trajectory, associated with the adoption of nonindustrial electric motors, began slowly with the diffusion of major household appliances (white goods), first in the US before WWII and then in post-1950 Europe and Japan. In 1930 almost 10% of US households had a small refrigerator, penetration reached 90% by 1955, and the rate was close to 100% by 1970 (Felton 2008). To run their compressors, refrigerators need durable, low-vibration, low-noise electric motors (now typically 550–750 W), and the demand for these devices increased further with rising ownership of freezers. Clothes washers (whose market penetration was much slower, reaching about 80% of US households by the year 2000) have electric motors now rated mostly between 500 and 1,000 W.

Other commonly owned appliances powered by small electric motors include clothes dryers, dishwashers, and heating, ventilation, and air-conditioning equipment (including heat recovery ventilators that run nonstop in highly insulated homes, as do natural gas furnace fans when the heating cycle is on). Electric motors in some kitchen appliances are also fairly powerful, drawing 1,000–1,200 W in some food processors, although 400–600 W is the most common range. New markets for small electric motors have been created by the growing ownership of electric tools and garden tools (including electric lawnmowers and trimmers).

Again, some of these tools must be fairly powerful (drills up to 1 kW, saws well over 1 kW), while others have tiny electric motors. But by far the largest surge in demand for small electric motors has come from the rapid sequential adoption of new electronic devices. Mass production and low prices of these consumer goods were made possible by the rise of transistors, then of integrated circuits, and eventually of microprocessors, but most of these devices still need mechanical (rotary) energy (for turntables, assorted drives, and cooling fans), with electric motors as the only viable choice to meet that demand (desktop and laptop computers have three to six small motors to run disk drives and ventilation fans).

The electrification of cars powered by internal combustion engines began just a few years after the introduction of the first mass-produced commercial automobile. Ford’s Model T, introduced in 1908, had no electric motors (starting the engine required laborious, and sometimes dangerous, cranking), but in 1911 Charles F. Kettering patented the first electric starter, in 1912 GM ordered 12,000 of these labor-saving devices for its Cadillacs, and by 1920 electric starters had become common, with the Model T switching to them in 1919 (Smil 2006b). Power steering became common in the US during the 1960s, and small electric motors have gradually taken over functions (ranging from opening windows to lifting gates) that had relied for generations on human power.

In cars, small electric motors start engines, enable power steering, run water pumps, operate antilock brakes, seatbelt retractors, and windshield wipers, adjust seats, run heating and air-conditioning fans, open and close windows, door locks, and now also sliding doors and lift gates, and fold in side mirrors. Electric parking brakes are the latest addition to these applications. Nonessential electric motors became common first in luxury cars, but their use has been steadily percolating downward, with even basic North American, European, and Japanese sedans having about 30 of them, while the total can be three to four times higher in upscale models. The total mass of a luxury car’s motors is on the order of 40 kg, with seat-control, power-steering, and starter motors accounting for about half of that total (Ombach 2017). Global sales of automotive electric motors surpassed 2.5 billion units in 2013 and, with demand growing by 5–7% a year, they were expected to reach 3 billion in 2017 (Turnbough 2013).
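A quick check of the compounding involved (an illustrative calculation using the lower bound of the cited growth range): 2.5 billion units in 2013 growing at 5% a year yields 2.5 × 1.05^4 ≈ 3.0 billion units by 2017, matching the projected total.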

In aggregate, Turnbough (2013) estimated that global sales of small nonindustrial motors would rise from 9.8 billion units in 2012 to 12 billion in 2018, and that their cumulative operating total would be far larger than the aggregate number of all (larger, and longer-lasting) industrial units. The smallest electric motors are now also the most ubiquitous, because almost every cell phone has a vibration alert produced by a tiny motor (typically 1 cm long and 4 mm in diameter) spinning an eccentrically mounted weight; these mini-motors now wholesale for as little as 50 cents apiece and are driven by a constant voltage supplied through linear voltage regulators. Global sales of cell phones rose from 816 million in 2005 to 1.42 billion in 2016, creating an unprecedented demand for these tiny motors, which are also commonly included in toys.

Finally, a few numbers to trace the trajectory of the largest industrial motors. In rail transportation, maximum power rose from the 2.2 kW capacity of the first electric mini-railway (just a 300 m long circular track) built in 1879 at the Berlin Trade Fair by Werner von Siemens to the 6.45–12.21 MW of asynchronous motors installed in two pairs of power cars powering French rapid trains (TGV, Thalys, Eurostar), an increase of more than three orders of magnitude. Synchronous motors, best suited for constant-speed applications and commonly used to power pipeline and refinery compressors, pumps, fans, and conveyors, saw their maximum ratings grow exponentially during the second half of the 20th century, from only about 5 MW in 1950 to more than 60 MW in 2000; the highest voltages stayed between 10 and 15 kV for decades, and only early in the 21st century did they rise suddenly to nearly 60 kV (Kullinger 2009).
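Those two endpoints imply, assuming for illustration a smooth exponential rise between them, an average growth rate of the largest synchronous motor ratings of r = ln(60/5)/50 ≈ 0.05, that is, roughly 5% a year, equivalent to a doubling of the maximum rating about every 14 years.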