Abbott, Wallace Calvin (1857–1921)
An American doctor and chemist, and the founder of the medical company Abbott Laboratories, Dr. Wallace Calvin Abbott devised a method of manufacturing tablets that contained an exact amount of a given drug, allowing patients to receive a more precise dose of medicine. He was born on October 12, 1857, in Bridgewater, Vermont, the son of Luther Abbott, a farmer from Woodstock, Windsor County, Vermont, and his wife, Weltha (née Barrows). Educated at the state Normal School and then at St. Johnsbury College, Vermont, he proceeded to Dartmouth College and then studied medicine at the University of Michigan. He spent several years treating patients and also ran a small pharmacy. In 1886 he moved to Chicago, and two years later, at age thirty, he established his own manufacturing business, although he continued practicing as a physician.
During Abbott's first few years running his own business, he made medicines from the active (alkaloid) part of medicinal plants, which could then be formed into small pills that he termed dosimetric granules. The eminent Belgian surgeon Adolphe Burggraeve (1806–1902) had first proposed the idea, but he had never been able to produce such tablets for sale in any large quantity, because it was difficult during manufacture to give each individual pill an exact dose of the medicine.
Dr. Abbott wanted a system that would allow an exact quantity of any individual medicine to be included in a pill, as earlier attempts at tablet making often produced doses that were too small or too large. Abbott found that the use of dosimetric granules was far more effective with patients who had previously been given inexact doses. Many medical companies quickly saw the importance of this development, as it minimized side effects. In 1900 the company became the Abbott Alkaloidal Company, and it grew massively during World War I, when pharmaceuticals made in Germany were not allowed into the United States. From this time the company began to reap enormous profits, eventually becoming Abbott Laboratories, a publicly listed company with its headquarters at Abbott Park, near North Chicago, Illinois, and employing seventy-two thousand people. The author of a number of books, including, with W. F. Waugh, A Text-Book of Alkaloidal Therapeutics and A Text-Book of Alkaloidal Practice, Dr. Abbott died on July 4, 1921, in Chicago.
—Justin Corfield
Further Reading
“Wallace Calvin Abbott, M.D.” Boston Medical and Surgical Journal 185, no. 3 (July 21, 1921): 97–98.
Acheson, Edward Goodrich (1856–1931)
Edward Goodrich Acheson was an American chemist who invented carborundum, an abrasive substance that can be used as a substitute for diamond. He manufactured both carborundum and artificial graphite, lodging sixty-nine patents relating to these and other products. He was born on March 9, 1856, in Washington, Pennsylvania, the son of William Acheson and Sarah Diana (née Ruple). Edward Acheson was the second son and fourth child in the family; his father was a local grocer. When Acheson was five, the family moved to Monticello (now Gosford), Pennsylvania, with William Acheson taking up a position as manager of a blast furnace. As a boy, Edward Acheson attended local schools, but the depression of the 1870s ended his education, and he started working as a timekeeper at the blast furnace. On March 5, 1873, he secured his first patent, for a rock-boring machine that he had devised for use in coal mines. In that year Acheson's father died, and young Edward found work as a ticket agent for the Allegheny Valley Railroad, working his way up to assistant surveyor, then assistant engineer on the railroads, and finally recorder for the gauges on oil tanks.
In 1880, Acheson decided to go to New York to work in the burgeoning electrical industry; his interest had been sparked by reading Scientific American. Initially he worked under John Kruesi at the laboratory of Thomas Edison at Menlo Park, New Jersey. During that time, he conducted experiments on carbon, which Edison was using to develop his electric light bulb. In the following year he installed the first electric lights for Edison in Europe, traveling to Italy, Belgium, and France, where he attended the Paris Exposition of 1881.
On his return to the United States, Acheson left Edison's employment in 1884 and became a supervisor at a rival factory manufacturing electric lamps. His initial attempt to work for himself failed, and he returned to Gosford. He invented an anti-induction telephone wire and briefly worked for the Standard Underground Cable Company, later working on the construction of an electrical plant at Monongahela, Pennsylvania.
Soon Acheson's work became focused on an attempt to make an abrasive substance from iron ore. This led nowhere, but he then began working on plans to produce artificial diamonds in an electric furnace. He conducted a number of experiments, one of which involved heating a mixture of clay and coke in an iron bowl with a carbon arc light. The result was the creation of shiny, hexagonal crystals. Initially Acheson thought that these crystals were a compound formed from carbon and alumina coming from the clay. They were, in fact, silicon carbide, formed by electrically fusing clay and carbon, and second in hardness only to diamond. To help develop this discovery, he built an electricity plant at Port Huron. Acheson devised the term carborundum to describe the new material, and on February 28, 1893, he lodged a patent on his highly effective abrasive substance.
Subsequently Acheson investigated the effects of heating carborundum to a high temperature, discovering that the silicon vaporizes at about 4,150˚C (7,500˚F), leaving behind a graphitic carbon. In 1896 Acheson was granted a patent for this new process, and this led to the establishment of the Acheson Graphite Company, which he incorporated in 1899. He was also the inventor of a number of other products, such as siloxicon, Egyptianized clay, Oildag, and Aquadag. He was awarded the John Scott Medal in 1894 for his original discovery of carborundum, and he received it again in 1901 for the production of artificial graphite. He was also given the Rumford Medal of the American Academy of Arts and Sciences in 1908. In 1909 the University of Pittsburgh awarded him an honorary degree, and in the following year he received the Perkin Research Medal. His work was exhibited at the Paris Exposition in 1900 and the St. Louis Exposition in 1904. Acheson was awarded the Swedish Royal Order of the Polar Star, and he was given honorary membership in the Imperial Russian Technical Society and the Swedish Technical Society.
Altogether Acheson received sixty-nine patents for his inventions, and he established a number of companies to develop these products, the main one being the Carborundum Company, located in Niagara Falls, New York. In 1925 Acheson donated $25,000 to the American Electrochemical Society to establish a prize, the Edward Goodrich Acheson Prize, to be awarded every two years. On December 15, 1884, he had married Margaret C. Maher of Brooklyn, New York, and they had five sons and four daughters. He died of pneumonia on July 6, 1931, in New York City, survived by his widow and seven of his children. In 1997 Acheson was inducted into the National Inventors Hall of Fame.
—Justin Corfield
Edward Acheson
Further Reading
Chandler, Charles F. Edward Goodrich Acheson: Inventor-Scientist-Industrialist. Port Huron, MI: Acheson Industries, 1965.
Hubbard, Elbert. Edward G. Acheson. Whitefish, MT: Kessinger Publishing, 2005.
Szymanowitz, Raymond. Edward Goodrich Acheson: Inventor, Scientist, Industrialist: A Biography. New York: Vantage Press, 1971.
Aerosol Spray
Although the concept of the aerosol can be traced as far back as the eighteenth century, the first workable aerosol spray can was invented in Oslo, Norway, in 1926 by Erik Rotheim, a chemical engineer. It is a classic example of a technology spawned by the Industrial Revolution that has many productive uses while at the same time producing unintended, and sometimes dangerous, consequences.
Aerosol spray is a type of dispersing system that creates a mist of liquid particles emitted from a can or bottle. When the container's valve is opened, the liquid is forced out of a small hole and emerges as an aerosol or mist. As gas expands to drive out the payload, some propellant evaporates inside the can to maintain an even pressure. Outside the can, the droplets of propellant evaporate quickly, leaving the payload suspended as very fine particles or droplets. Typical liquids dispersed in this way are insecticides, deodorants, and paints.
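The pressure stays even because, so long as some liquefied propellant remains in the can, the gas above it is in equilibrium with the liquid, so the internal pressure is simply the propellant's vapor pressure, which depends only on temperature. A common empirical description of this dependence is the Antoine equation, in which the constants A, B, and C are specific to each propellant:

\[
\log_{10} P = A - \frac{B}{C + T}
\]

Here P is the vapor pressure and T the temperature. Each time some payload is expelled, a little more propellant evaporates and P returns to the value fixed by the temperature, so the delivery pressure remains nearly constant until the liquid propellant is exhausted.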
The aerosol spray can was first put to practical use in the United States in 1941 by Lyle Goodhue and William Sullivan. Their design for a refillable spray can, dubbed the "bug bomb," was patented in 1943 and is the ancestor of many popular spray products. Pressurized by liquefied gas, which gave it its propellant qualities, the small, portable can enabled soldiers to defend against malaria-carrying insects by spraying inside their tents. The U.S. government, which held the patent rights to the aerosol spray, made the technology available to anyone who wanted to use it, and in 1948 three companies were granted licenses to manufacture aerosols. Two of them, Chase Products and Claire Manufacturing, continue to manufacture aerosols.
For many years, the most common propellants used in aerosol spray cans were chlorofluorocarbons (CFCs), but since the Montreal Protocol came into force in 1989, they have been replaced in most countries because of the damage CFCs do to Earth's ozone layer. The most common replacements are mixtures of volatile hydrocarbons such as propane, n-butane, and isobutane. Dimethyl ether and methyl ethyl ether are also used. All of these substances are quite flammable. Nitrous oxide and carbon dioxide are used to deliver foodstuffs such as whipped cream and cooking spray, while medicinal aerosols use hydrofluoroalkanes such as 1,1,1,2,3,3,3-heptafluoropropane.
Modern aerosol spray products have three major parts: the can, the valve, and the activator or button. The can is usually lacquered tinplate (steel with a layer of tin) and is made of two or three pieces of metal crimped together. Aluminum cans are also used, generally for more expensive products. The valve is crimped to the rim of the can, and the design of this component is important in determining the spray rate. The user depresses the activator to open the valve and release the product; the shape and size of the nozzle in the activator control the spread of the spray.
Two major health problems have been linked to aerosol cans: deliberate inhalation of the contents in order to gain a “high” from the propellant; and the piggybacking of more dangerous particles into the respiratory tract. Nevertheless, these devices remain in common use.
See also: Carson, Rachel; Dow Chemical Company; Occupational Safety and Health Administration (OSHA).
—Kenneth E. Hendrickson Jr.
Further Reading
Friedlander, Sheldon K. Smoke, Dust, and Haze: Fundamentals of Aerosol Behavior. New York: Wiley, 1977.
Gaffney, Jeffrey S., and Nancy A. Marley. Urban Aerosols and Their Impacts: Lessons Learned from the World Trade Center Tragedy. Washington, DC: American Chemical Society, 2006.
Hinds, William C. Aerosol Technology: Properties, Behavior, and Measurement of Airborne Particles. New York: Wiley, 1982.
Preining, Othmar E., James Davis, and Peter Brimblecombe. History of Aerosol Science: Proceedings of the Symposium on the History of Aerosol Science, Vienna, Austria, August 31–September 2, 1999. Vienna: Österreichische Akademie der Wissenschaften, 2000.
Twomey, S. Atmospheric Aerosols. Amsterdam: Elsevier Scientific Publishing Company, 1977.
Aerospace Industry
Aerospace comprises Earth's atmosphere and the space beyond it. The term is typically used, however, to refer to the industry that researches, designs, manufactures, operates, and maintains vehicles moving through air and space. Aerospace is a very diverse field, with many commercial, industrial, and military applications.
The field of aerospace has been investigated for centuries, but modern aerospace began with the first powered flight at Kitty Hawk on December 17, 1903, by the Wright Brothers. Since that time, aerospace has developed into one of the most diverse and fast-paced fields of today. From the hot-air balloons of the eighteenth century to the first wood-and-cloth plane of Wilbur and Orville Wright to the first manned mission to the moon on Apollo 11 to the new and exciting aircraft being developed by companies such as Boeing, Airbus, and Bombardier, aerospace has come a long way in a short time.
Aerospace manufacturing is an industry that produces aircraft, guided missiles, space vehicles, aircraft engines, propulsion units, and related parts, mostly for governments. In the European Union, aerospace companies such as EADS, BAE Systems, Thales, Dassault, Saab, and Finmeccanica account for a large share of the global aerospace industry and research effort, with the European Space Agency being one of the largest consumers of aerospace technology and products. In the People's Republic of China, Beijing, Xi'an, Chengdu, Shenyang, and Nanchang are major research and manufacturing centers, and China has developed a significant capability to design, test, and produce military aircraft, missiles, and space vehicles. India is likewise a major center; Hindustan Aeronautics Limited, the National Aerospace Laboratories, and the Indian Space Research Organisation are headquartered there. In Russia, large aerospace companies such as Oboronprom and the United Aircraft Building Corporation are among the major global players in this industry.
In the United States, the Department of Defense and the National Aeronautics and Space Administration (NASA) are the two biggest consumers of aerospace technology and products. The Bureau of Labor Statistics has reported that the aerospace industry employed 499,000 people in 2004. Most of these jobs were in Washington and California. The leading aerospace manufacturers in the world are Boeing, United Technologies Corporation, and Lockheed Martin Corporation.
Aerospace engineering is the branch of engineering concerned with aircraft, spacecraft, and related topics. Aeronautical engineering deals with craft that stay within Earth's atmosphere, while astronautical engineering deals with craft that operate beyond it; aerospace engineering embraces both. Modern flight vehicles must withstand severe conditions, such as large differences in atmospheric pressure and temperature and heavy structural loads applied to vehicle components, and numerous problems must be identified and solved during the design and manufacture of a flight vehicle. As a result, flight vehicles are usually the products of a complex mixture of technologies and sciences such as aerodynamics, avionics, materials science, and propulsion; the process of combining these various branches of knowledge is known as aerospace engineering.
NASA is the agency of the U.S. government responsible for the nation's public space program. It was established on July 29, 1958, by the National Aeronautics and Space Act. In addition to the space program, NASA is responsible for long-term civilian and military aerospace research. Since February 2006, NASA's self-described mission has been to "pioneer the future in space exploration, scientific discovery, and aeronautics research."
Following the Soviet space program’s launch of the world’s first human-made satellite, Sputnik, in October of 1957, U.S. attention turned more seriously toward its own embryonic space efforts. Congress, alarmed by the perceived threat to U.S. security and technological leadership, urged swift action. This led to the creation of NASA and the passage of the National Defense Education Act. NASA began operations on October 1, 1958, with eighty employees, one of whom was Wernher von Braun, a German rocket scientist, who became an American citizen after World War II. He is regarded today as the father of the U.S. space program. NASA’s earliest programs involved research into human space flight. The Mercury program was designed to discover if a man could survive in space. On May 5, 1961, astronaut Alan B. Shepard Jr. became the first American in space when he rode Freedom 7 on a fifteen-minute suborbital flight. John Glenn was the first American to orbit the earth on February 20, 1962, during a five-and-a-quarter hour flight on Friendship 7.
Project Gemini was initiated to solve problems related to a Moon mission. The first Gemini flight with astronauts on board was Gemini III, flown by Virgil "Gus" Grissom and John W. Young on March 23, 1965. Other missions followed, proving that long-duration human space flight and rendezvous and docking with another vehicle in space were possible. The Gemini program also gathered data on the effects of weightlessness on human beings.
The Apollo program was designed to land humans on the Moon and bring them safely back to Earth. Apollo 1 ended tragically when all three astronauts inside died in a fire in the command module during a prelaunch test. Because of this accident, there were several unmanned test flights before astronauts boarded spacecraft again. Apollo 8 and Apollo 10 tested various components while orbiting the Moon, and they returned with photographs. On July 20, 1969, Apollo 11 landed the first man on the Moon, Neil Armstrong. Apollo 13 did not land on the Moon because of a malfunction, but it did return photographs. The six missions that landed on the Moon returned a wealth of scientific data and some four hundred kilograms of lunar samples.
Skylab was the first space station the United States launched into orbit. The seventy-five-metric-ton station was in Earth orbit from 1973 to 1979, and crews visited it three times, in 1973 and 1974. Its crews carried out solar observations, Earth-resources surveys, and studies of the effects of prolonged weightlessness, but plans for its further use were curtailed by a lack of funding and interest. A space shuttle was due to dock with Skylab and boost it to a higher, safe altitude, but Skylab reentered the atmosphere and was destroyed in 1979, before the first shuttle could be launched.
The space shuttle became the major focus of NASA in the late 1970s and 1980s. It was planned to be frequently launched and mostly reusable, and four space shuttles had been built by 1985. The shuttle program had its problems: flights were much more expensive than projected, the Challenger disaster of 1986 highlighted the risks of space flight, and public interest waned. This marked a low point for NASA. Nonetheless, the shuttle has been used to launch milestone projects like the Hubble Space Telescope (HST). The HST was created with a relatively modest budget of $2 billion, has remained in operation for many years, and has delighted both scientists and the public; some of the images it has returned have become legendary. The HST is a joint project between NASA and the European Space Agency (ESA), and its success has paved the way for greater collaboration between the agencies.
In 1995 Russian-American cooperation began with the Shuttle-Mir program, in which American shuttles docked with Mir, a full-fledged Russian space station. This cooperation continues, with Russia and the United States the two biggest partners in the largest space station ever built, the International Space Station (ISS). The strength of this cooperation became even more evident when NASA began to rely on Russian launch vehicles to service the ISS following the 2003 Columbia disaster, which grounded the shuttle fleet for over two years. This disaster, which killed the crew of six Americans and one Israeli, triggered a serious reexamination of NASA's priorities.
Meanwhile, during the 1990s, NASA faced shrinking budgets, and under the leadership of its ninth administrator, Daniel Goldin, the agency pioneered the so-called faster, better, cheaper approach, which was supposed to cut costs while still delivering a wide variety of aerospace programs. The method was criticized and reevaluated following the loss of two Mars probes in 1999.
NASA's continuing investigations include in-depth surveys of Mars and Saturn and studies of the Earth and the Sun. NASA spacecraft are presently (2008) en route to Mercury and Pluto, with a mission to Jupiter in the planning stage. On January 14, 2004, President George W. Bush announced a new plan for NASA's future, dubbed the Vision for Space Exploration. Under this plan, humans would return to the Moon by 2018 and set up outposts as a potential resource for future missions; the space shuttle was to be retired in 2010 and replaced by the Orion program in 2014. The Orion vehicle would be capable of both docking with the ISS and leaving Earth orbit. The future of the ISS, however, remains uncertain: its construction will be completed, but planning beyond that is less clear.
NASA has been responsible for many successful space projects, including over 150 manned missions. Many of the most notable manned missions were part of the Apollo program, and the space shuttle program has also been a success despite the loss of two of the shuttles and the deaths of their crews. There have been numerous unmanned space missions as well, including at least one to each of the other planets in the solar system (except Pluto, which has since been reclassified as a dwarf planet). There has been considerable success with the Mars missions, and NASA remains the only space agency to have launched missions to the outer solar system beyond the asteroid belt. The Cassini probe, launched in 1997 and in orbit around Saturn since mid-2004, is investigating Saturn and its inner satellites.
See also: Globalization.
—Kenneth E. Hendrickson Jr.
Further Reading
Anderson, Robert. Through Turbulent Times. New York: Newcomen Society, 1984.
Biddle, Wayne. Barons of the Sky. New York: Simon & Schuster, 1991.
Bilstein, Roger E. The Enterprise of Flight: The American Aviation and Aerospace Industry. Washington, DC: Smithsonian Institution Press, 2001.
Bright, Charles D. The Jet Makers: The Aerospace Industry from 1945 to 1972. Lawrence, KS: Regents Press of Kansas, 1978.
Bromberg, Joan Lisa. NASA and the Space Industry. Baltimore, MD: Johns Hopkins University Press, 1999.
Fleming, Thomas J. Conquerors of the Sky. New York: Forge, 2003.
Pattillo, Donald M. Pushing the Envelope: The American Aircraft Industry. Ann Arbor, MI: University of Michigan Press, 1998.
Texier, François. Industrial Diversification and Innovation: An International Study of the Aerospace Industry. Cheltenham, UK: Edward Elgar Publishing, 2000.
AFL-CIO
In 1955 the two major American trade union organizations, the American Federation of Labor (AFL) and the Congress of Industrial Organizations (CIO), merged to become the American Federation of Labor and Congress of Industrial Organizations, which, because of the length of its name, is generally referred to as the AFL-CIO.
The two organizations had a long history of antagonism, with both seeking to have unions affiliate with them. This rivalry fractured the union movement, as employers sought to exploit the differences between the two groups to maximum advantage; when a union affiliated with one group called a strike, employers often sought to deal with the other. The AFL-CIO was formed largely to end this division. The merger took place during a period of prosperity in the United States, and for fifty years the AFL-CIO represented the vast majority of union members throughout the country.
In structure, the AFL-CIO is an entirely voluntary federation with which labor unions affiliate themselves. Affiliation offers them the general support of the trade union movement when in trouble, and the solidarity of the movement is often crucial to the success of any industrial action or threat of such action. The AFL-CIO therefore does not control the running of individual unions, some of which are strong and some of which have relatively few members. However, it specifically excludes unions that have "policies and activities [that] are consistently directed toward the achievement of the program or purposes of authoritarianism, totalitarianism, terrorism and other forces that suppress individual liberties and freedom of association." Some conditions concerning the running of individual unions were finally introduced in 2001, but these were largely noncontroversial, as they were chiefly intended to ensure better governance of the unions.
The first president of the AFL-CIO was George Meany (1894–1980). A Roman Catholic and a member of the Democratic Party, he was born in New York City and started work as an apprentice plumber at sixteen. He became involved in the Plumbers Local Union, and from 1934 until 1939 he was president of the New York State Federation of Labor. From 1940 until 1952 he was secretary-treasurer of the AFL, and for the following three years he was the last president of the AFL. He remained in charge of the AFL-CIO from the merger in 1955 until 1979. During this time the AFL-CIO supported the Democrats in presidential elections, except in 1972, when Meany felt that he could not in good conscience support George McGovern. Otherwise, the federation's network, the numbers of unionists on whom it could call, its infrastructure, and its money were very important in election campaigns.
When George Meany retired, he was replaced by (Joseph) Lane Kirkland (1922–1999). From South Carolina, Kirkland was also a Democrat; an Episcopalian and a master mariner by trade, he served as a deck officer in the U.S. merchant marine during World War II. He then worked in the Navy's Hydrographic Office and on the research staff of the AFL, and after the merger he became assistant director of the AFL-CIO's Department of Social Security. He was also a member of the President's Commission on Financial Structure and Regulation from 1970 until 1972. In 1992 the AFL-CIO supported Bill Clinton's presidential bid; indeed, it had supported him for many years and endorsed him soon after he offered himself as a candidate for the Democrats. Kirkland retired in 1995 and was succeeded as acting president by Thomas R. Donahue, who had worked as Meany's executive assistant and had played an important role in the AFL-CIO's support for the independent Polish trade union movement Solidarity.
The next president was John J. Sweeney, who held the office from 1995 until 2009. Sweeney, from the Bronx, was the son of Irish immigrants and worked as a clerk at IBM before joining the union movement with the International Ladies' Garment Workers' Union. During his presidency the AFL-CIO faced a major test when the New Unity Partnership was formed, uniting some of the major unions within the federation. The defeat of John Kerry in the 2004 elections caused the federation great anguish, and in 2005, as the AFL-CIO prepared for its fiftieth anniversary convention, three large unions announced that they would withdraw from it. The Service Employees International Union, the International Brotherhood of Teamsters, and the United Food and Commercial Workers International Union formed the Change to Win Federation, with the Laborers' International Union of North America and the United Farm Workers being members of both the AFL-CIO and the Change to Win Federation.
In 2009 Richard Trumka, an Italian American from Nemacolin, near Pittsburgh, Pennsylvania, was elected as the fifth president of the AFL-CIO. He became a miner when he was nineteen but left to study law and became a staff attorney with the United Mine Workers, leading them in the successful Pittston Coal Company Strike in 1989.
See also: Lockout; Trades Union Congress.
—Justin Corfield
AFL-CIO badge.
Further Reading
Buhle, Paul. Taking Care of Business: Samuel Gompers, George Meany, Lane Kirkland, and the Tragedy of American Labor. New York: Monthly Review Press, 1999.
Goulden, Joseph C. Meany. New York: Atheneum, 1972.
Nixon, Richard. The Memoirs of Richard Nixon. London: Sidgwick & Jackson, 1978.
Puddington, Arch. Lane Kirkland: Champion of American Labor. Hoboken, NJ: John Wiley and Sons, 2005.
Robinson, Archie. George Meany and His Times: A Biography. New York: Simon & Schuster, 1981.
Agent Orange
Manufactured mainly by Dow Chemical Company and Monsanto, the defoliant known as Herbicide Orange or Agent LNX was given the code name Agent Orange. It was used as a herbicide and defoliant in the chemical warfare program of the U.S. military during the Vietnam War. From 1960 the South Vietnamese and U.S. forces faced a Communist insurgency whose fighters were able to use bases in the jungle, and this led the U.S. military, as part of Operation Ranch Hand, to spray defoliants on the jungle areas where the Vietnamese Communists were based. In a related operation, Agent Blue was used to destroy food crops in areas controlled by the insurgents.
A total of twenty million U.S. gallons (seventy-six million liters) of chemical herbicides and defoliants were mixed with jet fuel and sprayed over jungle areas. The spraying led to a massive decline in agricultural production in the countryside, forcing many villagers to move to cities and causing the urban population of South Vietnam to rise from 2.8 million in 1958 to 8 million by 1972.
Agent Orange barrels at Johnston Atoll, Pacific, 1973. U.S. government photograph.
Agent Orange was used on some 3.1 million hectares—nearly 18 percent of the total forested area of Vietnam. It not only destroyed crops and jungle areas, it also killed or maimed up to four hundred thousand people according to current estimates by the Vietnamese government, and it resulted in up to five hundred thousand children born with birth defects. The wildlife in the areas attacked was devastated.
In addition to the Vietnamese casualties and the long-term ecological effects on large parts of Vietnam, many U.S. veterans later reported nerve problems, skin and respiratory disorders, and vastly increased rates of cancer, which led to extensive litigation.
—Justin Corfield
Further Reading
Đại, Lê Cao. Agent Orange in the Viet Nam War: History and Consequences. Hanoi: Vietnam Red Cross Society, 2000.
Griffiths, Philip Jones. Agent Orange: “Collateral Damage” in Vietnam. London: Trolley/Alpen Editions, 2004.
Schecter, Arnold, and Thomas Gasiewicz. Dioxins and Health. New York: Plenum Press, 1994.
Schuck, Peter H. Agent Orange on Trial: Mass Toxic Disasters in the Courts. Cambridge, MA: Belknap Press of Harvard University Press, 1986.
Wilcox, Fred. Waiting for an Army to Die: The Tragedy of Agent Orange. New York: Seven Stories Press/Vintage Books, 2011.
Airplane
An airplane is a heavier-than-air vehicle designed to use the pressures created by its motion through the air to lift and transport useful loads. Although airplanes exist in many forms adapted for diverse purposes, they all use power to overcome aerodynamic resistance, or drag, in order to achieve forward motion through the air. The air flowing over specially designed wing surfaces produces pressure patterns that depend upon the shape of the surface, the angle at which the air approaches the wing, the physical properties of the air, and the velocity. These pressure patterns, acting over the wing surface, produce the lift force necessary for flight.
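In the standard notation of aeronautical engineering, these dependencies are summarized by the lift equation:

\[
L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_{L}
\]

where L is the lift force, ρ is the density of the air, v is the airspeed, S is the wing area, and C_L is a dimensionless lift coefficient that gathers up the effects of the wing's shape and the angle at which the air meets it. Because lift grows with the square of the airspeed, doubling a wing's speed through the air roughly quadruples the lift it can produce.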
The major parts of a fixed-wing aircraft include the fuselage, which carries the crew, passengers, and cargo; a flight-sustaining wing system; stabilizing tail surfaces; attitude-control devices such as rudders; a thrust-providing power source; and a landing support system. Most aircraft also carry the following instruments: an airspeed indicator; an altimeter, which indicates altitude above mean sea level; an attitude indicator, which shows the orientation of the aircraft about its pitch and roll axes; a turn indicator, which helps the pilot keep the aircraft coordinated while turning; a rate-of-climb indicator, which shows the rate at which the aircraft is climbing or descending; and a horizontal situation indicator, which shows the position and movement of the aircraft, as seen from above, in relation to the ground.
Although credit for the invention of the powered, controllable, fixed-wing aircraft is usually assigned to the Wright brothers, several efforts to produce fixed-wing aircraft preceded them. Sir George Cayley (1773–1857), the founder of the science of aerodynamics, built and flew gliders as early as 1804, and he built his first passenger-carrying glider in 1853. In 1856, the Frenchman Jean-Marie Le Bris (1817–1872) achieved a successful glider flight by having his craft towed into the air by a horse. In 1889, American John J. Montgomery (1858–1911) made a controlled flight in a glider. Clément Ader (1841–1925) made a successful flight of fifty meters in 1890 and continued to experiment with flight throughout the decade. The Wright brothers made their first successful flights on December 17, 1903, and continued their work until, by 1905, they had produced a craft capable of fully controlled, stable flight for substantial periods of time. Thus, it is to the early twentieth century that we attribute the birth of the age of flight.
Until World War I, fixed-wing aircraft were thought to be little more than toys. However, their use in combat brought them serious attention, and in the 1920s they were shown to be capable of carrying passengers and cargo on very long journeys. Today, fixed-wing aircraft represent the major form of swift, economical transportation for civilians and are a vital offensive and defensive weapon for all modern militarized nations, despite the fact that they may soon be superseded by rockets.
See also: Bell, Alexander Graham; Bell, Larry; Boeing; Ford, Henry; Sikorsky, Igor.
—Kenneth E. Hendrickson Jr.
Further Reading
Cooksley, Peter C. The Men Who Changed the World: The Aviation Pioneers, 1903–1914. Stroud: Sutton, 2003.
Dee, Richard. The Man Who Discovered Flight: George Cayley and the First Airplane. Toronto: McClelland & Stewart, 2007.
Heppenheimer, T. A. First Flight: The Wright Brothers and the Invention of the Airplane. Hoboken, NJ: Wiley, 2003.
Tobin, James. To Conquer the Air: The Wright Brothers and the Great Race for Flight. New York: Free Press, 2003.
Akroydon
Akroydon was the "model village" planned by Sir Edward Akroyd (1810–1887), a worsted manufacturer in Halifax, Yorkshire, England. Akroyd had inherited James Akroyd and Sons, Ltd., from his father in 1847, and it soon became one of the largest worsted manufacturers in England; his mills were located at Haley Hill in Halifax and at Copley, two miles away. Akroyd, who was active in a large number of community organizations, was keen to improve the lives of his workers, and this led him to design and build a model village at Boothtown, Halifax, which was named Akroydon after him. He wanted his workers to live in better conditions than most industrial workers. The project began with Akroyd buying 62,435 acres (252.67 square kilometers) of land in 1855 and then drawing up plans to build 350 houses. The architect was George Gilbert Scott, who also designed All Souls' Church in the village. The houses themselves were planned in the Gothic style, to be similar to existing houses in Halifax. Because of clashes with the local Akroyd Town Building Association, only ninety houses were built.
See also: Fordlândia; Ashworth, Henry; Salt, Sir Titus.
—Justin Corfield
Sir Edward Akroyd
Further Reading
Brook, Roy. The Story of Huddersfield. London: MacGibbon & Kee, 1968.
Caffyn, Lucy. Workers’ Housing in West Yorkshire, 1750–1920. London: H.M.S.O., 1986.
Hargreaves, John A. “Akroyd, Edward (1810–1887).” Oxford Dictionary of National Biography. London: Oxford University Press, 2004.
History of James Akroyd and Son Ltd., privately published, 1874.
Alcorn, George Edward (1940– )
A physicist and assistant director for standards and excellence in applied engineering and technology at the Goddard Space Flight Center, George Edward Alcorn Jr. was born on March 22, 1940. He attended Occidental College in Pasadena, California, where he maintained an excellent academic record while earning eight letters in baseball and football. He graduated in 1962 with a BA in physics and earned a master’s degree in nuclear physics from Howard University in 1963. Four years later, he was awarded his PhD in atomic and nuclear physics from Howard.
After completing his PhD, Alcorn spent twelve years in industry as a senior scientist at Philco-Ford, a senior physicist at PerkinElmer, and an advisory engineer at IBM. In 1978, he left IBM to join NASA. In 1984, he earned a patent for an imaging x-ray spectrometer using thermomigration of aluminum, and for this achievement he was named NASA Inventor of the Year. At the same time he was serving as deputy project manager for advanced development, in which position he was responsible for developing new technologies needed for Space Station Freedom.
From 1990 to 1992, Dr. Alcorn's primary duties concerned the management of technology programs and the evaluation of technologies required by the space station program. He also managed the Space Station Evolution Program, which was designed to ensure that the space station developed properly over its thirty-year mission while incorporating new capabilities. Since 1992, Alcorn has served as chief of Goddard's Office of Commercial Programs, supervising programs for technology transfer, small-business innovation research, and the commercial use of space. In 1994 he managed a shuttle-flight experiment involving the Robot Operated Material Processing System (ROMPS), which manufactured materials in the microgravity of space.
In 1999, Alcorn was awarded Government Executive magazine’s prestigious Government Technology Leadership Award for the development and commercialization of the Airborne Lidar Topographical Mapping System (ALTMS). In 2001, he received special congressional recognition by Congressmember Donna M. Christian-Christensen (D-VI) for his efforts in helping Virgin Islands businesses through the application of NASA technology and knowledge of technology programs.
Alcorn has been extensively involved in community service. In 1984, he was awarded a NASA-EEO Medal for his contributions in recruiting minority and women scientists and engineers and his assistance to minority businesses in establishing research programs. He is a founder of Saturday Academy, which is a weekend honors program designed to supplement and extend math and science training for inner-city students in grades six to eight. He also works with the Meyerhoff Foundation, whose goal is to encourage and support African American males interested in pursuing doctorates in science and engineering.
See also: Aerospace Industry; Industrial Robotics; Nuclear Power; Patents.
—Kenneth E. Hendrickson Jr.
Further Reading
Alcorn, George E., and Henry H. Plotkin. “New Markets for Advanced Space Systems.” Aerospace America 3, no. 11 (1996): 32.
Alger, Horatio Jr. (1832–1899)
An American author who wrote large numbers of boys' adventure stories, Horatio Alger produced books that were largely "rags to riches" tales of teenagers who made their fortunes during the industrialization of the United States after the American Civil War. Many of them served as an inspiration to boys throughout the United States and in many other countries around the world, and a number of biographers have described their subjects as "Horatio Alger figures" epitomizing the "rags to riches" scenario.
Horatio Alger Jr. was born on January 13, 1832, in Chelsea (now Revere), Massachusetts, the son of Horatio Alger, a Unitarian minister, and his wife, Olive (née Fenno). His parents were both strict Puritans. When he was ten, Horatio Jr. went to Gates Academy in Marlborough, Massachusetts, and at sixteen he entered Harvard University. There he studied under the famous poet Henry Wadsworth Longfellow and hoped to become a poet himself. After completing university, he taught in schools, but he soon gave this up to return to Harvard to study religion at the Harvard Divinity School. Around the outbreak of the American Civil War he went to Europe for a ten-month tour, and after his return he was appointed a Unitarian minister at Brewster, on Cape Cod, in December 1864. Soon afterward he was accused of misconduct involving two boys, and he left for New York City.
In New York he helped at the Newsboys' Lodging House and studied the lives of impoverished children in the big city. He started writing for the magazine Student and Schoolmate, in which his first major character, "Ragged Dick," appeared. The popularity of the stories led to their publication in book form, followed by the Luck and Pluck series from 1869 and Tattered Tom from 1871. Over the next thirty years he wrote more than 120 popular boys' stories, and more than twenty million copies of his books were sold.
Essentially, the plots of many of Horatio Alger's stories are similar. They begin with a poor young boy who is struggling to escape the poverty in which he finds himself. Sometimes this poverty is the result of "mistakes" by his parents; at other times the boy comes from a family that has been poor for generations. The boy strives to do his best and, imbued with the Protestant work ethic, works hard and, by avoiding alcohol, gambling, and other vices, makes his way in life. The background is usually the grinding poverty of the backstreets of New York or another major U.S. city at the time of industrialization. For the most part, the boy betters his position through his own hard work and thrift. Often he tries to complete his education, but his attempts fail; this forces him into the workforce while young, and from there he works hard. Hard work alone, however, does not explain the boy's success. Often he performs some selfless act, such as returning stolen goods or saving a child from an overturned cart, which attracts the attention of a wealthy man who then helps the boy achieve his goals. For the most part, the boys themselves do not end up rich. The benefactors help many of them into middle-class jobs, and they are able to establish careers, with the implication that they will in turn help other boys from the same humble backgrounds.
Horatio Alger did not make his own fortune through writing. His books sold well, but Alger used much of the money he made to help some of the poor boys with whom he conversed, boys who often provided the stories for his novels. He later lived with, and on more than one occasion was cheated by, some of these boys. He died on July 18, 1899, at Natick, Massachusetts, and was buried at Glenwood Cemetery, South Natick. A large number of his books were published posthumously. His sister burned all his papers, but in 1928 Herbert R. Mayes published a "biographical novel" of Alger's life, which he said was based on the author's diaries and some surviving letters. In 1961, however, Frank Gruber was able to show that the diaries did not exist, and Mayes accepted this, even conceding that the letters he cited were also invented.
See also: Dickens, Charles; Gaskell, Elizabeth.
—Justin Corfield
Horatio Alger
Further Reading
Gardner, Ralph D. Horatio Alger, or the American Hero Era. Mendota, IL: Wayside Press, 1964.
Nackenoff, Carol. The Fictional Republic: Horatio Alger and American Political Discourse. New York: Oxford University Press, 1994.
Scharnhorst, Gary. Horatio Alger Jr. Boston: Twayne Publishers, 1980.
Scharnhorst, Gary, and Jack Bales. Horatio Alger Jr.: An Annotated Bibliography of Comment and Criticism. Metuchen, NJ: Scarecrow Press, 1981.
———. The Lost Life of Horatio Alger Jr. Bloomington: Indiana University Press, 1985.
Tebbel, John. From Rags to Riches: Horatio Alger and the American Dream. New York: Macmillan, 1963.
All-China General Labour Federation
The major body coordinating the trade union movement in China, the All-China General Labour Federation was established in 1922, and three years later it became the All-China Federation of Trade Unions. Initially the Federation supported, and was supported by, the Nationalist Kuomintang Party, which had been established by Dr. Sun Zhongshan (Sun Yat-sen) and, after his death in 1925, was led by his successor, Chiang Kai-shek. The Kuomintang later came to regard the Federation as too radical, but after the defeat of the Nationalists in the Chinese Civil War in 1949, the Federation remained in operation in the People's Republic of China.
When the All-China General Labour Federation was established in May 1922 under the chairmanship of Deng Zhongxia, trade unionism was rare. Most of the major factories in China were located close to or within the Treaty Ports, essentially foreign-run enclaves along the coast of China. There, Western powers persecuted trade unions and trade unionists. By contrast, the Kuomintang at this stage emerged as the supporters of the Chinese trade union movement, which, as a result, was stronger in southern China where the Kuomintang initially drew so much of its support.
On May 1, 1925, Deng Zhongxia stood down as the chairman of the Federation at its second conference, and Lin Weimin was elected. On that day, the Federation also changed its name and transformed itself into the All-China Federation of Trade Unions to continue to act as the body coordinating the trade union movement in China. The Third Congress was held in May 1926, in Canton, and Su Zhaozheng was elected the chairman. He had been active in organizing the general strike of seamen in Hong Kong in 1922. At the same time, Kuo Liang, a Communist from Hunan who had been active in building up a railway union in 1922, was elected to the Central Committee of the Federation. Kuo was one of those who saw the Federation as a possible vehicle not just for trade unionism—the railway union had been crushed in 1923—but also for political change in China through an alliance between it and the Communists.
During the Great Northern Expedition of 1926, the Kuomintang worked with the Communists but broke with them in the following year. With the defeat of the Zhang Zuolin (Chang Tso-lin) "Northern" government in 1928, the Kuomintang found itself in control of much of China, and on a regional basis some of its leaders started to work with Western interests against the trade unions.
In November 1929, Su Zhaozheng died from illness, and during the late 1930s Xiang Ying became the new chairman, a post he held until August 1948. Xiang Ying had been in charge of the rearguard action during the Long March, remaining in Jiangxi (Kiangsi), where he had to provide cover while Mao and the others escaped. During his long chairmanship, the Federation faced many restrictions under the Nationalist government of Chiang Kai-shek, but it survived. In August 1948 the Communist leader Liu Shaoqi became honorary chairman, and the chairmanship passed to Chen Yun, a former typesetter in Shanghai who had worked under Liu, taken part in the Long March, and later served as first vice premier of the country.
After the proclamation of the People’s Republic of China in 1949 by Mao Zedong, the Federation started to play an important, if often symbolic, role in the new Communist government, initially with Liu Shaoqi, now the vice chairman of the People’s Republic, as honorary chairman of the Federation until December 1957, just before the start of the Great Leap Forward.
Although he supported the Federation, and at that time also supported Liu Shaoqi, Mao recognized its potential power, and the Federation did not hold a congress between 1957 and October 1978, two years after Mao's death. From 1953 until 1958 its actual chairman (as opposed to its honorary one) was Lai Ruoyu, who was succeeded by Liu Ningyi, who held the office from 1958 until 1966. The Federation then effectively fell into abeyance during the Great Proletarian Cultural Revolution. In October 1978 Ni Zhifu was appointed chairman, remaining in the post until October 1993; he supported the rise of Deng Xiaoping and the modernization of the Chinese economy.
In October 1993 he was succeeded by Wei Jianxing, a mechanical engineer who had joined the Communist Party in 1943 at the age of twelve. In December 2002, Wang Zhaoguo became the chairman of the Federation. Born in 1941, he had joined the Communist Party in 1965 and also had become closely associated with Deng Xiaoping.
In 2008 the All-China Federation of Trade Unions, the sole national trade union federation for workers in the People’s Republic of China, had 193 million members through affiliated trade unions, making it the largest trade union movement in the world.
See also: AFL-CIO; American Federation of Labor; Trades Union Congress.
—Justin Corfield
Further Reading
Chesneaux, Jean. The Chinese Labor Movement, 1919–1927. Stanford, CA: Stanford University Press, 1968.
Lai To Lee. Trade Unions in China: 1949 to the Present. Singapore: Singapore University Press, 1986.
Wilbur, Clarence Martin. The Nationalist Revolution in China, 1923–1928. Cambridge: Cambridge University Press, 1984.
Alternative Energy
The term alternative energy was coined to cover sources of energy intended to reduce or replace the use of fuels implicated in the destruction of the environment or of individual ecosystems. More generally, it can also apply to the gradual replacement of energy sources over time, a process that saw coal replace wood, then the increased use of charcoal, and gradually the introduction of whale oil and then petroleum.
Generally the search for alternative energy has been associated with the need to reduce society's dependence on fossil fuels. Some of this has to do with the obvious environmental impact of burning coal, such as the smog in major cities around the world that caused tens of thousands of deaths through pollution and led to restrictions on the burning of coal in cities such as Paris and London. These moves coincided with the emergence of easily available and cheaper electricity. There were also problems associated with the factories constructed across Western Europe and North America, and later elsewhere, during the Industrial Revolution, which left areas of the countryside polluted by haze. Again, the use of electricity and the siting of power stations away from urban areas were seen as a solution, although because much electricity was still generated by burning fossil fuels elsewhere, some did not regard this as a satisfactory answer to the problem.
The increasing use of oil around the world created other problems, involving the transportation of petroleum and the real risk of major fires, and the spread of the automobile made many countries reliant on oil consumption, for road transport and also for shipping. The oil price rises of 1973 to 1974, and again in 1979, led the government of the United States and those of other countries to try to reduce this reliance on oil and to seek alternative energy sources. This coincided with the development of nuclear power, but because of the controversy over the disposal of nuclear waste, nuclear power is not usually regarded as an alternative energy. Similarly, there have been environmental concerns over the production of biofuels, which has led to the destruction of tropical rain forests in Brazil, Indonesia, and other countries.
With growing awareness of global warming and climate change during the 1990s, there was an increased focus on developing alternative energies that would reduce the output of carbon dioxide and other greenhouse gases. For the most part, research into alternative energy sources has led to the development of power generation from renewable sources.
The major renewable sources include water from rivers or tides, sunlight, and wind, all of which have been used since ancient times. Water powered mills and was the original source of power for machines during the early years of the Industrial Revolution in Europe and North America, when most of the early factories were located along rivers. Sunlight has been used in many places, especially in the Middle East but also elsewhere in summer, to bake bricks and pots, and wind has long driven windmills. The modern harnessing of alternative energy draws on tidal power, sunlight, and wind power, and these sources now provide power that can be distributed elsewhere through connection to electricity grids.
The use of particular forms of alternative energies usually depends on the particular circumstances within individual countries. Some countries have been able to harness hydroelectric power with great effect. However, other countries that lack suitable rivers are unable to do so. Similarly, there are many tropical countries that have been able to make great use of solar power, and in some countries such as Australia, government incentives encourage people to use solar power for their own homes and businesses.
A major area of research in recent years has been wind farms. In some ways these are similar to old-fashioned windmills, and Denmark has been heavily involved in the research and manufacture of equipment for harnessing wind power. The major difference is that the turbines now produce power for the electricity grid, not just for running machines located near the windmills themselves. The sheer number of turbines needed to generate power, however, has drawn criticism on aesthetic grounds, as well as complaints about the sound of some of the machines operating through the night. There have also been problems with bird life, although these are relatively minor. Some of the criticism of land-based wind farms has led to the development of floating wind farms that can be used in coastal areas. Norway has been involved in developing floating wind farms, and in Portugal designs of machines for harnessing wave power have also been tested.
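The need for large numbers of turbines follows from the physics of the resource. The power a single machine can draw from the wind passing through its swept area A is commonly expressed as

\[
P = \tfrac{1}{2}\,\rho\,A\,v^{3}\,C_{p}
\]

where ρ is the air density, v the wind speed, and C_p the fraction of the wind's power the turbine actually captures, which cannot exceed the theoretical Betz limit of roughly 59 percent. Because output varies with the cube of a fluctuating wind speed and only part of it can be extracted, each turbine delivers a modest and variable amount of electricity, so many machines grouped into farms are required to make a meaningful contribution to the grid.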
The countries that have placed the most focus on alternative technologies have been either those with few energy resources of their own or those keen to reduce their use of fossil fuels, whether for environmental reasons or to reduce their dependency on imported fuel.
See also: Gas-Powered Fuel Cell; Solar Cell.
—Justin Corfield
Further Reading
Godoy Simões, M., and Felix A. Farret. Renewable Energy Systems: Design and Analysis with Induction Generators. Boca Raton, FL: CRC Press, 2008.
Goswami, D. Yogi. Alternative Energy in Agriculture, 2 volumes. Boca Raton, FL: CRC Press, 1986.
Kutz, Myer, ed. Environmentally Conscious Alternative Energy Production. Hoboken, NJ: Wiley, 2007.
Schlager, Neil, and Jayne Weisblatt, eds. Alternative Energy, 3 volumes. Detroit: UXL, 2006.
Walisiewicz, Marek. Alternative Energy. London: Dorling Kindersley, 2002.
Altshuller, Genrikh Saulovich (1926–1998)
Genrikh Saulovich Altshuller was a Russian engineer, scientist, and writer who is credited with creating a theory of solving inventive problems. He was born on October 15, 1926, in Tashkent, Uzbekistan, soon after the end of the Russian Civil War in the region. He later worked as a clerk in a patent office, where he tried to develop new generic rules about what makes ideas "new" and, hence, "patentable." This became the Teoriya Resheniya Izobretatelskikh Zadach ("Theory of Solving Inventive Problems"), or the "Theory of Inventive Problem Solving" (TIPS).
When Altshuller was only fifteen, he received his first inventor's certificate, acknowledging him as the inventor of an underwater apparatus; the Soviet government issued such certificates rather than patents, since a patent would have constituted privately owned property. Much influenced by the events of World War II, Altshuller, at the age of twenty, came up with a method of escaping from an immobilized submarine without diving equipment. This led to a job with the Soviet navy in Baku, Azerbaijan, in the Department of Inventions Inspection for the Caspian Sea fleet, where his task was to document and analyze new inventions suggested by people in the navy and elsewhere.
Altshuller suffered arrest and political imprisonment under Joseph Stalin. After his release in 1954, he decided to live permanently in Baku. By 1969 he had, according to his own account, reviewed forty thousand patent and new-design applications in order to understand the system of patents. By this time he was working as a journalist and essayist and writing science fiction published under the pseudonym "Genrikh Altov." He wrote several books in collaboration with his wife, Valentina Zhuravleva. His first work of science fiction was published in 1958, and his last was published posthumously by his wife in 2002.
From 1946 until 1971 Altshuller worked on what he called the “Forty Inventive Principles,” in which he tried to explain the philosophical underpinning for an invention. The observation he later made was that “inventing represents the removal of a technical contradiction with the help of certain principles.” Although this applied to the engineering field, Altshuller also felt that this would cover many other fields, even nontechnical ones. This in turn led to the Theory of Solving Inventive Problems (TRIZ). The concept, outlined in a number of books and journal articles by Altshuller, attracted followers who applied it to areas of medical and biomedical research, as well as styles of business management and computer programming. From 1959 until 1985, Altshuller worked on the Algorithm of Inventive Problem Solving (ARIZ). Other projects that occupied his time included natural effects/scientific effects (1970–1980), substance-field analysis (1973–1981), separation principles (1973–1985), patterns of evolution (1975–1980), and 76 Standard Solutions (1977–1985).
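In practice, Altshuller’s followers operationalize this idea as a lookup: state the parameter to be improved, state the parameter that worsens as a result, and consult a matrix of which inventive principles have historically resolved that pairing. The sketch below is a minimal, hypothetical illustration in Python; the principle names echo Altshuller’s list, but the matrix entries shown are placeholders rather than the published contradiction matrix.

```python
# Toy illustration of a TRIZ-style contradiction lookup.
# The principle names below are drawn from Altshuller's list, but the
# matrix entries are placeholder values, not the published matrix.

PRINCIPLES = {
    1: "Segmentation",
    15: "Dynamization",
    35: "Parameter change",
    40: "Composite materials",
}

# (parameter to improve, parameter that worsens) -> suggested principles
CONTRADICTION_MATRIX = {
    ("weight of moving object", "strength"): [1, 40, 15],
    ("speed", "energy consumption"): [35, 15],
}

def suggest_principles(improving, worsening):
    """Look up which inventive principles to try for a given contradiction."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [PRINCIPLES[n] for n in numbers]

print(suggest_principles("weight of moving object", "strength"))
# ['Segmentation', 'Composite materials', 'Dynamization']
```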
It was not until the 1970s that a TRIZ movement developed among Russian technicians and engineers. By this time Altshuller was the acknowledged founder of the movement and its intellectual leader. There were TRIZ congresses and people who became TRIZ practitioners. Altshuller became a founding member and the president of the Russian TRIZ Association. Many members of the association have become prominent teachers and thinkers of the movement, and the TRIZ movement became popular both in Russia and also overseas. For many years Altshuller published articles on TRIZ in a popular Soviet science magazine, Izobretatel i Ratsionalizator (“Inventor and Innovator”).
With the problems in Azerbaijan that followed the breakup of the Soviet Union in 1991, Altshuller moved to Petrozavodsk in Karelia, in northwestern Russia, along with his wife and his granddaughter. It was not long before Petrozavodsk became the headquarters of the TRIZ Association. Genrikh Altshuller died on September 24, 1998, from complications from Parkinson’s disease. He is remembered by the TRIZ Association; the Anti TRIZ-Journal, published from February 2002; and also by the Altshuller Prize awarded for outstanding contributions to TRIZ each year from 2006.
—Justin Corfield
Further Reading
Altshuller, Genrich. Innovation Algorithm. Worcester, MA: Technical Innovation Center, 1973.
———. Creativity as an Exact Science. New York: Gordon and Breach, 1984.
Altshuller, Genrich, and H. Altov. And Suddenly the Inventor Appeared: TRIZ, the Theory of Inventive Problem Solving. Worcester, MA: Technical Innovation Center, 1996.
Pala, Surya, and A. Srikanth. TRIZ: A Framework for Innovation: Concepts and Cases. Punjagutta, Hyderabad, India: Icfai University Press, 2005.
Amalgamated Society of Engineers
This organization was founded in 1851 from the merger of a number of smaller engineering unions covering builders, carpenters, and iron founders. The society rejected Chartism and supported Robert Owen, urging gradual political change. In practice it recruited only skilled workers, since members had to pay one shilling per week to join. The subscriptions allowed the society to maintain a well-organized headquarters in London and full-time paid staff, as well as the funds to fight employers and support its members during strikes. In spite of its relatively low membership, the Amalgamated Society of Engineers became a model for later craft unions. Soon after it was established, in 1852, its members faced a national lockout; weakened, the society survived and fought another national lockout in 1897–1898.
The society’s first general secretary was William Allan; John Burnett succeeded him in 1875. The socialist John Burns (1858–1943), later a Liberal member of Parliament, was among the society’s best-known members. George Nicoll Barnes (1859–1940), a Scottish leader of the Labour Party, became general secretary in 1896. Jenkin Jones was elected in 1909 and was replaced by Robert Young (1872–1957) in 1912. In 1919 Tom Mann (1856–1941) became general secretary, and he led the amalgamation that in 1920 created the Amalgamated Engineering Union.
—Justin Corfield
Further Reading
Hyman, Richard. Workers’ Union, 1898–1929. Oxford: Oxford University Press, 1971.
Robinson, Thomas Hoben. The Amalgamated Society of Engineers, 1851–1892: A Mid-Victorian “Model” Trade Society. Chicago: Amalgamated Society of Engineers, 1936.
American Federation of Labor
The American Federation of Labor (AFL) was founded on December 4, 1886, at Columbus, Ohio, under the leadership of Samuel Gompers, and it was the first lasting federation of labor unions in the United States. Prior to the foundation of the AFL, many trade unions had come together in the Knights of Labor, which helped coordinate many strikes. The Knights of Labor declined rapidly as employers combined to fight organized labor, damaged by the backlash that followed the Haymarket Affair in Chicago in 1886 and by the failure of the Great Southwest Railroad Strike in the same year.
The major difference between the AFL and the Knights of Labor was that the former ensured the autonomy of each trade union that was affiliated to it, and only allowed trade unions to join—the Knights of Labor had allowed some small employers to become members. Gradually, many of the trade unions that had been disenchanted with the Knights started to join the AFL, and the Knights ceased to be a major force in unionism in the United States.
Initially the AFL was dominated by unions catering to skilled workers. It did not formally discriminate against African American workers, but few of them belonged to the unions that first affiliated with the AFL. This changed in 1895, when the AFL admitted the International Association of Machinists, which did discriminate against African American workers. This opened the door to unions that supported segregation, and it was not long before the AFL started supporting new laws, including literacy tests, designed to reduce the number of unskilled migrants coming to the United States from southern and eastern Europe. In 1901 the AFL began lobbying Congress to reauthorize the Chinese Exclusion Act of 1882. By that time the Knights of Labor had ceased to be a major force, and the AFL had become the only major national union body in the United States. The United Mine Workers, the International Ladies’ Garment Workers’ Union, and the United Brewery Workers all joined the AFL, although the craft unions still retained much of the power in the organization.
Gradually in the 1900s the AFL took on the task of healing rifts between unions, often with the threat of expulsion from the organization, and it also began chartering “federal unions” that brought together workers who had otherwise been excluded from the trade union system. Demarcation disputes continued, however, with both the Teamsters and the Brewers wanting to represent drivers of beer trucks, and the Machinists and the International Typographical Union each claiming the sole right to represent employees in print rooms. In 1913 the AFL expelled the Carriage, Wagon and Automobile Workers Union, which quickly faded as a force in that industry and was later disbanded.
In the labor disputes that followed World War I, the AFL was at the forefront. By this time it had been involved in politics for more than twenty years, with supporters elected to city and town councils, and it also backed many state and federal politicians. However, it had started to transform itself. Gompers, who with Peter J. McGuire had helped establish the federation and who used the slogan “reward your friends and punish your enemies,” died in 1924, and the AFL then became more conservative. Unlike the Trades Union Congress in Britain and trade union movements elsewhere in the world, it had assiduously avoided establishing or supporting a labor party, and it distanced itself from the Socialist Labor Party of Daniel De Leon. Instead it worked within the existing political system to seek a legislative framework that would support the union movement. In 1924, however, the AFL broke from this tradition on one occasion, openly endorsing the presidential bid of Robert M. La Follette Sr.; thereafter it generally supported the Democratic Party, although many individual union leaders supported the Republicans.
Samuel Gompers led the AFL from 1886 until 1924, except for a brief interval in 1894–1895 when John McBride was president. After Gompers died, William Green took over the presidency and held it until 1952. During his tenure there were many changes: the boom of the 1920s gave way to the Great Depression, and the New Deal of Franklin D. Roosevelt brought major changes in industrial relations and labor law, including the National Industrial Recovery Act of 1933 and the National Labor Relations Act two years later. The main problem faced by the AFL during this period, however, was the militancy of John L. Lewis of the United Mine Workers. Lewis formed the Congress of Industrial Organizations (CIO), which was then expelled from the AFL, and he led a battle to win unions away from the AFL. The AFL continued to gain new members and retained an edge over the CIO, but it was badly hurt by its inability to represent all the unions, although it kept close ties to Democratic Party machines in many major cities. It tried to block the Taft-Hartley Act of 1947, which restricted the power of labor unions. In 1952 George Meany took over as president of the AFL, and three years later, on December 4, 1955, the AFL and the CIO merged to form the AFL-CIO, with Meany as president.
See also: Collective Bargaining; Trades Union Congress.
—Justin Corfield
Badge of the American Federation of Labor.
Further Reading
Buhle, Paul. Taking Care of Business: Samuel Gompers, George Meany, Lane Kirkland, and the Tragedy of American Labor. New York: Monthly Review Press, 1999.
Foner, Philip S. History of the Labor Movement in the United States, Vols. 1–9. New York: International Publishers, 1947–1991.
Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. Ithaca: Cornell University Press, 1984.
Kaufman, Stuart Bruce. Samuel Gompers and the Origins of the American Federation of Labor, 1848–1896. Westport, CT: Greenwood Press, 1973.
Mandel, Bernard. Samuel Gompers: A Biography. Yellow Springs, OH: Antioch Press, 1963.
Taft, Philip. The A.F. of L. in the Time of Gompers. New York: Harper & Brothers, 1957.
———. The A.F. of L. from the Death of Gompers to the Merger. New York: Harper & Brothers, 1959.
American Plan
The American Plan was the term that many employers in the 1920s used to describe their policy of refusing to negotiate with unions. The policy promoted union-free open shops. As a result, union membership shrank from 5 million in 1920 to about 3.6 million in 1923.
In terms of United States labor relations, an open shop is a place of employment where workers are not required to join or financially support a labor union as a condition of hiring or continued employment. Open shops are required by law in right-to-work jurisdictions and with employers such as the federal government. In contrast, a closed shop is one in which all employees must be members of a labor union before being employed, and a union shop is one in which an employee must join the union in order to retain employment. Finally, an agency shop is one in which employees must pay dues or an equivalent fee to the union but may not be required to join it. The open shop and the union shop are the labor arrangements permitted by the National Labor Relations Act and the Railway Labor Act.
The open shop was the slogan adopted by employers in the United States in the first decade of the twentieth century in their attempt to drive unions out of the construction industry. Construction craft unions, then and now, rely on controlling the supply of labor in particular trades and geographic areas as a means of maintaining union standards and establishing collective bargaining relations with employers in that field. In order to do that, construction unions, and other unions whose members work on transitory jobs of relatively brief duration, must require that employers hire only their members. Otherwise, employers can effectively undercut many of the gains, such as the eight-hour day, that unions have achieved over the past several decades.
The open shop was also a key component of the American Plan introduced in the 1920s, when employers attempted to reverse the gains made by unions during World War I. In that era the open shop was not only aimed at construction unions but also at unions in mass-production industries. The open shop represented not only the right to discriminate against union members in employment but also a steadfast opposition to collective bargaining of any sort.
U.S. labor law outlaws the open shop in this extreme form, in that it prohibits private-sector employers from refusing to hire workers because they are union members. This remains important in the construction industry, in which employers frequently impose obstacles to keep union members out of their workforces. For their part, construction unions have used the law as a weapon against employers by sending union members to apply for work with a nonunion contractor in the hope that the employer will either refuse to hire them, thereby exposing itself to potentially significant financial liability for this illegal practice, or will hire them, thus giving the union a foothold from which to attempt to organize the contractor’s workforce.
In its milder form, in which the open shop only represents an employer’s refusal to favor union members for employment, the open shop is legal. While the National Labor Relations Act permits construction employers to enter into prehire agreements, in which they agree to draw their workforces from a pool of workers dispatched by the union, employers are under no legal compulsion to enter into such agreements.
Nonunion construction employers have adopted the phrase merit shop to describe their operations. Unions see this as merely a code word for the extreme form of the open shop. Merit shop employers also make efforts to establish their own apprenticeship programs.
The open shop is also legal in those states that have adopted right-to-work laws. In those cases employers are banned from enforcing union security arrangements and may not fire an employee for failure to pay dues under a union security clause that might be legal in another jurisdiction.
See also: American Federation of Labor; Capitalism; Fair Labor Standards Act of 1938; Lewis, John L.; New Deal; Paterson Silk Strike of 1913.
—Kenneth E. Hendrickson Jr.
Further Reading
Dubofsky, Melvyn, and Foster Rhea Dulles. Labor in America: A History. New York: Crowell, 1966.
Dunn, Robert W. The Americanization of Labor: The Employers’ Offensive Against the Trade Unions. New York: International Publishing, 1927.
Rayback, Joseph. A History of American Labor. New York: Macmillan, 1959.
Romero, Federico. The United States and the European Trade Union Movement, 1944–1951. Chapel Hill, NC: University of North Carolina Press, 1991.
American Stock Exchange
The American Stock Exchange, known as AMEX, is located in New York City and traces its origins back to before the American War of Independence. It began as an informal meeting place where people would gather to trade stock, exchanging news about the success or otherwise of the companies whose shares were being traded. Gradually it became the central venue where stockbrokers would meet to trade securities and stock. It was only formalized in 1842, when the brokers met on the sidewalk at Broad Street, near Exchange Place.
With the establishment of the New York Stock Exchange (NYSE), AMEX found its market by dealing largely in stock that was not important enough to be listed on the NYSE. Unlike the NYSE, the American Stock Exchange also allowed many more people to buy and sell stock through its brokers; there are countless stories of relatively poor migrants and children of migrants who came to speculate on small amounts of stock. Trading also took place on Saturday mornings, although many of the Jewish brokers did not attend those sessions.
Gradually more and more people met to buy and sell stock, but trading continued on the street or through the open windows of adjoining offices. All trading at what was still known as the New York Curb Exchange was done on the street until June 27, 1921, when the market finally moved indoors, occupying the art deco building at 86 Trinity Place, Manhattan (constructed at a cost of $1.2 million), where it is still located. That building was declared a National Historic Landmark in 1978. Suggestions about moving inside had been made as early as the 1870s, when E. S. Mendels raised the idea, which would certainly have obviated the need to clear the road of snow on cold winter mornings. The move indoors was eventually organized by Edward R. McCormick, the Curb president from 1914 until 1923. One curious aspect of the design of the indoor trading floor was that posts resembling street lights were erected in the hall, reminding people of the market’s origins.
Trading on the street meant that clerks sat in nearby offices, mainly in the Mills Building (since demolished), waving and gesticulating to the brokers who roamed the pavement outside dressed in colored jackets. It became one of the sights of New York, but when trading finally moved indoors in 1921, Thomas Cook and a few other brokers continued to maintain an outdoor market, until gradually, one by one, they decided to move indoors. The move in 1921 also showed a new confidence in spite of a fall in the prices of many shares on the NYSE in the previous two years, a decline that many Curb shares had defied. In 1953 the exchange was officially renamed the American Stock Exchange, but in spite of this it continued to be known as “the Curb,” with the more famous New York Stock Exchange referred to by the Curb traders as “the Stock Exchange.”
Because of the greater range of shares traded there, many have found it easier to make fortunes on AMEX, and many of its traders have been Irish, Eastern European Jewish, and Italian, graduates of New York, Boston, and Duquesne universities rather than of Harvard, Yale, and Princeton. A number of NYSE traders have also served their apprenticeships at AMEX. Over time, the nature of the securities traded on the American Stock Exchange has changed.
During the 1920s the Curb had no index comparable to the Dow Jones average at the NYSE, so the crash was not as immediately noticeable in its statistics. David Page was president of the Curb during the bull market from 1925 until 1928, and William Muller had the task of running it during the Wall Street crash. The volume of trade reached a peak of 476 million shares in 1929, then plummeted to 222 million in 1930, 110 million in 1931, and 57 million in 1932; the 1929 peak was not surpassed until 1961, when 488 million shares were traded. Another measure of the Great Depression was the fall in the price of brokers’ seats on the exchange, from between $150,000 and $254,000 in 1929 to a low of $70,000 in 1930, $16,500 in 1932, and $7,000 by 1939.
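Expressed as year-over-year percentage declines, the collapse in trading volume is striking. The short Python calculation below uses only the volume figures quoted above:

```python
# Curb trading volumes cited above, in millions of shares.
volumes = {1929: 476, 1930: 222, 1931: 110, 1932: 57}

years = sorted(volumes)
for prev, curr in zip(years, years[1:]):
    drop = (volumes[prev] - volumes[curr]) / volumes[prev] * 100
    print(f"{prev} to {curr}: down {drop:.0f}%")
total = (volumes[1929] - volumes[1932]) / volumes[1929] * 100
print(f"1929 to 1932: down {total:.0f}% overall")
# 1929 to 1930: down 53%
# 1930 to 1931: down 50%
# 1931 to 1932: down 48%
# 1929 to 1932: down 88% overall
```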
During the presidency of Howard Sykes from 1932 to 1934, there was much infighting and criticism from the government of Franklin Delano Roosevelt. E. Burd Grubb took over for a year and managed to restore some confidence. Under Edward T. McCormick, the official name was changed to AMEX. However, a scandal surrounding broker Jerry Re saw his brokerage company Re, Re and Sagarese expelled from AMEX and Re himself indicted on charges of stock manipulation.
In recent times, one of the most traumatic events at the American Stock Exchange was the crash of 1987, when the market was overwhelmed by people desperate to sell and the management only narrowly managed to keep the exchange trading. Although the crash in stock prices was worldwide, AMEX was hit more savagely because of the nature of the shares traded there. Eventually many of the companies that had traded on AMEX switched to NASDAQ, which operated through electronic trading and, because of its nature, had a much stronger focus on technology stock. A study of forty-seven companies that had been listed on AMEX and moved to NASDAQ recorded share-price increases of up to 100 percent in the period from 1992 to 1995, mainly through increased speculation, which was easier on NASDAQ because its costs of buying and selling were much lower and it was easier for companies to list their stock in the first place.
See also: Stock Exchange/Bourse.
Further Reading
American Stock Exchange. American Stock Exchange Guide. Chicago: American Stock Exchange, 1969.
Bruchey, Stuart. The Modernization of the American Stock Exchange, 1971–1989. New York: Garland, 1991.
Sobel, Robert. The Curbstone Brokers: The Origins of the American Stock Exchange. Washington, DC: BeardBooks, 1970.
———. AMEX: A History of the American Stock Exchange, 1921–1971. Washington, DC: BeardBooks, 1972.
American System (American School of Economics)
The American School of Economics represented the legacy of Alexander Hamilton, who in his Report on Manufactures argued that the United States could not become fully independent until it was self-sufficient in all necessary economic practices. Hamilton based this economic system, in part, on the policies of Colbert in France and Elizabeth I in England, while rejecting the harsher aspects of mercantilism, such as seeking colonies for markets. As later defined by Senator Henry Clay (1777–1852), who became known as the Father of the American System for his impassioned support of the idea, the American System was intended to unify the nation north to south, east to west, and city to farmer. Clay coined the name American System to distinguish it from the competing economic theory of the time, represented by Adam Smith in his great work An Inquiry into the Nature and Causes of the Wealth of Nations (1776).
The American System included three cardinal policy points: the advocacy of protectionism, tariffs to protect American industries from foreign competition; government finance of internal improvements to speed commerce and develop industry; and the creation of a national bank to issue currency and encourage commerce.
Henry C. Carey (1793–1879), a leading American economist and advisor to Abraham Lincoln, added two additional points that further distinguished the American System from both Adam Smith and Karl Marx: government support for the development of science and public education; and rejection of the concept of class struggle in favor of a harmony of interests between owners and workers, farmers and manufacturing, and the wealthy and the working class.
Henry Clay first used the term American System in a speech before Congress in 1824, although he had been working in favor of its specific elements for many years. Congress enacted portions of the system during Clay’s lifetime. The Second Bank of the United States was chartered in 1816 for twenty years, and high tariffs were maintained from Hamilton’s time until 1832. On the other hand, the national system of internal improvements was never adequately funded, owing to sectional jealousies and constitutional questions about such expenditures.
Clay’s plan became a major part of the platforms of the National Republican Party and the Whig Party. It was supported by New England and the Mid-Atlantic states, which had a large manufacturing base, because it protected their factories from foreign competition. The South opposed the American System because its plantation owners depended upon the production of cotton for export. The American System produced lower demand for their cotton and created higher costs for manufactured goods. As a result of sectional tensions, the United States kept tariffs low from 1828 until the election of 1860. For similar reasons, the Second Bank of the United States was allowed to die in 1836.
Because of the dominance of the Democratic Party of Van Buren, Polk, and Buchanan, the American System was not embraced as the central economic policy of the United States until the presidency of Abraham Lincoln. During the Civil War, the Republican Party, which included many of the old Whigs, controlled the government. It tripled the average tariff, began to subsidize the construction of a transcontinental railroad, and created the National Banking System. Although there were some setbacks and at times powerful opposition, the United States continued these policies throughout the last half of the nineteenth century.
As the United States entered the twentieth century, the American System was the nation’s policy under such names as American Policy, Economic Nationalism, National System, Protective System, Protection Policy, or protectionism. This continued until 1913 when the Woodrow Wilson administration initiated the New Freedom policy that, among other things, replaced the National Banking System with the Federal Reserve System and lowered tariffs to revenue-only levels.
The resurgence of the Republican Party in 1920 brought a partial return to the American System through the restoration of high tariffs, which continued through the 1920s. The New Deal restored infrastructure improvements through the numerous public works projects of agencies such as the Works Progress Administration (WPA), the Public Works Administration (PWA), and the Tennessee Valley Authority (TVA). The New Deal also brought massive restructuring to the Federal Reserve while investing in industry in various ways to stimulate production and control speculation. At the same time, the New Deal abandoned protective tariffs, embracing moderate protection through reciprocity and choosing to subsidize industry as a replacement. At the close of World War II, with the United States dominant in manufacturing and facing little competition, the era of free trade began. In 1973 tariffs were cut to all-time lows, the New Deal orientation toward reciprocity ended, and the United States moved in the direction of free market policy. By 2008, although the Federal Reserve still oversaw the American banking system, banking was essentially private, and government regulation of business had come to a virtual standstill. By 2009, with the government in the hands of liberals, a reversal seemed to be at hand, though it was too early to evaluate the results.
See also: Bimetallism; Canals; Central Pacific Railroad; Depression of 1893; Great Depression; Marxism; North American Free Trade Agreement (NAFTA); Roosevelt, Franklin Delano; Union Pacific Railroad.
—Kenneth E. Hendrickson Jr.
Further Reading
Eaton, Clement. Henry Clay and the Art of American Politics. Boston: Little, Brown, 1957.
Mayo, Bernard. Henry Clay. Boston: Houghton-Mifflin, 1937.
Peterson, Merrill D. The Great Triumvirate: Webster, Clay and Calhoun. New York: Oxford University Press, 1987.
Remini, Robert. Henry Clay: Statesman for the Union. New York: W. W. Norton, 1991.
Wiltse, Charles M. John C. Calhoun: Nationalist. Indianapolis: Bobbs-Merrill Company, 1944.
American Tobacco Company
When Christopher Columbus reached the Americas in 1492, he reported natives using tobacco much as it is used today, as well as in religious ceremonies. Believed to have medicinal properties, tobacco was introduced into Europe and the rest of the world, becoming the chief commodity that the British colonists exchanged for European manufactured products. The tobacco trade triggered a significant increase in the production of manufactured goods in Great Britain, thus contributing to the rise of the Industrial Revolution. During the late nineteenth century, machine-manufactured tobacco products, especially cigarettes, emerged as a major commodity and played an integral role in the rise of the Second Industrial Revolution.
Tobacco is a plant that, when properly handled, can be converted into a substance that is smoked by millions of people. It is habit forming and dangerous, but it has been widely used since the seventeenth century.
Tobacco was first brought to the English colonies in North America in 1612 from the West Indies. A successful technique for curing it was developed by John Rolfe, who would later marry the Native American princess, Pocahontas, and soon it was the major cash crop in the Virginia Colony. Tobacco production proved to be labor intensive, and beginning in 1619 a long process began in which slaves from Africa and their descendants became the major labor force in the southern colonies.
Over the years tobacco contributed significantly to Virginia’s economy. In 1758 the colony produced seventy thousand hogsheads of tobacco, and production continued to increase. During the late colonial period, many of the founding fathers of the United States grew tobacco and owned slaves. Among them were George Washington, Thomas Jefferson, and James Madison. By 1860, one out of every four farms in Virginia owned slaves, many of whom toiled on tobacco plantations. The 1860 census revealed that in the South at large there were over one hundred plantation owners who owned more than one hundred slaves. By that time, cotton had surpassed tobacco as the main cash crop in the South, but there were still many tobacco plantations in operation.
The modern history of the tobacco industry began after the Civil War with the appearance of a new tobacco product—the machine-made cigarette. The industry experienced very high growth rates during the decade of the 1870s and the first half of the 1880s. During that period profits were good and producers were happy, but by the mid-1880s demand began to level off and the industry entered a period of stringent competition caused by overproduction. The invention and use of cigarette-making machines pioneered by James B. Duke replaced the hand-rolling system and contributed significantly to increased production.
In an effort to increase sales, tobacco product manufacturers began to advertise vigorously, and by the end of the 1890s advertising costs had risen to approximately 20 percent of the companies’ incomes. At that point they began to search for ways to stabilize the industry and increase profits. They considered and rejected the creation of a tobacco cartel, knowing that earlier experiments with cartels in the railroad, sugar, and other industries had failed. At length, most of the industry’s leaders agreed to create a horizontal corporation.
The American Tobacco Company was founded in 1890 by a group led by James B. Duke (1856–1925). It was the first major corporation formed as a holding company under New Jersey’s new general incorporation law. At first, it operated with numerous autonomous subdivisions, but soon it became centralized under the control of the men in its Manhattan headquarters. During the next decade, the cigarette combination extended its influence to other branches of the tobacco manufacturing industry. Using brutal competitive methods, such as price wars, industrial espionage, coercive agreements with jobbers, as well as massive advertising, American Tobacco gained control over the manufacture of smoking tobacco, chewing tobacco, and snuff, in addition to cigarettes. The company also engaged in vertical integration by acquiring farms, building processing plants and warehouses, and, of course, continuing to advertise and market its products aggressively.
If ever there was a “bad trust,” it was American Tobacco. In 1907, the federal government finally took action by prosecuting the firm under the terms of the Sherman Antitrust Act of 1890. In 1911, the Supreme Court upheld the conviction and ordered that American Tobacco be broken up into several companies. This process produced American Tobacco, R. J. Reynolds, Liggett and Myers, and Lorillard. But each of these firms was substantial in size, and the oligopoly they represented proved to be almost as profitable and stable as the monopoly American Tobacco had tried to create. These companies were able to avoid competition and control prices effectively as time went by.
The tobacco industry continues to be dominated by a few giant firms, most of them located in the southern United States, particularly North Carolina and Virginia, and it continues to be highly profitable. In recent decades, however, the industry has faced a major obstacle in the form of lawsuits claiming that tobacco causes cancer, that companies in the industry knew this, and that they deliberately understated the danger and thus contributed to the illness and death of many people. While such suits are not always successful, some of the class action suits brought against the industry by states have resulted in large cash settlements, and suits of this type will probably continue indefinitely. Lawsuits against the tobacco industry are largely restricted to the United States, however, because of differences in the legal systems of other countries. Smoking has declined somewhat in the United States, but business is still brisk for the industry in the world market, where an estimated 5.5 trillion cigarettes are smoked each year.
In the debate over the dangers of smoking, the industry usually argues that the connection between smoking and disease has not been proven. It also argues that commercial tobacco production is a vital part of the American and world economy, pointing out that thousands of farmers in the United States make their living from raising tobacco and claiming that the industry contributes billions of dollars of tax revenue to the federal government each year. Opponents of the tobacco industry counter that 50 percent of all tobacco users worldwide die from tobacco-related causes; the World Health Organization agrees, estimating that about 650 million of the world’s smokers will eventually die from this preventable cause. Moreover, smoking-related health problems contribute to rising health care costs, which offset the financial contributions the industry makes to the nation’s economy.
In recent years (the early twenty-first century) the U.S. government has reduced its efforts to control the tobacco industry’s marketing of its deadly product. The United States signed the World Health Organization’s Framework Convention on Tobacco Control in 2004, under President George W. Bush, but the treaty has not been ratified by the Senate. Moreover, the Supreme Court, under the control of a conservative majority, shows signs of adopting a more probusiness stance. In 2007 the court overturned an award of $79.5 million in punitive damages by the Oregon Supreme Court to Mayola Williams, who had sued Philip Morris, claiming the company was responsible for the cancer death of her husband.
Amid the continuing controversy, the tobacco industry continues to flourish. The leading producers of tobacco in the world are China, India, Brazil, the United States, the European Union, Zimbabwe, Turkey, Indonesia, and Russia. As of the year 2000 these nations were producing 4.8 million tons of tobacco per year. The largest tobacco companies are China National, which controls 32.7 percent of the world market; the Altria Group in the United States, which controls 17.3 percent; British-American Tobacco, with 16 percent; Japan Tobacco, with 9 percent; and R. J. Reynolds, at 2 percent. There is little chance that the production of tobacco, tobacco products, and their use will ever disappear from the earth.
See also: Capitalism; Corporation and Incorporation; Robber Barons; Slavery.
—Kenneth E. Hendrickson Jr.
Further Reading
Brandt, Allan M. The Cigarette Century: The Rise, Fall, and Deadly Persistence of the Product That Defined America. New York: Basic Books, 2007.
Burns, Eric. The Smoke of the Gods: A Social History of Tobacco. Philadelphia: Temple University Press, 2007.
Gately, Iain. La Diva Nicotina: The Story of How Tobacco Seduced the World. New York: Grove Press, 2001.
Lovell, Georgiana. You Are the Target. Big Tobacco: Lies, Scams—Now the Truth. Vancouver: Chryan Communications, 2002.
White, Lawrence. Merchants of Death: The American Tobacco Industry. New York: Beech Tree Books, 1988.
American War of Independence (1775–1783)
This conflict saw people in the American colonies fighting against the British for independence. Because of the long war and the increased demand for munitions by the British forces and their allies, there was an increase in demand for iron and gunpowder in Britain, with a resulting boom in some sectors of the economy. However, as soon as the war ended, the demand dropped off. There was also a diversification of the economy in the British West Indies.
All the major participants in the war incurred large debts. Britain’s national debt rose from £170 million to £250 million, resulting in an interest bill of £9.5 million a year, which was easy to finance at a time when Britain was growing prosperous on the wealth generated by industrial production. The French, however, spent some 1.3 billion livres (about £56 million), and with a smaller tax base, over half of French national revenue soon went into debt repayment after the war, leading to the financial crises that helped bring on the French Revolution. For the United States, the debt problems were largely solved with the establishment of the First Bank of the United States.
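A rough check of these figures, using only the numbers quoted above, shows why the British debt was manageable: the interest bill implies an average rate of a little under 4 percent, while the conversion quoted for the French outlay works out at roughly 23 livres to the pound.

```python
# Back-of-the-envelope checks on the war-finance figures cited above.
british_debt = 250_000_000       # pounds sterling, after the war
annual_interest = 9_500_000      # pounds sterling per year
print(f"Implied interest rate: {annual_interest / british_debt:.1%}")  # 3.8%

french_outlay_livres = 1_300_000_000
french_outlay_pounds = 56_000_000
print(f"Implied conversion: {french_outlay_livres / french_outlay_pounds:.0f} livres per pound")  # ~23
```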
The American War of Independence had another major effect on the economy of Britain. With trade with the United States disrupted, the tobacco trade collapsed, causing major problems in Glasgow, which had grown prosperous from it. Trade with the United States gradually recovered, but it stopped again when the British barred neutral shipping from trading with the French and the United States prohibited trade with Britain under the Non-Intercourse Act of 1809, tensions that culminated in the War of 1812.
See also: Bacon, Anthony.
—Justin Corfield
The “Boston Tea Party” of December 16, 1773, was a protest against the British tax on tea retained under the Tea Act of that year. The tensions that followed led directly to the American War of Independence.
Further Reading
Boatner, Mark Mayo, III. Encyclopedia of the American Revolution. New York: D. McKay Co., 1966; revised 1974.
Greene, Jack P. The Reinterpretation of the American Revolution, 1763–1789. New York: Harper & Row, 1968.
Hoffman, Ronald. The Economy of Early America: The Revolutionary Period 1763–1790. Charlottesville: Published for the United States Capitol Historical Society by the University Press of Virginia, 1988.
Scott, H. M. British Foreign Policy in the Age of the American Revolution. Oxford: Clarendon Press, Oxford University Press, 1990.
Stephenson, Orlando W. “The Supply of Gunpowder in 1776.” American Historical Review 30, no. 2 (January 1925): 271–81.
Virtue, George O. British Land Policy and the American Revolution. Lincoln: University of Nebraska, 1963.
Wahlke, John C. The Causes of the American Revolution. Boston: Heath, 1962.
Weigley, Russell F. The American Way of War. Bloomington: Indiana University Press, 1977.
An Wang (1920–1990)
In 1945 the Chinese-born computer engineer An Wang arrived in the United States from China to do graduate work at Harvard University in Cambridge, Massachusetts. He earned a PhD in applied physics in 1948. That same year, at Harvard’s Computation Laboratory, Wang invented a doughnut-shaped magnetic core that could store information. Groupings of these magnetic cores served as the basic components of a computer’s memory. Smaller, faster, and more reliable than memory systems based on vacuum tubes or other media, magnetic core memory would revolutionize the computer industry.
An Wang was born February 7, 1920, in Shanghai, China. As a teenager, he enjoyed taking apart radios, and he built one of his own. His interest in radio led him to study communications engineering at Chiao-T’ung University in his hometown. By 1940, when Wang received his BS degree, World War II had thrown China into turmoil. Japanese troops occupied much of the northeastern part of the country, and Chinese forces fought to repel the invaders. Wang contributed to his country’s defense by designing radio receivers and transmitters for China’s army.
Wang left China for the United States in 1945, just months before Japan’s surrender to the Allies ended World War II. The following year he earned a master’s degree in communications engineering, and in 1948 his doctorate, before joining the staff of the Computation Laboratory. That lab had helped usher in the Information Age by building one of the first digital computers. Wang’s work on magnetic core memory made computers more practical and helped in the design of more advanced machines. So did his development of the “write after read” cycle, a technique that allowed data to remain stored in a computer’s memory after being read. Until Wang devised it, data in core memory was destroyed as it was read.
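Reading a magnetic core was destructive: sensing its state reset the core, so the stored bit had to be written back immediately afterward. The sketch below is a simplified software analogy of that write-after-read cycle, assuming a bit-array model of memory; it is an illustration of the principle, not a description of Wang’s actual circuitry.

```python
# Simplified software analogy of core memory's destructive read followed
# by a restoring write ("write after read"). Real hardware did this with
# pulses of magnetic flux, not Python lists.

class CoreMemory:
    def __init__(self, size):
        self.cores = [0] * size          # each core stores one bit

    def _destructive_read(self, addr):
        """Sense a core by forcing it to 0; the old value is returned but lost."""
        bit = self.cores[addr]
        self.cores[addr] = 0
        return bit

    def read(self, addr):
        """Write-after-read: sense the bit, then immediately restore it."""
        bit = self._destructive_read(addr)
        self.cores[addr] = bit           # the restoring write cycle
        return bit

mem = CoreMemory(8)
mem.cores[3] = 1
assert mem.read(3) == 1 and mem.cores[3] == 1   # the value survives being read
```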
In 1951 Harvard decided to cut back on computer research, and Wang left the university. He used his personal savings, some $600, to start an electronics consulting business in Boston, which became Wang Laboratories later that year. At first, Wang’s firm—consisting mainly of An Wang—engaged primarily in research and development for other companies. It also sold magnetic cores. Wang set out to patent his magnetic core as a “pulse transfer controlling device.” The originality of this device lay in its ability to organize computer memory by precisely controlling the flow of magnetic energy among the cores in a computer.
Wang soon decided that his business needed an infusion of capital. He entered into negotiations with International Business Machines (IBM), to whom he hoped to sell his magnetic core memory patent, which was still pending. In the meantime, IBM hired him, under contract, to apply his knowledge of magnetic cores to IBM’s electronic calculator. Ten years later, Wang would build on this experience to design and build a more sophisticated calculator. The government finally issued Wang his patent on the magnetic core in 1956—several weeks after IBM completed its purchase of the patent for several million dollars.
That money allowed Wang to embark on an expansion of Wang Laboratories. By the end of the 1950s the company had around twenty employees and a handful of valuable government contracts. At that point the company began to limit its consulting work in order to concentrate more on developing and selling its own products.
Wang took this shift in focus one step further in the early 1960s by hiring a sales staff to sell Wang products directly to customers. In 1965 his inventiveness gave that staff a popular new product to sell: a desktop scientific calculator known as the LOCI. This programmable electronic calculator, which at first sold for around $1,000, worked much faster than traditional machines, whose mechanical parts limited their speed. It could also perform much more complex calculations than existing electronic calculators of comparable size, which made it especially attractive to scientists and engineers. As a result, the LOCI carved out a whole new market for desktop calculators and dominated that market for years. With its more than 1,200 transistors and a bewildering array of poorly identified keys, however, the LOCI was a complex machine and a challenge to operate. When a simpler, more practical Wang-engineered product, the 300-series calculator, came out, it quickly supplanted the LOCI in sales.
In 1967 An Wang took his company public, selling some 210,000 shares of common stock. The company expanded overseas, opening sales offices in Europe and Asia. That same year, building on its calculator success, Wang Laboratories set about engineering its first computer. Wang, who served as the company’s president, had realized that his calculator sales would soon plummet thanks to the development of the integrated circuit, which was rapidly replacing transistors in electronics equipment.
By 1970 Wang Laboratories had entered the infant minicomputer market in earnest. Two models later, it had grabbed a respectable share of that market. But a general economic slowdown linked to the Arab oil embargo made the early 1970s a rough period for Wang. The company bounced back in 1976, when it broke into the office-computer market with the introduction of a word-processing minicomputer. Next, it began developing personal computers for use by office workers, a direct challenge to IBM, the dominant company in that field. Wang’s new products brought the company significant new revenue. In 1984 Wang Laboratories, by then headquartered in Lowell, Massachusetts, reached $2 billion in sales and employed some fifteen thousand workers.
The company’s fortunes hit a snag the next year. To finance its expanding product line, the firm had taken on a significant amount of debt. Now, growing competition from smaller and more powerful personal computers led to sagging sales for Wang’s minicomputers. After reorganizing the company’s marketing division in 1986, An Wang turned over the presidency of the company to his son, Frederick Wang. Earnings continued to drop over the next three years. In 1989 An Wang was forced to choose a replacement for his son.
In March of the next year An Wang died of cancer of the esophagus. By then this creative electronics engineer had acquired some forty patents, many of them crucial to advancing the field of information technology. As an executive he had built a company from the ground up and led it to great success and influence worldwide. Through philanthropy Wang had put his financial success to work in support of education and the arts. He gave generously to encourage computer science at Harvard, and his $4 million donation helped build what is today known as the Wang Center for the Performing Arts in Boston. Wang once said, “The theme of my philanthropy has been the same as my approach to technology: to find a need and fill it.”
—David Fasulo
Further Reading
Kenney, Charles. Riding the Runaway Horse: The Rise and Decline of Wang Laboratories. Boston, MA: Little, Brown, 1992.
Kepos, Paula, ed. International Directory of Company Histories, volume 6. Chicago: St. James Press, 1992.
Wang, An, with Eugene Linden. Lessons: An Autobiography. Reading, MA: Addison-Wesley, 1986.
Anarchism
As a political philosophy, anarchism holds that the state is at best undesirable or unnecessary and in general a harmful institution. Some have sought to find anarchist ideas in the writings of ancient philosophers, but most see it as a relatively modern political idea that emerged in reaction to the abuse of power in early modern times. It became tied to a number of political theories, but it is most closely associated with opposition to the capitalist societies that emerged after the Industrial Revolution, and anarchist assassins were involved in the killing of a number of prominent industrialists and society figures.
The term was first used in 1642, as a term of abuse hurled by the Royalists at their opponents at the start of the English Civil War. It then became associated with some of the activists in France during the French Revolution who reacted against the power of the French monarchy and who feared the centralization of power after 1789 in the hands of figures such as Robespierre and, later, Napoleon. Some of the original ideas were generated by the English journalist William Godwin (1756–1836), the husband of Mary Wollstonecraft and the father of Mary Shelley, who held that “government by its very nature counteracts the improvement of original mind” and argued that there should be gradual moves to limit the power of the state.
Two of the other early theorists of anarchism as a doctrine of individualism were Josiah Warren (1798–1874) in the United States and Pierre-Joseph Proudhon (1809–1865) in France, Proudhon being best known for his claim that “property is theft.” The Russian thinker Peter Kropotkin (1842–1921) then formulated his views on political and economic anarchism. Some of these ideas helped influence Stepan Khalturin, who was involved in an attempt to assassinate Tsar Alexander II in 1880. This and other attempted and actual assassinations in the 1890s and early 1900s tended to highlight what anarchists were against rather than what they supported.
The Industrial Revolution had led to the emergence of major industrial companies, which many anarchists saw as systematically eroding individualism and undermining the forms of collective ownership they cherished. Anarchists therefore viewed industrialization largely as the antithesis of their views, although some strands of anarchism accept a market economy and the ownership of private property as ways of ensuring personal rather than state control. The concept of absentee owners and shareholders with no real connection to the companies in which they hold stock was also viewed by anarchists with concern; share ownership divided among workers, and hence essentially collective ownership, was acceptable.
Some of these anarchist ideas against authority came to prominence during the revolutions of 1848, directed against the Habsburg and French monarchies, and they certainly influenced Karl Marx. They also had wide appeal among radicals in Russia such as Mikhail Bakunin, who denounced Marx’s concept of the dictatorship of the proletariat, which anarchists felt would merely take over from existing forms of government. The result was a rupture between Marxists and anarchists, although they did sometimes make common cause. Some collectivist anarchists did see merit in communist ideas of establishing self-managing communes that would lead to collective ownership and use of the means of production.
The International Congress of Anarchists met in Amsterdam in 1907 and brought together many anarchist theorists. They took an increasingly antimilitarist line that would get many of their members into trouble during World War I, which they saw as a battle between conservative ruling classes and the emerging military-industrial complex (although that exact term had not yet been coined). Anarchists initially supported the Russian Revolution of 1917 but soon turned against the new Bolshevik government. It was not until the rise of fascism that anarchists again came together with Marxists.
Within the trade union movement, anarchists were involved throughout the world but were particularly strong in Spain, where the Confederación Nacional del Trabajo (National Confederation of Labor) was established in 1910. During the Spanish Civil War it played an important role in supporting the Republican government and making common cause with it against the Nationalists, although there were often battles among the anarchists, the socialists, and the communists.
After World War II, anarchism came to be redefined. Most anarchists began to accept that there was a role for the state, but that it should be minimal; government was seen as a “necessary evil” whose power people should limit, ensuring that the state regulated only essential matters, including restrictions on its own size. As a result, some anarchists and libertarians have made common cause. Many anarchist movements now focus on opposition to international banking groups and globalization, which they see as having eroded the power of smaller countries and increased the political impotence of the less powerful within individual societies and in the world at large. This has led to the formation of groups such as the Direct Action Network, which was involved in disrupting the World Trade Organization’s ministerial meeting in Seattle in November and December 1999. There has also been heavy anarchist involvement in the environmental movement, as the Industrial Revolution is seen as one of the major causes of the despoiling of the natural environment.
See also: Lenin, Vladimir Ilyich; Mensheviks; Russian Revolution (1905); Soviets.
—Justin Corfield
Further Reading
Goodway, David, ed. For Anarchism: History, Theory and Practice. London: Routledge, 1989.
Graham, Robert, ed. Anarchism: A Documentary History of Libertarian Ideas. Montreal: Black Rose Books, 2005.
Purkis, Jonathan, and James Bowen. Changing Anarchism: Anarchist Theory and Practice in a Global Age. Manchester: Manchester University Press, 2004.
Stringham, Edward. Anarchy and the Law: The Political Economy of Choice. New Brunswick: Transaction Publishers, 2007.
Ward, Colin. Anarchism: A Very Short Introduction. Oxford: Oxford University Press, 2004.
Woodcock, George. Anarchism: A History of Libertarian Ideas and Movements. Harmondsworth, Middlesex: Penguin Books, 1962.
Angerstein, Reinhold Rücker (1718–1760)
Swedish bureaucrat and early industrial historian
A Swedish metallurgist and civil servant, Reinhold Rücker Angerstein traveled around Europe during the mid-eighteenth century and described in his journal the start of the Industrial Revolution throughout Europe, providing great insight into the times and the technological developments of the period.
Reinhold Rücker Angerstein was born on October 25, 1718, at Vikmanshyttan, the son of Gustaf Angerstein and his wife, Anna, the daughter of a wealthy merchant from the town of Hedemora in central Sweden. The Angersteins had been ironmasters for several generations. His great-grandfather was a German ironmaster who moved to Sweden in 1639.
Angerstein was educated at the University of Uppsala. In 1734 his father died, and this forced him to return to the family business. He then started working for the Swedish Bergscollegium (“Mining Council”). In 1749, he started out on a tour of Europe financed by various Swedish trade associations, especially the Jernkontoret (Swedish Ironmasters’ Association), who were interested in finding out about industrial development in other countries.
Angerstein first went to Denmark, and from there through the German states to Saxony, and then to the lands of the Habsburgs—Bohemia, Carinthia, and Hungary. He then toured Austria, the Italian Peninsula, France, Spain, Portugal, and the United Provinces (Netherlands). Finally he came to Great Britain, arriving at Harwich on September 15, 1753. He made his way to London, which he used as a base for extensive tours of the country. His descriptions of the tin and copper mines in Cornwall are an interesting contrast to his criticisms of some of the industrial plants being established elsewhere in England. From Britain, he returned to Sweden in 1755.
Angerstein left a detailed diary of his travels, which shows a remarkable understanding of the development of ironworks in that period; he made special visits to blast furnaces and noted many details. In England, for example, he observed that the ironmasters still relied heavily on charcoal, and he criticized this overuse because the British planted few new trees, the source of charcoal, and thus endangered their own fuel supplies. He understood the market for iron in England and recognized that there was still scope for importing iron from the United Provinces, Russia, Spain, and Sweden. He also recorded information on the textile industry, mining, and river navigation. Some in Britain thought he might be a spy.
On Angerstein’s return to Sweden, he took over the position of Direktör för rikets gröfre svartsmide (Director of Steelworks), with responsibilities in the Bergscollegium and also on the Jernkontoret. He also managed the daily operations of the family’s ironworks and planned to update those factories with discoveries he had made on his travels. In 1757 he bought the Vira Iron Works in Uppland, northeast of Stockholm, and he was planning to extend the factory that had made Swedish army munitions for many years. However, he died on January 5, 1760, at Stockholm, aged forty-one, after being ill for some time. He never married. Just before he died he sold Vira to his brother-in-law, Archbishop Samuel Troilius, a wealthy cleric.
At the time of Angerstein’s death, plans to publish his diary ended. Two volumes (of the eight that made up the entire diary) had already been transcribed, with the plates being prepared. In 1765 it was reported that the task had been completed, and a copy remained in the library of the Jernkontoret, with some of the original documents lodged at the Swedish State Archives. Of Angerstein’s twenty original pocket day-books, only the two covering England and Wales appear to have survived, along with an account book of his first five months in London. Historians made use of Angerstein’s work as early as 1951, and extracts were published in the Transactions of the Newcomen Society in the late 1950s. It was not until 2001 that Angerstein’s diary of his travels appeared in English. It is now recognized as an important primary source on the Industrial Revolution in Europe.
—Justin Corfield
Reinhold Angerstein. Portrait by Olof Arenius. Photograph by Bengt Oberger.
Further Reading
Angerstein, Reinhold Rücker. R R Angerstein’s Illustrated Travel Diary, 1753–1755: Industry in England and Wales from a Swedish Perspective. Translated by Torsten Berg and Peter Berg. London: The Science Museum, 2001.
Müller, Leos. Consuls, Corsairs, and Commerce: The Swedish Consular Service and Long-Distance Shipping, 1720–1815. Uppsala Universitet: Forum Naval No. 10, 2004.
Rydén, Göran, and A. Florén. “A Journey to the Market Society: A Swedish Pre-Industrial Spy in the Middle of the Eighteenth Century.” In Societies Made Up of History: Festskrift for Rolf Torstendahls. Edited by R. Björk and K. Molin. Edsbruk, Sweden: Akademitryck AB, 1999.
Anglo-French Treaty (1786)
This treaty between the British and the French, often known as the Eden Agreement, was signed in 1786. The British wanted to increase trade relations with France following the end of the American War of Independence. Influenced by Adam Smith’s The Wealth of Nations, published in 1776, the British prime minister William Pitt sent his negotiator, William Eden, First Baron Auckland (1744–1814), to persuade the French to negotiate a reduction in tariffs on each country’s exports to the other. In particular, the treaty opened up the French market to British textiles, with import duties of only 12 percent levied. Although the same applied to French textiles going to Britain, because of the Industrial Revolution in Britain, the mills in Lancashire and Scotland were able to provide cotton and woolen items for much less than the French could produce the same goods. This made the treaty hugely beneficial to the British, which led to tensions in France. The treaty survived the first few years after the French Revolution of 1789 but collapsed in 1793.
—Justin Corfield
Further Reading
Black, Jeremy. British Foreign Policy in an Age of Revolutions, 1783–1793. Cambridge: Cambridge University Press, 1994.
Ehrman, John. The British Government and Commercial Negotiations with Europe, 1783–1793. Cambridge: Cambridge University Press, 1962.
Lee, Stephen M. “Eden, William, first Baron Auckland (1744–1814).” Oxford Dictionary of National Biography. London: Oxford University Press, 2004.
Anglo-Irish Treaty (1785)
This treaty, as with the Anglo-French Treaty of 1786, was a result of the British prime minister William Pitt being influenced by Adam Smith’s The Wealth of Nations and the concept of free trade. Under this treaty both Ireland and Great Britain were to reduce import duties on manufactured goods to the same level. Because production costs were lower in Ireland, the Irish parliament saw this as hugely beneficial, while the English, including Josiah Wedgwood, worried about a flood of cheap imports into England. Wedgwood and his allies attacked the treaty and forced modifications to it, eventually ensuring that it never came into force.
—Justin Corfield
Anheuser, Eberhard (1805–1880)
The founder of the Anheuser-Busch Company, Eberhard Anheuser was a soap and candle maker who immigrated to the United States in 1842 and later bought the Bavarian Brewing Company. Under his and his son-in-law’s direction, it became a major beer manufacturer, producing Budweiser beer.
Eberhard Anheuser was born on September 27, 1805, at Kreuznach (modern-day Bad Kreuznach), an early center of manufacturing in the western part of Germany. His father was Johann Jacob Anheuser, and his mother was Elisabeth (née Hoenes). The family had a long tradition of wine making in Rhineland-Palatinate dating back to at least 1627. He was born in the year before the establishment of Napoleon’s Confederation of the Rhine, which in 1807 took in Kreuznach, then close to the frontier of a massively extended France. The French heavily recruited soldiers from the region for the 1812 campaign in Russia. However, after 1815, Kreuznach was a part of the growing and powerful kingdom of Prussia. In later life Anheuser always gave his place of birth as “Prussia.” Although there were many business opportunities at home, in 1842 Eberhard Anheuser and two of his brothers moved to the United States, and Anheuser’s nephew, Rudolf Anheuser, took over the running of the family vineyards in the Nahe Valley.
In the United States, Anheuser settled in St. Louis, Missouri, then rapidly emerging as the second largest U.S. city west of Pittsburgh. There he ran a factory making soap and candles; his business survived a fire in 1849 that destroyed parts of the old city, as well as the cholera epidemic of the same year. He prospered, and in 1859 Eberhard Anheuser became a major creditor of the Bavarian Brewery Company, which had been founded in 1853 by George Schneider. Schneider had opened his small frame brewery on a hill between Lynch Street and Dorcas Street, using the underground caverns that he constructed to age the beer. With money from Anheuser, the Bavarian Brewery was able to produce eight thousand barrels of beer that year. In 1860 the Bavarian Brewery Company ran into financial difficulties, and when it faltered soon afterward, Anheuser, along with his partner William D’Oench, bought up the interests of the minority creditors. Anheuser became president of the new entity, changing its name to Eberhard Anheuser and Company.
With the outbreak of the American Civil War in 1861, St. Louis prospered; it was used by the Union forces to garrison soldiers and later to construct some of the ironclad ships. The caverns that Schneider had excavated for the Bavarian Brewery Company, and by then owned by Anheuser, were an important place to hide weapons safely, as the Union soldiers in the town expected a Confederate raid.
In 1861, Anheuser’s two daughters married at a joint wedding. Anna, the older daughter, married Ulrich Busch, and Lilly married Ulrich’s brother, Adolphus; the Buschs were sons of a German wine merchant from Mainz. Adolphus Busch, the second youngest of twenty-two children, soon became involved in his father-in-law’s business, which at that stage was still heavily engaged in the manufacture of soap and candles. Busch soon became the driving force in Eberhard Anheuser and Company, and in 1865 he became an equal partner. Eleven years later, Busch and a friend, Carl Conrad, a liquor importer, decided to produce a German-style beer that they named Budweiser, after Budweis, the German name for the town of České Budějovice in the modern-day Czech Republic. The beer is made from a proportion of rice as well as barley malt, giving it a lighter taste that has proven popular throughout the world.
In 1879 the company changed its name to Anheuser-Busch. Eberhard Anheuser died on May 2, 1880, at St. Louis, Missouri, and was buried at Bellefontaine Cemetery, St. Louis. Adolphus Busch continued running the company until his death in 1913, when his son, August, took over the business. The company weathered the anti-German hysteria of World War I and Prohibition, and when August Busch died in 1934, the company passed into the hands of his son Adolphus Busch III, and then, in 1946, to his brother August A. Busch Jr. His son, August A. Busch III, became president in 1974, and August A. Busch IV has been vice president of the Budweiser brands since 1992. The company now has a market share of between 42 and 48 percent of all beer sold in the United States.
See also: Brewing and Distilling.
—Justin Corfield
Eberhard Anheuser
Further Reading
Hernon, Peter, and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon & Schuster, 1991.
Knoedelseder, William. Bitter Brew: The Rise and Fall of Anheuser-Busch and America’s Kings of Beer. New York: HarperBusiness, 2012.
MacIntosh, Julie. Dethroning the King: The Hostile Takeover of Anheuser-Busch, an American Icon. Hoboken, NJ: Wiley, 2011.
Plavchan, Ronald Jan. A History of Anheuser-Busch, 1852–1933. New York: Arno Press, 1976.
Anthropometry
Anthropometry is the systemized procedure of anatomical research used to measure certain parts of the human body. It developed during the Industrial Revolution in the mid-nineteenth century and was used in military, legal, and educational institutions to compare various physical parameters. It is performed on the living, the dead, fetuses, embryos, and skeletal remains. In 1882, Alphonse Bertillon started using a measuring system as a means to keep track of repeat criminal offenders. However, measurement of stature as an indication of health had been undertaken much earlier by the Belgian Adolphe Quetelet and by L. R. Villermé in France. In the United States in the 1920s and 1930s, Aleš Hrdlička defined “skeletal landmarks,” standard places to measure on the human skeleton. He divided anthropometry into six subdivisions: anthropometry of the living, craniometry (measurement of the skull), osteometry (measurement of the other bones), encephalometry (measurement of the brain), organometry (measurement of the internal organs), and physio- and psychometry (measurement of bodily and mental functions). Instruments consisted of tape measures, various calipers, and scales. Cranial capacity was measured by filling a skull with sand or small pebbles through the foramen magnum opening, then pouring the contents into a graduated cylinder and noting the volume.
As nations industrialized in the nineteenth century, more and more people moved into cities. Changes in social structure resulted in improved standards for owners and entrepreneurs, but dire living conditions existed for many who were forced to live in working-class slums. Many children worked in factories for long hours until governments, beginning with that of Great Britain, enacted laws limiting child labor. Anthropometric data served a key function in the debates surrounding such laws. Sir Edwin Chadwick performed the first such study in England in 1830 to demonstrate the harmful effect of factory labor on younger children. Anthropometry gave information regarding socioeconomic status because in periods of depression, studies showed that stature actually decreased as a result of poor nutrition. In 1878, Charles Roberts did a study in which he compared the heights of the sons of manual or nonmanual workers and found that those boys born in America of nonmanual workers were taller by more than an inch at age twelve and by more than three-fourths of an inch at age eighteen. The sons of Irish-born parents showed similar discrepancies. Henry Bowditch then took those two groups and compared them, finding that ethnic differences were greater than the differences between occupational statuses. Bowditch wrote that children deprived of the comforts of life exhibited diminished stature. Many of the children born to immigrant parents worked in factories, so it was not industrialization per se that correlated with increased stature but the quality of life that the industrialization could buy.
Franz Boas, an anthropologist, was particularly interested in measurements of children born to immigrant parents in America and how they compared with their European counterparts. In the 1920s, he reported that the cephalic index (cranial index) and facial width of American-born children of immigrants differed significantly from those of children born in Europe. The cephalic index is the maximum breadth of the skull divided by its maximum length, multiplied by one hundred. A cranial index of eighty to eighty-five described a rounded head, whereas a cranial index of less than eighty indicated an average or narrow skull. Children born in America had a lower cephalic index than those born in Europe, and their facial width decreased, resulting in a narrower skull shape. Boas recognized that class-based living conditions played an important role in the differences among children. He also admitted that the exact degree of influence the environment exerts was difficult to quantify. His long-term observations during the fifty years from 1890 to 1940 showed that average stature had increased approximately 1.2 inches in both the United States and Europe, thus indicating generally rising standards of living throughout the period.
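As an arithmetical illustration of the index described above (the measurements here are hypothetical, chosen only to show the calculation), a skull 150 millimeters across at its widest point and 180 millimeters from front to back would fall in the “rounded” range on the scale given:

$$\text{cephalic index} = \frac{\text{maximum breadth}}{\text{maximum length}} \times 100 = \frac{150}{180} \times 100 \approx 83.3$$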
Anthropometry was important to the Industrial Revolution in a number of ways. It introduced a new field, physical anthropology, as a critical adjunct to learning about culture. Setting standards for height, weight, organ weights, and variability in everything measurable in the human body created a tool for assessing the impact of social change on the quality of life. Anthropometry, however, was also used to argue that sex and race differences functioned as significant predictors of intelligence. The pseudoscience of Social Darwinism blended concepts of evolution with interpretative anthropometric data to produce conclusions supporting racist hierarchies among humans. Scientists in Europe and North America produced many examples. Most horrifically, Nazism in Germany drew upon this work. Before that era, because of the strength of American paleontology and the rapid industrialization of the country after 1870, such arguments proved especially popular in the United States. The American paleontologist Edward Drinker Cope (1840–1897) identified four groups of “inferior humans”: nonwhite races, all women, southern as opposed to northern European whites, and the lower classes. He based much of his speculation on cranial capacity measurements. Cope also believed the effect of climate on fetal and childhood development explained differences among the races. Referring to the flattened nasal bridge and shortened nasal cartilage he identified in Africans and African Americans, anthropologist Daniel Brinton (1837–1899) argued that such “childlike” features indicated the “inferiority” of arrested development. He also believed that cranial shape demonstrated the inferiority of the female brain. G. Stanley Hall, the psychologist, wrote that women were more primitive than men. The nightmarish experiences of World War II, both in Europe and in Asia, caused educators and scientists worldwide to reject genetic and physical determinism. Anthropometric science in its older form never recovered. However, by the later years of the twentieth century, dramatic breakthroughs in mapping the human genome reopened the potential of anthropology based upon inherited characteristics.
—Lana Thompson
Further Reading
Boas, Franz. Race, Language and Culture. New York: Free Press, 1940.
Floud, Roderick. “The Heights of Europeans since 1750: A New Source for European Economic History.” In Stature, Living Standards, and Economic Development. Edited by John Komlos. Chicago: University of Chicago Press, 1994.
Hrdlička, Aleš. Practical Anthropometry. Philadelphia, PA: Wistar Institute of Anatomy and Biology, 1939.
Singh, Indera P., and M. K. Bhasin. Anthropometry. Delhi: Bharti Bhawan, 1968.
Steckel, Richard H. “Heights and Health in the United States.” In Stature, Living Standards and Economic Development. Edited by John Komlos. Chicago: University of Chicago Press, 1994.
Apprenticeships
The system of apprenticeship has existed since ancient times, whereby boys and young men (and occasionally girls) would train under the supervision of a craftsman or artisan. The modern form of apprenticeship was formalized during the Middle Ages, when craft guilds organized apprentices to ensure proper training and to ensure that the goods produced were of high quality. Generally, apprentices were aged between fourteen and twenty-one when they took up the apprenticeship, and most were male, although some girls and young women did find work in embroidery, tapestry making, and related fields, and for the last hundred years girls and young women have been able to obtain apprenticeships in many different trades. The systems varied slightly from country to country, but the aim of apprenticeships was basically the same. With the increase in school education, many young men and women took up apprenticeships after leaving school.
By the start of the Industrial Revolution in the United Kingdom, the apprenticeship system had been operating since the Middle Ages with only minor variations. In 1563, regulations for boys learning crafts stated that they should serve an apprenticeship of at least seven years. The length was, however, gradually relaxed. From 1710, a stamp duty was levied to register the indentures of apprenticeships, and as a result many records survive from then. Most apprenticeships were arranged by parents, with those of poor children coming under the Poor Law as a way of providing for boys and also girls. At the time of the Industrial Revolution, two major changes in Britain altered the nature of apprenticeships. The first was the demand for many people, including children, to work in factories and in the mines in what were designated as “unskilled” jobs. Many children thus found work as teenagers that prevented them from gaining an apprenticeship, and although they were able to earn an income from a young age, they were often left “unskilled” for the rest of their lives, which were then often spent in poverty. The other development that led to the decline in apprenticeships at this time was quite different. The increase in the number of schools, and in the number of students attending them and remaining at school longer, resulted in larger numbers of middle-class children using school rather than apprenticeships as preparation for work. The growth of the government bureaucracy created more demand for office workers, and this further reduced the number of boys and girls willing to take up apprenticeships. By the twentieth century, apprenticeships were generally seen as being for “trades.” The professions gave their own forms of training different names: lawyers in training completed their articles, and doctors in training worked as interns.
By the 1970s, as more and more students in Britain remained at school until sixteen or eighteen, the number of apprenticeships reached its lowest level since the Middle Ages. To combat this, in 1986 the government introduced National Vocational Qualifications to support what was then called “vocational training.” By 1990 only 0.7 percent of the population were undergoing apprenticeships. This led to the establishment of “modern apprenticeships” in England in 1994, and with some standardization between England, Scotland, and Wales, by 2005 there were 160 apprenticeship frameworks in the United Kingdom. The Modern Apprenticeships Advisory Committee, established in 2000, noted the decline in apprenticeships in Britain and that participation was far lower than in many other countries. However, a commissioned study showed that although the number of people starting an apprenticeship remained reasonably good, only a third of them actually completed their term; many left to find better-paid jobs or were unable to sustain the number of years required to complete their certificate.
The system of apprenticeships in the United States in the late eighteenth century and early nineteenth century was similar to that in Britain. It also faced the same pressures as its British counterpart, and with the move westward and the availability of cheap land, many families did not feel that apprenticeships were as important as they had once been. The decline during the twentieth century was also similar to the experience in Britain, and as a result a number of educational reforms were introduced late in the twentieth century to try to increase the skill level of many school leavers. By this time “work experience” schemes had been in operation for many years, and “school to work” programs and “job shadowing” were also being introduced.
Apprenticeships remain far stronger in Germany than in most other countries in the developed world. They also owe their origins to the trade guilds, and it is still very difficult in Germany to gain full employment in many sectors of the economy without having completed an apprenticeship. As a result, in 2001 two-thirds of young people under the age of twenty-two had begun an apprenticeship, with approximately 78 percent completing their course; the result is that 51 percent of all young people have completed a term as an apprentice. Part of the reason for this is that the system is heavily regulated and forms an integral part of the vocational training system, as well as of what has become known as the “dual education system.” Indeed, with the exception of very small companies, all businesses have to offer apprenticeships.
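The final figure is roughly what the two preceding ones imply: if about two-thirds of young people begin an apprenticeship and about 78 percent of those complete it, then

$$\frac{2}{3} \times 0.78 \approx 0.52,$$

which is close to the 51 percent of all young people cited as having completed a term.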
The apprenticeship system in France has been regulated by the government for far longer than in most other countries, mainly because the trade guilds were suppressed in 1791, after the French Revolution. This dramatically changed the employment scene in France, but the French Revolution and Napoleonic Wars distorted the economy, and it was not until 1851 that the first law on apprenticeships was introduced. From then on, apprenticeship became a major form of training for young people, and since 1971 it has legally been part of professional training, with the upper age limit for starting an apprenticeship raised in 1986 from twenty to twenty-five. In 2005, worried about the alienation of many young people, the French government pledged its continued support for the apprenticeship system. Statistics for that year showed that 80 percent of young people who had completed apprenticeships were in work, and there have been many government moves to increase the skill level of the young through closer contact between schools and the apprenticeship system.
See also: Child Labor.
—Justin Corfield
Further Reading
Herndon, Ruth Wallis, and John E. Murray. Children Bound to Labor: The Pauper Apprentice System in Early America. Ithaca: Cornell University Press, 2009.
Oxley, G. W. Poor Relief in England and Wales, 1601–1834. Newton Abbot: David & Charles, 1974.
Snell, K. D. M. Annals of the Labouring Poor: Social Change and Agrarian England, 1660–1900. Cambridge: Cambridge University Press, 1987.
Thompson, E. P. The Making of the English Working Class. Harmondsworth, Middlesex: Penguin Books, 1979.
Aqueducts
Water has been vital to cities since ancient times, and the earliest known aqueduct was built by the Assyrians in the seventh century BC to carry water to Nineveh, their capital. The Romans built large numbers of aqueducts to bring water to their cities for drinking and for public baths. Many of these have survived, and their construction was not equaled for some 1,400 years.
An aqueduct taking water to the monastery of Stavronikita, Mount Athos, Greece. Photograph by Justin Corfield.
In the eighteenth century, during the Industrial Revolution in Britain, there was the need to build aqueducts that were capable of carrying canals. John Gilbert and James Brindley designed an aqueduct that in 1761 was able to carry the Bridgewater Canal over the River Irwell. The first aqueduct that was made from cast iron was at the Holmes on the Derby Canal. This was built in 1796 and was quickly followed by other cast iron aqueducts. The Pontcysyllte Aqueduct built by Thomas Telford and William Jessop, completed in 1805, carried the Ellesmere Canal for 1,007 feet (307 meters) over the River Dee, and it remains the longest and highest aqueduct in Britain. In 2009 it was declared a World Heritage Site.
The Briare Aqueduct near Châtillon-sur-Loire, France, is 662 meters long, and when it was built in 1896 it was the longest canal aqueduct in the world. It remained the longest until the opening of the Magdeburg Water Bridge in Germany, on which work had started in 1905, continued intermittently until 1942, and then resumed in 1997, with completion in 2003. The Magdeburg bridge is 918 meters long and carries the Mittellandkanal over the River Elbe, connecting it with the Elbe–Havel Canal.
—Justin Corfield
Further Reading
Nardo, Don. Building History: Roman Roads and Aqueducts. San Diego: Lucent Books, 2001.
Ruddock, Ted. Masonry Bridges, Viaducts and Aqueducts. Aldershot, Hampshire, UK: Ashgate/Variorum, 2000.
Yorke, Trevor. Bridges Explained: Viaducts, Aqueducts. London: Countryside Books, 2008.
Arc Welding
Arc welding is a process for joining metal by means of electric current. An arc welding apparatus may use either alternating current (AC) or direct current (DC). The arc is struck with a metallic electrode, which may be a consumable rod that melts and becomes incorporated into the weld, or a nonconsumable rod that heats a separate rod of filler metal and the workpiece, causing them to melt.
Electric arcs occur in nature in the form of lightning, but it was in 1800 that Humphry Davy was able to produce the first controlled electric arc. Initially it was a scientific curiosity, for the primitive batteries of the day could not sustain an arc for a useful length of time. However, the development of reliable generator technology made it possible to create an arc that would continue to operate until the electrodes were consumed. Such arcs were used primarily for lighting streets and large, open spaces, but there was some experimentation with using the arc to melt metals in order to weld them.
The biggest problems faced by early pioneers of arc welding lay in maintaining a stable arc long enough to perform useful work and in keeping impurities out of the resulting bead. As a result, gas-welding technology, which used fuels such as acetylene combined with oxygen under pressure to attain higher temperatures than could be produced with atmospheric oxygen, competed strongly with arc welding into the beginning of the twentieth century. At this point, reliable fluxes were developed to coat the electrodes, giving greater arc stability and leaving a protective slag coating over the bead that shielded the cooling metal from environmental impurities and could be chipped away by the welder once the bead had cooled.
During World War I, shipbuilders worldwide began to use arc welding to replace rivets as the principal means of joining the plates of a ship’s steel hull. Not only did it allow ships to be completed more rapidly, but it also resulted in a more reliable and durable joint. In 1920 automatic welding was introduced, involving an electrode that was continuously fed into the welding site. As a result, the welder did not have to stop as each electrode was consumed, replace it, and restrike the arc. In the 1920s, there was extensive research on the use of various gases to create controlled environments around the welding site, with the result that a wide variety of nonferrous metals could be welded with arc technology. The industrial demands of World War II saw further development of arc welding, particularly for joining aluminum in the aircraft industry. A wide variety of gas-shielded and flux-shielded welding techniques were developed both during the war and in the following decades. The Digital Revolution of the late twentieth century led to the next development in arc welding: the design and implementation of industrial robotics. Robot arms could complete thousands of spot welds in rapid succession with far greater precision than even a skilled human welder. By the 1980s such robot arms were frequently found on automobile industry assembly lines, where whole rows of them could spot-weld auto bodies one after another. Not only did they increase the precision and the quality of the resulting welds, but they also reduced the number of industrial accidents resulting from welding.
Arc welding exposes human welders to a number of dangers. The high current or high voltage necessary to create the arc can result in electrocution if the welder accidentally comes into contact with live current. The showers of sparks and splatters of hot metal that frequently fly during welding operations can cause severe burns. As a result, welders wear heavy leather jackets and gauntlets and are generally required to keep their pant legs outside their work boots so that any bits of metal that fall down their pants will land on the floor rather than be caught inside the boot to burn flesh. But the greatest risk posed by arc welding is damage to the eye from the large quantities of ultraviolet (UV) light produced by the arc. These UV rays can burn the cornea, resulting in irritation of the eye, or they can damage the retina, resulting in permanent degradation or loss of vision. In order to protect their eyes from the UV radiation produced by the arc, welders wear heavy protective helmets with a filtered eye slit that admits only a fraction of the ambient light. However, these protective helmets also prevent the welder from seeing his or her work in order to initially strike the arc. As a result, welders who do not possess an excellent sense of their relative position in space will often momentarily raise their protective gear just long enough to strike the arc. A skilled welder may be able to drop the welding helmet over his or her face the moment the arc is struck. This trick might be used for years; however, a moment’s fumbling can expose unshielded eyes to dangerous UV radiation.
Two solutions have been developed to address this problem. The first is the use in protective helmets of glass that darkens in response to UV radiation, so that the welder can see his or her target to strike the arc and then has full protection while actually welding. However, as anyone with photochromic lenses in their regular glasses knows, there is a certain lag between the beginning of exposure to UV radiation and the darkening. The second solution was developed in 1970 by a Swedish scientist who noticed that the spectrum of an electric arc has a notch (a frequency at which little or no light is produced) that coincides with the yellow sodium line. As a result, if the workspace is lit with sodium vapor lights, a special interference filter on the welder’s helmet can be used that admits light only at the sodium wavelength. Welders could thus see well enough to strike the arc without risking their eyesight.
See also: Edison, Thomas; Shipping.
—Leigh Kimmel
Further Reading
Blunt, Jane, and Nigel C. Balchin. Health and Safety in Welding and Allied Processes. Cambridge, MA: Woodhead, 2002.
Cary, Howard B., and Scott C. Helzer. Modern Welding Technology. Upper Saddle River, NJ: Pearson Education, 2005.
James F. Lincoln Arc Welding Foundation. The Procedure Handbook of Arc Welding, 14th edition. Cleveland, OH: Lincoln Electric, 2000.
Weman, Klas. Welding Processes Handbook. New York: CRC Press LLC, 2003.
Argentina
From the first Spanish settlements in Argentina until independence in the early nineteenth century, the Argentine economy was heavily regulated by the Spanish, who insisted that all trade between Argentina, then the Viceroyalty of the River Plate, and Europe be conducted through Lima. This did lead to the establishment of many inland cities in Argentina, but the cost of importing items was often prohibitive. The result was that smugglers started establishing themselves on the east bank of the River Plate, which led to the growth of the cities of Colonia and Montevideo.
The Napoleonic Wars caused the British to send an expedition to capture Buenos Aires, the future capital of Argentina, in 1806–1807. As British manufacturers were unable to sell goods to Europe because of the blockade imposed by Napoleon, many sent shiploads to Buenos Aires and Montevideo in the hope of establishing a market in Latin America. Argentina declared self-government in 1810 and formal independence in 1816, and soon afterward became a steady market for British-manufactured goods. During most of the first half of the nineteenth century, Argentina was dominated by Juan Manuel de Rosas, who ruled as a caudillo.
With a large beef industry in the hinterland of Argentina, the development of refrigeration led to increased demand for Argentine beef, and railroads were built throughout the country, mostly with British capital and British engineers. This railroad construction transformed the Argentine economy and resulted in a massive enlargement of the Argentine middle class. In Buenos Aires there were trams, and later an underground metro, which opened in 1913 and was the first metro system in South America.
Industrial development in Argentina continued to be funded by British investment, and little was done to build up local heavy industry; most items were imported from Britain and also from France and Belgium. The farm sector continued to dominate the economy, as it does to the present day. However, unlike the agricultural sectors of the United States, Canada, and Australia, Argentine agriculture remains dominated by a relatively small number of families, often interrelated, who dominate not only the rural economy but also the rural political scene. Resentment of this caused the formation, in 1891, of the Unión Cívica Radical (Radical Civic Union), a political party with its base in the middle class. The Radicals, as they became known, wanted a free flow of capital and migrants. They also wanted little control over imports and exports. When they came to power in 1916, the agricultural sector boomed, but the manufacturing industry was destroyed as even more British imports flooded the market.
Argentina remained neutral in both World War I and World War II, which led to the country becoming increasingly wealthy. Its economy had weathered the Great Depression better than those of many European countries, and with vast gold reserves, the military sought to act. In 1930, they overthrew the Radical president Hipólito Yrigoyen, but then stepped back from politics. A military coup in 1943 resulted in Colonel Juan Perón becoming head of the National Department of Labor. His parents were Spanish migrants who had left Buenos Aires for Patagonia, where they hoped to establish a sheep farm; there they ran into many problems working against the large landowners funded with British capital. Perón sought to break the power of the “oligarchs” and to create a much larger middle class. He saw that this could transform the country, and in 1946 he became president, winning elections in that year and again in 1952. By expanding the state bureaucracy and raising the wages of the working class, he changed the country through massive state capital expenditures made possible by the wealth generated during World War II. His nationalization of the British railways was symbolic; they had been losing money for some years, but it did generate a sense of national pride.
However, in the early 1950s, Perón’s Argentina started having economic problems of its own making. Many migrants had flocked to the country after World War II, and this led to an economic resurgence and the creation of a protected manufacturing sector. However, as he became increasingly unpopular and heavy-handed, and was also running out of money, Perón was overthrown in 1955, ushering in a series of military or military-backed governments.
During the 1960s, a breakdown in law and order resulted in many strikes, political kidnappings, and some guerrilla warfare. This led to a flight of foreign capital and calls for Perón to return to power. He did so in 1973 but died the following year, and his widow, Isabel, became president. With foreign companies being attacked by Montonero guerrillas, supported by some of the most extreme Peronists, and with inflation at more than 50 percent per month, the military staged another coup. They easily took power, and many expected that it would be a repeat of the events of 1955. Instead, the new military government initiated a program of terror; its opponents were abducted and murdered. As many as thirty thousand people were killed during this period, and about a third of these (9,032 on the official listing) were never found. They became known as the “Disappeared.” The country became divided between the wealthy, who enjoyed the safety and certainty of the military dictatorship, and the poor, who were largely powerless. The Falklands War of 1982 led to the defeat of the Argentine military, and its government soon collapsed.
The railways were fundamental to Argentina’s development during the nineteenth century; however, many of the lines have now been abandoned. Some of the trains have been preserved, such as these at the Railway Museum at Gualeguaychú, north of Buenos Aires. Photograph by Justin Corfield.
In 1989, Carlos Menem of the Peronist Party was elected president. He pegged the Argentine peso to the U.S. dollar, which allowed Argentina to pay off its overseas debt more easily. It also overvalued the Argentine currency, destroying the manufacturing sector that had been nurtured by the military. However, it also allowed Argentina to borrow far more from international markets. In December 2001, Menem’s successor, President Fernando de la Rúa, was hounded from office as the economy crashed. The economy has gradually stabilized under the successive Peronist presidents Eduardo Duhalde, Néstor Kirchner, and Cristina Fernández de Kirchner.
See also: Brazil; Peronism.
—Justin Corfield
Further Reading
Lewis, Colin M. British Railways in Argentina 1857–1914. London: Athlone for the Institute of Latin American Studies, University of London, 1983.
———. Argentina: A Short History. Oxford: Oneworld, 2002.
Rock, David. Authoritarian Argentina: The Nationalist Movement, Its History, and Its Impact. Berkeley: University of California Press, 1993.
Romero, Luis Alberto. A History of Argentina in the Twentieth Century. University Park, PA: Pennsylvania State University Press, 2002.
Wright, Winthrop R. British-Owned Railways in Argentina: Their Effect on Economic Nationalism, 1854–1948. Austin, TX: University of Texas, 1974.
ARIZ. The Algorithm of Inventive Problem Solving, see TRIZ.
Arkwright, Richard (1732–1792)
The British industrial inventor Sir Richard Arkwright is credited with the design of the spinning frame that bears his name, which had a major impact on the Industrial Revolution. He also championed the use of power-driven machinery, especially machinery driven by water. Waterpower reduced the costs of production, which made the English county of Lancashire and the areas around it the world center of cotton cloth production for the last part of the eighteenth century and most of the nineteenth century.
Born on December 23, 1732, at Preston, Lancashire, in the north of England, Richard Arkwright was the youngest of thirteen children in a poor family. He started work as an apprentice barber, eventually becoming a barber and then a maker of wigs. This allowed him to travel around England, which taught him much about the country and the need for labor-saving machines. He also tried to educate himself; even when he was fifty, he was still studying English grammar to avoid making basic grammatical mistakes. He married Patience Holt in 1755 at Bolton, and she gave birth to his first son, Richard. Patience died the following year, and Richard Arkwright decided to try his hand as an inventor. He closely followed the invention of the flying shuttle by John Kay of Bury, Lancashire. He married again in 1761, to Margaret Biggins. They had three children, of whom only one daughter survived childhood.
In 1768, Arkwright started working with a clockmaker in Warrington called John Kay (not to be confused with John Kay the inventor of the flying shuttle) to try to make a frame for spinning cotton. He also worked with Thomas Highs, a reed maker from Leigh, who may have come up with some of the early designs. The machine that Arkwright designed had a succession of rollers rotating at increasingly faster speeds to draw out the roving; the first roller was covered in leather to allow it to catch hold of the cotton. The thread was then twisted onto a bobbin. The result was cotton thread that could be made quickly and spun very fine, yet strong enough to be easily woven. Drawing on the spinning jenny of James Hargreaves, Arkwright developed a design that could power a mill by water. Worried that others might copy the design, he patented the water frame in 1769.
With the spinning frame able to produce thread cheaply, the Arkwrights moved to Nottingham, where Richard Arkwright formed a business partnership with Jedediah Strutt and Samuel Need, powering their mill with horses. This proved an expensive way of generating power, so soon afterward Arkwright decided to modify his machines to use waterpower. He moved to Cromford in Derbyshire, where he built a new mill heavily involved in the manufacture of ribbed stockings and other items of apparel. Soon Arkwright opened other factories around the north of England. In 1775 he took out a patent for the complete cotton-thread production process. This caused a legal dispute as other manufacturers challenged whether or not Arkwright had been its original inventor. In 1779 a large mob burned down one of his mills at Chorley. Although this cost him much money, it did not stop the opening of a steam-powered mill at Manchester in 1781. In the following year Arkwright had assets worth £200,000 and employed five thousand people. He was knighted in 1786, and in the following year was appointed High Sheriff of Derbyshire. Soon afterward he was able to purchase the manor of Cromford. Arkwright’s fortune stemmed from the advantageous conditions of his day. Much of the fortune of the British cotton industry came from the monopoly that the British had over the sale of cloth in India, which prevented the Indians from making their own cloth. The ability to employ large numbers of people at low wages at places throughout the north of England, and the increasing use of machinery such as that introduced by Arkwright, also kept production costs low.
Richard Arkwright, a portrait by Joseph Wright of Derby.
A modified version of Arkwright’s water-frame, known as the “Throstle.”
However, at the same time Arkwright faced several problems. In 1785 the British Crown brought a court case against Arkwright to establish exactly who had designed the spinning frame. Witnesses, including Thomas Highs, claimed that Arkwright had stolen their ideas, and the court decided to revoke the patents. Arkwright also lost his appeal. Although it was a significant defeat, Sir Richard Arkwright’s wealth remained largely intact. At his death on August 3, 1792, at Cromford, Derbyshire, he was one of the richest people in England, leaving an estate valued at about £500,000. His only son, Richard (1755–1843), was also an effective businessman. As well as adding to the family fortune, he was well known for implementing major improvements in the health care and working conditions of his factory workers.
See also: Great Britain; Luddites and Ned Lud; Textile Industry.
—Justin Corfield
Further Reading
Cooke, A. J. “Richard Arkwright and the Scottish Cotton Industry.” Textile History 10 (1979): 196–202.
Crabtree, John Henry. Richard Arkwright. London: The Sheldon Press, 1923.
Fitton, R. S. The Arkwrights: Spinners of Fortune. Manchester: Manchester University Press, 1989.
Fitton, R. S. and A. P. Wadsworth. The Strutts and the Arkwrights, 1758–1830: A Study of the Early Factory System. Manchester: Manchester University Press, 1973.
Hewish, John. “From Cromford to Chancery Lane: New Light on the Arkwright Patent Trials.” Technology and Culture 28, no. 1 (1987): 80–86.
Hills, Richard Leslie. Richard Arkwright and Cotton Spinning. London: Priory Press, 1973.
Mason, J. J. “Arkwright, Sir Richard (1732–1792).” Oxford Dictionary of National Biography. Oxford: Oxford University Press, 2004.
Tann, Jennifer. “Richard Arkwright and Technology.” History 58 (1973): 29–44.
Armstrong, Edwin Howard (1890–1954)
An American electrical engineer, Edwin Armstrong was the chief developer of FM (frequency modulation) radio, the technology that came to dominate commercial radio from the mid-twentieth century until the advent of digital commercial radio in the first years of the new millennium. Born on December 18, 1890, in the Chelsea neighborhood of New York City, Edwin Howard Armstrong grew up in the city as the son of publisher John Armstrong and former schoolteacher Emily Gertrude (née Smith). When he was fourteen, he set his sights on working on inventions, inspired by the exploits of Guglielmo Marconi. Armstrong attended Columbia University. When he finished his studies, he became an assistant in the department of electrical engineering and then worked with Professor Michael I. Pupin until 1934, when he was appointed professor of electrical engineering at Columbia.
In 1912, during his initial period at Columbia, he invented the regenerative circuit, which was patented two years later. He then served in the Signal Corps of the U.S. forces during World War I, rising to the rank of major, a title that he used informally for the rest of his life. During his time in the army, he was awarded the Medal of Honor from the Institute of Radio Engineers in 1917. Stemming from his army experiences, he invented the superheterodyne receiver, which he patented in 1918, and the super-regenerative circuit, which he patented in 1922.
As with other inventors, Armstrong faced regular patent disputes. The first involved the regenerative circuit, which had been patented by Armstrong under the name “wireless receiving system.” Two years later another American inventor, Lee de Forest, patented his own regenerative circuit and then sold his rights to AT&T. From 1922 until 1934 there was a long court battle with Armstrong winning the initial court case, losing the second, and stalemating in the third. Finally, the U.S. Supreme Court awarded de Forest the patent, although some scientists have subsequently claimed that this was largely because of a misunderstanding of the technical facts of the case.
However, at the time that Armstrong was losing the case over the regenerative circuit, he was working on a much more important invention. In 1922 John Renshaw Carson of AT&T, the inventor of single-sideband modulation, published an article in the Proceedings of the Institute of Radio Engineers arguing that he did not see any particular advantage in developing FM radio. Carson’s article caused a number of scientists to lose interest, but Armstrong continued with his work. In 1933 he patented what he called a “radio signaling system,” which varied the frequency of the radio waves, rather than their amplitude, to carry the sound. Three years later he published a paper in the Proceedings of the Institute of Radio Engineers showing that wide-band FM radio produced sound with far less static than AM radio.
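The contrast between the two systems can be summarized in the standard modern textbook expressions for the transmitted signal (the notation here is a present-day convention, not Armstrong’s own): in amplitude modulation the strength of the carrier wave follows the audio signal, whereas in frequency modulation the carrier’s frequency is swung above and below its resting value,

$$s_{\text{AM}}(t) = A_c\,[1 + \mu\, m(t)]\cos(2\pi f_c t), \qquad s_{\text{FM}}(t) = A_c \cos\!\left(2\pi f_c t + 2\pi k_f \int_0^t m(\tau)\,d\tau\right),$$

where $m(t)$ is the audio signal, $f_c$ the carrier frequency, and $\mu$ and $k_f$ set the depth of modulation.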
The only serious problem with Armstrong’s invention was that it could lead to the end of AM radio. This possibility seriously worried the leadership of the Radio Corporation of America (RCA), which employed Armstrong at the time. Because it was then developing television, RCA asked Armstrong to remove his equipment from its space in the Empire State Building to make way for television equipment. Rather than fight the intracorporate battle, Armstrong quit RCA. In 1937 he established his own FM radio station, the first in the world, W2XMN, broadcasting from Alpine, New Jersey. Using a 40-kilowatt transmitter, the 42.8 MHz signal could be heard clearly up to one hundred miles away. Worried that FM radio might come to dominate the radio field, RCA lobbied for a change in government regulations and for control of the FM radio patent. The company succeeded in having the FM radio spectrum moved from 42–50 MHz to 88–108 MHz, destroying W2XMN as a result.
The systematic undermining of Armstrong’s work caused him immense strain, wrecking his marriage. His obsession with FM radio caused Armstrong to become mentally unbalanced. He eventually committed suicide by jumping from the window of his thirteenth-floor New York City apartment on January 31, 1954. He was buried in Locust Grove Cemetery, Merrimac, Massachusetts. His widow, Marion, who had been secretary to David Sarnoff, the radio pioneer from RCA, continued the battle after her husband’s death, finally winning control of the FM patent in 1967.
Edwin Armstrong
Armstrong won a number of awards, including the Egleston Medal of Columbia University in 1939; the Modern Pioneer Plaque of the National Association of Manufacturers and the Holley Medal of the American Society of Mechanical Engineers in 1940; the Franklin Medal of the Franklin Institute and the John Scott Medal of the Board of City Trusts, City of Philadelphia, in 1941; the Edison Medal of the American Institute of Electrical Engineers in 1943; and the Medal for Merit in 1947. In 1955, the year after his death, the International Telecommunication Union added him to its roster of great inventors. In 1980 Armstrong was inducted into the National Inventors Hall of Fame, and on September 21, 1983, he was honored on a twenty-cent U.S. postage stamp in a series commemorating American inventors.
—Justin Corfield
Further Reading
Erickson, Don V. Armstrong’s Fight for FM Broadcasting: One Man vs. Big Business and Bureaucracy. Tuscaloosa, AL: University of Alabama Press, 1973.
Lessing, Lawrence. Man of High Fidelity: Edwin Howard Armstrong. Philadelphia: J. B. Lippincott Company, 1956.
Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: E. Burlingame Books, 1991.
MacLaurin, W. Rupert, and R. Joyce Harman. Invention and Innovation in the Radio Industry. New York: Macmillan, 1949.
Armstrong, William George (1810–1900)
Sir William Armstrong, First Baron Armstrong, was a British industrialist, inventor, and organizer of industry. He was best known for founding the arms manufacturing empire that became Armstrong Whitworth. At his death his hometown newspaper, the Newcastle Daily Chronicle, questioned how Armstrong’s brilliant mind could have been used largely for “the science of destruction.”
William George Armstrong was born on November 26, 1810, at Newcastle upon Tyne in the north of England, the son of William Armstrong, a farmer from Wreay, near Carlisle. He attended Whickham School, then Bishop Auckland Grammar School. He became a legal clerk, and then trained as a lawyer. He started a legal career working in the firm of Donkin, Stables and Armstrong, but soon he became interested in engineering. His first major invention, in 1840, was a hydraulic engine. He went on to develop other machines, including a hydraulic crane used on the Newcastle docks. He then designed a hydraulic accumulator tower over three hundred feet tall, which was built on the quayside at Grimsby. This was the first of the machines he patented. It also earned him election to the Royal Society as a “gentleman well-known as an earnest investigator of physical science, especially with reference to the electricity of steam and the hydro-electric machine.” These successes caused him to abandon his law practice in 1847 to devote the rest of his life to machine design.
In 1847 Armstrong built a factory at Newcastle where he started making hydraulic machinery, including cranes. This was so successful that in 1850 Armstrong was cowinner of the Burlinson Prize awarded by the Glamorganshire Canal Company and was also given the Telford Medal of the Institution of Civil Engineers. After the Crimean War broke out in 1853, Armstrong and his partner, James Rendel, were asked by the War Office to design submarine mines to be used against the navy of Russia. When the Crimean War ended in 1856, Armstrong shifted his production to artillery. His firm equipped the British army with the new Armstrong breech-loading gun to replace the muzzle-loading artillery used during the war. He also developed a new artillery piece, capable of piercing the armor of the new ironclad ships, which was sold to the Confederate forces during the American Civil War (1861–1865). One of these guns, located at Fort Fisher, North Carolina, proved decisive in defending the fort.
Despite Armstrong’s lobbying, the British government decided to continue using muzzle-loading artillery and had all such guns made at Woolwich. Armstrong then turned his Elswick Ordnance Company into Sir W. G. Armstrong Mitchell & Company and sought orders from other countries. The principal customers included the Egyptian army as well as Chile and Turkey. In 1870 Armstrong became an agent for Richard Gatling’s machine gun, used by the British navy until it was replaced by Hiram Maxim’s version. During the 1870s, Armstrong concentrated on making hydraulic equipment, including the Swing Bridge over the River Tyne, completed in 1876, connecting Gateshead to Newcastle upon Tyne. He later supplied the original lifting gear for London’s Tower Bridge. By the early 1880s, he had returned to the production of armaments. To develop new designs, Armstrong began employing many military engineers, including Andrew Noble and George Wightwick Rendel; the latter helped develop the naval cruiser. With Rendel, the company expanded into shipbuilding from 1882, starting work on ships for the navy of Japan.
From the 1860s Armstrong became less interested in the running of his business and became heavily involved in landscape gardening. Over the course of his career he garnered many awards, including knighthood in 1859. In 1871 he founded the College of Physical Science at Newcastle, named Armstrong College from 1904, and in 1963 it was transformed into the University of Newcastle. Created First Baron Armstrong in 1887, he bought Bamburgh Castle in Northumberland. Here he entertained the shah of Persia in 1889, the emir of Afghanistan in 1895, and the king of Siam in 1897. The castle is still held by the family. Lord Armstrong died on December 27, 1900, and was buried in Rothbury Churchyard. He left £100,000 to help with the building of the Royal Victoria Infirmary in Newcastle upon Tyne.
In 1897 Armstrong’s company merged with Joseph Whitworth & Co., and the naval guns that they produced proved devastatingly effective at the Battle of Tsushima in 1905, when the Japanese destroyed the Russian fleet, giving Japan victory in the Russo-Japanese War. Upon his 1911 visit to Britain, Japanese admiral Heihachiro Togo made a point of visiting the Armstrong works. World War I generated a massive increase in arms production, and Armstrong’s company manufactured the 18-pounder artillery piece, the workhorse of British artillery throughout the war. In 1927 Armstrong’s company merged with Vickers to become Vickers-Armstrongs Limited.
William G. Armstrong, portrait by James Ramsay, 1830.
—Justin Corfield
Further Reading
Bastable, Marshall J. Arms and the State: Sir William Armstrong and the Remaking of British Naval Power, 1854–1914. London: Ashgate, 2004.
Dougan, David. The Great Gun-Maker: The Story of Lord Armstrong. Newcastle upon Tyne: Graham, 1971.
Linsley, Stafford M. “Armstrong, William George, Baron Armstrong (1810–1900).” Oxford Dictionary of National Biography. Oxford University Press, 2004.
McKenzie, P. The Life and Times of William George Armstrong, Baron Armstrong of Cragside. Morpeth: Longhirst, 1983.
Scott, J. D. Vickers: A History. London: Weidenfeld and Nicolson, 1962.
Arts and Crafts Movement
The Arts and Crafts movement was a rejection of the Industrial Revolution. More directly, it was a reaction against the manufactured objects displayed and celebrated in the Great Exhibition of the Works of Industry of All Nations, also known as the Crystal Palace Exhibition, held in London in 1851. The young William Morris (1834–1896) was not impressed by what he saw at the London exposition. Trained as an architect and painter, Morris named industrialization and the mass production of Victorian products as the culprits. He began a crusade to promote the benefits of craft over industrialization, preferring the “honest” aesthetic of the handmade.
In establishing the tenets of the movement, Morris looked to his country’s history to find a tradition of genuine English design. Influenced by the writings of Augustus Pugin, the father of England’s Gothic revival, and of John Ruskin, who argued that industrialization had dehumanized the artisan, Morris determined that the guild system of England’s medieval/Gothic era represented the period of his country’s greatest creativity. He advocated that his contemporaries “study the ancient work directly and learn to understand it [so as not to] find ourselves influenced by the feeble work all around us.” Morris trusted that good work grew out of the joy of a person’s creation, not a machine’s, and he believed that members of medieval guilds found joy in their work. Morris was inspired by this practical approach to making, not the look of the Gothic itself. He suggested that the relationship between the craftsperson and the craft was “honest” and “morally sound.” In addition to medieval works, the architects and artisans of the movement looked to local vernacular traditions for their inspiration. Morris declared: “As to matters of construction, it should not have to depend on the special skill of a very picked workman, or the super excellence of his glue, but be made on the very proper principles of the art of joinery.”
To give form to his philosophies, Morris turned to his friend and colleague, the architect Philip Webb, who sketched ideas for a house for Morris and his bride, Jane Burden. The result, known as the Red House and located in Bexleyheath, Kent, was completed in 1860. The L-shaped house is of deep red brick with a red tile roof. The façades are austere, ornamented by the windows and their surrounding brickwork. The interior showcased the talents of Morris and his artisan associates, who realized the necessity of custom designing all of the cabinetry and casework as well as every piece of furniture. This set the standard for future Arts and Crafts homes.
During the 1880s and 1890s, a number of new artisan guilds were founded in Great Britain, among them the Century Guild (cofounded by Selwyn Image), the Art Workers’ Guild, the Home Arts and Industries Association, and the Guild of Handicraft. Their collective goal was to raise the standard of design by promoting cooperation between artists/designers and craftsmen. Despite the various trends that developed within the different groups, the central philosophy remained intact: built work must reflect a respect for materials and craft. Whether buildings or artifacts, objects were to showcase the collaborative spirit of artist and artisan and express beauty through the simple joinery and details of construction.
The Arts and Crafts movement gained prominence—and its name—with the 1888 establishment of the Arts and Crafts Exhibition Society. Walter Crane was the first chairman of the society; committee members included Morris and the designer and illustrator Edward Coley Burne-Jones. Morris outlined the aims of the society in the preface to Arts and Crafts Essays, published in 1893: “Our art is the work of a small minority composed of educated persons, fully conscious of their aim of producing beauty and distinguished from the great body of workmen by the possession of that aim. . . . It is this conscious cultivation of art and the attempt to interest the public in it which the Arts and Crafts Exhibition Society has set itself to help by calling special attention to that really most important side of art, the decoration of utilities by furnishing them with genuine artistic finish in place of trade finish.”
The Arts and Crafts movement found its way to the United States around the turn of the twentieth century. Its emphasis on unique, hand-crafted objects was compatible with the American spirit of individualism. American practitioners were less wedded to the English Gothic and more to the vernacular approach to architectural design; they sought to establish a new architecture derived from their own history and using their own local materials. In 1897 the Chicago Arts and Crafts Society was founded by architects including George W. Maher, Frank Lloyd Wright, Dwight Perkins, Robert C. Spencer Jr., and Myron Hunt. Like Morris, this group hoped to establish a national architectural language that expressed, in this case, American democracy. Their ideas were expressed through basic geometric forms, the elimination of unnecessary detail, and a respect for materials. In California, architects such as Bernard Maybeck, Charles S. and Henry M. Greene, and Irving Gill were drawn to the movement, designing every detail of their projects from the construction details to the built-ins and custom furniture. Greene and Greene’s Gamble House in Pasadena is considered one of the finest examples of Arts and Crafts architecture in the United States.
Ultimately the movement had its limitations, particularly when applied to the design of the new high-rise commercial architecture of the period. As buildings became taller, handmade, crafted construction gave way to the need for industrial components. As with any reactionary philosophy, the “spirit of the age” ultimately prevailed, and by the 1920s Modernism had replaced Arts and Crafts as the expression of the times. As early as 1911 the German architect Walter Gropius declared that “the artist possesses the ability to breathe soul into the lifeless product of the machine.” Ironically, the principles of Arts and Crafts—the attention to detail, the high standards, and the sensitivity to aesthetics—became part of the Modern paradigm. Yet the belief that every feature of a house is a work of art has secured the movement’s legacy as an important and visually stunning approach to the design of the built environment.
See also: Apprentice System.
—Sally L. Levine
Further Reading
Blakesley, Rosalind P. The Arts and Crafts Movement. London: Phaidon Press, 2006.
Coleman, Brian. Historic Arts & Crafts Homes of Great Britain. Layton, UT: Gibbs Smith Publisher, 2005.
Condit, Carl. The Chicago School of Architecture. Chicago: University of Chicago Press, 1964.
Kaplan, Wendy. The Arts and Crafts Movement in Europe and America: Design for the Modern World, 1880–1920. New York: Thames & Hudson, 2004.
Kurtich, John, and Garret Eakin. Interior Architecture. New York: Van Nostrand Reinhold, 1993.
Tinniswood, Adrian. The Arts & Crafts House. New York: Watson-Guptill Publications, 1999.
Todd, Pamela. William Morris and the Arts and Crafts Home. San Francisco: Chronicle Books, 2005.
Asbestos
An important but dangerous mineral
Asbestos is any one of six naturally occurring silicate minerals that are used commercially for their desirable physical properties.
Asbestos, from a Greek term meaning “incombustible,” was applied as a name to fibrous materials hundreds of years before the science of mineralogy evolved. The term lacks scientific validity because a collective name applied to members of two distinct mineral groups cannot be simply defined mineralogically: the substances in question are the serpentine mineral chrysotile and the fibrous amphibole minerals. Originally the term asbestos applied only to the amphibole varieties, but it continues to be used in commerce even though over 95 percent of production is fibrous serpentine.
Amphibole asbestos is characterized by fibrous crystals with length-to-width ratios of twenty to one or more. Although the amphibole varieties outnumber the single serpentine variety in number of mineral species, they are much rarer in nature. Only two are readily available, crocidolite and amosite, both of which are mined only in South Africa. They are used mainly in high-temperature insulation and acid-resistant products.
Chrysotile is the only asbestos mineral belonging to the serpentine group. It is found in California, Vermont, Canada, Zimbabwe, South Africa, Swaziland, and Russia. It is used to make asbestos shingles, sheet siding, asbestos cement pipe, floor tile, gaskets, paper, binders for heat-insulating materials, and filler for asphalts, plastics, paints, and greases.
The controversy about the health effects of exposure to asbestos centers largely on damage to lung tissue from inhaling minute particles of the mineral. It is generally agreed that prolonged occupational exposure to asbestos dust increases a person’s chances of contracting lung cancer or mesothelioma, a rare cancer of the lining of the chest or abdominal cavity. However, there is no consensus on a threshold atmospheric concentration below which exposure would be safe, nor is there an agreed method for establishing one.
The largest area of asbestos-related research has concerned the health effects of the fibers. Efforts to find environmentally acceptable substitutes for asbestos have not been successful: to acquire any substantial part of the asbestos market, a substitute material must match asbestos in strength, chemical inertness, durability, and cost.
—Kenneth E. Hendrickson Jr.
Further Reading
Bartrip, P. W. J. Beyond the Factory Gates: Asbestos and Health in Twentieth Century America. London: Continuum, 2006.
Benarde, Melvin A. Asbestos: The Hazardous Fiber. Boca Raton, FL: CRC Press, 1990.
Bowker, Michael. Fatal Deception: The Untold Story of Asbestos, Why It Is Still Legal and Still Killing Us. Emmaus, PA: Rodale, 2003.
Brodeur, Paul. Outrageous Misconduct: The Asbestos Industry on Trial. New York: Pantheon, 1985.
Castleman, Barry I. Asbestos: Medical and Legal Aspects. New York: Law and Business, 1984.
Dewees, Donald N. Controlling Asbestos in Buildings: An Economic Investigation. Washington, DC: Resources for the Future, 1986.
Kendall, Tom. Asbestos. London: Financial Times Energy, 2000.
Schneider, Andrew, and David McCumber. An Air That Kills: How the Asbestos Poisoning of Libby, Montana, Uncovered a National Scandal. New York: Putnam, 2004.
Ashley, Laura (1925–1985)
A Welsh designer, Laura Ashley established the company that bore her name, manufacturing printed fabrics and clothing whose patterns became famous all over the world for their simplicity of color and style. Initially operating from small premises, the company expanded dramatically, establishing shops throughout the British Isles and selling its goods through department stores around the world.
Born on September 7, 1925, at Dowlais, Merthyr Tydfil, Laura Mountney was the eldest of four children of Stanley Lewis Mountney and Margaret Elizabeth (née Davis). Her parents raised her as a strict Baptist. Although she did not speak Welsh, she enjoyed the language and certainly regarded herself as Welsh. Her father was a civil servant who worked in Surrey, and Laura went to school in London. When she was fourteen, however, at the start of World War II, her school was evacuated, and she returned to Wales. Soon afterward she trained as a secretary and then served in the Women’s Royal Naval Service. After the war Laura Mountney worked as a secretary with the National Federation of Women’s Institutes, a post she held for seven years.
In 1949, Laura Mountney married Bernard Albert Ashley, a Welsh businessman and engineer who also had a flair for design. They had two children, and it was during her time as a homemaker that Laura Ashley started to design napkins, tea towels, and table mats. Part of the inspiration came from a visit to the Victoria and Albert Museum, and her wares were made in the attic of their house in Pimlico, London. It was not long before Laura Ashley diversified into producing scarves. These became fashionable when Audrey Hepburn appeared wearing one in Roman Holiday (1953).
It was from these small beginnings that Laura Ashley started her company. The couple moved to Kent in 1955 and bought a mill there. The site suffered its first disaster three years later when, in September 1958, the River Darent overflowed and destroyed much of the stock and equipment, nearly ending the company. The business rebounded, and in 1961 it was relocated to Wales; the family supported the move, and there was also the benefit of government grants aimed at helping rural communities. The business was initially run from a former social club in Carno, Montgomeryshire (modern-day Powys), in central Wales, and in 1967 it moved to the village’s former railway station, the line having closed some years earlier. The station remained the company’s headquarters until 2005.
At this juncture, as the company was becoming extremely profitable, Bernard Ashley designed a flatbed printing process that could print some one thousand meters of fabric each day. The labor force was largely drawn from women skilled in sewing and cutting at home, many of whom had never been in the workforce before, while many of the men were former farmhands. The Ashleys quickly gained a reputation for treating their staff well, and the factories operated for only four and a half days per week; Laura Ashley believed that people needed some leisure time, especially those doing repetitive work.
Although Laura Ashley had designed dresses and blouses for work, it was not until 1966 that she designed her first dress for social occasions. It was a period when women’s clothing was becoming more revealing; by contrast, the Laura Ashley clothes were far more traditional. The Laura Ashley look became popular, and it was not long before the company was generating large profits, with sales reaching £300,000 in 1970. By this time Bernard Ashley was heavily involved in designing and running the machinery while Laura Ashley designed the patterns. Laura insisted, however, that many of her patterns were not her own designs: in many cases she had found old patterns and adapted them, often changing the colors and making heavy use of pastel shades. She regularly visited museums to get new ideas and to see how old designs could be revived or changed.
The Laura Ashley business was immensely successful, eventually achieving a turnover of more than £130 million, which enabled it to expand into the European market. The Ashleys bought a townhouse in Brussels but spent most of their later years at their château in Picardy, with some time each year at their villa in the Bahamas. Laura, who had become reclusive from the late 1970s, returned frequently to Britain; on one such visit she was badly injured in a fall down the steps of her daughter’s house in the Cotswolds, and she died from these injuries on September 17, 1985, at Coventry. Two years later, her husband was knighted.
Just before Laura Ashley’s death, plans had already been put in place for the flotation of the company on the London Stock Exchange. The company at that time had seventy-three shops in Britain and fifty-five in the United States, employing 3,400 people, and the aim of the flotation was to raise capital for the construction of a new factory. The flotation took place in late November 1985 and was thirty-four times oversubscribed. As a public company, Laura Ashley expanded dramatically, acquiring new retail outlets; at the height of its success there were about five hundred stores. However, this expansion led to cash-flow problems as demand fell back slightly, and the company recorded losses in 1990 and 1991. A boardroom clash followed between Sir Bernard Ashley and the new CEO, James Maxmin. Maxmin brought the company back to profitability, but he resigned soon afterward, citing difficulties dealing with Sir Bernard Ashley. By the end of the 1990s, MUI Asia Limited had become a major shareholder, and the company began to refocus on its home furnishings—the most profitable lines—and to reduce the emphasis on clothing. Sir Bernard Ashley died on February 14, 2009.
Laura Ashley
See also: Textile Industry.
—Justin Corfield
Further Reading
Ashley, Nick. Laura Ashley at Home: Six Family Homes and Their Transformation. New York: Harmony Books, 1988.
Gale, Iain, and Susan Irvine. Laura Ashley Style. London: Weidenfeld & Nicolson, 1987.
Hooson, Emlyn. “Laura Ashley.” Dictionary of National Biography 1981–1985. Oxford: Oxford University Press, 1990: 12–13.
Wood, Martin A. Laura Ashley. London: Frances Lincoln Ltd. Publishers, 2009.
Ashworth, Henry (1794–1880)
An English cotton manufacturer, Henry Ashworth was born on September 4, 1794, at Birtwistle, near Bolton, Lancashire, England, the oldest of the six sons and five daughters of John Ashworth, land agent and cotton spinner, and his wife, Isabel (née Thomasson). He attended a local school and then went to Ackworth School, a Quaker boarding school in Yorkshire. In 1808 he entered his father’s cotton business, and he was running it by 1818. The business expanded in 1829 with the purchase of the Egerton Mill. In 1834 it was running two factories, which had around seven hundred employees working over seventy-seven thousand spindles. The firm continued to grow, but Henry argued with his brother, who had joined it in 1824, and finally in 1854 the firm was divided, with Henry Ashworth taking the New Eagley mill, which by that time had seven hundred employees.
Henry Ashworth became well known for his construction of three rural industrial villages—Turton, Egerton, and Bank Top. These provided cottages for his employees whose wages were high enough to allow wives to remain at home. Schooling was encouraged, and piped water was provided from 1835. A library and a newspaper room were later added, but no public houses were allowed for the consumption of alcohol.
Ashworth was a founding member of the Anti-Corn Law League, established to repeal the Corn Laws. With his friends Richard Cobden and John Bright, he toured the north of England to survey the effects of the Corn Laws on agriculture. He later wrote Recollections of Richard Cobden and the Anti-Corn Law League (1876). He traveled widely and spent the winter of 1879–1880 in Italy. He was returning from Rome when he died on May 17, 1880, at Florence, possibly from malaria. He was buried in the Protestant Cemetery in Florence.
—Justin Corfield
Further Reading
Boyson, Rhodes. The Ashworth Cotton Enterprise: The Rise and Fall of a Family Firm, 1818–1880. Oxford: Clarendon Press, 1970.
Howe, Anthony. The Cotton Masters, 1830–1860. Oxford: Oxford University Press, 1984.
Howe, A. C. “Ashworth, Henry (1794–1880).” Oxford Dictionary of National Biography. London: Oxford University Press, 2004.
Trinder, Barrie. The Making of the Industrial Landscape. London: Dent, 1982.
Asiatic Petroleum Company
The Asiatic Petroleum Company (later the Shell Petroleum Company, Ltd.) was established in 1903 by Royal Dutch and Shell to operate a shipping line transporting fuel from Java and Borneo. Until that point, the Shell Transport and Trading Company, Royal Dutch, and the Rothschild merchant bank of Paris (which had considerable oil interests) had all been competing against one another and against Standard Oil.
The Royal Dutch Company had been formed in 1890 to exploit the petroleum interests in the Netherlands East Indies (modern-day Indonesia)—especially Sumatra and Borneo. Its market was largely in Asia, especially China, French Indochina (Vietnam, Cambodia, and Laos), and British Malaya, as well as India. The increasing amount of business led some British business interests, in 1897, to establish the Shell Transport and Trading Company with a capital injection of £1.8 million from Samuel Brothers, with Marcus Samuel (later Viscount Bearsted, 1853–1927) becoming the manager. The name came from the fact that Samuel had taken over his father’s import-export business, which had been involved in the importing of shells from East Asia to Britain. It had subsequently established a sideline in importing kerosene, with that part of the business gradually proving far more profitable.
The Shell Transport and Trading Company revolutionized the carrying of petroleum. Until that point the transportation of petroleum had been expensive because the steamers carrying the oil could only do so one way: cleaning the ships thoroughly enough for them to take other goods on the return leg was prohibitively expensive, so the cost of both the outward voyage with the oil and the return trip had to be covered by the oil sales. Marcus Samuel began investigating the possibility of using steam to clean the vessels, an idea suggested by a ship’s captain but never previously tried. This new technique allowed ships of the Shell Transport and Trading Company to take petroleum from the Netherlands East Indies to China and elsewhere in Asia and to return with rice and other produce. At the time of the investment, some people worried that Samuel had risked too much on the venture; within thirty years, however, his investment was worth £26 million. By that time the company was also handling petroleum from Texas, Romania, and parts of Russia.
It was not long after Samuel began using steam to clean his ships that other companies followed suit. However, with Standard Oil trying to control the petroleum industry in Asia, the main rivals decided to merge their interests. On June 27, 1902, an agreement was reached whereby Royal Dutch, the Shell Transport and Trading Company, and the merchant bank Rothschild, the latter two putting up £300,000 each, came together to form a combined shipping operation. The Asiatic Petroleum Company itself was formed on July 2, 1903, and it controlled the ships that were used by Royal Dutch and other related companies. By this time, Royal Dutch was buying kerosene from Russia, with sales to China, India, Egypt, and also to the Netherlands East Indies. In 1905, the Asiatic Petroleum Company made an arrangement with Burmah Oil by which both companies agreed to “friendly co-operation,” especially in the Indian market. In 1907, a complete merger of operations led to the creation of Royal Dutch/Shell, with Sir Henri Deterding in charge of the new conglomerate that included the Asiatic Petroleum Company.
The Chinese Revolution of 1911 led to widespread disturbances in parts of China, and in April 1914 the White Wolf bandits operating in east-central China sacked the city of Hangzhou, destroying much of the company’s property. The troubles in China became worse during the 1920s, when the Nationalists in southern China imposed a tax on imported kerosene. The Asiatic Petroleum Company and Standard Oil both refused to pay taxes that contravened Chinese treaties with the Western Powers, treaties the Chinese viewed as unequal and sought to abrogate. It was not long before the two companies imposed an embargo on territory under the control of the Kuomintang, in the south of the country. They hoped that this might cause the Nationalists to change their policy, but the Nationalists, who had not yet broken with the Chinese Communist Party and could draw on their ties with the Soviet Union, broke the embargo. With the prospect of a Kuomintang government monopoly being established, the Asiatic Petroleum Company and Standard Oil both agreed to accept the tax. Historically, it was a major move because it showed the power of the Nationalists and that foreign companies operating in China no longer enjoyed the free rein they had before.
See also: Petroleum; Shanghai, China.
—Justin Corfield
Further Reading
Gerretson, F. C. History of the Royal Dutch. Leiden: E. J. Brill, 1953.
Wilson, David A. “Principles and Profits: Standard Oil Responds to Chinese Nationalism, 1925–1927.” Pacific Historical Review 46, no. 4 (1977): 625–47.
ASIT (Advanced Systematic Inventive Thinking). See TRIZ.
Aspdin, Joseph (1778–1855)
Joseph Aspdin (or Aspden) was a British cement manufacturer who in 1824 obtained the first British patent for Portland cement. He was born near Leeds in about December 1778, the son of Thomas Aspdin, a bricklayer, and his wife, Mary. He was the eldest of six children.
Aspdin began work as a bricklayer at Briggate, Leeds, and on May 21, 1811, he married Mary Fotherby. They had two sons and five daughters, two of whom died in infancy. By 1817, Aspdin had started working on his own in central Leeds. Over the following years he experimented with the manufacture of cement, and on October 21, 1824, he was granted British Patent 5022, “An improvement in the mode of producing an artificial stone.” In this patent he first used the term Portland cement, comparing the strength of his product to that of Portland stone, which was renowned for its quality and was quarried on the Isle of Portland in Dorset, in the southwest of England.
The process used limestone “such as that generally used for making or repairing roads,” which was mixed with a “specific quantity of argillaceous earth or clay” and with water to make a far stronger bonding agent. The mixture was then heated at a high temperature in a kiln, in a manner similar to a glass furnace; Aspdin may have developed his method at the glassworks either at Hunslet or at Wakefield. The end product, when mixed with water and aggregates of small stones and gravel, made a particularly strong concrete. The cement was also very useful for making stucco and architectural precast moldings, as it set much faster than ordinary cement.
Soon after being granted the patent, in early 1825 Aspdin formed a business partnership with a neighbor, William Beverley, and the two established a factory to produce the cement. It was located at Kirkgate, near Wakefield. Beverley remained in Leeds while Aspdin moved to Wakefield. By the end of 1825, Aspdin had managed to get a second patent, this time for a method of making lime.
The factory making Portland cement operated until 1838, when the site was compulsorily purchased by the Manchester and Leeds Railway Company. The machinery was moved to a nearby site in Kirkgate while the original location was cleared. By this time Aspdin’s eldest son, James, was working as an accountant in Leeds, and his younger son, William, born in 1816, was running the factory. William modified the process, producing what is now regarded as modern Portland cement: he included a far higher proportion of limestone in the mixture and burned it at a much higher temperature, which took more fuel and was more expensive. As a result, when he left the company he moved to northeast Kent, where there were much greater supplies of the soft chalk needed for manufacturing “modern” Portland cement. William Aspdin eventually established partnerships selling the cement at Gateshead upon Tyne and at Westminster, later setting up a sales office in Hamburg, Germany.
In 1841 Joseph Aspdin went into partnership with his eldest son, and he even posted a notice to tell the public that the new company would not be responsible for William Aspdin’s debts. Joseph Aspdin retired in 1844 and transferred his entire business to James Aspdin, who, four years later, moved the factory to another site at Ings Road where the plant remained until 1900. In the 1851 Census for Hanson’s Lane, Wakefield, Joseph Aspdin is listed as a “cement manufacturer,” living with his wife and a teenage female servant. Joseph Aspdin died on March 20, 1855, at his home in Wakefield and was buried in the churchyard of St. John’s Church, Wakefield.
At the entrance to the Leeds Town Hall, a bronze tablet as a tribute to Aspdin by the American Portland Cement Association and the British Cement Makers Federation notes: “In memory of Joseph Aspdin, of Leeds, bricklayer, 1779–1855, whose invention of Portland Cement, patented in October 1824 and followed by improvement in manufacture and use, has made the whole world his debtor.” It was unveiled in 1924.
See also: Great Britain.
—Justin Corfield
Further Reading
Francis, A. J. The Cement Industry 1796–1914: A History. North Pomfret, VT: David and Charles, 1977.
Halstead, P. E. “The Early History of Portland Cement.” Transactions of the Newcomen Society 34 (1961–1962): 37–54.
Hewlett, P. C., ed. Lea’s Chemistry of Cement and Concrete, 4th edition. London: Arnold, 1998.
Kirkbride, T. W. “Joseph Aspdin.” Dictionary of National Biography: Missing Persons. Oxford: Oxford University Press, 1994. Revised by Mike Chrimes. Oxford Dictionary of National Biography. Oxford: Oxford University Press, 2004.
Assembly Line
An assembly line is a manufacturing process in which interchangeable parts are added to a product in a sequential manner in order to create a finished product much faster than with hand-crafting methods. The best-known form of the assembly line, the moving assembly line, was put into practice by the Ford Motor Company between 1908 and 1913 and was made famous by the social and economic implications of mass production, such as the affordability of the Ford Model T and the introduction of high wages for Ford workers. But Henry Ford did not invent the assembly line; his company was merely the first to build large factories based on the concept. In any case, mass production via assembly lines such as those used by Ford is generally considered significant in the development of modern consumer culture because it made possible low unit costs for manufactured goods.
The origins of the assembly-line concept can be traced at least as far back as the sixteenth century, when the Venetian Arsenal employed some six thousand people who were apparently capable of producing a ship a day and could fit, arm, and provision a newly built galley with standardized parts on an assembly-line basis not seen again until the Industrial Revolution. The first linear and continuous assembly line of post-Renaissance times was created in 1801 by Marc Isambard Brunel and others for the production of pulley blocks for the Royal Navy; it remained in use until the 1960s. In the United States, credit for this innovation is usually given to Eli Whitney, who in 1798 received a government contract to manufacture ten thousand muskets. At that time the manufacture of firearms was still primarily a handicraft process: each gun was made as a separate unit, with the barrel, the stock, and the lock put together by a slow and laborious process of fitting, shaping, and filing. The Whitney plan was to disassemble a completed musket and use each piece as a pattern for the manufacture of a large number of identical replicas. The process would yield large numbers of identical stocks, locks, and barrels from which a completed musket could be assembled in a short time with a minimum of fitting. Although the completion of his contract took longer than Whitney had anticipated, he finally finished it successfully in 1809. The assembly plan was soon adopted by other arms manufacturers, notably Colt and North, and by the 1850s it was used for the manufacture of clocks, watches, locks, sewing machines, and farm machinery.
Americans are also credited with the development of several machine tools used in the construction of heavy machinery, among them the turret lathe, the slide lathe, the milling machine, the gear cutter, and the vernier caliper. All of these were indispensable to the assembly-line process.
The Industrial Revolution in Western Europe and North America led to a proliferation of manufacturing and invention. Many industries, notably textiles, firearms, timepieces, buttons, railroad cars and locomotives, sewing machines, and bicycles, benefited from improvement in material handling, machining, and assembly, although modern concepts such as industrial engineering had not yet emerged.
At the turn of the twentieth century, Ransom Olds patented the assembly-line concept, which he put to work in his Olds Motor Vehicle Company in America to mass-produce automobiles. But his effort was overshadowed by the independent redevelopment of assembly-line work at Ford Motor Company a few years later, which introduced the method to a wider audience.
As practiced at Ford, the system took five years to develop. It began when William Klann returned from a visit to a Chicago slaughterhouse where he had seen what was described as the “disassembly line” (animals were butchered as they moved along a conveyor). The efficiency of one person removing the same piece over and over caught his attention, and he reported the idea to Peter E. Martin, head of Ford production. From this beginning, through a trial-and-error process over a period of several years, the modern automated assembly-line concept emerged. As a result of these developments, Ford’s cars came off the line at three-minute intervals. The process was so successful that paint became a bottleneck: only Japan black would dry fast enough, forcing the company to drop the variety of colors used before 1914, and all Ford cars were black until fast-drying Duco lacquer was developed in 1926. Ford’s complex safety procedures—especially assigning each worker to a specific location instead of allowing him to move around—dramatically reduced the rate of injury. Ford also increased wages, and the combination of high wages and high efficiency came to be known as Fordism. It was eventually copied by most major industries.
Whereas the Industrial Revolution in Western Europe and North America led to a proliferation of inventors and innovators, the Second Industrial Revolution led to the emergence of such concepts as Ford’s version of the assembly line, which played a significant role in shaping modern civilization.
See also: Automobile; Daimler, Gottlieb; Detroit, Michigan; General Motors Corporation; Patents.
—Kenneth E. Hendrickson Jr.
Further Reading
Adler, William M. Mollie’s Job: A Story of Life and Work on the Global Assembly Line. New York: Scribner, 2000.
Cavendish, Ruth. Women on the Line. London: Routledge & Kegan Paul, 1982.
Crowson, Richard, ed. The Handbook of Manufacturing Engineering. Assembly Processes: Finishing, Packaging, and Automation. Boca Raton, FL: CRC/Taylor and Francis, 2006.
Ford, Henry. My Life and Work. Saint Louis Park, MN: Filiquarian Publishing, 2006.
Linhart, Robert. The Assembly Line. Amherst: University of Massachusetts Press, 1982.
Owen, A. E. Assembly with Robots. Englewood Cliffs, NJ: Prentice-Hall, 1985.
Prenting, Theodore O. Humanism and Technology in Assembly Line Systems. New York: Spartan Books, 1974.
Taylor, Frederick Winslow. The Principles of Scientific Management. New York: Harper and Brothers, 1934.
Walker, Charles R. The Foreman on the Assembly Line. Cambridge: Harvard University Press, 1952.
———. The Man on the Assembly Line. Cambridge: Harvard University Press, 1952.
Widick, B. J. Auto Work and Its Discontents. Baltimore: Johns Hopkins University Press, 1976.
Astbury, John (1688?–1743)
A potter associated with the Staffordshire figures that made him wealthy, John Astbury was born in about 1688, probably the son of John Astbury and Anna (née Hales). He married Elizabeth Wedgwood, the sister of Thomas Wedgwood (the father of Josiah Wedgwood), and in 1744 he was one of the witnesses to the apprenticeship of Josiah Wedgwood.
Several men called John Astbury were connected with the pottery industry at that time, and it is believed that the famous potter was the one who worked for John Fenton at Shelton from 1723 and was then involved in running the pottery works at Shelton from 1731. He was certainly influenced by the events and fashions of the period and used flint to achieve the white texture for which his pottery became so well known. He is probably the John Astbury described as a “master potter” who died on March 3, 1743, and was buried at St. Peter ad Vincula, Stoke-on-Trent, England.
—Justin Corfield
Further Reading
Barker, D. “An Important Discovery of Eighteenth-Century Ceramics in Stoke-on-Trent.” West Midlands Archaeology 35 (1992).
Phillips, Helen L. “Astbury, John (1688?–1743?).” Oxford Dictionary of National Biography. London: Oxford University Press, 2004.
Atatürk (Mustafa Kemal) (1881–1938)
The Turkish leader Mustafa Kemal (Atatürk), born around 1881, was an army officer, revolutionary leader, founder of the Republic of Turkey, and its first president. He established himself as a capable military commander in the Gallipoli Campaign (1915) and later fought with distinction on the Anatolian and Palestinian fronts. Following the defeat of the Ottoman Empire in 1918, he led the Turkish national movement against the efforts of the Allies to partition the country. Having established a provisional government in Ankara, he defeated the invading forces of the Entente powers, drove them from Anatolia, and established the Republic of Turkey. He then embarked upon a major program of political, economic, and cultural reform that sought to create a modern, democratic, and secular nation, and he succeeded in bringing Turkey into the modern world.
Kemal was commissioned a lieutenant in 1905 and soon joined a secret society of reformist officers called Motherland and Liberty. Two years later he joined the Committee of Union and Progress (CUP), commonly known as the Young Turks. This group seized power from the Sultan, Abdülhamid II, in 1908, but at this time Atatürk was not among the political leaders of the movement. During the next several years he served in various military and diplomatic capacities and at the same time became a vocal critic of the CUP leadership. He opposed Turkey’s entry into the war on the side of the Central Powers, but he served gallantly. His first encounter with Allied forces came at the Battle of Gallipoli (April 25, 1915–January 9, 1916), where the Allied objective was to take Istanbul. Turkish forces under his command played a major role in crushing the attack.
Following the Turkish victory at Gallipoli, Mustafa Kemal fought in the Caucasus Campaign in 1916 and then in the Sinai and Palestine campaigns in 1917 and 1918, by which time he had reached the rank of general. In these theaters of the war the Turkish and German forces were defeated, and on October 30, 1918, the Ottomans surrendered to the Allies with the Armistice of Mudros. It soon became clear that the Allies intended to partition Turkey, as British, Italian, French, and Greek forces began to occupy the country. Mustafa Kemal’s participation in the resistance began when he was assigned the task of demobilizing the Turkish Ninth Army; instead, he issued orders to local leaders, provincial governors, and military commanders to resist the occupation. The Turkish War of Independence began in May 1919.
In the Treaty of Sèvres signed on August 10, 1920, the Sultan agreed to the dismemberment of the country, but Mustafa Kemal and his followers in Ankara refused to accept it. They formed the Turkish Grand National Assembly, wrote a new constitution, and elected Mustafa Kemal president. By 1922, the National Army, led by Mustafa Kemal, had driven all the invaders from the country. On November 1, 1922, the Grand National Assembly voted to abolish the sultanate, and the last sultan left the country on November 17. Shortly thereafter, the Conference of Lausanne convened and, after lengthy debate, agreed to Turkish independence. The Treaty of Lausanne was signed on July 24, 1923. Then began the process of modernization as envisioned by Mustafa Kemal, now known as Atatürk, the “father of Turkey.”
Under Atatürk’s rule from 1923 to 1938, Turkey did not become a true democracy; indeed, many people saw Atatürk as a dictator. Nevertheless, he instituted many reforms. He separated the government from religious affairs, closed the Islamic courts, and replaced Sharia law with a civil code modeled on the Swiss Civil Code and a penal code modeled on the Italian Penal Code. His economic policies were aimed at the development of small- and large-scale business under state control; many of these enterprises grew into successful concerns and were later privatized. The primary goal was to eliminate foreign control of the economy, which had become endemic during the last years of the caliphate.
Atatürk also emphasized educational reform. A new system designed to promote literacy, science, patriotism, and modernity was created under state control. The number of primary and secondary schools was increased, Islamic schools were banned, the University of Istanbul was reorganized, and a new university was established in Ankara. The program even included the creation of a new Turkish alphabet to replace the Arabic script.
In addition to educational reform, Atatürk sought to improve the status of women through social reform. His policy aimed at complete gender equality, to be achieved by means of education and law. Women were encouraged to attend school and to progress as far as their talents would permit, and on December 5, 1934, the Turkish parliament passed a law granting full political rights to women.
Atatürk’s importance to Turkey and the world was incalculable. He saved his country from foreign domination, transformed it into a modern state, resurrected the economy, and strove to modernize its society. He also sought to promote international peace. By the time of his death in 1938, the country had made significant gains in most areas even though the transformation was not yet complete. Some traditional and Islamic resistance remained and has continued to the present day, but the power and influence of Atatürk’s legacy remains strong.
Atatürk as President of Turkey.
See also: Five-Year Plans; Germany; Nationalization.
—Kenneth E. Hendrickson Jr.
Further Reading
Atillasoy, Yüksel. Mustafa Kemal Atatürk: The First President and Founder of the Turkish Republic. Woodside, NY: Woodside House, 2002.
Huntington, Samuel. Political Order in Changing Societies. New Haven: Yale University Press, 2006.
Kinross, Patrick. Atatürk: The Rebirth of a Nation. London: Phoenix Press, 2003.
Mango, Andrew. Atatürk. London: John Murray, 2004.
Palmer, Alan. Ataturk. London: Cardinal Books, 1991.
Zürcher, Erik. Turkey: A Modern History. London; New York: I. B. Tauris, 2004.
Automobile
An automobile is a four-wheeled automotive vehicle designed for passenger transportation, commonly propelled by an internal combustion engine using a volatile fuel. The modern automobile consists of about fourteen thousand parts divided into several structures and mechanical systems. These include the steel body, containing passenger and storage space, which sits on the chassis or steel frame; the engine that powers the car by means of a transmission; the steering and brake systems that control the car’s motion; and the electrical system that includes a battery, alternator, and other devices. Subsystems involve fuel, exhaust, lubrication, cooling, suspension, and tires.
The invention of the automobile, like most technological innovations, was a long process. Experiments with steam vehicles began late in the eighteenth century, but it was not until the perfection of the internal combustion engine in the 1860s that real progress was possible. A vehicle designed by Karl Benz in 1885 is generally considered to be the first successful internal combustion automobile. The credit for the first successful vehicle in the United States is given to J. Frank Duryea, who built and operated a vehicle in 1893. Other automobile pioneers in this country were Henry Ford, who built and operated a car in 1896, the Dodge Brothers, Ransom Olds, and Alexander Winton. Only four cars were produced in 1895, but the industry grew rapidly, and by 1900 some eight thousand cars were registered. The first trucks appeared in 1904. By 1910 registrations had reached nearly one-half million, and by 1920 over eight million cars were registered in the United States.
Competition was particularly vigorous during the early days of the industry, and for a time it was doubtful whether the internal combustion automobile could overcome the competition of automobiles driven by steam and electricity. But gradually the advantages of the internal combustion engine became apparent, and by 1902 other forms of motive power were in decline. Entry into the industry was relatively easy in this period. In 1903 the Olds Company dominated the industry and accounted for 25 percent of the automobiles manufactured. By 1909, however, there were nineteen firms in the industry, and by 1914 this number had increased to seventy-one. Between 1903 and 1926, 181 companies entered the automobile field, and of these 136 failed. In 1926 only forty-four companies were left, and only eleven of them had been in business throughout the entire period.
Henry Ford was one of those who opened business in 1903, and he had about 9 percent of the market when he introduced the Model T in 1908. This car was almost immediately successful because it was rugged, dependable transportation well suited for the undeveloped roads of that period. Ford was also the first to use the assembly line mass-production technique to produce a low-cost car. As a result of his low unit cost and mass-production methods, Ford was able to reduce the cost of the Model T from the original price of $850 to $440 by 1920. By 1919 Ford had acquired 43 percent of the market, and in 1921 he was the first automaker to build and sell one million cars in a single year.
The full impact of the automobile on the nation’s economy was not felt until after World War I, but even in the prewar period the effects were significant. The rapid expansion in the number of automobiles called for more road mileage, and in 1916 Congress passed the Highway Act, which provided federal aid to the states for road building on a dollar-matching basis. Also, the automobile industry was beginning to exert a notable impact on the construction, oil, and rubber industries.
The production of motor vehicles grew into the country’s largest industry during the 1920s. By 1929 it accounted for 8 percent of the value of manufactured products and 5 percent of all of the country’s wage earners. Moreover, it contributed to the prosperity of many related industries. These included plate glass, steel, rubber, gasoline, lead, leather, and many others. The automobile industry also continued to contribute to the growth of road construction as well as service stations, garages, and motels. The impact of the automobile on the American economy, and on the nation’s social structure, has continued to the present day.
Automobile Overland, 1920.
The automobile and the industries it spawned or supported were all children of the Second Industrial Revolution. Their role in the history of the United States and the industrialized world cannot be overstated.
See also: Daimler, Gottlieb; Diesel, Rudolf; Ford Motor Company; Fordism; General Motors Corporation; Goodyear, Charles; Mitsubishi; Petroleum; Volvo (AB Volvo).
—Kenneth E. Hendrickson Jr.
Further Reading
Berger, Bennett. Working Class Suburbs: The Study of Auto Workers in Suburbia. Berkeley: University of California Press, 1960.
Epstein, Ralph C. The Automobile Industry: Its Economic and Commercial Development. New York: A. W. Shaw Company, 1928.
Georgano, G. N. Cars: Early and Vintage 1886–1930. London: Grange-Universal, 1985.
Howe, Irving, and B. J. Widick. The UAW and Walter Reuther. New York: Random House, 1949.
Jerome, Harry. Mechanization in Industry. Cambridge, MA: National Bureau of Economic Research, 1934.
Rae, John B. The Road and the Car in American Life. Cambridge, MA: MIT Press, 1971.