Chapter Eighteen

THE GREAT POSTWAR BOOM

IT WAS WIDELY PREDICTED by both economists and business leaders that the postwar American economy would be characterized by renewed depression. Federal government expenditures would be drastically scaled back (and in fact they fell by nearly two-thirds over three years) while most of the twelve million men and women in the armed services would pour into the job market, forcing down wages and driving up unemployment. Instead, proving that economists have uniquely clouded crystal balls, the longest sustained boom in American history was in the offing.

So worried was the government about renewed depression that it moved in 1944 to prevent it. On June 22 that year President Roosevelt signed the GI Bill of Rights (formally the Servicemen’s Readjustment Act), passed unanimously by Congress. Ostensibly it was intended to reward veterans for their bravery and sacrifice in defeating Germany and Japan. In fact, a major purpose was to slow the return of veterans to the job market. The GI Bill provided generous assistance to all honorably discharged veterans in paying for education and housing while in school, and in buying houses and businesses after school was completed.

The law of unintended consequences is usually invoked to explain the pernicious results stemming from well-intentioned legislative action, such as Prohibition. But the GI Bill, while perhaps moderately effective in smoothing the flow of GIs into the American economy, had in fact almost nothing but unintended consequences, and almost all of them were profoundly good for the country.

It allowed no fewer than eight million veterans to obtain more education, both in college and in technical schools, than they otherwise would have. It greatly enlarged the percentage of the population that had college degrees. In 1950 some 496,000 college degrees were awarded, twice the number of a decade earlier.

Between 1945 and 1952 the federal government spent $14 billion on GI educational benefits but added far more than that to the human capital that would power the postwar economy. Considered as a “public work,” the GI Bill proved to be the Erie Canal of the new, postindustrial economy that was then, quite unrecognized, coming into being.

The GI Bill also powered a social revolution. It opened up high-level jobs to many segments of the population that had rarely known such jobs before, thus greatly enlarging and diversifying the country’s economic elite, which had long been dominated by people with British or northwest European names. Because children in this country have historically received on average two years more schooling than did their parents, these benefits have continued generation after generation. Further, because the benefits of the GI Bill have been extended to veterans who served in subsequent wars, including the cold war, it has been a continuing engine of human capital creation and technological capacity for the last sixty years, allowing the country to dominate the new information economy as much as it had dominated the industrial economy of decades past.

The GI Bill of Rights also revolutionized housing in this country. Housing had been a growing problem in the 1920s and 1930s, when new construction fell short of the number of new families needing housing by six hundred thousand units. The war had brought housing construction virtually to a halt. With the end of the war, as veterans returned by the millions, married, and set off the baby boom, the pressure for new housing became intense.

Many New Dealers envisioned government-built or -sponsored housing in apartment complexes to be built on areas of cleared slums, such as Parkchester in New York’s borough of the Bronx, or housing built privately with government subsidies, such as Stuyvesant Town in Manhattan, owned and built by Metropolitan Life Insurance Company. Many of these would be built in cities across the country in the years after the Second World War, but they were almost always economic and social failures, simply and quickly evolving into high-rise slums that were often far worse than the slums they replaced.

The GI Bill of Rights provided for Veterans Administration mortgages, by which the Veterans Administration guaranteed, at first, half the mortgage up to $2,000. This was soon amended to allow guarantees of up to $25,000 or 60 percent of the loan, whichever was less. With little fear of loss through default, many banks were willing to make loans to veterans with no money down. All that was needed was the housing to buy.
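The guarantee was a simple whichever-is-less rule. As a rough illustration only, using a hypothetical loan of $8,000 (about the price of an early Levitt house, described below), the guaranteed portion would be

\[
\min(0.60 \times \$8{,}000,\ \$25{,}000) = \$4{,}800,
\]

well over half the loan, which is why lenders could afford to dispense with a down payment.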

Entrepreneurs such as William Levitt supplied it. Levitt had been a contractor before the war, erecting houses one by one, as single-family houses had always been built. With the VA mortgages, he saw opportunity. “The market was there,” he said years later, “and the government was providing the financing. How could we lose?”

He acquired 7.3 square miles of what had been Long Island potato fields in suburban Nassau County. His brother Alfred designed two basic models of houses, “ranch” and “Cape Cod,” and in four years he built fully 17,500 units of single-family housing by industrializing the process of construction. “What it amounted to,” Levitt explained, “was a reversal of the Detroit assembly line. There, the car moved while the workers stayed at their stations. In the case of our houses, it was the workers who moved, doing the same job at different locations.”

The first houses could be rented for $65 a month or purchased for $6,990, a figure soon raised to $7,990. By 1949 they could only be purchased. What the family got on a sixty-by-one-hundred-foot lot was a two-bedroom house on a cement slab, with living room, kitchen, and bathroom. An attic could be converted into two more rooms and a bathroom. The neighborhood consisted of hundreds or thousands of nearly identical houses. At first, it was nearly devoid of trees.

Intellectuals, with characteristic snobbery, were appalled. The social critic John Keats wrote a best-selling book called The Crack in the Picture Window, in which he mourned the fact that the inhabitants of these new suburbs, which began springing up around every American city in imitation of Levittown, “were not, and are not, to know the gracious dignity of living that their parents knew in the big two- and three-story family houses set well back on grassy lawns off shady streets.”

Of course, most of the people who moved into Levittown and its thousands of imitations had never known any such thing. Instead they had grown up in crowded, walk-up apartments in urban neighborhoods where parks were few and far between. To these people, the new suburbs were an affordable paradise.

Far more important economically, this new type of housing allowed millions of new families to have something their parents had never known: home ownership. Instead of paying rent, they were building equity. As family income rose with age and experience, they could trade up, using the equity in their old house to serve as the down payment on the new house. The GI Bill thus helped millions of families acquire not only better housing than their parents had ever dreamed of, but something else: capital, the financial assets that are the defining characteristic of the middle class in this country.

Further, once a family had some financial assets, it became much easier for it to obtain credit. Bank loans and charge accounts had been attributes of the rich before the Second World War; now they rapidly became an aspect of everyday life. In 1951 a banker named William Boyle, who worked for the Franklin National Bank headquartered in the middle of the burgeoning Long Island suburbs, came up with the idea of the credit card. It relieved merchants of the trouble and expense of maintaining their own charge accounts, allowed ordinary people to charge at numerous businesses, and provided the issuing banks with handsome profits by charging interest on unpaid balances.

The idea, as good ideas always do, spread rapidly. By the 1960s credit cards were common. In the early 1970s MasterCard and Visa gave credit cards national and soon international scope. Today they and their offspring, debit cards, are replacing cash in most transactions. Credit has become so ubiquitous and so important in everyday life that maintaining a good credit rating is now a great concern of most Americans. Having one’s credit cut off, and thus losing one’s access to the marketplace, is not altogether dissimilar today to what being excommunicated meant in the Middle Ages.

As the new suburbs grew explosively, many of the cities around which they grew declined in population. Except for New York, all the cities with major league baseball teams in 1950—which is to say the major cities of the northeastern quarter of the country—lost population in the decades after the Second World War, sometimes by as much as 50 percent. The population that remained was mostly poor and minority, often needing more services than the cities could provide. Within a few decades, the population of the suburbs exceeded that of the nation’s cities, and the suburbs had become the linchpin of American politics.

 

THE RETURNING VETERANS and their families, besides investing in real estate, also began, slowly at first, to invest in securities.

Wall Street had seen business seriously decline from the glory days of the late 1920s. Volume in 1929 had averaged 2.5 million shares a day. In 1939 it dropped below 1 million. There was no panic on Wall Street when war broke out in September that year, but rather, as elsewhere, a sullen acceptance. However, a three-year decline in both prices and volume began. In 1942 daily volume averaged a dismal 455,000 shares, while the Dow-Jones Industrial Average fell below 100 for what turned out to be the last time in April that year, even while corporate profits were soaring thanks to war orders.

Even after the war, prices on Wall Street lagged behind the fast-growing economy. On December 31, 1949, the Dow stood at 200, only twice where it had been in 1940, although the economy had nearly tripled and corporate profits had increased even more. Some blue-chip stocks were selling for a mere four times earnings and paying dividends of over 8 percent. But a revolution was already under way on Wall Street that would remake the brokerage business and the American economy as well.
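To see just how cheap that was, consider an illustrative blue chip earning $1 a share under those conditions:

\[
P = 4 \times \$1 = \$4, \qquad D \ge 0.08 \times \$4 = \$0.32, \qquad \frac{E}{P} = 25\ \text{percent},
\]

an earnings yield of 25 percent, with less than a third of earnings needed to cover the dividend.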

Charles Merrill, a southerner, arrived on Wall Street at the age of twenty-two, just in time to experience the panic of 1907. He opened his own firm in 1914 and two years later merged it with the firm owned by Edmund Lynch, to form Merrill Lynch and Company. (The partnership papers accidentally left out the comma between the two names, and it has been left out intentionally ever since.) In the 1920s he helped underwrite the stock issues of several chain stores. Early in 1929 he saw what was happening on the Street and urged his customers to get out and was himself largely out of the market when the crash came.

Correctly forecasting that the depression that began to develop in 1930 would be a long one, he sold his seat on the exchange and sold his firm to another brokerage house, E. A. Pierce and Company, becoming a limited partner there but not active in its management. He spent most of the 1930s consulting with the various chain stores he had helped underwrite, such as Western Auto and Safeway, and began to think about applying chain store techniques to the business of brokerage.

Most Wall Street brokerage firms at this time were small, family-owned, and uninterested in customers who had only small accounts. Research, such as it was, was casual at best, mere rumor gathering at worst. In 1940 Merrill took over as the senior partner at E. A. Pierce and Company, and the Merrill Lynch name reemerged on Wall Street. Merrill immediately began to create an entirely new kind of brokerage business. His customers’ men (soon renamed registered representatives) were thoroughly trained and provided with information gathered by a large research department.

In 1948 he began to advertise—unprecedented on Wall Street—to acquaint the average person with Wall Street and the investment opportunities to be found there. The ads discussed the mechanics of how stocks are bought and sold and the risks involved. They often made a subtle political point as well. When President Truman, running for another term in 1948, made a rabble-rousing reference to “the money changers,” Merrill replied in an ad. “One campaign tactic did get us a little riled,” he admitted. “That was when the moth-eaten bogey of a Wall Street tycoon was trotted out…. Mr. Truman knows as well as anybody that there isn’t any Wall Street. That’s just legend. Wall Street is Montgomery Street in San Francisco. Seventeenth Street in Denver. Marietta Street in Atlanta. Federal Street in Boston. Main Street in Waco, Texas. And it’s any spot in Independence, Missouri [Truman’s hometown], where thrifty people go to invest their money, to buy and sell securities.”

Merrill’s idea of bringing Wall Street to Main Street worked. By 1950 Merrill Lynch was the largest brokerage house in the country. By 1960 it was four times the size of its nearest competitor, with 540,000 accounts, and was already known on the Street—with a mixture of awe and envy—as “the thundering herd.” Other brokerage firms had no choice but to imitate Merrill Lynch’s business model, and the family firm, catering to a few rich clients, began to disappear on Wall Street.

While those investing directly in stocks remained a relatively small group, those who had indirect investments grew very rapidly. Pension programs for hourly wage employees had been nearly nonexistent in the 1920s (Sears Roebuck was a notable exception). But the Wagner Act made it possible for labor unions to insist on negotiating for them, and as an increasing number of people in corporate management came to favor the idea, it began to spread rapidly through corporate America in the 1940s.

Charles E. Wilson, the president of General Motors in the 1940s (and later Eisenhower’s secretary of defense), was told that if the money in such programs was invested in the stock market, workers would be the owners of American business in a few decades. “Exactly what they should be,” he replied.

By the 1950s pension funds, controlled by both corporations and unions, had become major players on Wall Street. In 1961, when the federal budget was less than $100 billion, noninsured pension funds held stock worth $17.4 billion and were making new investments at the rate of $1 billion a year. Mutual funds, which had first appeared in 1924, also began to play a larger and larger role on the Street, as people sought to invest in common stocks without having to make the actual decisions themselves about which ones to buy. Only $500 million was invested in mutual funds in 1940. Investment was five times as high a decade later, and, at $17 billion in 1960, nearly seven times higher still.

Finally, in 1954, the market, deeply undervalued, began to move up. It reached a new postdepression high on February 13, when it closed at 294.03, its highest since April 1930. By June it stood at 330, and in November it finally broke through its September 3, 1929, high of 381.17 after more than twenty-five years, the longest period between highs in the Dow-Jones in its 108-year history. The Great Depression was finally over, psychologically as well as economically.

 

THE BIGGEST PROBLEM of the postwar economy turned out to be not unemployment but inflation. Although GNP dipped slightly in 1946, as military orders fell from an annual rate of $100 billion in early 1945 to $35 billion a year later, it had recovered by year’s end and grew strongly thereafter.

The reason, clear in retrospect, was the vast pent-up demand for durable goods created by the war. Virtually no cars, no housing, and no appliances had been manufactured during the war. Those in use were nearing the end of their productive utility, and many were already far beyond that point. Further, the huge pool of personal savings that had accumulated during the war was there to pay for the goods demanded.

But it required time for the country’s industry to shift back from war production to consumer goods, while irresistible political pressure brought wage and price controls to a premature end in 1946. The result was a roaring inflation, the greatest in the peacetime history of the country up to that point, as nongovernmental spending rose by 40 percent while the supply of goods did not rise nearly so quickly. Farm prices rose 12 percent in a single month and were 30 percent higher by the end of that year. Automobile production, virtually nonexistent since 1942, reached 2,148,600 in 1946, but wouldn’t top 1929’s production until 1949.

Corporate profits in this intense sellers’ market rose by 20 percent, and labor unions demanded large increases in hourly wages and benefits. Strikes multiplied alarmingly once the wartime prohibition against them lapsed. In January 1946 fully 3 percent of the labor force, including workers in the automobile, steel, electrical, and meat-packing industries, was on strike. Never before (or since) had so many workers been on the picket lines. Many people thought labor had become too powerful and that the Wagner Act had pushed the pendulum too far in labor’s direction.

The new president, Harry Truman, took much of the blame for the economic disruption of the immediate postwar era, and “to err is Truman” became a national joke. In the off-year campaign of 1946 the Republicans ran on the slogan “Had enough?” and for the first time since 1928 won a majority in both houses of Congress. Truman would famously label it the “do-nothing Eightieth Congress,” but it produced at least one piece of major legislation, the Taft-Hartley Act. Taft-Hartley, unlike the Wagner Act, allowed employers to fully inform their workers on the company’s position regarding the issues in an election to certify a union, as long as they used no threats. It also allowed management to call an election on its own if it chose to do so and forbade unions to coerce workers or to refuse to bargain, just as the Wagner Act had forbidden management to do so.

Secondary boycotts, a powerful weapon in labor’s arsenal, were outlawed, as was the closed shop (where workers have to be members of the union before they can be hired). Union shops (where workers must join the union upon being hired) required a vote of the workers, and states were allowed to outlaw the union shop. Most visibly, Taft-Hartley gave the president the power to interrupt a strike by calling an eighty-day cooling-off period while government mediators sought a settlement.

Labor, of course, fought Taft-Hartley tooth and nail, and President Truman vetoed it, calling it “shocking—bad for labor, bad for management, bad for the country.” Congress overrode the president’s veto. But as so often happens, the problem the legislation was meant to address, the great upsurge in strikes immediately following the war, was already correcting itself. In 1946 a total of 125 million man-days had been lost to strikes. Over the next three years the annual figure averaged only 40 million.

And Truman was wrong; Taft-Hartley proved good for the country. With a more level playing field and the great prosperity of the times, labor and management learned how to be less confrontational and to work better toward achieving a just division of the wealth created by corporations and their workers. By 1992, in a vastly larger economy and workforce, fewer than four million man-days were lost to strikes. And while Taft-Hartley was often invoked to end strikes in the two decades after the law was first passed, and by both Democratic and Republican presidents, it has been invoked only once in the last quarter century.

By 1992, of course, the economic universe that had brought the modern labor movement into being was rapidly vanishing. The percentage of the workforce that was unionized peaked in 1945 at 35.6 percent and has been declining ever since. By 1960 it was only 27.4 percent of the nonfarm workforce. Today it stands at only 14 percent and would be lower than it had been in 1900 were it not for the spread of unionization among government workers, which began only in the 1960s.

The heart of the old union movement had been in manufacturing, among the everybody-does-the-same-job assembly line workers. But just as agriculture, the country’s first great economic sector, has continually increased output while using an ever-declining percentage of the workforce, so has the country’s second great economic sector, manufacturing. The age-old American drive to increase productivity and thus minimize labor continues unabated.

 

ALTHOUGH MANY NEW ENGLAND cotton mills had moved to the Piedmont area after the turn of the twentieth century, to take advantage of the cheap labor available in the area, the South was still overwhelmingly agricultural at mid-century, and unions had never been strong there. Permitted by Taft-Hartley to do so, many southern states outlawed the union shop, ensuring that the labor movement would remain relatively weak.

Corporations, always looking to minimize labor costs, began building more factories in these states. But economic growth in the South was still impeded by two major factors: its summer climate, which nonnatives found very difficult to deal with, and the enduring and bitter legacy of racism.

The first factor was solved with air-conditioning. A primitive but working air-conditioning system had been built as early as 1842, in Apalachicola, Florida, to cool a hospital. Early in the twentieth century, Stuart Cramer and Willis H. Carrier, working independently, developed practical air-conditioning systems that could be manufactured on an industrial scale. Their use in large commercial buildings, such as theaters and department stores, began in the early 1920s. The development of Freon, a highly stable and efficient refrigerant, in 1930 brought the running cost of air-conditioning down substantially and allowed it to spread rapidly.

After the Second World War most public and office buildings were designed with air-conditioning systems, and small air-conditioning units developed for railroad cars were adapted for home and automobile use. By the 1960s air-conditioning was standard equipment in middle-class houses in the South and was spreading rapidly in all areas of the country with hot summers.

Racism, needless to say, was a far more difficult problem to deal with. Franklin Roosevelt, who needed southern votes to enact his economic and foreign policy programs, did little if anything to end the rule of Jim Crow. It was his successor, Harry Truman, who began the battle for equal rights, ordering the integration of the armed forces in 1948. In 1954 the Supreme Court unanimously overturned the separate-but-equal doctrine that allowed segregation, and ordered the integration of schools and other public facilities “with all deliberate speed.”

Meanwhile, one of the most remarkable political movements in American history, epitomized by the Reverend Martin Luther King, Jr., began to demonstrate peacefully for the end of racial discrimination in the marketplace as well as in the law. It took ten years of ever larger demonstrations and courage in the face of sometimes brutal repression, but finally, in 1964, the Twenty-fourth Amendment outlawing the poll tax and the Civil Rights Act were passed. The following year the Voting Rights Act was also passed by Congress. They put the federal government, and the nation as a whole, squarely behind the doctrine of racial equality.

Although at the time it seemed that the fight for equal rights had just begun, in fact, the battle was won. With the rapidly increasing political power of blacks in the South, politicians began attending to their interests, including even die-hard segregationists such as George Wallace and Strom Thurmond. Within a decade, Jim Crow was largely an ugly memory, and the most divisive and shameful aspect of American life was expunged from the body politic and increasingly from the hearts and minds of Americans as well.

With its great economic advantages, especially low land and labor costs, the South began to grow and modernize rapidly. As it developed increasingly into a First World economy, its population grew likewise, as did its national political influence. In 1940 the eleven states of the old Confederacy had had 25 percent of the electoral votes. Today they have 35 percent. The Civil War was over at last.

 

THE INTERNATIONAL EUPHORIA of V-E Day (May 8, 1945) and V-J Day (August 15) that greeted the return of peace in Europe and Asia did not continue long. Even before the war was over, it was becoming obvious that the Soviet Union did not intend to live up to its agreements regarding postwar Europe. On April 30, 1945, the day that Hitler committed suicide, President Truman, not yet even moved into the White House, summoned the Soviet foreign minister, Vyacheslav Molotov, who was in Washington, to a meeting. He told him in no uncertain terms that the Soviet Union must carry out its agreements regarding Poland, including free elections. Molotov was stunned by the tongue-lashing, but it did no good. By the end of the year Soviet forces were in firm control of most of eastern Europe and clearly had no intention of withdrawing.

The Soviet Union soon began to pressure Turkey to make concessions to its interests, and it supported Communist guerrilla movements in Greece and elsewhere.

By early spring 1946 the wartime alliance between the Western democracies and the Soviet Union had been shown to be, at best, a matter of “the enemy of my enemy is my friend.” With the collapse of Nazi Germany, the alliance collapsed as well. On March 5, 1946, in Fulton, Missouri, Winston Churchill delivered his famous “Iron Curtain” speech with President Truman in the audience. It was clear that a new confrontation between Great Powers had begun and that while the period between the first and second world wars had lasted twenty years, the new peace had hardly lasted one. Worse, the possibility of atomic war—out of which there could come no winners—was frighteningly real.

The Communist tyranny in the Soviet Union was every bit as odious as that of the Nazis in Germany, and, as with all tyrannies, every bit as aggressive. How to confront it was the question. The United States was overwhelmingly powerful militarily, but it had been demobilizing as quickly as possible since the end of the war. There had been more than twelve million people in the armed forces in 1945. By 1947 there were fewer than two million. Thousands of ships and planes had been scrapped or sold to other countries. And while the United States still had a monopoly on the atomic bomb (the Soviets would not explode their first one until September 1949), the country as yet had very few of them.

In early 1947 Britain, which had been aiding Greece against the Communist insurgency and aiding Turkey as well, told the United States that it could no longer carry the burden financially. Truman felt that the United States had no option but to take over Britain’s role in this area, for otherwise these countries were almost certain to fall under Soviet domination.

The United States decided to fight this third Great Power confrontation of the twentieth century, soon dubbed the cold war, in a new way: with money instead of bullets. It would contain the Soviet Union with alliances and with sufficient forces to deter an attack, but would place most of the emphasis on reviving and enlarging the economies of potential victims of Soviet aggression.

On March 12 Truman addressed a joint session of Congress and announced what quickly came to be called the Truman Doctrine. “I believe it must be the policy of the United States,” he told Congress, “to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressure.” And he said that this assistance would be “primarily through economic and financial aid.”

There was little opposition. The former arch-isolationist senator from Michigan, Arthur Vandenberg, now chairman of the Senate Foreign Relations Committee, told Truman after being briefed, “If you will say that to the Congress and the country, I will support you and I believe most of its members will do the same.”

In June, at a commencement address at Harvard, General George C. Marshall, Army chief of staff during the war, now the secretary of state, proposed what would be called the Marshall Plan, one of the most extraordinary, and extraordinarily successful, acts of statesmanship in world history. It called for European nations, including the Soviet Union, to cooperate in the economic recovery of the Continent, with the United States providing the capital necessary. Stalin quickly rejected the idea, as did the countries in eastern Europe under his control, but he inadvertently helped sell it to other European countries by engineering a coup in Czechoslovakia in early 1948.

In the next several years the Marshall Plan provided Europe with $13 billion and helped especially to get the economies of West Germany, France, and Italy back on their feet. However, the Marshall Plan aid was but a small fraction of American foreign aid in these years. Between 1946 and the early 1970s, when the foreign aid program began to wind down, the United States spent about $150 billion on economic aid to foreign countries. About one-third of this went to Europe, the rest to Asia, Latin America, and elsewhere.

Like Lend-Lease, it was both extraordinarily generous (indeed, it was entirely unprecedented in world history for a dominant power to help its potential economic rivals build their economies) and a perfect example of enlightened self-interest on a massive scale. That free trader, Adam Smith, would have approved. With half the world’s GNP, the United States was running large export surpluses. But foreign countries, many of them economically devastated, could pay for imports only by exporting in turn to the United States.

With the help of international organizations founded as a result of the Second World War, such as the World Bank, the International Monetary Fund, and the General Agreement on Tariffs and Trade (GATT, now the World Trade Organization), the United States worked to establish a new world trading system and to lower tariffs across the globe, increasing trade to the mutual benefit of all. The result was, once again, a spectacular success. World trade increased by a factor of six in the fifteen years after the war, greatly strengthening the economies of all involved. And the trend has continued unabated. In 2000 total world trade was 125 times the level of 1950, equaling an astounding $7.5 trillion, including both manufactures and services. Free trade has proved the greatest engine of economic growth the world has ever known.

 

IN 1946 CONGRESS PASSED the Employment Act. It established the president’s Council of Economic Advisors as well as the Joint Economic Committee of Congress. Most importantly, it declared it the policy of the federal government to maximize employment, production, and purchasing power. Largely forgotten today—because its purposes are now so taken for granted—the Employment Act of 1946 was revolutionary in its time.

Until the 1930s governments were thought to have only very limited economic responsibilities. It was, certainly, the business of government to maintain a money supply that kept its value and to help in the enforcement of contracts. But government was not thought to have any more responsibility for the business cycle as a whole than for the weather. The Great Depression and the most influential economist since Adam Smith changed that.

John Maynard Keynes (Lord Keynes after 1944) was born in Cambridge, England, and would be associated with the university there all his life. He studied under Alfred Marshall, the leading economist of the previous generation, whose Principles of Economics, first published in 1890, was immensely influential in the profession. Marshall had trained as a mathematician and physicist before switching to economics. His conception of a national economy was what Keynes called “a whole Copernican system, by which all the elements of the economic universe are kept in their places by mutual counterpoise and interaction.”

Keynes first rose to fame when he attended the Versailles Peace Conference in 1919 with the delegation from the British treasury and, outraged with the peace treaty that emerged, wrote a book called The Economic Consequences of the Peace, which proved to be all too prescient. In 1936 he published his masterpiece, The General Theory of Employment, Interest, and Money, to explain the origins of the Great Depression and why it persisted.

Before Keynes, economists had been mostly interested in what is today called microeconomics, the myriad allocations of resources that determine prices and affect supply and demand in the marketplace. Keynes was interested in macroeconomics, how aggregate supply and demand affect national economies. Keynes argued that supply and demand must be equal in the long run, but, as he noted in his most famous aphorism, “in the long run we are all dead.” In the short run, supply and demand can be out of balance. If there is too much supply, depression results; too much demand, and inflation breaks out.

Keynes felt that government, by running a deliberate deficit in times of slack demand (or cutting taxes) and expanding the money supply, could prevent depression. Equally, by doing the opposite—running surpluses and raising interest rates—governments could keep booms in check. Economists took to Keynesian ideas immediately. For one thing, these ideas were immensely exciting intellectually, seemingly a powerful tool for preventing such economic disasters as they had all just lived through.

Of course, Keynesianism also made economists important as they had never been before. Politicians, before Keynes, had not needed economists to help them govern, any more than they had needed astronomers. After Keynes, they were vital, hence the President’s Council of Economic Advisors. The quality and consistency of the advice, however, were equivocal. Truman, the first president to have such formal economic advice, joked that what he needed was a one-armed economist because the ones he had were always saying, “on the one hand…but on the other hand.”

The first postwar presidents, Truman and Eisenhower, had been born in the nineteenth century and educated in pre-Keynesian economics. They remained skeptical. Eisenhower’s secretary of the treasury, George Humphrey, expressed the attitude of the pre-Keynesian generation perfectly when he said, “I do not think you can spend yourself rich.” While neither president paid down the national debt to any appreciable degree (in fact it increased under their tenures), neither did they allow it to rise significantly. Half the budgets of Eisenhower and Truman were in surplus, including two during the Korean War. Because the economy grew strongly in the fifteen years after the war, the debt as a percentage of gross national product declined precipitously. It had been nearly 130 percent of GNP in 1946. By 1960 it was a mere 57.75 percent.

When Kennedy became president in 1961, however, full-blown Keynesianism was adopted. Walter Heller, a professor of economics at the University of Minnesota before becoming Kennedy’s chairman of the Council of Economic Advisors, talked about being able to “fine tune” the national economy. In other words, he wanted government to be the “engineer” driving Alfred Marshall’s economic machine. He proposed budgeting based on what he called a “full-employment budget”: the budget should spend what revenues the government would be receiving if the economy was operating at an optimum level. If the economy was at that level, the budget would be in balance; if below it, the budget deficit would automatically stimulate the economy, driving it toward the optimum.
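A minimal sketch of Heller’s rule in code may make it concrete. The figures and the single flat “effective tax rate” below are entirely hypothetical, standing in for the real revenue system:

def full_employment_budget(potential_gnp, actual_gnp, effective_tax_rate):
    """Heller's rule: spend what revenue WOULD be at potential output."""
    spending = effective_tax_rate * potential_gnp      # budgeted outlays
    actual_revenue = effective_tax_rate * actual_gnp   # what actually comes in
    deficit = spending - actual_revenue                # zero when output is at potential
    return spending, actual_revenue, deficit

# An economy running 4 percent below potential (hypothetical numbers, in dollars):
spending, revenue, deficit = full_employment_budget(
    potential_gnp=600e9, actual_gnp=576e9, effective_tax_rate=0.18)
print(f"deficit: ${deficit / 1e9:.1f} billion")        # about $4.3 billion

The deficit shrinks automatically to zero as actual output approaches potential, which is the stimulate-then-balance behavior Heller had in mind.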

Money, of course, is “the mother’s milk of politics.” Once politicians had intellectually defensible reasons for spending in deficit, the political pressure to do so began to increase relentlessly. As a result, the national debt rose by a third in the decade of the 1960s, the first time the debt had risen significantly during a time of peace and prosperity. But because of that prosperity (and a gathering inflation), the debt-to-GNP ratio continued to decline, dropping below 40 percent by 1969.
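Taking the chapter’s own figures at face value, the arithmetic shows why the ratio could fall even as the debt grew. If the debt stood at about 57.75 percent of GNP in 1960 and then rose by a third, while the ratio fell below 40 percent by 1969, then

\[
\frac{1.33 \times 0.5775 \times \mathrm{GNP}_{1960}}{\mathrm{GNP}_{1969}} < 0.40
\quad\Longrightarrow\quad
\frac{\mathrm{GNP}_{1969}}{\mathrm{GNP}_{1960}} > 1.9;
\]

nominal GNP, swollen by both real growth and inflation, had to roughly double over the decade, outrunning the growth of the debt itself.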

Kennedy, relatively conservative, moved to stimulate the economy by means of tax cuts rather than increased spending, cutting rates, especially in the highest brackets, which fell from 91 percent to 70 percent. When enacted shortly after his death, the cuts worked as intended. Between 1963 and 1966 the economy grew between 5 and 6 percent a year, while unemployment dropped from 5.7 percent to below 4 percent.

In the twenty years between the end of the Second World War and the mid-1960s, the American economy almost doubled in real terms, GNP growing from $313 billion to $618 billion (both figures in 1958 dollars), while inflation remained low. Many foresaw an extended period of prosperity with continued “fiscal dividends” as government continued to cut taxes (or increase spending) as revenues rose. The stock market, mired below its 1929 high for a quarter century, rose steadily after finally breaking through that high in 1954 and was approaching 1,000 on the Dow by the mid-sixties. After two decades of continuing economic boom (interrupted only by three recessions so short and so shallow they would have gone unnoticed in a less statistics-drenched age), it seemed the dream of permanent prosperity, dashed by the Great Depression, was at hand once more.
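As a rough check on that “almost doubled” figure, the implied compound rate of real growth over those roughly twenty years is

\[
\left(\frac{618}{313}\right)^{1/20} \approx 1.035,
\]

about 3.5 percent a year, sustained for two decades.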

It was not to be.