CHAPTER ONE
JANE IS AN ASSISTANT in a large nonprofit research organization, where she has worked for the past thirty-two years. She was an excellent typist in school and took a few courses in business practice. After spending a semester in college, she decided that the cost of an undergraduate education was not worth the benefits; jobs for typists were plentiful, and the money seemed attractive. Her first job was with the nonprofit, where she initially worked for two bosses. Her primary tasks were to type up reports and research papers, file the enormous amount of paperwork that kept accumulating, and answer the phones.
Over the years, many of those who started in positions similar to Jane's have lost their jobs. The advent of the computer—first the mainframe, then the personal computer—eliminated much of the routine work of assistants. Midlevel supervisors and managers learned to type their own documents. Presentations and basic analysis were outsourced, sometimes to far-off countries, where workers did what was necessary overnight. Most files became electronic, stored on disks rather than in physical cabinets. And as Jane's bosses turned to communicating by e-mail, phone calls became rarer and rarer: they were not in a fast-moving business requiring constant verbal contact with their clients. As a result, Jane's secretarial job too became endangered, and eventually she was laid off.
Jane, however, survived the onslaught of the machines, largely by reinventing herself. She quickly found another job within the organization. She has become a sort of “fixer” for her new bosses, taking on tasks that they have little time or capacity to handle—such as picking the restaurant and ordering the menu for an office dinner, inviting speakers to the organization and managing their schedules, heading off irate clients and ensuring their problems are dealt with, or following up with an obdurate office accountant questioning a bill submitted by one of her bosses. Because Jane has transformed herself into one who takes care of the unusual tasks, ones that machines cannot handle, she has to report to more bosses—nine at last count. Work is exhausting, because demands come from all sides, but Jane is thankful she still has a job. And it is more interesting now.
Jane's bosses have benefited hugely from the revolution in computers and communications. The research papers and articles they write receive much wider circulation. In the past they had to be photocopied and sent by mail to a small list of the truly interested, but today they are uploaded to a website and quickly seen by many. Their presentations are more colorful and their seminars more interesting, which means that their audiences pay closer attention when they speak. They routinely field requests from strangers who have come across their work somewhere on the Web, to speak, consult, or give expert testimony.
Thus advances in technology have wide-ranging effects across the population. The routine tasks done by secretarial and clerical workers like Jane, typically those with a high school education and perhaps even with some college experience, have been automated. But the nonroutine, creative tasks typically undertaken by those with advanced degrees have been aided by technology. From CEOs, who can see their firm's latest inventory position by tapping on a few keys, to analysts and consultants, whose reports can be accessed around the world, the influence and reach of the skilled and the creative have increased.1 Technology has increased their productivity even while rendering others redundant.
Typically, however, technological advance is a good thing for everyone in the long run. It eliminates drudgery while giving the worker the time and capacity to make use of her finer talents. We are surely better off posting a document on an accessible website than asking a clerical worker to affix thousands of stamps and destroy so many trees to send physical mail that will ultimately be thrown away. But in the short run, technological advances can be extremely disruptive, and the disruption can persist into the long run if people do not have the means to adapt.
America has adapted to technological change before. As agriculture gave way to manufacturing in the mid-1800s, the elementary school movement in the United States created the most highly educated population in the world. As factory work became more sophisticated, and as demand grew for office workers to handle myriad activities in the emerging large, multidivision firms, the demand for workers with high school training increased. The high school movement took off in the early part of the twentieth century and provided the flexible, trained workers who would staff America's factories and offices. In 1910, fewer than one-tenth of U.S. workers had a high school diploma; in the 1970s, when Jane started her career, more than three-quarters did.2
Although earlier episodes of adaptation were very successful, the next phase of the race between technology and education, as the Harvard economists Claudia Goldin and Lawrence Katz have put it, has been far less satisfactory in the United States. Recent technological advances now require many workers to have a college degree to carry out their tasks. But the supply of college-educated workers has not kept pace with demand—indeed, the fraction of high school graduates in every age cohort has stopped rising and has even fallen slightly since the 1970s.3 Those who are fortunate enough to have bachelor's and advanced degrees have seen their incomes grow rapidly as the demand for graduates exceeds supply. But those who don't—seven out of ten Americans, according to 2008 Census Bureau data—have seen relatively stagnant or even falling incomes.4
Faced with a weak safety net and continuing uncertainty about jobs that could easily be eliminated by the next technological advance or wave of outsourcing, many Americans find it hard to feel optimistic about the future. Although Americans have, by and large, been flexible in their search for opportunity—willing to uproot themselves and travel across the continent to take a new job—the demands on them are far greater now. Many have to go back to school to remedy a deficient high school education before they can derive the full benefit of further education, all for distant and uncertain job prospects. Some lack the fortitude and strength of purpose to do so; others simply do not have the resources. For a single mother of two, for example, who is barely making ends meet with two low-paying jobs, further education is simply not a feasible option.
The gap between the growing technological demand for skilled workers and the lagging supply because of deficiencies in the quantity and quality of education is just one, albeit perhaps the most important, reason for growing inequality. The reasons for rising inequality are, of course, a matter of much debate, with both the Left and the Right adhering to their own favored explanations. Other factors, such as the widespread deregulation in recent decades and the resulting increases in competition, including competition for resources (such as talent), the changes in tax rates, the decrease in unionization, and the increase in both legal and illegal immigration, have no doubt all played a part.5 Regardless of how the inequality has arisen, it has led to widespread anxiety.
Many have lost faith in the narrative of America as the land of unbounded opportunity, which in the past created the public support that made the United States a bastion of economic freedom. Politicians, always sensitive to their constituents, have responded to these worrisome developments with an attempt at a panacea: facilitating the flow of easy credit to those left behind by growth and technological progress. And so America's failings in education and, more generally, the growing anxiety of its citizenry about access to opportunity have led in indirect ways to unsustainable household debt, which is at the center of this crisis. That most observers have not noted these links suggests this fault line is well hidden and therefore particularly dangerous.
Incomes in the United States, of which wages constitute the most important component, have been growing more unequal. The wages of a 90th-percentile earner—that is, a person earning more than 90 percent of the general population—increased by about 65 percent more over the period 1975–2005 than the wages of a 10th-percentile earner. (This difference is known as the 90/10 differential.) In 1975, the 90th percentile earned, on average, about three times as much as the 10th percentile; by 2005 they earned five times as much.6 All of this growth was concentrated at the top: the wages of those in the middle relative to those at the 10th percentile have not gone up anywhere near as much as the wages of the 90th percentile have grown relative to those in the middle.
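As a rough consistency check on these two numbers (my own back-of-the-envelope arithmetic, not a figure from the underlying study): if the ratio of 90th-percentile to 10th-percentile wages rose from about 3 to about 5, then

\[
\frac{w_{90,2005}/w_{90,1975}}{w_{10,2005}/w_{10,1975}} = \frac{(w_{90}/w_{10})_{2005}}{(w_{90}/w_{10})_{1975}} \approx \frac{5}{3} \approx 1.67,
\]

that is, 90th-percentile wages grew roughly two-thirds more than 10th-percentile wages over those thirty years, broadly consistent with the 65 percent figure.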
Many commentators, both in academia and in the popular press, have focused on the income gains made by the top 1 percent or even the top 0.01 percent of earners, perhaps because it is more customary to look up than down. I believe the more troublesome trend for the United States is the 90/10 or 90/50 differential, which reflects the changes most Americans experience.
Much of the 90/10 differential can be attributed to what economists call the “college premium.” The ratio of the wages of those who have only a bachelor's degree to the wages of those who have only a high school degree has risen steadily since 1980. The 2008 Current Population Survey by the Census Bureau indicated that the median wage of a high school graduate was $27,963, while the median wage of someone with an undergraduate degree was $48,097—about 72 percent more. Those with professional degrees (like an MD or MBA) earn even more, with a median wage of $87,775.7 That the 90/10 differential is largely due to the college premium also explains why the 50/10 differential has not moved as much—neither the 50th-percentile earner nor the 10th-percentile earner has been to college. In fact, the 50th percentile typically consists of white-collar workers like Jane and her colleagues, who have been most squeezed by the technological change.
Why has the college premium increased? One view is that it is because technology has become even more demanding of skills, reflecting what economists term “skill-biased technical change.” But Goldin and Katz argue that the pace of technological change and its demand for greater skills have been relatively steady: the automobile and the airplane were as disruptive to lifestyles at the beginning of the twentieth century as the Internet and organizational change were at the end. Rather, what has changed is the supply of the educated. Between 1930 and 1980, the average years of schooling among Americans age 30 or older increased by about one year every decade. Americans in 1980 had 4.7 years more schooling on average than Americans in 1930. But between 1980 and 2005, the pace of increase in educational attainments was truly glacial—only 0.8 years over the entire quarter century.8
In part, the reason for the slower increase in supply has been the relative stagnation of high school graduation rates. Although the United States has historically led the world in the fraction of the population with high school degrees, that fraction has not increased since 1980, and other countries have caught up with and surpassed the United States. Moreover, while more and more Americans in the 20–24 age group are going to college (61 percent in 2003, up from 44 percent in 1980), no doubt in part attracted by the potential boost to wages, college graduation rates have not kept pace: too many students like Jane are dropping out of college despite an increasing college premium over time. College graduation rates for young men born in the 1970s are no higher than for men born in the 1940s—a shocking fact when one considers how much greater demand there is now for workers with college degrees.9
One possible explanation of the relative stagnation in education is that there might be a natural limit to how much education a population can absorb. After all, not everyone has the aptitude or inclination to write a PhD thesis. If that is the case in the United States, however, the rest of the world does not seem to sense such a limit. Despite leading the world in the past, the United States has fallen behind twelve other rich countries in four-year-college graduation rates.10 When we also note that its high school graduation rates put it in the bottom third among rich countries, we can see why the United States is falling behind both its own historical record and its competitors.
Finally, wages are not the only component of income. Income from property such as stocks and bonds also adds to overall income, while taxes subtract from it. Interestingly, even for the richest 0.01 percent of Americans toward the end of the twentieth century, 80 percent of income consisted of wages and income from self-owned businesses, and only 20 percent consisted of income from arm's-length financial investments.11 This ratio is in stark contrast to the pattern in the early part of the century, when the richest derived most of their income from property. The rich are now the working rich—whether they are entrepreneurs like Bill Gates or bankers like Lloyd Blankfein of Goldman Sachs—instead of the idle rich. At a time when wealth seems to be within the grasp of anyone who can get a good job, it is all the more unfortunate that so many Americans, because of their poor education, are locked out of the productive jobs that would make them better off.
I have used the term education so far, even when I refer to employability, but a better term is human capital, which refers to the broad set of capabilities, including health, knowledge and intelligence, attitude, social aptitude, and empathy, that make a person a productive member of society. Formal education plays perhaps the most important role in forming an individual's human capital, but family, community, and employers also play important parts. In what follows, I focus on education, but I also refer to these other elements, especially in Chapter 9, when I turn to remedies.
Education plays a far greater role than simply improving an individual's income and career prospects: it has intrinsic worth of its own, allowing us to make use of our finer faculties. In addition, studies show that the educated typically take better care of their own health, are less prone to indulging in criminal activities, and are more likely to participate in civic and political activities. Moreover, they influence their children to do the same, so that their education has beneficial effects on future generations also. So as it falls behind in education, America is diminishing the quality of its society.
Why is the educational system failing the United States? With a university system that is still considered the best in the world, and one that attracts students from every corner of the globe, the failure clearly does not lie in the quality of university education. Instead, my earlier discussion suggests three obvious problems. First, the quality of the learning experience in schools is so poor that far too many students drop out before completing high school. Second, in a related vein, even among those who graduate from high school, many are unprepared for the rigors of university education. Finally, as the college premium increases, the cost of higher education also increases: it is a service provided by the well-educated, whose own wages rise with the premium, and one that has seen very small increases in productivity over the years (college class sizes have not increased dramatically at my university despite all the improvements in communications technology, though the learning experience has probably improved). Despite attempts to expand financial aid, a quality education at a private university is passing beyond the reach of even middle-class families. And with tight state budgets, even state schools are raising fees significantly.
Of course, learning does not take place only in the classroom. Differences in aptitude for education emerge in early childhood as a result of varying nutrition, learning environments, and behavioral expectations. The family matters immensely, as do the kind of role models children want to emulate and the attitudes their friends have. At my daughter's university-affiliated school, the smartest kid in class is pushed to excel and is secretly admired even if she does not belong to the popular set. Advanced students take university courses in high school and even sign up for research projects with professors. However, in too many schools in America, being smart can be positively dangerous, as children resent and set upon those who dare to try to escape the trap of low expectations. Here again, advantage breeds more advantage. The rich can afford to live in better neighborhoods, can give their children the health care and nutrition that allow them to grow up healthy, and can hire tutors and learning aides if their children fall behind. Even dysfunctionality hurts children less if their parents are rich. As the political analysts Ross Douthat and Reihan Salam put it: “The kids in Connecticut prep schools smoked pot and went on to college like their parents; kids in rural Indiana smoked meth and dropped out; kids in the South Bronx smoked crack and died in gang wars.”12
Family instability, too, is harder on poor children. Poor, less well-educated couples are more likely to break up, and when that happens the economic consequences are more severe than for the well-off: the costs of maintaining two households, shuttling children between the two parents, and paying for child care eat up a much bigger fraction of the poor parents’ income, leaving less for other basic necessities, let alone counseling and remedial tutoring to help devastated children cope with the breakup. Divorce therefore affects the children's health and schooling far more in a poor family than in a rich one. Inequality thus tends to perpetuate itself through the social environment.
We do not need to get into the moral issues surrounding extreme inequality to understand that it is a thoroughly undesirable state of affairs. To the extent that it is caused by a significant part of the population's not being able to improve themselves because of lack of access to quality education, it signifies tremendous inefficiency. A mind is a terrible thing to waste, and the United States is wasting too many of them.
Differences in educational attainment in the face of rising technological demand for skills are, of course, only one reason for the growing inequality. There are other reasons why measured inequality might rise.13 Rising inequality in the United States in the past three decades coincides with a period of deregulation. Increasing competition does increase the demand for talented employees, thus increasing the dispersion of wages within any segment of the population. In general, this widens inequality, although by raising the cost of discriminating against the poor but talented, competition can also narrow it. Deregulation can also lead to more entry and exit of firms, which increases the volatility of each worker's earnings: an entrepreneur who earns nothing for a few years and then makes millions adds to both the bottom and the top of the distribution in different years. (So does a penurious graduate student before becoming a well-paid professor!) These effects may account for up to one-third of the increase in measured inequality.14
Greater immigration and trade have also played a part because immigrants, competing directly for unskilled jobs, and unskilled workers far away, competing through trade, have both served to hold down wages of unskilled U.S. workers. Most studies see the magnitude of these effects as small.15 Unskilled immigrants have, however, contributed to inequality in a different way. They typically occupy the bottom of the income distribution and thus contribute to measured inequality.16 Paradoxically, although their incomes are often higher than their incomes in the home country, they swell the ranks of those who appear down and out in America.
The reduction in the punitive postwar marginal tax on high incomes (from a top rate of 91 percent through much of the 1950s and 1960s, via a number of ups and downs, to 35 percent at the time of writing) has increased incentives to earn higher incomes and may thus have contributed to the growing entrepreneurship and inequality.17 The weakness of unions may also have reduced moderately educated workers’ bargaining power, though the loss of high-paying unionized jobs probably has more to do with increased competition and entry as a result of deregulation, as well as competition from imports. A relatively stagnant minimum wage has certainly allowed the lowest real wages to fall (thereby also ensuring that some people who would otherwise be unemployed do have a job), though only a small percentage of American workers are paid the minimum wage. Finally, the entry of women into the workforce has affected inequality as well. Because the well-connected and the highly educated tend to mate more often with each other, “assortative” mating has also helped increase household income inequality.
The reasons for growing income inequality are, undoubtedly, a matter of heated debate. To my mind, the evidence is most persuasive that the form of inequality I find most worrisome, the increasing 90/10 differential, stems primarily from the gap between the demand for highly educated workers and their supply. Progressives, no doubt, attribute substantial weight to the antilabor policies followed by Republican administrations since Ronald Reagan, whereas conservatives attribute much of the earlier wage compression to anticompetitive policies followed since Franklin Roosevelt. Neither side would, however, deny the importance of differential educational attainments in fostering inequality.
Americans have historically not been too concerned about economic inequality except when it becomes extreme—as it did toward the end of the nineteenth century. Through a variety of means such as antitrust laws and estate taxes, they have ensured that wealth generated from corporate ownership does not become so highly concentrated that it upsets the distribution of political power. The government has repeatedly intervened to limit the power of banks—as in Andrew Jackson's fight to close the Second Bank of the United States (after he accused it of political meddling), the creation of the Federal Reserve in 1913 so that banks had an alternative to J. P. Morgan as the lender of last resort, and the Glass-Steagall Act of 1933, which broke up the most powerful banks. Similarly, through antitrust investigations, most famously against John D. Rockefeller's Standard Oil and Bill Gates's Microsoft, the government has sought to rein in the power of big business. But with the exception of some episodes—for example, during the Great Depression—the government and the public have not been strongly predisposed toward punitive taxation of the rich to achieve a more equitable distribution of income.
“Soak the rich” policies have seldom been popular among the less well-off in America, not necessarily because they have great sympathy for the rich but perhaps because the poor see themselves eventually becoming rich: Horatio Alger's stories of ordinary people attaining great success in the land of limitless opportunity had broad appeal.18 Although such optimism may always have been unrealistic, the gulf between the possible and the practical might have been small enough in the past that Americans could continue dreaming. According to the World Values Survey, 71 percent of Americans believe the poor have a good chance of escaping poverty, while only 40 percent of Europeans share this belief.19 These differences are particularly surprising because cross-country studies suggest that people in the United States are not much more mobile across income classes than people in European countries, and indeed the bottom 20 percent of earners may be unusually immobile in the United States.20 Nevertheless, the idea of income mobility was deeply ingrained in the past. That great observer of America, Alexis de Tocqueville, remarked that in America, “wealth circulates with astounding rapidity and experience shows it is rare to find two successive generations in the full enjoyment of it.”21
Over the past twenty-five years, though, more and more Americans have come face to face with the bitter reality that they are trapped by educational underachievement. The Newsweek columnist Robert Samuelson has argued that “on the whole, Americans care less about inequality—the precise gap between the rich and the poor—than about opportunity and achievement: are people getting ahead?”22 Yet inequality in education is particularly insidious because it reduces opportunity. Someone who has had an indifferent high school education cannot even dream of getting a range of jobs that the new economy has thrown up. For Americans, many of whom “define political freedom as strict equality but economic freedom as an equal chance to become unequal,” inequality of access to quality education shakes the very foundation of their support for economic freedom, for they no longer have an equal chance.23
If Americans no longer have the chance to be upwardly mobile, they are less likely to be optimistic about the future or to be tolerant of the mobility of others—because the immobile are hurt when others move up. When others in town become richer, the cost of everything goes up, and the real income—the income in terms of its purchasing power—of the economically immobile falls. Matters are even worse if the immobile measure their worth in terms of their possessions: my Chevrolet becomes much less pleasurable when my neighbor upgrades from a Honda to a Maserati.24 Envy has historically been un-American, largely because it was checked by self-confidence. As self-confidence withers, can envy, and its close cousin, hatred, be far behind?
As more and more Americans realize they are simply not equipped to compete, and as they come to terms with their own diminished expectations, the words economic freedom do not conjure open vistas of unlimited opportunity. Instead they offer a nightmare vision of great and continuing insecurity, and growing envy as the have-nots increasingly become the have-nevers. Without some change in this trend, destructive class warfare is no longer impossible to contemplate.
Politicians have recognized the problem posed by rising inequality. Because African Americans and Hispanics have been harder hit by poor schooling than other groups, their lack of progress is also conflated with race. Nevertheless, politicians have understood that better education is part of the solution. A number of presidents have taken up the cause, but without making much of a dent. Moreover, even if they could make a difference, the changes would take effect too late to alter the lives of today's adults.
Taxation and redistribution could be an alternative; but, as the political scientists Nolan McCarty, Keith Poole, and Howard Rosenthal argue, growing income inequality has made Congress much more polarized and much less likely to come together on matters of taxation and redistribution.25 Even as I write this, the Senate is divided completely along party lines in its attitude toward health care reform, with Democrats unanimous in support, and Republicans equally unanimous in opposition. Politicians are coming to terms with something Aristotle pointed out: that although quarrels are more likely in an unequal society, striving to rectify the inequality may precipitate the very conflict that the citizenry wants to avoid.26
Politicians have therefore looked for other ways to improve the lives of their voters. Since the early 1980s, the most seductive answer has been easier credit. In some ways, it is the path of least resistance. Government-supported credit does not arouse as many concerns from the Right at the outset as outright income redistribution would—though, as we have experienced, it may end up as a very costly way to redistribute, imposing harm on the recipient and costs on the taxpayer.
Politicians love to have banks expand housing credit, for credit achieves many goals at the same time. It pushes up house prices, making households feel wealthier, and allows them to finance more consumption. It creates more profits and jobs in the financial sector as well as in real estate brokerage and housing construction. And everything is safe—as safe as houses—at least for a while.
Easy credit has large, positive, immediate, and widely distributed benefits, whereas the costs all lie in the future. It has a payoff structure that is precisely the one desired by politicians, which is why so many countries have succumbed to its lure. Rich countries have, over time, built institutions such as financial-sector regulators and supervisors, which can stand up to politicians and deflect such myopia. The problem in the United States this time was that the politicians found a way around these regulatory structures, and eventually public support for housing credit was so widespread that few regulators, if any, dared oppose it.
The period leading up to the Great Depression was also a time of great credit expansion and, perhaps not coincidentally, one of substantial income inequality. Mortgages were different then. Residential mortgages were offered by banks and thrifts (also known as savings and loan associations). Mortgages were available for only a short term, about five years, and featured a single repayment of the entire principal at maturity, unless the borrower could refinance the loan. Moreover, most loans were at variable rates, so the borrower bore the risk that interest rates would change; and lenders did not typically lend more than 50 percent of a property's value, so homeowners bore much of the risk of house-price fluctuations.27
In the 1930s, as the Depression worsened, refinancing dried up, valuations plummeted, and homeowners, strapped for the cash to repay maturing loans, started defaulting in droves. With 10 percent of the nation's housing stock in foreclosure, the government intervened in the housing market to save it from free fall. Among the institutions it created initially were the Home Owners’ Loan Corporation (HOLC) and the Federal Housing Administration (FHA).
HOLC's role was to buy defaulted mortgages from banks and thrifts and restructure them into fixed-rate, 20-year fully amortizing mortgages (in which the principal is paid over the term of the loan). The long maturity and the fully amortizing payment structure meant that homeowners were not confronted with the disastrous refinancing problem. The government was willing to hold these mortgages for a while, but it did not see itself in the loan business in the long term and had to find a way of making the mortgages palatable to private-sector lenders. Private lenders, historically averse to making long-term loans, had to be persuaded to trust borrowers.
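To make the contrast with the old balloon-payment loans concrete, here is a minimal sketch in Python of a fully amortizing fixed-rate loan. The numbers are purely illustrative; only the 20-year, fixed-rate, fully amortizing structure comes from the text. The fixed monthly payment is sized so that the balance reaches zero at maturity, leaving nothing to refinance.

def monthly_payment(principal, annual_rate, years):
    """Standard annuity formula: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

principal, rate, years = 10_000.0, 0.05, 20    # hypothetical loan terms
payment = monthly_payment(principal, rate, years)

# Walk the balance forward: each month interest accrues on the outstanding
# balance, and the remainder of the payment retires principal.
balance = principal
for _ in range(years * 12):
    balance += balance * rate / 12 - payment

print(f"monthly payment: {payment:.2f}")       # about 66.00 on these terms
print(f"balance at maturity: {balance:.2f}")   # approximately 0.00: no balloon due

Under the pre-Depression structure, by contrast, a borrower paid little or no principal for five years and then owed the entire sum at maturity, which is exactly the refinancing risk that amortization removes.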
The solution was that the FHA would bear the default risk by providing mortgage insurance—essentially assuring lenders that it would repay the loan if the homeowner defaulted. The FHA protected itself by charging an insurance premium, setting a strict limit on how large a loan could be relative to the property's value (initially 80 percent), and capping the size of the loans it would insure. These restrictions also ensured that a private market emerged for the mortgages, or portions thereof, that the government would not insure.
Thus the banks and the thrifts that bought FHA-insured mortgages had to bear only the interest-rate risk—the risk stemming from the fact that they were financing fixed-rate long-term mortgages with short-term, effectively variable-rate deposits. So long as short-term rates did not spike, this was a profitable business.
The HOLC was wound down in 1936. To provide a financing alternative to banks, the Federal National Mortgage Association (FNMA, later Fannie Mae) was set up to draw private long-term financing into the mortgage market once again. Essentially, FNMA bought FHA-insured mortgages and financed them by issuing long-term bonds to investors like insurance companies and pension funds. Unlike the banks and thrifts, FNMA had longer-term fixed-rate financing and therefore did not bear much interest-rate risk even if it held the mortgages on its books.
The system worked well until rising short-term interest rates in the late 1960s caused deposits to flow out of banks and thrifts—because regulatory deposit rate ceilings introduced during the Depression to prevent excessive competition did not permit them to match the higher market interest rates. Financing for mortgages dried up. To compensate, the government tried to bring more direct financing capacity into the market by splitting Fannie into two in 1968—creating a Government National Mortgage Association (GNMA or Ginnie Mae) to continue insuring, packaging, and securitizing mortgages, and a new, privatized Fannie Mae that would finance mortgages by issuing bonds or securitized claims to the public. At a time when President Lyndon Johnson needed funds to pay for the growing costs of the Vietnam War, privatization conveniently removed Fannie Mae's debt from counting as a government liability, making the government's balance sheet look a lot healthier. Soon after, Freddie Mac (or the Federal Home Loan Mortgage Corporation, to go by its full name) was created to help securitize mortgages made by the thrifts, and eventually it too was privatized.
As inflation rose in the late 1970s and early 1980s, the Federal Reserve chairman, Paul Volcker, increased short-term interest rates to hitherto unimagined levels to try to tame it. With much of its portfolio invested in long-term fixed-rate mortgages, made when interest rates were low, and much of its financing tied to sky-high short-term interest rates, the savings-and-loan, or thrift, industry essentially went bankrupt. The political reaction was not to shut the thrifts down: housing was too important, the industry too well connected, and the hole that the taxpayer would have to fill too embarrassing to own up to.
Instead, the political system reacted with the Depository Institutions Deregulation and Monetary Control Act of 1980 and the Garn–St. Germain Depository Institutions Act of 1982, which liberalized the kinds of loans thrifts could make, including the kinds of mortgages, and the ways they could borrow, to help the industry earn its way back to stability. The sorry history of ensuing developments, in particular the immensely risky and ultimately disastrous gambles that thrifts took in commercial real estate, backed with taxpayer money, has been told elsewhere.28 A sizeable loss for thrifts was converted into a gigantic loss for taxpayers, aided and abetted by politicians. Suffice it to say that as a consequence, the insurers Fannie and Freddie, rather than the thrifts, came to play an increasing role in mortgage financing.
Fannie and Freddie, variously known as government-sponsored enterprises (GSEs) or agencies, were curious beasts. They were not quite private, though they had private shareholders to whom their profits belonged. And they certainly were not public, in that they were not owned by the government, but they had both government benefits and public duties. Among their perks, they were exempted from state and local income taxes, they had government appointees on their boards, and they had a line of credit from the U.S. Treasury. For the investing public, these links to the government suggested that the full faith and credit of the United States stood behind these organizations. Fannie and Freddie could thus raise money at a cost that was barely above the rate paid by the Treasury. These perks came with a public mandate—to support housing finance.
Fannie and Freddie did two things to fulfill their mandate. They bought mortgages that conformed to certain size limits and credit standards they had set out, thus allowing the banks they bought from to go out and make more mortgage loans. The agencies then packaged pools of loans together and issued mortgage-backed securities against the package after guaranteeing the mortgages against default. They also started borrowing directly from the market and investing in mortgage-backed securities underwritten by other banks. Because the mortgages were sound, these were fairly safe and extremely profitable activities. But much of the profit stemmed from their low cost of financing, deriving from the implicit government guarantee, and this was a critical political vulnerability.
As evidence mounted in the early 1990s that more and more Americans faced stagnant or declining incomes, the political establishment started looking for ways to help them with fast-acting measures—certainly faster than education reform, which would take decades to produce results. Affordable housing for low-income groups was the obvious answer, and Fannie and Freddie were the obvious channels. Congress knew it could use Fannie and Freddie as vehicles for its designs because they benefited so much from government largesse, and their managers’ arms could be twisted without any of the agencies’ activities inconveniently showing up as an expenditure in government budgets.
In 1992, the U.S. Congress passed the Federal Housing Enterprises Financial Safety and Soundness Act, partly to reform the regulation of the agencies and partly to promote homeownership for low-income and minority groups explicitly. The act instructed the Department of Housing and Urban Development (HUD) to develop affordable-housing goals for the agencies and to monitor progress toward these goals. Whenever Congress includes the words safety and soundness in any bill, there is a distinct possibility that it will achieve exactly the opposite, and that is precisely what this piece of legislation did.
Even though the agencies could not head off legislation, they could shape it to their advantage. They ensured that the legislation allowed them to hold less capital than other regulated financial institutions and that their new regulator, an office within HUD—which itself had no experience in financial-services regulation—was funded through congressional appropriations.29 This meant that if the regulator actually started constraining the behavior of the agencies, the agencies’ friends in Congress could cut the regulator's budget. The combination of an activist Congress, government-supported private firms hungry for profits, and a weak and pliant regulator proved disastrous.
At first Fannie and Freddie were not eager to put their profitable franchise at risk. But seeing the political writing on the wall, they complied. Steven Holmes, a reporter for the New York Times, offered a prescient warning in the 1990s: “In moving, even tentatively, into this new area of lending, Fannie Mae is taking on significantly more risk, which may not pose any difficulty in flush economic times.…But the government sponsored entity may run into trouble in an economic downturn, prompting a government rescue similar to that of the Savings and Loan industry in the 1980s.”30 As housing boomed, the agencies found the high rates available on low-income lending particularly attractive, and the benign environment and the lack of historical experience with low-income lending allowed them to ignore the additional risk.
Under the Clinton administration, HUD steadily increased the amount of funding it required the agencies to allocate to low-income housing. The agencies complied, almost too eagerly: sometimes it appeared as if they were egging the administration on to increase their mandate so that they would be able to justify their higher risk taking (and not coincidentally, management's higher bonuses) to their shareholders. After being set initially at 42 percent of assets in 1995, the mandate for low-income lending was increased to 50 percent of assets in 2000 (in the last year of the Clinton administration).
Some critics worried that the agencies were turning a blind eye to predatory lending to those who could not afford a mortgage. But reflecting the nexus between the regulator and the regulated, HUD acknowledged in a report in 2000 that the agencies “objected” to disclosure requirements “related to their purchase of high-cost mortgages,” so HUD decided against imposing “an additional undue burden”!31
Congress was joined by the Clinton administration in its efforts. In 1995, in a preamble to a document laying out a strategy to expand home ownership, President Clinton wrote: “This past year, I directed HUD Secretary Henry G. Cisneros…to develop a plan to boost homeownership in America to an all-time high by the end of this century.…Expanding homeownership will strengthen our nation's families and communities, strengthen our economy, and expand this country's great middle class. Rekindling the dream of homeownership for America's working families can prepare our nation to embrace the rich possibilities of the twenty-first century.” What did this mean in practice? The strategy document went on to say: “For many potential homebuyers, the lack of cash available to accumulate the required down payment and closing costs is the major impediment to purchasing a home. Other households do not have sufficient available income to make the monthly payments on mortgages financed at market interest rates for standard loan terms. Financing strategies, fueled by the creativity and resources of the private and public sectors [italics mine], should address both of these financial barriers to homeownership.”32
Simply put, the Clinton administration was arguing that the financial sector should find creative ways of getting people who could not afford homes into them, and the government would help or push wherever it could. Although there was some distance between this strategy and the NINJA loans (loans to borrowers with no income, no job, and no assets) and “liar” loans (loans for which borrowers could come up with creative representations of their income because no documentation was required) that featured so prominently in this crisis, the course was set.
The Clinton administration pushed hard in other ways. The Community Reinvestment Act (CRA) passed in 1977 required banks to lend in their local markets, especially in lower-income, predominantly minority areas. But CRA did not set explicit lending goals, and its enforcement was left to regulators. The Clinton administration increased the pressure on regulators to enforce CRA through investigations of banks and threats of fines.33 A careful study of bank mortgage lending shows that lending went up as CRA enforcement increased over the 1990s, especially in the highly visible and politically sensitive metropolitan areas where banks were most likely to be scrutinized.34
Recall also that the Federal Housing Administration guaranteed mortgages. It typically focused on riskier mortgages that the agencies were reluctant to touch. Here was a vehicle that was directly under political control, and it was fully utilized. In 2000, the Clinton administration dramatically cut the minimum down payment required for a borrower to qualify for an FHA guarantee to 3 percent, increased the maximum size of mortgage it would guarantee, and halved the premiums it charged borrowers for the guarantee. All these actions set the stage for a boom in low-income housing construction and lending.
The housing boom came to fruition in the administration of George W. Bush, who also recognized the dangers of significant segments of the population not participating in the benefits of growth. As he put it: “If you own something, you have a vital stake in the future of our country. The more ownership there is in America, the more vitality there is in America, and the more people have a vital stake in the future of this country.”35 In a 2002 speech to HUD, Bush said:
But I believe owning something is a part of the American Dream, as well. I believe when somebody owns their own home, they're realizing the American Dream.…And we saw that yesterday in Atlanta, when we went to the new homes of the new homeowners. And I saw with pride firsthand, the man say, welcome to my home. He didn't say, welcome to government's home; he didn't say, welcome to my neighbor's home; he said, welcome to my home.…He was a proud man.…And I want that pride to extend all throughout our country.36
Later, explaining how his administration would go about achieving its goals, he said: “And I'm proud to report that Fannie Mae has heard the call and, as I understand, it's about $440 billion over a period of time. They've used their influence to create that much capital available for the type of home buyer we're talking about here. It's in their charter; it now needs to be implemented. Freddie Mac is interested in helping. I appreciate both of those agencies providing the underpinnings of good capital.”37
The Bush administration pushed up the low-income lending mandate on Fannie and Freddie to 56 percent of their assets in 2004, even as the Fed started increasing interest rates and expressing worries about the housing boom. Peter Wallison of the American Enterprise Institute and Charles Calomiris of Columbia University argue that Fannie and Freddie moved into even higher gear at this time not so much because of altruism, but because the accounting scandals that were exposed in those agencies in 2004 made them much more pliant to Congress's demands for more low-income lending.38
How much lending flowed from these sources, and when? It is not easy to get a sense of the true magnitude of subprime and Alt-A lending by Fannie, Freddie, and the FHA, partly because, as Edward Pinto, a former chief credit officer of Fannie Mae, has argued, many loans on each of these entities’ books were subprime in nature but not classified as such.39 For instance, Fannie Mae classified a loan as subprime only if the originator itself specialized in the subprime business. Many risky loans to low-credit-quality borrowers thus escaped classification as subprime or Alt-A loans. When the loans are appropriately classified, Pinto finds that subprime lending alone (including financing through the purchase of mortgage-backed securities) by the mortgage giants and the FHA started at about $85 billion in 1997 and went up to $446 billion in 2003, after which it stabilized at between $300 billion and $400 billion a year until 2007, the last year of his study.40 On average, these entities accounted for 54 percent of the market across the years, with a high of 70 percent in 2007. He estimates that in June 2008, the mortgage giants, the FHA, and various other government programs were exposed to about $2.7 trillion in subprime and Alt-A loans, approximately 59 percent of total loans in these categories. It is very difficult to reach any conclusion other than that this was a market driven largely by government, or government-influenced, money.
As more money from the government-sponsored agencies flooded into financing or supporting low-income housing, the private sector joined the party. After all, they could do the math, and they understood that the political compulsions behind government actions would not disappear quickly. With agency support, subprime mortgages would be liquid, and low-cost housing would increase in price. Low risk and high return—what more could the private sector desire? Unfortunately, the private sector, aided and abetted by agency money, converted the good intentions behind the affordable-housing mandate and the push to an ownership society into a financial disaster.
Both Clinton and Bush were right in worrying that growth was leaving large segments of the population behind, and their solution—expanded home ownership—was a reasonable short-term fix. The problem with using the might of the government is rarely one of intent; rather, it is that the gap between intent and outcome is often large, typically because the organizations and people the government uses to achieve its aims do not share them. This lesson from recent history, including the savings-and-loan crisis, should have been clear to the politicians: the consequences of the government's pressing an agile financial sector to act in certain ways are often unintended and extremely costly. Yet the political demand for action, any action, to satisfy the multitudes who believe the government has all the answers, is often impossible for even the sensible politician to deny.
Also, it is easy to be cynical about political motives but hard to establish intent, especially when the intent is something the actors would want to deny—in this case, politicians using easy housing credit as a palliative. As I argue repeatedly in this book, it may well be that many of the parts played by the key actors were guided by the preferences and applause of the audience, rather than by well-thought-out intent. Even if no politicians dreamed up a Machiavellian plan to assuage anxious voters with easy loans, their actions—and there is plenty of evidence that politicians pushed for easier housing credit—could have been guided by the voters they cared about.41 Put differently, politicians may have tried different messages until one resonated with voters. That message—promising affordable housing, for example—became part of their platform. It could well be that voters shaped political action (much as markets shape corporate action) rather than the other way around. Whether the action was driven by conscious intent or unintentional guidance is immaterial to its broader consequences.
A very interesting study by two of my colleagues at the University of Chicago's Booth School, Atif Mian and Amir Sufi, details the consequences in the lead-up to the crisis.42 They use ZIP codes to identify areas that had a disproportionately large share of potential subprime borrowers (borrowers with low incomes and low credit ratings) and show that these ZIP codes experienced credit growth over the period 2002–2005 that was more than twice as high as that in the prime ZIP codes. More interesting, the number of mortgages obtained in a ZIP code over that period is negatively correlated with household income growth: that is, ZIP codes with lower income growth received more mortgage loans in 2002–2005, the only period over the entire span of the authors’ study in which they saw this phenomenon. This finding should not be surprising given the earlier discussion: there was a government-orchestrated attempt to lend to the less well-off.
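To fix ideas, here is a minimal sketch in Python of the kind of ZIP-code-level comparison described above. This is emphatically not Mian and Sufi's code or data: the tiny dataset and all column names are hypothetical, invented purely to illustrate the two patterns, faster credit growth in subprime ZIP codes and a negative correlation between mortgage growth and income growth.

import pandas as pd

# Hypothetical ZIP-code-level data (invented for illustration): one row per
# ZIP code, with growth rates over 2002-2005 and an initial subprime share.
df = pd.DataFrame({
    "zip_code": ["60601", "60615", "60637", "60649"],
    "subprime_share": [0.10, 0.35, 0.45, 0.55],    # share of low-credit-score borrowers
    "mortgage_growth": [0.25, 0.55, 0.70, 0.80],   # growth in mortgage originations
    "income_growth": [0.12, 0.04, 0.02, -0.01],    # growth in household income
})

# Split ZIP codes into "prime" and "subprime" halves by initial subprime share.
df["is_subprime_zip"] = df["subprime_share"] > df["subprime_share"].median()

# Pattern 1: credit grew much faster in the subprime ZIP codes.
print(df.groupby("is_subprime_zip")["mortgage_growth"].mean())

# Pattern 2: mortgage growth is negatively correlated with income growth.
print(df["mortgage_growth"].corr(df["income_growth"]))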
The greater expansion in mortgage lending to subprime ZIP codes is associated with higher house-price growth in those ZIP codes. Indeed, over the period 2002–2005 and across ZIP codes, house-price growth was higher in areas that had lower income growth (because this is where the lending was focused). Unfortunately, therefore, all this lending was driving house prices further away from the fundamental ability of household income to support repayment. The consequence of all this lending was more default. Subprime ZIP codes experienced an increase in default rates after 2006 that was three times that of prime ZIP codes, and much larger than the default rates these areas had experienced in the past.
Could the increased borrowing by low-income households have been driven by need? After all, I have argued that their incomes were stagnating or even falling. It is hard, though, to imagine that households strapped for cash would, on their own initiative, go out and take on large mortgages to buy houses. The borrowing was not driven by a surge in demand: instead, it came from a greater willingness to supply credit to low-income households, the impetus for which came in significant measure from the government.
Not all the frenzied lending in the run-up to the recent crisis was related to low-income housing: many unviable loans were also made to finance large corporate buyouts. Nevertheless, subprime lending and the associated subprime mortgage-backed securities were central to this crisis. Without any intent of absolving the brokers and the banks who originated the bad loans or the borrowers who lied about their incomes, we should acknowledge the evidence suggesting that government actions, however well intended, contributed significantly to the crisis. And the agencies did not escape the fallout. With the losses on the agencies’ mortgage portfolios growing and hints that investors in agency debt were getting worried, on Sunday, September 7, 2008, Henry J. Paulson, secretary of the treasury, announced what the market had always assumed: the government would take control of Fannie and Freddie and effectively stand behind their debt. Conservative estimates of the costs to the taxpayer of bailing out the agencies amount to hundreds of billions of dollars. Moreover, having taken over the agencies, the government fully owned the housing problem. Even as I write, the government-controlled agencies are increasing their exposure to the housing market, attempting to prop up prices at unrealistic levels, which will mean higher costs to the taxpayer down the line.
The agencies are not the only government-related organizations to have problems. As the crisis worsened in 2007 and 2008, the FHA also continued to guarantee loans to low-income borrowers. Delinquency rates on those mortgages exceed 20 percent today.43 It is perhaps understandable (though not necessarily wise) that government departments will attempt to support lending in bad times, as they play a countercyclical role. As Peter Wallison of the American Enterprise Institute has pointed out, it is less understandable why the FHA added to the subprime frenzy in 2005 and 2006, thus exacerbating the boom and the eventual fall.44 Delinquencies on guaranteed loans offered then also exceed 20 percent. The FHA will likely need taxpayer assistance. The overall costs to the taxpayer of government attempts to increase low-income lending continue to mount and perhaps will never be fully tallied.
As house prices rose between 1999 and 2007, households borrowed against the home equity they had built up. The extent of such borrowing was so great that the distribution of loan-to-value ratios of existing mortgages in the United States barely budged over this period, despite double-digit increases in house prices.45 House-price appreciation also enabled low-income households to obtain other forms of nonmortgage credit. For instance, according to the Survey of Consumer Finances conducted by the Federal Reserve Board, between 1989 and 2004 the fraction of low-income families (families in the bottom quartile of income distribution) that had mortgages outstanding doubled, while those that had credit card debt outstanding grew by 75 percent.46 By contrast, the fraction of high-income families (families in the top quartile of income distribution) that had mortgages or credit card debt outstanding fell slightly over this period, suggesting that the rapid spread of indebtedness was concentrated in poorer segments of the population.
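A back-of-the-envelope way to see what a roughly unchanged loan-to-value distribution implies (my own arithmetic, not a figure from the survey): since

\[
\mathrm{LTV} = \frac{D}{V} \quad\Longrightarrow\quad D = \mathrm{LTV} \times V,
\]

if house values \(V\) rise by, say, 10 percent in a year while loan-to-value ratios hold steady, outstanding mortgage debt \(D\) must also rise by about 10 percent. In aggregate, households were extracting home equity roughly as fast as appreciation created it.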
Indeed, although housing booms took place around the world, driven by low interest rates, the boom in the United States was especially pronounced among borrowers who had not previously had easy access to credit, the subprime and Alt-A segments of the market. Detailed studies indicate that this housing boom was different because house prices for the low-income segment of the population rose by more and fell by more than they did for the high-income segment. By contrast, in previous U.S. housing booms, house prices for the high-income segment were always more volatile than for the low-income segment.47 Relative to other industrial countries like Ireland, Spain, and the United Kingdom, all of which had house-price booms that turned to busts, U.S. house prices overall were nowhere near as high relative to fundamentals.48 But the boom was concentrated among those least able to afford the bust. The U.S. boom was different, at least in its details.
Some progressive economists dispute whether the recent crisis was at all related to government intervention in low-income housing credit.49 This certainly was not the only factor at play, and to argue that it was is misleading. But it is equally misleading to say it played no part. The private financial sector did not suddenly take up low-income housing loans in the early 2000s out of the goodness of its heart, or because financial innovation permitted it to do so—after all, securitization has been around for a long time. To ignore the role played by politicians, the government, and the quasi-government agencies is to ignore the elephant in the room.
I have argued that an important political response to inequality was populist credit expansion, which allowed people the consumption possibilities that their stagnant incomes otherwise could not support. There were clearly special circumstances in the United States that made this response more likely—in particular, the many controls the government had over housing finance and the difficulty, given the increasing polarization of U.S. politics, of enacting direct income redistribution. Moreover, the objective of expanding home ownership drew on the politically persuasive historical symbolism of small entrepreneurs and farmers in the United States, all owning their property and having a stake in society and progress. These specific circumstances would not necessarily apply in other industrial countries.
That said, there are a number of parallels, both in U.S. history and in the contemporary experience of emerging markets, for the use of credit as a populist palliative. A previous episode of high income inequality in the United States came in the late nineteenth and early twentieth centuries. As small and medium-sized farmers perceived that they were falling behind, their grievances about the lack of access to credit and the need for banking reforms were articulated by the Populist Party. Pressure from such quarters helped accelerate the deregulation of banking and the explosion in the number of banks in the early part of the twentieth century. Indeed, in North Dakota, after a populist candidate won the 1916 gubernatorial race with the support of small farmers, the state created the United States’ first state-owned bank, the Bank of North Dakota.50 The explosion in rural bank credit was followed in the 1920s by a steady decline in the prices of agricultural produce, widespread farmer distress, and the failure of a large number of small rural banks. As in the recent crisis, populist credit expansion went too far.
The tradition of using government-linked financial institutions to expand credit to politically important constituencies of moderate creditworthiness is also well established in emerging markets. For example, Shawn Cole, a professor at Harvard Business School, finds that Indian state-owned banks increase their lending to the politically important but relatively poor constituency of farmers by about 5 to 10 percentage points in election years.51 The effect is most pronounced in districts with close elections. The consequences of the lending are greater loan defaults and no measurable increase in agricultural output, which suggests that it really serves as a costly form of income redistribution. More recently, India’s United Progressive Alliance (UPA) coalition government waived the repayment of loans made to small and medium-sized farmers shortly before the 2009 elections, an act that some commentators believed helped the coalition get reelected. Populism and credit are familiar bedfellows around the world.
Growing income inequality in the United States stemming from unequal access to quality education led to political pressure for more housing credit. This pressure created a serious fault line that distorted lending in the financial sector. Broadening access to housing loans and home ownership was an easy, popular, and quick way to address perceptions of inequality. Politicians set about achieving it through the agencies and departments they had set up to deal with the housing-debt disasters during the Great Depression. Ironically, the same organizations may have helped precipitate the ongoing housing catastrophe.
This is not to fault their intent. Both the Clinton administration's attempt to make housing affordable to the less well-off and the Bush administration's attempt to expand home ownership were laudable. They were also politically astute in that they focused on alleviating the concerns of those being left behind while buying time for more direct policies to work. But the gap between government intent and outcomes can be very wide indeed, especially when action is mediated through the private sector. More always seems better to the impatient politician. But any instrument of government policy has its limitations, and what works in small doses can become a nightmare when scaled up, especially when it is scaled up quickly. Some support for low-income housing might have had benefits and prompted little private-sector reaction. But support at a scale that distorted housing prices and private-sector incentives was too much. Furthermore, the private sector's objectives are not the government's objectives, and all too often policies are set without taking this disparity into account. Serious unintended consequences can result.
Successive governments pushed Fannie and Freddie to support low-income lending. Given their historical focus on prime mortgages, these agencies had no direct way of originating or buying subprime loans in the quantities being prescribed. So in the years of greatest excess, they bought subprime mortgage-backed securities without adjusting for the significantly higher risks involved. The early rewards for taking these risks were higher profits, and the fact that there were initially very few defaults emboldened the agencies to plunge further, while their weak and politically influenced regulator did little to restrain them. At the same time, as brokers came to know that someone out there was willing to buy subprime mortgage-backed securities without asking too many questions, they rushed to originate loans without checking borrowers’ creditworthiness, and credit quality deteriorated. For a while, the problems were hidden by rising house prices and low defaults: easy credit masked the problems caused by easy credit. Then house prices stopped rising, and the flood of defaults burst forth.
As is typically the case, easy credit proved, on net, an extremely costly way to redistribute income. Too many poor families who should never have been lured into buying a house have been evicted after losing their meager savings and are now homeless; too many houses have been built that will not be lived in; and too many financial institutions have incurred enormous losses that the taxpayer will have to absorb for years to come. Home ownership rates did go up, from 64.2 percent of households in 1994 to 69.2 percent in 2004, but too many households that could not afford to borrow were induced to do so, and home ownership has since declined steadily (to 67.2 percent as of the fourth quarter of 2009), with the rate likely to fall further as many households face foreclosure.52
This is a lesson that needs to be more widely absorbed. Few “solutions” command more support and promise up front, or lead to more recrimination after the fact, than opening the spigot of lending. For poor countries there is a strong parallel in the past enthusiasm for foreign aid. We now know that aid breeds dependency, indebtedness, and poor governance, and rarely leads to growth.53 The new miracle solution is microcredit: lending to the poor through group loans, a system in which peer pressure from the group makes individuals more likely to repay. Although it has promise on a small scale, history suggests that when it is scaled up, and especially when it is used as an instrument of government policy, it will likely create significant problems.
So what should the United States do to deal with the waning of the American dream, with the shrinking of opportunities for the large mass of the American people? Ignoring the problem will only make matters worse. Inequality feeds on itself, and it will eventually precipitate a backlash. When people see a dim economic future in a democracy, they work through political channels to obtain redress; if those channels do not respond, they resort to other means. The first victims of a political search for scapegoats are those who are visible and easily demonized but powerless to defend themselves. Illegal immigrants and foreign workers do not vote, but they are essential to the economy: the former because they often do jobs no one else will touch in normal times, the latter because they are the source of the cheap imports that have raised the standard of living for all, especially those with low incomes. There has to be a better way than finding scapegoats, and I examine possible solutions in subsequent chapters.
At this point, though, I want to turn to a problem that was growing in magnitude elsewhere in the world. Even as political compulsions in the United States were pushing it toward boosting consumption, countries like Germany and Japan, which were extremely dependent on exports for growth, were accounting for a larger share of the world economy. Why they, and a growing number of emerging markets, have become dependent in this way, and what the consequences of such dependence are for countries like the United States, are the issues I turn to now.