2

What’s the Problem?

ECONOMIC SECURITY, OPPORTUNITY, and shared prosperity are integral to a good society. We aren’t doing as well as we should. In fact, since the 1970s we’ve been going in the wrong direction.

Too Little Economic Security

To be economically secure is to have sufficient resources to cover our expenses. We achieve economic security with a stable and sizable income, with assets that can be sold or borrowed against, and with insurance.

From the 1930s through the mid-1970s, economic insecurity decreased for virtually all Americans.1 Incomes grew steadily for most households, reducing the share with low income and facilitating the purchase of private insurance. More Americans became homeowners, thereby accumulating some assets. And a raft of government laws and programs—limited liability law, bankruptcy protection, Social Security old-age benefits, unemployment insurance, statutory minimum wage, AFDC (Aid to Families with Dependent Children, which later became TANF), Social Security disability benefits and Supplemental Security Income (SSI), Medicare and Medicaid, food stamps, EITC, and disaster relief, among others—provided a safeguard against various financial risks, from business failure to job loss to poor health to old age.2

Since the 1970s, according to a number of knowledgeable observers, the tide has turned. Economic insecurity has been rising.3 Paul Osterman sounded the alarm in his 1999 book Securing Prosperity, in which he noted the increasing frequency of job loss.4 In 2006, Louis Uchitelle echoed this argument in his book The Disposable American.5 In The Great Risk Shift, published the same year, Jacob Hacker pushed the assessment beyond job loss to suggest that severe income decline has become more common and that private and public insurance against risks such as poor health and old age have weakened.6 Peter Gosselin reached a similar verdict a few years later in High Wire.7 A survey by the Rockefeller Foundation in early 2007, prior to the 2008–9 “Great Recession,” found more than 25 percent of Americans saying they were “fairly worried” or “very worried” about their economic security.8

A rise in economic insecurity is what we would expect given the changes in the American economy over the past several decades. Competition among firms has intensified as manufacturing and some services have become internationalized. Competitive pressures have increased even in sectors not exposed to competition from abroad, such as retail trade and hotels, partly due to the emergence of large and highly efficient firms such as Walmart. At the same time, companies’ shareholders now demand constant profit improvement rather than steady long-term performance.

These changes force management to be hypersensitive to costs and constraints. One result has been the end of job security, as firms restructure, downsize, move offshore, or simply go under.9 Another is enhanced management desire for flexibility, leading to greater use of part-time and temporary employees and irregular and unstable work hours. This increases earnings instability for some people and may reduce their likelihood of qualifying for unemployment compensation, paid sickness leave, and other supports. Employers also have cut back on the provision of benefits, including health insurance and pensions.

Private insurance companies are subject to the same pressures. And they now have access to detailed information about the likelihood that particular persons or households will get in a car accident, need expensive medical care, or experience home damage from a fire or a hurricane. As a result, private insurers are more selective about the type and extent of insurance coverage they provide and about the clientele to whom they provide it.

The period since the 1970s also has witnessed commitments by prominent American policy makers to ensure that, in Bill Clinton’s expression, “the era of big government is over.” From Ronald Reagan to Clinton to George W. Bush and even Barack Obama, recent presidents have expressed a preference for scaling back government expenditures. The 1996 welfare reform, which devolved decision-making authority for America’s chief social assistance program to the states and set a time limit on receipt of benefits, embodies this commitment. Tellingly, the number of TANF recipients and the amount they receive have declined sharply since the reform.

Finally, family protections against economic insecurity are weaker for some segments of the American population. Having a second adult who has a paying job (or can get one) in the household is a valuable asset in the event of income loss.10 Later marriage and more frequent divorce mean that a larger share of Americans has little or no family buffer.

Economic insecurity is a product of low income, significant income decline, or inadequate insurance. To get a complete picture, we would need a single data source that captures each of these elements for a representative sample of American households, and does so consistently over time. Unfortunately, such data don’t exist. Instead, the information is available in bits and pieces. In what follows, I put the pieces together to gauge the extent of economic security and its trend over time.

Low Income

As of 2007, the average income of the roughly 25 million households in the bottom 20 percent (quintile) was just $18,000.11

Very few of these low-income Americans are destitute. Most have clothing, food, and shelter. Many have a car, a television, heat and air conditioning, and access to medical care.12 But making ends meet on an income of $18,000 is a challenge. That comes out to $1,500 a month. If you spend $500 on rent and utilities, $300 on food, and $200 on transportation, you’re left with just $500 each month for all other expenses. It’s doable. Millions of Americans offer proof of that. But this is a life best described as “scraping by.”13
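
For readers who want to check the arithmetic, here is a minimal sketch in Python; the expense figures are the illustrative ones from the paragraph above.

```python
# Monthly budget at the bottom-quintile average income of $18,000 a year.
annual_income = 18_000
monthly_income = annual_income / 12  # $1,500

# Illustrative monthly expenses from the text.
expenses = {"rent and utilities": 500, "food": 300, "transportation": 200}

remaining = monthly_income - sum(expenses.values())
print(f"Monthly income: ${monthly_income:,.0f}")        # $1,500
print(f"Left for everything else: ${remaining:,.0f}")   # $500
```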

Now, there are important caveats. First, income data are never perfect. However, these data, compiled by the Congressional Budget Office (CBO), are quite good. They are created by merging the Census Bureau’s annual survey of households with tax records from the Internal Revenue Service (IRS). The income measure includes earnings, capital gains, government transfers, and other sources of cash income. It adds in-kind income (employer-paid health insurance premiums, Medicare and Medicaid benefits, food stamps), employee contributions to 401(k) retirement plans, and employer-paid payroll taxes. Tax payments are subtracted. These data give us a pretty reliable picture of the incomes of American households.

Second, $18,000 is the average among these 25 million households, so some had an income above this amount. According to the CBO’s calculations, the highest income among bottom-quintile households with one person was $20,000. For households with four persons, it was $40,000. Making ends meet is a little easier at this income level, but it still isn’t easy. And half or more of these 25 million have incomes below the $18,000 average. Some solo adults have to make do with an income of $10,000 or $5,000. Some families with one or more kids have to get by on $20,000 or $15,000 or even less.

Third, some of these households have assets that reduce their expenses or provide a cushion in case expenses exceed income in a particular month or year. Some, for example, are retirees who own a home and therefore have no rent or mortgage payments. But many aren’t saved by assets. Asena Caner and Edward Wolff calculate that in the late 1990s, about one-quarter of Americans were “asset poor,” meaning they did not have enough assets to replace their income for at least three months.14

Fourth, these data very likely underestimate the true incomes of some households at the bottom. The data come from a survey in which people are asked what their income was in the prior year. People in low-income households tend to underreport their income, perhaps out of fear that accurate disclosure will result in loss of a government benefit they receive.15

Fifth, some of these 25 million households have a low income for only a short time. Their income may be low one year because the wage earner leaves her job temporarily to have a child, is sick, or gets laid off. By the following year, the earner may be back in paid employment. Some low earners are just beginning their work career. Five or ten years later, their earnings will be higher, or perhaps they will have a partner whose earnings add to household income. Using a panel data set known as the Panel Study of Income Dynamics (PSID), which tracks the same set of households over time, Mark Rank and Thomas Hirschl calculate that by the time Americans reach age 65, fewer than 10 percent will have spent five or more consecutive years with an income below the official poverty line (about $12,000 for a single adult and $23,000 for a household of four as of 2012).16 On the other hand, some who move up the economic ladder will later move back down. Shuffling in and out of poverty is common. Rank and Hirschl find that if we ask what share of Americans will have spent five or more total years below the poverty line upon reaching age 65, the share rises to 25 percent.17

Finally, some of these households are made up of immigrants from much poorer nations. Many are better off than they would have been if they had stayed in their native country. But that doesn’t change the fact that they are scraping by.

How much should these qualifiers alter our impression of economic insecurity due to low income in the United States? It’s difficult to say. Suppose the truly insecure constitute only half of the bottom quintile. That’s still 10 percent of American households, much more than we should accept in a nation as rich as ours.

Perhaps we should measure low income in another way. We could, for example, identify the minimum income needed for a decent standard of living and then see how many households fall below this amount. A team of researchers at the Economic Policy Institute did just that, estimating “basic family budgets” for metropolitan and rural areas around the country and calculating the share of families with incomes below these amounts in 1997–99.18 They concluded that approximately 29 percent of US families could not make ends meet. More recently, researchers with Wider Opportunities for Women and the Center for Social Development at Washington University calculated basic-needs budgets for various household types.19 They estimate that to meet basic expenses in 2010, a single adult needed, on average, about $30,000, and a household with two adults and two children needed about $68,000. According to their calculations, 43 percent of American households fell below the threshold.

Let’s return to low income and consider the trend over time. Is it getting better or worse? Figure 2.1 shows what happened between 1979 and 2007. There was improvement, but only a little. Average income in the bottom fifth rose by just $2,000 over this nearly three-decade period. That’s not much, particularly given that the American economy was growing at a healthy clip (a point I expand on later in this chapter). On the other hand, these data don’t indicate a rise in insecurity.

One group that some believe has suffered a rise in insecurity due to low income is the elderly. Now, in one respect elderly Americans have fared well: they are the only age group whose poverty rate has declined since the 1970s.20 A key reason is Social Security. In 1979, the average recipient of Social Security old-age benefits got about $10,000 (in today’s dollars). That average increased steadily over the ensuing three decades, reaching nearly $15,000 as of 2010. During this time, the share of elderly Americans receiving Social Security held steady at around 90 percent.

FIGURE 2.1 Average income of households on the bottom fifth of the income ladder

Posttransfer-posttax income. The income measure includes earnings, capital gains, government transfers, and other sources of cash income. It adds in-kind income (employer-paid health insurance premiums, Medicare and Medicaid benefits, food stamps), employee contributions to 401(k) retirement plans, and employer-paid payroll taxes. Tax payments are subtracted. The incomes are in 2007 dollars; inflation adjustment is via the CPI-U-RS. Data source: Congressional Budget Office, “Average Federal Tax Rates and Income, by Income Category, 1979–2007.”

But Social Security is just the first of three tiers of retirement income security. After all, $15,000 isn’t much to live on, even if you don’t have a mortgage to pay. The second tier is private—usually employer-based—pensions. The share of people under age 65 who participate in an employer pension plan has remained steady at around 60 percent,21 but the type of plan has changed dramatically. According to the Center for Retirement Research, in the early 1980s nearly 90 percent of Americans with an employer pension plan had a defined-benefit plan. By 2007 that share had shrunk to 36 percent. Defined-benefit pension plans have been replaced by defined-contribution plans such as 401(k)s. Among those with a pension, the share with a defined-contribution plan jumped from 38 percent to 81 percent.22

Defined-contribution plans have some advantages: they’re portable across employers, the employee has some say in how the money is invested, and a person in financial difficulty prior to retirement age can withdraw some or all of the money, though there is a tax penalty for doing so.23 The problem is that employees and employers may not contribute enough to defined-contribution plans or keep the money in them long enough to reap the benefits in retirement.24 If an employee doesn’t know about or understand her firm’s program, or feels she needs every dollar of her earnings to pay for current expenses, she may go a long time, perhaps even her entire working career, without putting any money into a defined-contribution plan. Employer contributions usually take the form of matching funds, with the amount put in by the employer pegged to the amount put in by the employee. Thus, no employee contribution often means no employer contribution. Moreover, when a person switches employers, she or he can choose to keep the defined-contribution-plan money as is, roll it over into an individual retirement account (IRA), or withdraw it, after a tax penalty is subtracted. Too many people choose to withdraw some or all of the money, leaving them with a lot less, and sometimes nothing at all, for their retirement years.
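
The matching-funds logic is worth making concrete. The sketch below is illustrative only: the 50 percent match up to 6 percent of pay and the 10 percent early-withdrawal penalty are assumed figures for a hypothetical plan, not numbers from the text.

```python
def employer_match(salary, employee_rate, match_rate=0.5, cap=0.06):
    """Employer contributes match_rate times the employee's contribution,
    on contributions up to cap (a share of salary). Illustrative formula."""
    return salary * min(employee_rate, cap) * match_rate

salary = 40_000
print(employer_match(salary, 0.06))  # employee saves 6%: employer adds $1,200
print(employer_match(salary, 0.00))  # no employee contribution: employer adds $0

# Withdrawing at a job switch shrinks the balance further
# (10 percent penalty assumed; income tax would reduce it more).
balance = 15_000
print(balance * (1 - 0.10))          # $13,500 left after the penalty
```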

The third tier of retirement income security is personal savings. It too has weakened. Average household saving as a share of disposable household income fell from 10 percent in the 1970s to 8 percent in the 1980s to 5 percent in the 1990s to 3 percent in the 2000s.25 And the decline was probably even steeper for households on the lower rungs of the income ladder.

Income Decline

It isn’t just a low level of income that threatens economic security. Instability of income does too.

A large income decline can be problematic even if it’s temporary. Consider two households with the same average income over ten years. In one, the income is consistent over these years. The other experiences a big drop in income in one of the years, but offsets that drop with higher-than-average income in one or more later years. The latter household may be worse off in two respects. The first has to do with subjective well-being. A loss tends to reduce our happiness more than a gain increases it.26 The second involves assets. A large decline in income may force a household to sell off some or all of its assets, such as a home, to meet expenses. Even if the income loss is ultimately offset, the household may be worse off at the end of the period due to the asset sell-off.

It turns out, however, that income declines often aren’t temporary. Stephen Rose and Scott Winship have analyzed data from the Panel Study of Income Dynamics (PSID) to find out what subsequently happens to households experiencing a significant income decline.27 According to their calculations, among households that experience a drop in income of 25 percent or more from one year to the next, about one-third do not recover to their prior income level even a full decade later. There are various reasons for this. Some people own a small business that fails and don’t manage to get a job that pays as much as they made as entrepreneurs. Others become disabled or suffer a serious health problem and are unable to return to their previous earnings level. Still others are laid off, don’t find a new job right away, and then suffer because potential employers view their jobless spell as a signal that they are undesirable employees.

So income decline is a problem for those who experience it. How many Americans are we talking about? Several researchers have attempted to estimate the frequency of sharp income drops. In the study mentioned in the previous paragraph, Rose and Winship find that in any given year, 15 to 20 percent of Americans experience an income decline of 25 percent or more from the previous year.28 Using a different data source, the Survey of Income and Program Participation (SIPP), Winship estimates that during the 1990s and 2000s approximately 8 to 13 percent of households suffered this fate each year.29 A study by the CBO matches SIPP data with Social Security Administration records and gets a similar estimate of approximately 10 percent during the 1990s and 2000s.30 Finally, a team of researchers led by Jacob Hacker uses a third data source, the Current Population Survey (CPS), covering the mid-1980s through 2009, and comes up with 15 to 18 percent.31

These estimates vary, but not wildly. In any given year, approximately 10 to 20 percent of working-age Americans will experience a severe income drop.

Using PSID data, Elizabeth Jacobs has calculated that the share of American households experiencing a severe year-to-year income drop at some point in a ten-year period is roughly twice the share in any given two-year period. If so, the share of working-age Americans who at some point suffer a large income decline is in the neighborhood of 20 to 40 percent.32

Has the incidence of large year-to-year income decline increased over time? Yes, according to calculations by Jacob Hacker’s team and by Scott Winship. But not a lot. These estimates, shown in figure 2.2, suggest a rise in sharp year-on-year income decline of perhaps three to five percentage points since the 1970s or the early 1980s.33 Again, though, this might cumulate into a more substantial increase. If we instead focus on the share of Americans experiencing a sharp year-on-year decline at some point over a decade, Elizabeth Jacobs’s calculation suggests a rise of seven or eight percentage points from the 1970s to the 1990s.34

What’s the bottom line? As I read them, the data tell us that sharp declines of income among working-age American households are relatively common and that their incidence has increased over the past generation.

We need to keep in mind that some of these declines are (fully or partially) voluntary. A person may leave a job or cut back on work hours to spend more time with children or an ailing relative. A couple may divorce. Someone may quit a job to move to a more desirable location without having another lined up. We don’t know what portion of income drops are voluntary. But I don’t think we should presume that most are.

FIGURE 2.2 Households experiencing an income decline of 25 percent or more from one year to the next

The lines are loess curves. PSID and SIPP: posttransfer-pretax income, for households with a “head” aged 25–54. PSID is the Panel Study of Income Dynamics. SIPP is the Survey of Income and Program Participation. Data source: Scott Winship, “Bogeyman Economics,” National Affairs, 2012, figure 1. CPS: posttransfer-pretax income, for households of all ages. CPS is the Current Population Survey. Data source: Economic Security Index, www.economicsecurityindex.org, downloaded January 2013.

How should we assess the trend? One perspective is to view it as unavoidable. The American economy has shifted since the 1970s. It’s more competitive, flexible, and in flux. Even though this is bad for some households, it can’t be prevented unless we seal the country off from the rest of the world and heavily regulate our labor market. In this view, we should be happy that the increase in income volatility hasn’t been larger.

I think we should be disappointed. After all, there are ways to insure against income decline. We could have improved our porous unemployment compensation system, added a public sickness insurance program, or created a wage insurance program so that someone who loses a job and gets a new, lower-paying one receives some payment to offset the earnings loss. We could have done more, in other words, to offset the impact of economic shifts.

Large Unanticipated Expense

Low income and a sharp drop in income cause economic insecurity because we may have trouble meeting our expenses. A large unanticipated expense can produce the same result, even for those with decent and stable income.

In the United States, the most common large unexpected expense is medical. About one in seven Americans does not have health insurance. Others are underinsured, in the sense that they face a nontrivial likelihood of having to pay out of pocket for health care if they fall victim to a fairly common accident, condition, or disease.

Of course, many of the uninsured and underinsured won’t end up with a large healthcare bill. And some who do will be able to pay it (due to high income or to assets that can be sold), or will be allowed to escape paying it because of low income or assets, or will go into personal bankruptcy and have the debt expunged.

Yet in a modern society, we should consider most of the uninsured and some of the underinsured as economically insecure, in the same way we do those with low income. They are living on the edge to a degree that should not happen in a rich nation in the twenty-first century. After all, every other affluent country manages to provide health insurance for all (or virtually all) its citizens without breaking the bank.

This form of economic insecurity has increased over the past generation, though we don’t know exactly how much because we lack a continuous data series on the share of Americans without health insurance. Figure 2.3 shows the information we do have, going back to the late 1970s. Each of the three data series shows a rise in the share without insurance. Over the whole period, the increase is on the order of five percentage points.

Figure 2.3 understates vulnerability to a large medical expense in two respects. First, these data capture the average share of Americans who are uninsured at a given point during a year. If we instead ask how many are uninsured at any point during a year or two, the figure is larger. The Lewin Group estimates that during the two-year period of 2007 and 2008, 29 percent of Americans lacked health insurance at some point.35

FIGURE 2.3 Persons without health insurance

The lines are loess curves. Data sources: CNHPS is from Marc Miringoff and Marque-Luisa Miringoff, The Social Health of the Nation, Oxford University Press, 1999, p. 198, using Center for National Health Program Studies data. CPS 1 and CPS 2 are from Census Bureau, “Income, Poverty, and Health Insurance Coverage in the United States: 2011,” table C-1, using Current Population Survey (CPS) data.

Second, it isn’t only the uninsured who are insecure. Some Americans have a health insurance policy that is inadequate. Each year 25 to 30 percent of Americans say they or a member of their family have put off medical treatment because of the extra cost they would have to pay.36 They can indeed end up with a large out-of-pocket medical expense if they get treated. We know this from data on bankruptcy filings. Such filings have increased steadily, from an average of 0.2 percent of the population each year in the 1980s to 0.4 percent in the 1990s to 0.5 percent in the 2000s. About one-quarter of Americans who file for bankruptcy do so mainly because of a large medical bill, and some of them do have health insurance.37

The 2010 healthcare reform is expected to reduce the share of uninsured Americans from 16 percent to perhaps 7 or 8 percent. That represents a substantial reduction in economic insecurity, but it still leaves us well short of where we could be, and where every other affluent nation has been for some time now.

Inadequate Opportunity

Americans believe in equal opportunity. Public opinion surveys consistently find more than 90 percent of Americans agree that “our society should do what is necessary to make sure that everyone has an equal opportunity to succeed.”38

True equality of opportunity is unattainable. Equal opportunity requires that everyone have equal skills, abilities, knowledge, and noncognitive traits, and that’s impossible. Our capabilities are shaped by genetics, developments in utero, parenting styles and traits, siblings, peers, teachers, preachers, sports coaches, tutors, neighborhoods, and a slew of chance events and occurrences. Society can’t fully equalize, offset, or compensate for these influences.

Nor do we really want equal opportunity, as it would require genetic engineering and intervention in home life far beyond what most of us would tolerate. Moreover, if parents knew that everyone would end up with the same skills and abilities as adults, they would have little incentive to invest effort and money in their children’s development, resulting in a lower absolute level of capabilities for everyone.

What we really want is for each person to have the most opportunity possible. We should aim, in Amartya Sen’s helpful formulation, to maximize people’s capability to choose, act, and accomplish.39 Pursuing this goal requires providing greater-than-average help to those in less advantageous circumstances or conditions. This, in turn, moves us closer to equal opportunity, even if, as I just explained, full equality of opportunity is not attainable.

Americans tend to believe that ours is a country in which opportunity is plentiful. This view became especially prominent in the second half of the nineteenth century, when the economy was shifting from farming to industry and Horatio Alger was churning out rags-to-riches tales.40 It’s still present today. On the night of the 2008 presidential election, Barack Obama began his victory speech by saying, “If there is anyone out there who still doubts that America is a place where all things are possible … tonight is your answer.”

There is more than a grain of truth in this sentiment. One of the country’s major successes in the last half century has been its progress in reducing obstacles to opportunity stemming from gender and race. Today, women are more likely to graduate from college than men and are catching up in employment and earnings.41 The gap between whites and nonwhites has narrowed as well, albeit less dramatically.42

When we turn to family background, however, the news is less encouraging. Americans growing up in less advantaged homes have far less opportunity than their counterparts from better-off families, and the gap is growing.

There is no straightforward way to measure opportunity, so social scientists tend to infer it from outcomes, such as employment or earnings. If we find a particular group faring worse than others, we suspect a barrier to opportunity. It isn’t proof positive, but it’s the best we can do. To assess equality of opportunity among people from different family backgrounds, we look at relative intergenerational mobility—a person’s position on the income ladder relative to her or his parents’ position. We don’t have as much information as we would like about the extent of relative intergenerational mobility and its movement over time. The data requirements are stiff. Analysts need a survey that collects information about citizens’ incomes and other aspects of their life circumstances, and then does the same for their children and their children’s children, and so on. The best assessment of this type, the PSID, has been around only since the late 1960s.

It is clear, though, that there is considerable inequality of opportunity among Americans from different family backgrounds.43 Think of the income distribution as a ladder with five rungs, with each rung representing a fifth of the population. In a society with equal opportunity, every person would have a 20 percent chance of landing on each of the five rungs, and hence a 60 percent chance of landing on the middle rung or a higher one. The reality is quite different. An American born into a family in the bottom fifth of incomes between the mid-1960s and the mid-1980s has roughly a 30 percent chance of reaching the middle fifth or higher in adulthood, whereas an American born into the top fifth has an 80 percent chance of ending up in the middle fifth or higher.44
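
The equal-opportunity benchmark is simple arithmetic, and a short sketch (using only the figures cited above) makes the gap explicit.

```python
# Under equal opportunity, each of the five quintile destinations is
# equally likely, so P(middle fifth or higher) = 3 * 0.20 = 0.60.
benchmark = 3 * 0.20

# Observed chances of reaching the middle fifth or higher, as cited above.
observed = {"born into bottom fifth": 0.30, "born into top fifth": 0.80}

for origin, p in observed.items():
    print(f"{origin}: {p:.0%} vs. a {benchmark:.0%} benchmark ({p - benchmark:+.0%})")
```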

Between the mid-1800s and the 1970s, differences in opportunity based on family circumstances declined steadily.45 As the farming-based US labor force shifted to manufacturing, many Americans joined the paid economy, allowing an increasing number to move onto and up the income ladder. Elementary education became universal, and secondary education expanded. Then, in the 1960s and 1970s, school desegregation, the outlawing of discrimination in college admissions and hiring, and the introduction of affirmative action opened economic doors for many Americans.

But since the 1970s, we have been moving in the opposite direction. A host of economic and social shifts have widened the opportunity gap between Americans from low-income families and those from high-income families.

For one thing, poorer children are less likely to grow up with both biological parents. This reduces their likelihood of succeeding, since children who grow up with both parents tend to fare better on a host of outcomes, from school completion to staying out of prison to earning more in adulthood.46 For those with higher incomes, there has been far less change in family structure and, as a consequence, less-drastic implications for children’s success.47

Parenting traits and behaviors have long differed according to parents’ education and income, but this difference has increased with the advent of our modern intensive-parenting culture.48 Low-income parents aren’t able to spend as much on goods and services aimed at enriching their children, such as music lessons, travel, and summer camp. They read less to their children and provide less help with schoolwork. They are less likely to set and enforce clear rules and routines. And they are less likely to encourage their children to aspire to high achievement in school and at work.

Differences in out-of-home care also have widened. A generation ago, most preschool-aged children stayed at home with their mothers. Now, many are enrolled in some sort of childcare program. Children of affluent parents attend high-quality, education-oriented preschools, while kids of poorer parents are left with a neighborhood babysitter who plops them in front of the television.

Elementary and secondary schools help equalize opportunity. And in one respect they have become more effective at doing so: funding for public K-12 schools used to vary sharply across school districts, but this has diminished. Even so, there is a large difference in the quality of education between the best and the worst schools, and the poorest neighborhoods often have the weakest schools.

According to data compiled by Sean Reardon, the gap in average test scores between elementary- and secondary-school children from high-income families and low-income families has risen steadily.49 Among children born in 1970, those from high-income homes scored, on average, about three-quarters of a standard deviation higher on math and reading tests than those from low-income homes. For children born in 2000, the gap has grown to one-and-a-quarter standard deviations. That is much larger than the gap between white and black children.

Partly because they lag behind at the end of high school, and partly because college is so expensive, children from poor backgrounds are less likely than others to enter and complete college.50 In the past generation this gap has widened. Figure 2.4 shows college completion by parents’ income for children growing up in the 1960s and 1970s (birth years 1961–64) and children growing up in the 1980s and 1990s (birth years 1979–82). Among children of high-income parents, defined as those with an income in the top quarter of all families, there was a marked increase in the share completing college, from 36 percent of the first cohort to 54 percent of the second. For those from low-income families, the increase was much smaller, from 5 percent to 9 percent.

FIGURE 2.4 College completion among persons from low-income and high-income families

College completion: four or more years of college. Low-income family: the person’s family income during childhood was on the lowest quarter of the income ladder. High-income family: income during childhood was on the highest quarter. Data source: Martha Bailey and Susan Dynarski, “Gains and Gaps: A Historical Perspective on Inequality in College Entry and Completion,” in Whither Opportunity? Rising Inequality, Schools, and Children’s Life Chances, edited by Greg J. Duncan and Richard J. Murnane, Russell Sage Foundation, 2011, figure 6.3, using National Longitudinal Survey of Youth data.

When it comes time to get a job, the story is no better. Low-income parents tend to have fewer valuable connections to help their children find good jobs. Some people from poor homes are further hampered by a lack of English language skills. Another disadvantage for the lower-income population is that in the 1970s and 1980s, the United States began incarcerating more young men, many for minor offenses. Having a criminal record makes it more difficult to get a stable job with decent pay.51 A number of developments, including technological advances, globalization, a loss of manufacturing employment, and the decline of unions, have reduced the number of jobs that require limited skills but pay a middle-class wage—the kind of jobs that once lifted poorer Americans into the middle class.52

Finally, changes in partner selection have widened the opportunity gap. Not only do those from better-off families tend to end up with more schooling and higher-paying jobs; they also increasingly marry (or cohabit with) others like themselves.53

Do we have conclusive evidence of rising inequality of opportunity in earnings and income? Not yet.54 Existing panel data sets are too young to give us a clear signal. But given the large increases in inequality of test scores and college completion between children from low-income families and those from high-income families, it is very likely that the same will be true, and perhaps already is true, for their earnings and incomes when they reach adulthood.

Slow Income Growth

As a society gets richer, the living standards of its households should rise.55 The poorest needn’t benefit the most; equal rates of improvement may be good enough. We might not even mind if the wealthiest benefit a bit more than others; a little increase in income inequality is hardly catastrophic. But in a good society, those in the middle and at the bottom ought to benefit significantly from economic growth. When the country prospers, everyone should prosper.

In the period between World War II and the mid-to-late 1970s, economic growth was good for Americans in the middle and below. Figure 2.5 shows that as GDP per capita increased, so did family income at the fiftieth percentile (the median) and at the twentieth percentile. Indeed, they moved virtually in lockstep. Since then, however, household income has been decoupled from economic growth. As the economy has grown, relatively little of that growth has reached households in the middle and below.

FIGURE 2.5 GDP per capita and the incomes of lower-half families

P50 is the fiftieth percentile (median) of the income ladder; P20 is the twentieth percentile. Each series is displayed as an index set to equal 1 in 1947. The family income data are posttransfer-pretax. Inflation adjustment for each series is via the CPI-U-RS. Data sources: Bureau of Economic Analysis, “GDP and the National Income and Product Account Historical Tables,” table 1.1.5; Council of Economic Advisers, Economic Report of the President, table B-34; Census Bureau, “Historical Income Tables,” tables F-1 and F-5.

Why has this happened? Rising inequality. Since the 1970s, a larger and larger share of household income growth has gone to Americans at the very top of the ladder—roughly speaking, those in the top 1 percent. The income pie has gotten bigger, and everyone’s slice has increased in size, but the slice of the richest has expanded massively while that of the middle and below has gotten only a little bigger.

Figure 2.6 shows average incomes among households in the top 1 percent and in the bottom 60 percent.56 The years 1979 and 2007 are business-cycle peaks, so they make for sensible beginning and ending points. Average income for households in the top 1 percent soared from $350,000 in 1979 to $1.3 million in 2007. For the bottom 60 percent the rise was quite modest, from $30,000 in 1979 to $37,000 in 2007.
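
Expressed as growth rates (simple arithmetic on the figures just cited), the contrast is stark.

```python
# Average household income, 1979 and 2007, in 2007 dollars (CBO data).
groups = {"top 1 percent": (350_000, 1_300_000),
          "bottom 60 percent": (30_000, 37_000)}

for label, (start, end) in groups.items():
    print(f"{label}: ${start:,} -> ${end:,} ({end / start - 1:.0%} growth)")
# top 1 percent: 271% growth; bottom 60 percent: 23% growth
```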

This is a disappointing development. But does the trend in lower-half incomes paint an accurate picture of changes in living standards?

FIGURE 2.6 Average income of households in the top 1 percent and bottom 60 percent

Posttransfer-posttax income. The income measure includes earnings, capital gains, government transfers, other sources of cash income, in-kind income (employer-paid health insurance premiums, Medicare and Medicaid benefits, food stamps), employee contributions to 401(k) retirement plans, and employer-paid payroll taxes. Tax payments are subtracted. The incomes are in 2007 dollars; inflation adjustment is via the CPI-U-RS. Data source: Congressional Budget Office, “Average Federal Tax Rates and Income, by Income Category, 1979–2007.”

“It’s Better Than It Looks”

To some, the picture conveyed by figure 2.5 is too pessimistic. They argue that incomes or broader living standards have grown relatively rapidly, keeping pace with the economy.57 There are eight variants of this view. Let’s consider them one by one.

1. The income data are too thin. The data for family income shown in figure 2.5 don’t include certain types of government transfers or the value of health insurance contributions from employers or (in the case of Medicare and Medicaid) from government. And they don’t subtract taxes. If these sources of income have risen rapidly for middle-class households, or if taxes have fallen sharply, the story conveyed by figure 2.5 will understate the true rate of progress.

Happily, we have a good alternative source of information: the data compiled by the CBO used in figure 2.6. I didn’t use these data in figure 2.5 because they don’t begin until 1979. But if figure 2.5 is replicated using the CBO data for average income in the middle or lower quintiles of households instead of median or p20 family income, the trends since the 1970s look similar.58

2. The income data miss upward movement over the life course. The family income data shown in figure 2.5 are from the Current Population Survey. Each year a representative sample of American adults is asked what their income was in the previous year. But each year, the sample consists of a new group; the survey doesn’t track the same people as they move through the life course.

If we interpret figure 2.5 as showing what happens to typical American families over the life course, we conclude that they see very little increase in income as they age. But that’s incorrect. In any given year, some of those with below-median income are young. Their wages and income are low because they are in the early stages of the work career and/or because they’re single. Over time, many will experience a significant income rise, getting pay increases or partnering with someone who also has earnings, or both. Figure 2.5 misses this income growth over the life course.

Figure 2.7 illustrates this. The lower line shows median income among families with a family “head” aged 25 to 34. The top line shows median income among the same cohort of families twenty years later, when their heads are aged 45 to 54. Consider the year 1979, for instance. The lower line tells us that in 1979 the median income of families with a 25- to 34-year-old head was about $54,000 (in 2010 dollars). The data point for 1979 in the top line looks at the median income of that same group of families in 1999, when they are 45 to 54 years old. This is the peak earning stage for most people, and their median income is now about $85,000.

FIGURE 2.7 Median income within and across cohorts

For each year, the lower line is median income among families with a “head” aged 25–34, and the top line is median income for the same cohort of families twenty years later. In the years for which the calculation is possible, 1947 to 1990, the average increase in income during this two-decade portion of the life course is $30,500. The data are in 2010 dollars; inflation adjustment is via the CPI-U-RS. Data source: Census Bureau, “Historical Income Tables,” table F-11.

In each year, the gap between the two lines is roughly $30,000. This tells us that the incomes of middle-class Americans tend to increase substantially as they move from the early years of the work career to the peak years.

Should this reduce our concern about the over-time pattern shown in figure 2.5? No, it shouldn’t. Look again at figure 2.7. Between the mid-1940s and the mid-1970s, the median income of families in early adulthood (the lower line) rose steadily. In the mid-1940s median income for these young families was around $25,000; by the mid-1970s, it had doubled to $50,000. Americans during this period experienced income gains over the life course, but they also tended to have higher incomes than their predecessors, both in their early work years and in their peak years. That’s because the economy was growing at a healthy clip and the economic growth was trickling down to Americans in the middle.

After the mid-1970s, this steady gain disappeared. From the mid-1970s to 2007, the median income of families with a 25- to 34-year-old head was flat. They continued to achieve income gains during the life course. (Actually, we don’t yet know about those who started out in the 1990s and 2000s because they are just now beginning to reach ages 45 to 54. The question marks in the chart show what their incomes will be if the historical trajectory holds true.) But the improvement across cohorts that characterized the period from World War II through the 1970s—each cohort starting higher and ending higher than earlier ones—disappeared.

For many Americans, income rises during the life course, and that fact is indeed hidden by charts such as figure 2.5. But that shouldn’t lessen our concern about the decoupling of household income growth from economic growth that has occurred over the past generation. We want improvement not just within cohorts, but also across them.

3. Families have gotten smaller. The size of the typical American family and household has been shrinking since the mid-1960s, when the baby boom ended. Perhaps, then, we don’t need income growth to be so rapid any more.

Let me pause briefly to explain why figure 2.5 shows the income trend for families rather than households. The household is the better unit to look at. A “family” is defined by the Census Bureau as a household with two or more related persons. Families therefore don’t include adults who live alone or with others to whom they aren’t related. It’s a bit silly to exclude this group, but that’s what the Census Bureau did until 1967. Only then did it begin tabulating data for all households. I use families in figure 2.5 in order to begin earlier, in the mid-1940s. As it happens, though, the trends for households since the mid-1970s have been virtually identical to the trends for families.

Should the shrinkage in family size alter our interpretation of slow income growth? No. As noted earlier, incomes have become decoupled from economic growth because a steadily rising share of economic growth has gone to families or households at the top of the ladder. But family size has decreased among the rich, too; they don’t need the extra income more than those in the middle and below do.

4. More people are in college or retired. The income data in figure 2.5 are for families with a “head” aged 15 or older. However, the share of young Americans attending college has increased since the 1970s, and the share of Americans who are elderly and hence retired has risen. Because of these developments, the share of families with an employed adult head may be falling. Does this account for the slow growth of family income relative to the economy? No, it does not. The trend in income among families with a head aged 25 to 54, in the prime of the work career, is very similar to that for all families.59

5. There are more immigrants. Immigration into the United States began to increase in the late 1960s. The foreign-born share of the American population, including both legal and illegal immigrants, rose from 5 percent in 1970 to 13 percent in 2007.60 Many immigrants arrive with limited labor market skills and little or no English, so their incomes tend to be low. For many such immigrants, a low income in the United States is a substantial improvement over what their income would be in their home country. So if this accounts for the divorce between economic growth and median income growth over the past generation, it should allay concern.

Immigration is indeed part of the story. But it is a relatively small part. The rise in lower-half family income for non-Hispanic whites, which excludes most immigrants, has been only slightly greater than the rise in lower-half income for all families shown in figure 2.5.61

6. Consumption has continued to rise rapidly. Some consider spending a better indicator of standard of living than income. Even though the incomes of middle- and low-income Americans have grown slowly, they may have increased their consumption more rapidly by drawing on assets (equity in a home, savings) and/or debt.

But that is not the case. According to the best available data, from the Consumer Expenditures Survey (CES), median family expenditures rose at the same pace as median family income in the 1980s, 1990s, and 2000s.62

7. Wealth has increased sharply. Maybe the slow growth of income has been offset by a rapid growth of wealth (assets minus debts). Perhaps many middle- and low-income Americans benefited from the housing boom in the 1990s and 2000s. In this story, their income and consumption growth may have lagged well behind growth of the economy, but they got much richer due to appreciation of their assets.

This is true, but only up to 2007. We have data on wealth from the Survey of Consumer Finances (SCF), administered by the Federal Reserve every three years. Figure 2.8 shows the trend in median family wealth along with the trend in median family income (the same as in figure 2.5). The wealth data are first available in 1989. What we see is a sharp upward spike in median wealth in the 1990s and much of the 2000s. The home is the chief asset of most middle-class Americans, and home values jumped during this period. But then the housing bubble burst, and between 2007 and 2010 median family wealth fell precipitously, erasing all the gains of the preceding two decades.63 And for those who lost their home in foreclosure, things are worse than these data convey.

In fact, even before the bubble burst, not everyone benefited. Of the one-third of Americans who don’t own a home, many are on the lower half of the income ladder. For them, the rise in home values in the 1990s and 2000s did nothing to compensate for the slow growth of income since the 1970s.

8. There have been significant improvements in quality of life. The final variant of the notion that income data understate the degree of advance in living standards focuses on improvements in the quality of goods, services, and social norms. It suggests that adjusting the income data for inflation doesn’t do justice to the enhancements in quality of life that have occurred in the past generation.

Fewer jobs require hard physical labor, and workplace accidents and deaths have decreased. Life expectancy rose from 74 years in 1979 to 78 years in 2007. Cancer survival is up. Infant mortality is down. An array of new pharmaceuticals now help relieve various conditions and ailments. Computed tomography (CT) scans and other diagnostic tools have enhanced physicians’ ability to detect serious health problems. Organ transplants, hip and knee replacements, and LASIK eye surgery are now commonplace. Violent crime has dropped to pre-1970s levels. Air quality and water quality are much improved.

FIGURE 2.8 Median family income and median family wealth

The wealth measure is “net worth,” calculated as assets minus liabilities. The wealth data are available beginning in 1989. The income data are the same as those shown in figure 2.5. Both series are in 2010 dollars; inflation adjustment is via the CPI-U-RS. Data sources: Jesse Bricker et al., “Changes in U.S. Family Finances from 2007 to 2010: Evidence from the Survey of Consumer Finances,” Federal Reserve Bulletin, June 2012; Federal Reserve, 2007 SCF Chartbook; Census Bureau, “Historical Income Tables,” table F-5.

We live in bigger houses; the median size of new homes rose from 1,600 square feet in 1979 to 2,300 in 2007. Cars are safer and get better gas mileage. Food and clothing are cheaper. We have access to an assortment of conveniences that didn’t exist or weren’t widely available a generation ago: personal computers, printers, scanners, microwave ovens, TV remote controls, TiVo, camcorders, digital cameras, five-blade razors, home pregnancy tests, home security systems, handheld calculators. Product variety has increased for almost all goods and services, from cars to restaurant food to toothpaste to television programs.

We have much greater access to information via the Internet, Google, cable TV, travel guides, Google Maps and GPS, smartphones, and tablets. We have a host of new communication tools: cell phones, call waiting, voicemail, e-mail, social networking websites, Skype. Personal entertainment sources and devices have proliferated: cable TV, high-definition televisions, home entertainment systems, the Internet, MP3 players, CD players, DVD players, Netflix, satellite radio, video games.

Last, but not least, discrimination on the basis of sex, race, and, more recently, sexual orientation has diminished. For women, racial and ethnic minorities, and lesbian and gay Americans, this may be the most valuable improvement of all.

There is no disputing these gains in quality of life. But did they occur because income growth for middle- and low-income Americans lagged well behind growth of the economy? In other words, did we need to sacrifice income growth to get these improved products and services?

Some say yes, arguing that returns to success soared in such fields as high tech, finance, entertainment, and athletics, as well as for CEOs. These markets became “winner take all,” and the rewards reaped by the winners mushroomed. For those with a shot at being the best in their field, this increased the financial incentive to work harder or longer or to be more creative. This rise in financial incentives produced a corresponding rise in excellence—new products and services and enhanced quality.

Is this correct? Consider the case of Apple and Steve Jobs. Apple’s Macintosh, iPod, iTunes, MacBook Air, iPhone, and iPad were so different from and superior to anything that preceded them that their contribution to living standards isn’t likely to be adequately measured. Did slow middle-class income growth make this possible? Would Jobs and his teams of engineers, designers, and others at Apple have worked as hard as they did to create these new products and bring them to market in the absence of massive winner-take-all financial incentives?

It’s difficult to know. But Walter Isaacson’s comprehensive biography of Steve Jobs suggests that he was driven by a passion for the products, for winning the competitive battle, and for status among peers.64 Excellence and victory were their own reward, not a means to the end of financial riches. In this respect, Jobs mirrors scores of inventors and entrepreneurs over the ages. So, while the rise of winner-take-all compensation occurred simultaneously with surges in innovation and productivity in certain fields, it may not have caused those surges.

For a more systematic assessment, we can look at the preceding period—the 1940s, 1950s, 1960s, and early 1970s.65 In these years, lower-half incomes grew at roughly the same pace as the economy and as incomes at the top. Did this squash the incentive for innovation and hard work and thereby come at the expense of broader quality-of-life improvements?

During this period, the share of Americans working in physically taxing jobs fell steadily as employment in agriculture and manufacturing declined. Life expectancy rose from 65 years in 1945 to 71 years in 1973. Antibiotic use began in the 1940s, and open-heart bypass surgery was introduced in the 1960s.

In 1940, only 44 percent of Americans owned a home; by 1970 the number had jumped to 64 percent. Home features and amenities changed dramatically, as the following list makes clear.

Running water: 70 percent in 1940, 98 percent in 1970.
Indoor flush toilet: 60 percent in 1940, 95 percent in 1970.
Electric lighting: 79 percent in 1940, 99 percent in 1970.
Central heating: 40 percent in 1940, 78 percent in 1970.
Air conditioning: very few (we don’t have precise data) in 1940, more than half of homes in 1970.
Refrigerator: 47 percent in 1940, 99 percent in 1970.
Washing machine: less than half of homes in 1940, 92 percent in 1970.
Vacuum cleaner: 40 percent in 1940, 92 percent in 1970.

In 1970, 80 percent of American households had a car, compared to just 52 percent in 1940. The interstate highway system was built in the 1950s and 1960s. In 1970, there were 154 million air passengers versus 4 million in 1940. Only 45 percent of homes had a telephone in 1945; by 1970, virtually all did. Long-distance phone calls were rare before the 1960s. In 1950, just 60 percent of employed Americans took a vacation; in 1970 the number had risen to 80 percent. By 1970, 99 percent of Americans had a television, up from just 32 percent in 1940. In music, the “album” originated in the late 1940s, and rock ‘n’ roll began in the 1950s. Other innovations that made life easier or more pleasurable include photocopiers, disposable diapers, and the bikini.

The Civil Rights Act of 1964 outlawed gender and race discrimination in public places, education, and employment. For women, life changed in myriad ways. Female labor force participation rose from 30 percent in 1940 to 49 percent in 1970. Norms inhibiting divorce relaxed in the 1960s. The pill was introduced in 1960. Abortion was legalized in the early 1970s. Access to college increased massively in the mid-1960s.

Comparing these changes in quality of life is difficult, but I see no reason to conclude that the pace of advance, or of innovation, has been more rapid in recent decades than before.66

Yes, there have been significant improvements in quality of life in the United States since the 1970s. But that shouldn’t lessen our disappointment in the fact that incomes have grown far more slowly than the economy.

“It’s Worse Than It Looks”

Rather than understate the true degree of progress for middle- and low-income Americans, the income trends shown in figure 2.5 might overstate it, for the following reasons.67

1. Income growth is due mainly to the addition of a second earner. The income of American households in the lower half has grown slowly since the 1970s. But it might not have increased at all if not for the fact that more households came to have two earners rather than one. From the 1940s through the mid-1970s, wages rose steadily. As a result, the median income of most families, whether they had one earner or two, increased at about the same pace as the economy.68 Since then, wages have barely budged.69

It’s important to emphasize that most of this shift from one earner to two has been voluntary. A growing number of women seek employment, as their educational attainment has increased, discrimination in the labor market has dissipated, and social norms have changed. The transition from the traditional male-breadwinner family to the dual-earner one isn’t simply a product of desperation to keep incomes growing.

Even so, the fact that income growth for lower-half households has required adding a second earner has two problematic implications. First, single-adult households have seen no income rise at all.70 Second, as more two-adult households have both adults in employment, more struggle to balance the demands of home and work. High-quality childcare and preschools are expensive, and elementary and secondary schools are in session only 180 of the 250 weekdays each year. The difficulty is accentuated by the growing prevalence of long work hours, odd hours, irregular hours, and long commutes. By the early 2000s, 25 percent of employed men and 10 percent of employed women worked fifty or more hours per week.71 And 35 to 40 percent of Americans worked outside regular hours (9 a.m. to 5 p.m.) and/or days (Monday to Friday).72 Average commute time rose from forty minutes in 1980 to fifty minutes in the late 2000s.73

2. The cost of key middle-class expenses has risen much faster than inflation. The income numbers in figure 2.5 are adjusted for inflation. But the adjustment is based on the price of a bundle of goods and services considered typical for American households. Changes in the cost of certain goods and services that middle-class Americans consider essential may not be adequately captured in this bundle. In particular, because middle-class families typically want to own a home and to send their kids to college, they suffered more than other Americans from the sharp rise in housing prices and college tuition costs in the 1990s and 2000s. Moreover, as middle-class families have shifted from having one earner to two, their spending needs may have changed in ways that adjusting for inflation doesn’t capture. For example, they now need to pay for childcare and require two cars rather than one.74

Consider a four-person family with two adults and two preschool-age children. In the early 1970s, one of the adults in this family was probably employed, and the other stayed at home. By the mid-2000s, it’s likely that both were employed. Here is how their big-ticket expenses might have differed.75

Childcare: $0 in the early 1970s, $12,500 in the mid-2000s.
Car(s): $5,800 for one car in the early 1970s, $8,800 for two cars in the mid-2000s.
Home mortgage: $6,000 in the early 1970s, $10,200 in the mid-2000s.

When the children reach school age, the strain eases. But when they head off to college it reappears; the average cost of tuition, fees, and room and board at public four-year colleges rose from $6,500 in the early 1970s to $12,000 in the mid-2000s.76
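
Tallying the items just listed shows the scale of the squeeze (a simple sum of the illustrative figures above, leaving college costs aside since they arrive later).

```python
# Annual big-ticket expenses for the illustrative four-person family.
early_1970s = {"childcare": 0, "cars": 5_800, "mortgage": 6_000}
mid_2000s = {"childcare": 12_500, "cars": 8_800, "mortgage": 10_200}

then, now = sum(early_1970s.values()), sum(mid_2000s.values())
print(f"early 1970s: ${then:,}")    # $11,800
print(f"mid-2000s: ${now:,}")       # $31,500
print(f"ratio: {now / then:.1f}x")  # roughly 2.7 times the earlier total
```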

Overall, among American households, debt as a share of personal disposable income jumped from 74 percent in 1979 to 138 percent in 2007.77 The confluence of slowly rising income and rapidly rising big-ticket costs is part of the reason why.78

We Can Do Better

In the past generation, ordinary Americans have had less economic security, less opportunity, and less income growth than should be the case in a country as prosperous as ours. Can we do better? Yes. In the next chapter I explain how.