Chapter 5
INCOME FACTS AND FALLACIES
Measuring the growth of incomes or the inequality of incomes is a little like Olympic figure skating—full of dangerous leaps and twirls and not nearly as easy as it looks. Yet the growth and inequality of incomes are topics that seem to inspire many people to form very strong opinions about very weak statistics.
Mark Twain said that there are three kinds of lies—“lies, damned lies, and statistics.” Income statistics are classic examples of numbers that can be arranged differently to suggest, not merely different, but totally opposite conclusions. Among the bountiful supply of fallacies about income and wealth are the following:
1. Except for the rich, the incomes of Americans have stagnated for years.
2. The American middle class is growing smaller.
3. Over the years, the poor have been getting poorer.
4. Corporate executives are overpaid, at the expense of both stockholders and consumers.
There are statistics which can be cited to support each of these propositions—and other statistics, or even the same statistics looked at differently—that can make these propositions collapse like a house of cards. Despite an abundance of statistical data collected by the Bureau of the Census, other government agencies, and a variety of private research enterprises, controversies rage on, even though the numbers themselves are seldom in dispute. It is the analyses—or the fallacies—that are at issue.
Some of the most misleading fallacies come from confusing the fate of statistical categories with the fate of flesh-and-blood human beings. Statistical data for households, income brackets, and other statistical categories can be very misleading because (1) there are often different numbers of people in each category and (2) individuals move from one category to another. Thus the statistical category “top one percent” of income recipients has received a growing share of the nation’s income in recent years—while the actual flesh-and-blood taxpayers who were in that category in 1996 actually saw their income go down by 2005. What makes it possible for both these apparently contradictory statements to be true is that more than half of the people who were in the top one percent at the beginning of the decade were no longer there at the end. As their incomes declined, they dropped out of the top one percent.
The same principle applies in the lower income brackets. The share of the national income going to the statistical category “lowest 20 percent” of taxpayers has been declining somewhat over the years but the actual flesh-and-blood human beings who were in the bottom 20 percent in 1996 had their incomes increase by an average of 91 percent by 2005. This nearly doubling of their incomes took more than half of them out of the bottom 20 percent category.2
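A minimal sketch in Python, using entirely hypothetical incomes for ten people, shows how both of these apparently contradictory statements can be true at once: the bracket's share of total income falls, yet everyone who started in the bracket gains.

```python
# Toy illustration with hypothetical numbers: the income share of the
# "bottom 20 percent" bracket can fall between two years even though
# every person who started in that bracket gained, because people
# move between brackets.

incomes_1996 = {"A": 10, "B": 12, "C": 30, "D": 40, "E": 50,
                "F": 60, "G": 80, "H": 100, "I": 150, "J": 300}
incomes_2005 = {"A": 25, "B": 28, "C": 12, "D": 10, "E": 55,
                "F": 70, "G": 90, "H": 120, "I": 200, "J": 500}

def bottom_quintile(incomes):
    """Return the people in the lowest fifth of that year's incomes."""
    ranked = sorted(incomes, key=incomes.get)
    return ranked[: len(ranked) // 5]

def share(incomes, people):
    """Fraction of total income going to the given people."""
    return sum(incomes[p] for p in people) / sum(incomes.values())

b96 = bottom_quintile(incomes_1996)        # ['A', 'B']
b05 = bottom_quintile(incomes_2005)        # ['D', 'C']

print(f"{share(incomes_1996, b96):.1%}")   # 2.6% of all income in 1996
print(f"{share(incomes_2005, b05):.1%}")   # 2.0% in 2005: the bracket "fell behind"

# ...yet the actual people who started in the bracket gained:
gains = [incomes_2005[p] / incomes_1996[p] - 1 for p in b96]
print(f"{sum(gains) / len(gains):.0%}")    # 142%: their incomes more than doubled
```

As their incomes rose, A and B left the bottom quintile; the people counted in that bracket in the second year are different people.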
INCOME STAGNATION
What might seem to be one of the easiest questions to answer—whether most Americans’ incomes have been growing or not—is in fact one of the most hotly disputed.
Household Income
It has often been claimed that there has been very little change in the average real income of American households over a period of decades. It is an undisputed fact that the average real income—that is, money income adjusted for inflation—of American households rose by only 6 percent over the entire period from 1969 to 1996. That might well be considered to qualify as stagnation. But it is an equally undisputed fact that the average real income per person in the United States rose by 51 percent over that very same period.3
How can both these statistics be true? Because the average number of individuals per household has been declining over the years. Half the households in the United States contained six or more people in 1900, compared to 21 percent in 1950. But, by 1998, only ten percent of American households had that many people.4
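A short worked example, using round numbers chosen purely for illustration rather than the actual Census figures, shows how a 6 percent rise in household income is arithmetically consistent with a 51 percent rise in income per person:

```latex
\[
\text{per capita income} = \frac{\text{average household income}}{\text{average persons per household}}
\]
\[
\frac{\$40{,}000}{3.2\ \text{persons}} = \$12{,}500
\quad\longrightarrow\quad
\frac{\$42{,}400}{2.25\ \text{persons}} \approx \$18{,}844
\]
```

Here household income rises only 6 percent, from $40,000 to $42,400, but because household size falls from 3.2 persons to 2.25, income per person rises by roughly 51 percent. Both numbers describe the same people.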
The average number of persons per household not only varies over time, it also varies from one racial or ethnic group to another at a given time, and varies from one income bracket to another. As of 2007, for example, black household income was lower than Hispanic household income, even though black per capita income was higher than Hispanic per capita income, because black households average fewer people than Hispanic households. Similarly, Asian American household income was higher than white household income, even though white per capita income was higher than Asian American per capita income, because Asian American households average more people.5
Income comparisons using household statistics are far less reliable indicators of standards of living than are individual income data because households vary in size while an individual always means one person. Studies of what people actually consume—that is, their standard of living—show substantial increases over the years, even among the poor,6 which is more in keeping with a 51 percent increase in real per capita income than with a 6 percent increase in real household income. But household income statistics present golden opportunities for fallacies to flourish, and those opportunities have been seized by many in the media, in politics, and in academia.
A Washington Post writer, for example, said, “the incomes of most American households have remained stubbornly flat over the past three decades,”7 suggesting that there had been little change in the standard of living. A New York Times writer likewise declared: “The incomes of most American households have failed to gain ground on inflation since 1973.”8 The head of a Washington think tank was quoted in the Christian Science Monitor as declaring: “The economy is growing without raising average living standards.”9 Harvard economist Benjamin M. Friedman said, “the median family’s income is falling after allowing for rising prices; only a relatively few at the top of the income scale have been enjoying any increase.”10
Sometimes such conclusions arise from statistical naivete but sometimes the inconsistency with which data are cited suggests a bias. Long-time New York Times columnist Tom Wicker, for example, used per capita income statistics when he depicted success for the Lyndon Johnson administration’s economic policies and family income statistics when he depicted failure for the policies of Ronald Reagan and George H. W. Bush.11 Families, like households, vary in size over time, from one group to another, and from one income bracket to another.12
A rising standard of living is itself one of the factors behind reduced household size over time. As far back as the 1960s, a Census Bureau study noted “the increased tendency, particularly among unrelated individuals, to maintain their own homes or apartments rather than live with relatives or move into existing households as roomers, lodgers, and so forth.”13 Increased real income per person enables more people to live in their own separate dwelling units, instead of with parents, roommates, or strangers in a rooming house. Yet a reduction in the number of people living under the same roof as a result of increased prosperity can lead to statistics that are often cited as proof of economic stagnation. In a low-income household, increased income may either cause that household’s income to rise above the poverty level or cause overcrowding to be relieved by having some members leave to form their own separate households—which in turn can lead to statistics showing two households living below the poverty level, where there was only one before. Such statistics are not inaccurate but the conclusion drawn can be fallacious.
Differences in household size are very substantial from one income level to another. U.S. Census data show 39 million people living in households whose incomes are in the bottom 20 percent of household incomes and 64 million people living in households in the top 20 percent.14 Under these circumstances, measuring income inequality or income rises and falls by households can lead to completely different results from measuring the same things with data on individuals. Comparing households of highly varying sizes can mean comparing apples and oranges. Not only do households differ greatly in the numbers of people per household at different income levels, the number of working people varies even more widely.
In the year 2000, the top 20 percent of households by income contained 19 million heads of households who worked, compared to fewer than 8 million heads of households who worked in the bottom 20 percent of households. These differences are even more extreme when comparing people who work full-time and year-round. There are nearly six times as many such people in the top 20 percent of households as in the bottom 20 percent.15 Even the top five percent of households by income had more heads of household who worked full-time for 50 or more weeks a year than did the bottom twenty percent. In absolute numbers, there were 3.9 million heads of household working full-time and year-round in the top 5 percent of households and only 3.3 million working full-time and year-round in the bottom 20 percent.16
There was a time when it was meaningful to speak of “the idle rich” and the “toiling poor” but that time is long past. Most households in the bottom 20 percent by income do not have any full-time, year-round worker and 56 percent of these households do not have anyone working even part-time.17 Some of these low-income households contain single mothers on welfare and their children. Some such households consist of retirees living on Social Security or others who are not working, or who are working sporadically or part-time, because of disabilities or for other reasons.
Household income data can therefore be very misleading, whether comparing income differences as of a given time or following changes in income over the years. For example, one study dividing the country into “five equal layers” by income reached dire conclusions about the degree of inequality between the top and bottom 20 percent of households.18 These equal percentages of households, however, were by no means equal percentages of people, since the poorest fifth of households contain 25 million fewer people than the fifth of households with the highest incomes. Increasing income inequality over time also becomes much less mysterious in an era of rising pay, since people who work less, or not at all, miss out on that rise. In addition to differences among income brackets in how many heads of household work, there are even larger differences in how many total members of households work. The top 20 percent of households have four times as many workers as the bottom 20 percent, and more than five times as many full-time, year-round workers.19
No doubt these differences in the number of paychecks per household have something to do with the differences in income, though such facts often get omitted from discussions of income “disparities” and “inequities” caused by “society.” The very possibility that inequality is not caused by society but by people who contribute less than others to the economy, and are correspondingly less rewarded, is seldom mentioned, much less examined. But not only do households in the bottom 20 percent contribute less work, they contribute far less in skills, as measured by education. While nearly 60 percent of Americans in the top 20 percent graduated from college, only 6 percent of those in the bottom 20 percent did so.20 Such glaring facts are often omitted from discussions which center on the presumed failings of “society” and resolutely ignore facts counter to that vision.
Most statistics on income inequality are very misleading in yet another way. These statistics almost invariably leave out money received as transfers from the government in various programs for low-income people which provide benefits of substantial value for which the recipients pay nothing. Since people in the bottom 20 percent of income recipients receive more than two-thirds of their income from transfer payments, leaving those cash payments out of the statistics greatly exaggerates their poverty—and leaving out in-kind transfers as well, such as subsidized housing, distorts their economic situation even more. In 2001, for example, cash and in-kind transfers together accounted for 77.8 percent of the economic resources of people in the bottom 20 percent.21 In other words, the alarming statistics on their incomes so often cited in the media and by politicians count only 22 percent of the actual economic resources at their disposal.
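The arithmetic is worth making explicit. If statistical income captures only about 22 percent of actual economic resources, then, as a rough sketch:

```latex
\[
\text{actual resources} \approx \frac{\text{measured income}}{0.222} \approx 4.5 \times \text{measured income}
\]
```

On that arithmetic, a household whose measured income is a hypothetical $8,000 would actually command roughly $36,000 in resources once cash and in-kind transfers are counted.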
Given such disparities between the economic reality and the alarming statistics, it is much easier to understand such apparent anomalies as the fact that Americans living below the official poverty level spend far more money than their incomes22—as their income is defined in statistical studies. As for stagnation, by 2001 most people defined as poor had possessions once considered part of a middle class lifestyle. Three-quarters of them had air-conditioning, which only a third of all Americans had in 1971. Ninety-seven percent had color television, which less than half of all Americans had in 1971. Seventy-three percent owned a microwave, which less than one percent of Americans owned in 1971, and 98 percent of “the poor” had either a videocassette recorder or a DVD player, which no one had in 1971. In addition, 72 percent of “the poor” owned a car or truck.23 Yet the rhetoric of the “haves” and the “have nots” continues, even in a society where it might be more accurate to refer to the “haves” and the “have lots.”
No doubt there are still some genuinely poor people who are genuinely hurting. But they bear little resemblance to most of the millions of people in the often-cited statistics on households in the bottom 20 percent. Much poverty is imported by immigrants who cross the southern border of the United States, legally or illegally, from Mexico. The poverty rate among foreign nationals in the United States is nearly double the national average.24 Homeless people, some disabled by drugs or mental problems, are another source of many people living in poverty. However, the image of “the working poor” who are “falling behind” as a result of society’s “inequities” bears little resemblance to the situation of most of the people earning the lowest 20 percent of income in the United States. Despite a New York Times columnist’s depiction of people who are “working hard and staying poor”25 in 2007, Census data from that same year showed the poverty rate among full-time, year-round workers to be 2.5 percent.26
Workers’ Incomes
Some people deny that American workers’ incomes have risen at all in recent times. Such claims require careful scrutiny of the statistics behind them. Here again, there are heated disputes over very basic facts that are readily documented. A Washington Post editorial, for example, said that in a quarter of a century, from 1980 to 2004, “the wages of the typical worker actually fell slightly.” Many others, writing in similarly prominent publications and in books, have repeated similar claims over the years. But economist Alan Reynolds, referring to those very same years, said “Real consumption per person increased 74 percent”—and others have likewise categorically rejected the claims that workers’ incomes have not risen. Such complete contrasts and contradictions have been common on this issue,27 with both sides citing official statistics.
Here, as elsewhere, we cannot simply accept blanket assertions that “statistics prove” one thing or another, without scrutinizing the definitions used and noting what things have been included and excluded when compiling numbers.
In the case of statistics claiming that workers’ incomes have not risen significantly—or at all—over the years, these data exclude the value of job benefits such as health insurance, retirement benefits and the like, which have been a growing share of employee compensation over the years.28 Moreover, statistics on “workers” lump together both full-time and part-time employees—and part-timers have been a growing proportion of all workers. Part-time workers receive lower weekly pay than full-time workers, both because they work fewer hours and because they are usually paid less per hour. While the real hourly earnings of production workers declined somewhat in the last two decades of the twentieth century, the real value of the total compensation package received by those workers continued to rise during that same period.29
In short, the weekly earnings of part-time workers drag down the statistical average of workers as a group, even though part-timers’ work adds to both national output and to their own families’ incomes. It is not that full-time workers are paid less than before, but that more part-time workers’ earnings are being averaged in with theirs statistically. Thus increased prosperity can be represented statistically as stagnating worker compensation because average weekly pay as of 2003 was very similar to what it was 30 years earlier. The difference is that average weekly hours have declined over that span of time, due to more part-time workers being included in the statistics, and because more of workers’ compensation is now being taken in the form of health insurance, retirement benefits and the like. Even so, the money income of full-time wage and salary workers increased between 1980 and 2004 and so did real income—either by 13 percent or 17 percent, depending on which price index is used.30 Counting health and retirement benefits, worker compensation rose by nearly a third between 1980 and 2004, even though this still excludes “the statistically invisible returns inside IRA and 401(k) plans.”31
The way real income is computed tends to understate its growth over time. Since real income is simply money income divided by some price index to take account of inflation, everything depends on the accuracy and validity of such indexes. The construction and use of these indexes is by no means an exact science. Many leading economists regard the consumer price index, for example, as inherently—even if unintentionally—exaggerating inflation. To the extent that the price index over-estimates inflation, it under-estimates real income.
The inflationary bias of the consumer price index results from the fact that it counts the prices of a given collection of goods over time, while those goods are themselves changing over time. For example, the price of automobiles is increasing but so are the features of these automobiles, with today’s cars routinely including air conditioning, stereos, and many other features that were once confined to luxury vehicles. Therefore not all the rise in the price of automobiles is simply inflation. If Chevrolets today contain many features once confined to Cadillacs, the rise in the price of Chevrolets over the years to become similar to the price of Cadillacs in the past is not all inflation. When similar cars cost similar prices, that is not inflation just because the similar cars had different names in different eras.
Another inflationary bias of the consumer price index is that it counts only those things that most people are likely to buy. Reasonable as that might seem, what people will buy obviously depends on the price. New products that are very expensive therefore do not get included in the index until their prices come down to a level where most people can afford them, as typically happens over time. Things like laptop computers and videocassette recorders that were once luxuries of the rich have thus become readily affordable to vastly larger numbers of people. What this means statistically is that price increases and price decreases over time are not equally reflected in the consumer price index.
How much difference does this make in estimating real incomes over time? If a price index estimates 3 percent inflation and statistics on money income are reduced accordingly to get real income, while a more realistic estimate is 2 percent, that one percentage point difference can have very serious effects on the resulting statistics on real income. The cumulative effect of a difference of one percentage point per year, over a period of 25 years, has been estimated to statistically understate the real annual income of an average American by nearly $9,000 at the end of a quarter century.32 That is yet another contribution to the fallacy of stagnating real incomes, even when those incomes are rising.
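A small Python sketch, using the one-percentage-point gap just described and an otherwise hypothetical income figure, shows how such a gap compounds over a quarter century:

```python
# If the price index reports 3 percent inflation while true inflation
# is 2 percent, measured real income is deflated too heavily each year,
# and the understatement compounds.
measured_inflation = 1.03
true_inflation = 1.02
years = 25

understatement = (measured_inflation / true_inflation) ** years
print(f"{understatement:.3f}")    # ~1.276: real income understated by ~28%

# On a hypothetical measured real income of $33,000, the gap is roughly
# the $9,000 figure cited above.
measured_real_income = 33_000
print(round(measured_real_income * (understatement - 1)))   # ~9,100
```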
One of the perennial fallacies is that the jobs being lost in the American economy—whether to foreign competition or to technological change—are high-wage jobs and the new jobs being created are low-wage jobs, flipping hamburgers being a frequent example. But seven out of ten new jobs created between 1993 and 1996 paid wages above the national average.33 Economist Alan Reynolds used consumption data as the most realistic indicator of living standards—and found that consumption in real terms had increased by 74 percent over the period during which workers’ pay had supposedly stagnated.34
There are other, more technical, fallacies involved in generating statistics that are widely cited to support claims that workers’ pay has stagnated.35 But we have already seen enough to get a general idea of what is wrong with those statistics. Why so many people have been so eager to accept and repeat the dire conclusions reached is another question that goes beyond the realm of economics.
INCOME INEQUALITY
Ultimately, we are concerned with people rather than statistical categories, and especially with people’s standard of living. Since the affluent and the wealthy can take care of themselves, people of modest or low incomes are a special focus. Obvious as all this might seem, much ingenuity has gone into concocting statistical alarms having little or nothing to do with the standard of living of actual flesh-and-blood human beings.
One widely quoted study, for example, used income tax data to show dramatically growing income inequality among “tax units,” leaving the impression that there was a similarly sharp increase in income inequality among human beings. Some tax units coincide with individuals, some coincide with married couples, and some coincide with neither, because some of these tax units are businesses. Comparisons among such heterogeneous categories are comparisons of apples and oranges. In some media translations of these studies, these tax units are often referred to loosely as “families.”36 But a couple living together and filing separate income tax returns are not two families, and to record their incomes as family incomes means artificially creating two statistical “families” averaging half the income of the real family.
Tax laws changed significantly during the period when this dramatic increase in statistical inequality occurred, so that some income that had previously been taxed as business income was now being taxed as personal income, particularly at the highest income levels, where business income is an especially large share of total income. In other words, money that would previously not have been counted as personal income among the higher-income tax units was now counted, creating the statistical impression that there was a dramatic change in real income among real people, when in fact there was a change in the definitions used when compiling statistics. This study mentioned such crucial caveats in a footnote, but that footnote was seldom, if ever, quoted in the many alarming media accounts.37
Just as income statistics greatly under-estimate the economic resources available to people in the lower income brackets, so statistics that ignore steeply progressive income taxes substantially over-estimate the actual economic resources at the disposal of people in the upper income brackets. Most income statistics count income before taxes and leave out both cash transfers and in-kind transfers from the government. Since most of the taxes are paid by people earning above-average incomes and most of the income of people in the lowest income bracket comes from government transfers, income statistics exaggerate the differences in actual standards of living. Disparities between A and B will always be greater if you exaggerate what A has and understate what B has. Yet that simple fallacy underlies much of the political, media, and even academic alarm over income “disparities” and “inequities.”
Concern over poverty is often confused with concern over differences in income, as if the wealth of the wealthy derived from the poverty of the poor. But this is just one of the many forms of the zero-sum fallacy. Since the United States contains several times as many billionaires as any other country, ordinary Americans would be among the most poverty-stricken people in the world if the wealth of the wealthy in fact derived from the poverty of the poor. Conversely, billionaires are much rarer in the most poverty-stricken parts of the world, such as sub-Saharan Africa. Some people have tried to salvage the zero-sum view by claiming that wealthy people in wealthy countries exploit poor people in poor countries. That fallacy will be examined in the discussion of Third World countries in Chapter 7. But, first, poverty and inequality require separate analysis and careful definitions.
“The Rich” and “The Poor”
Even such widely used terms as “the rich” and “the poor” are seldom defined and are often used in inconsistent ways. By “the rich,” for example, we usually mean people with large accumulations of wealth. However, most statistics used in discussions of “the rich” are not about accumulations of wealth, but are about the current flow of income during a given year. Similarly, “the poor” are usually defined in terms of current income, rather than in terms of how much wealth they have or have not accumulated. Income and wealth are not only different in concept, they are very different in terms of who has how much of each. Among the people with low incomes who are not poor are the following:
1. Wives of affluent or rich men and husbands of affluent or rich women
2. Affluent or wealthy speculators, investors, and business owners whose enterprises are having an off year, and who may even be losing money in a given year
3. People who graduate in the middle of the year from high schools, colleges, or postgraduate institutions, and who therefore earn only one-half or less of what they will be earning the following year
4. Doctors, dentists, and other independent professionals who are just beginning their careers, and who have not yet built up a sufficient clientele to pay office and other expenses with enough left over to create an income at all comparable to what they will be making in a few years
5. Young adults still living in the homes of affluent or wealthy parents, rent-free, or living elsewhere at their parents’ expense, while they explore their possibilities, work sporadically or in low-paid entry-level jobs, or as volunteers in philanthropic or political enterprises
6. Retirees who have no rent to pay or mortgage payments to make because they own their own homes, and who have larger assets in general than younger people have, even if the retirees’ current income is low.
None of these is what most people have in mind when they speak of “the poor.” But statistics do not distinguish between people whose current incomes are low and people who are genuinely poor in the sense that they are an enduring class of people whose standards of living will remain low for many years, or even for life, because they lack either the income or the wealth to live any better. Similarly, most of the people whose current incomes are in the top 10 or 20 percent are not rich in the sense of being people who have been in top income and wealth brackets most of their lives. Most income statistics present a snapshot picture as of a given moment—and their results are radically different from those statistics which follow the same given individuals over a period of years.
For example, three-quarters of those Americans whose incomes were in the bottom 20 percent in 1975 were also in the top 40 percent at some point during the next 16 years.38 In other words, a large majority of those people who would be considered poor on the basis of current incomes as of a given year later rise into the top half of the income recipients in the country. Nor is this pattern peculiar to the United States. A study in Britain followed thousands of individuals for six years and found that, at the end of that period, nearly two-thirds of those individuals whose incomes were initially in the bottom 10 percent had risen out of that bracket. Other studies showed that one-half of the people in Greece and two-thirds of the people in Holland who were below the poverty line in a given year had risen above that line within two years. Studies in Canada and New Zealand showed similar results.39
More recent data on Americans, based on income-tax returns, show similar patterns, even more dramatically and in greater detail. Among people 25 years old and older who filed income tax returns in 1996 and who were initially in the bottom 20 percent, incomes had risen by 91 percent by 2005. Meanwhile, people of the same description whose incomes were in the top one percent in 1996 had a drop in income of 26 percent by 2005.40 In short, the picture of the rich getting richer and the poor getting poorer that is repeated endlessly in the media and in politics is directly the opposite of what the income tax data show. Yet both pictures are based on official statistics whose accuracy is not in dispute.
The difference is that one set of statistics—such as those from the Bureau of the Census—compares changes in the income received in particular income brackets over the years, while other statistics, such as income tax data from the Treasury Department, compare income changes among given individuals over the years. The crucial difference is due to individuals moving from one income bracket to another over time. More than half the people tracked by the Internal Revenue Service data moved to a different quintile between 1996 and 2005.41 When people in the bottom 20 percent of income-tax filers nearly doubled their incomes in a decade, many were no longer in the bottom income bracket. Similarly, when people in the top income bracket in a given year had their income decline by about one-fourth during the same decade, many of them dropped out of the top bracket.
It might be thought that surely those in the top one percent of income recipients, and especially those in the top one-hundredth of one percent, are an enduring class of the truly rich. In reality, however, Treasury Department data based on income tax returns show that more than half the people who were in the top one percent in 1996 were no longer there in 2005 and that three-quarters of the taxpayers who were in the top one-hundredth of one percent in 1996 were no longer there by 2005.42 A spike in income for any of a number of reasons can put someone in that rarefied income stratum in a given year—for example, the sale of a home, receiving an inheritance, cashing in stocks or bonds accumulated over a period of years, or hitting the jackpot in Las Vegas. But such things are no guarantee of a continuing income at that level. There are genuinely rich people, just as there are genuinely poor people, but “snapshot” statistics on income brackets can be grossly misleading as to how many such people there are. Comparing what happens to statistical categories over time—in this case, income brackets—is not the same as comparing what happens to flesh-and-blood individuals over time, when those individuals are moving from one category to another.
Ironically, sports statistics are dealt with more carefully than statistics on more weighty things such as income and wealth. Not only are data on the same individuals over time more common in sports statistics than in statistics on income, in sports there is less confusion between abstract categories and flesh-and-blood human beings. No one imagines that the San Francisco 49ers football team of today is the same as the 49ers of ten years ago, though it is common to act as if the top one percent of income recipients are the same people over the years, so that one can speak of how “the rich” are getting a higher proportion of the national income, even when the flesh-and-blood human beings who initially constituted the top one percent of income recipients—“the rich”—in 1996 actually had a substantial decline in their income over the decade. Too often statistics about abstract statistical categories, such as income brackets, are used to reach conclusions and make public policy about flesh-and-blood human beings.
Given the transience of individuals in low income brackets, it becomes easier to understand such anomalies as hundreds of thousands of families with annual incomes below $20,000 living in homes worth $300,000 or more.43 In addition to such exceptional people, the average person in the lowest fifth in income spends about twice as much money as his or her annual income.44 Clearly there must be some supplementary source of purchasing power—whether savings from previous and more prosperous years, credit based on past income and future prospects, unreported illegal income, or money supplied by a spouse, parents, the government, or other benefactors.
Despite many depictions of the elderly as people struggling to get by, households headed by people aged 70 to 74 have the highest average wealth of any age bracket in American society. While the average income of households headed by someone 65 years old or older is less than half that of households headed by someone 35 to 44 years old, the average wealth of these older households is nearly three times the wealth of households headed by people in the 35 to 44 year old bracket—and more than 15 times the wealth of households headed by people under 35 years of age.45 Of the income of people 65 and older, only 24 percent comes from earnings, while 57 percent comes from Social Security or other pensions.46 This means that “income distribution” statistics based on earnings grossly understate the incomes of the elderly, which are four times as high as their earnings.
This does not even count the money available to elderly homeowners by tapping the equity in their homes with “reverse mortgages.” The money received by borrowing against the equity in their homes is not counted as income, since these are loans to be repaid posthumously by their estates. But the economic reality is that money available by transferring home equity into a current flow of dollars serves the same purposes as income, even if it is not counted in income statistics.
Despite media and political depictions of the elderly as mired in poverty and having to eat dog food in order to afford medicine, in 2007 the poverty rate among persons 65 years old and older was below the national average. Moreover, fewer than two percent of them were without health insurance.47 The elderly have lower than average incomes, since many are retired, but they are far from poor otherwise. Eighty percent of people 65 and older are either homeowners or home buyers. Among these, the median monthly housing cost in 2001 was just $339. That includes property taxes, utilities, maintenance costs, condominium and association costs for people with such living arrangements, and mortgage payments for those who do not own their homes outright. Eighty-five percent of their homes have air-conditioning.48 Not only are housing costs lower in these age brackets, retirees of course do not have the daily transportation and other costs of going to and from work.
The elderly tend to have higher medical costs but the net cost to them depends on the nature of their medical insurance coverage, including Medicare. Whatever their net costs of living, their economic situation compared to younger groups cannot be determined simply by comparing their average earnings, or even average incomes.
If “the poor” are ill-defined by statistics on current income, so are “the rich.” Seldom is any specific amount of money—whether as wealth or even income—used to define who is rich. Most often, some percentage level—the top 10 or 20 percent, for example—is used to label people as rich. Moreover, laws to raise taxes on “the rich” are almost invariably laws to raise the taxes on particular income brackets, without touching accumulations of wealth. But the incomes of those who are declared to be rich by politicians or in the media are usually far below what most people would consider rich.
For example, as of 2001 a household income of $84,000 was enough to put those who earned it in the top 20 percent of Americans. A couple making $42,000 each is hardly what most people would consider rich. Even to make the top 5 percent required a household income of just over $150,000—that is, about $75,000 apiece for a working couple.49 As for individuals, to reach the top ten percent in individual income required an income of $87,300 in 2004.50 These are comfortable incomes but hardly the kinds of incomes that would enable people to live in Beverly Hills or to own a yacht or a private plane.
The different ages of people in different income brackets—with the highest average incomes being among people 45 to 54 years old—strongly suggest that most of the people in upper income brackets have reached that level only after having risen from lower income levels over the course of their careers. In other words, they are no more of a lifetime class than are “the poor.” Despite heady rhetoric about economic disparities between classes, most of those economic differences reflect the mundane fact that most people start out in lower-paid, entry-level jobs and then earn more as they acquire more skills and experience over the years. They are transients in particular income brackets, rather than an enduring class of either rich or poor. The same individual can be in statistical categories with each of these labels at different times of life.
There are various ways of measuring income inequality but a more fundamental distinction is between inequality at a given time—however that might be measured—and inequality over a lifetime, which is what is implied in discussions of “classes” of “the rich” and “the poor” or the “haves” and “have-nots.” Given the widespread movement of individuals from one income level to another in the course of a lifetime, it is hardly surprising that lifetime inequality is less than inequality as measured at any given time.51 Medical interns, after all, are well aware that they are on their way to becoming doctors, and people in other entry-level jobs likewise do not expect to stay at that level for life. Yet measurements of income inequality as of a given time are what dominate discussions of income “disparities” or “inequities” in the media, in politics, and in academia. Moreover, a succession of such measurements of inequality in the population as a whole over a period of years still misses the progression of individuals to higher income brackets over time.
To say that the bottom 20 percent of households are “falling further behind” those in the upper income brackets—as is often said in the media, in politics, and among the intelligentsia—is not to say that any given flesh-and-blood individuals are falling further behind, since most of the people in the bottom 20 percent move ahead over time into higher income brackets. Moreover, even when an abstract statistical category is falling behind other abstract statistical categories, that does not necessarily represent a declining real per capita income, even among those people transiently within that category. The fact that the share of the bottom 20 percent of households declined from 4 percent of all income in 1985 to 3.5 percent in 2001 did not prevent the real income of the households in these brackets from rising—quite aside from the movement of actual people out of the bottom 20 percent between the two years.52
Even when discussions of “the rich” are in fact discussions of people who have large accumulations of wealth—as distinguished from high levels of current income—much of what is said or assumed is incorrect. In the United States, at least, most of the people who are wealthy did not inherit that wealth as part of a wealthy class. When Forbes magazine’s annual list of the 400 richest people first appeared in 1982, people with inherited wealth were 21 percent of that 400—which is to say, nearly four-fifths of these rich people earned the money themselves. By 2006, fewer than 2 percent of the 400 wealthiest people on the Forbes magazine list were there because of inherited wealth. Despite the old saying that “the rich get richer and the poor get poorer,” the number of billionaires in the world declined from more than a thousand to fewer than eight hundred in 2008, while the number of American millionaires fell from 9.2 million to 6.7 million.53
The “Vanishing” Middle Class
One of the perennial alarms based on income statistics is that the American middle class is declining in size, presumably leaving only the small group of the rich and the masses of the poor. But what has in fact been happening to the middle class?
One of the simplest statistical illusions has been created by defining the middle class by some fixed interval of income—such as between $40,000 and $60,000—and then counting how many people are in that interval over the years. If the interval chosen is in the middle of a statistical distribution of incomes, that may be a valid definition so long as the midpoint in that distribution of incomes does not change. But, as already noted, American incomes have been rising over the years, despite strenuous statistical efforts to make incomes seem to be stagnating. As the statistical distribution of incomes shifts to the right over the years (see the graphs on the next page), the number of people in the income range originally in the center of that distribution declines. In other words, the number of middle class people declines when there is a fixed definition of “middle class” in a country with rising levels of income.
The simple situation illustrated in these two graphs—a general rise in incomes—has generated large and recurring waves of journalistic and political rhetoric deploring an ominous shrinking of the middle class, implicitly defined as a reduction in the numbers of people between the income levels represented by perpendicular lines a and b on these graphs.
Let the top graph illustrate the initial distribution of income, with incomes between line a and line b being defined as “middle class” incomes, and let the second graph illustrate an increase in median income.
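A minimal simulation in Python, with hypothetical figures standing in for the two graphs, makes the point concrete: raise everyone's income and the count of people between the fixed lines a and b falls.

```python
# Toy sketch of the "vanishing middle class" illusion: every income
# rises 30 percent, yet the number of people between two fixed lines
# a and b shrinks. All numbers are hypothetical.
import random

random.seed(0)
a, b = 40_000, 60_000                         # fixed "middle class" band

initial = [random.gauss(50_000, 15_000) for _ in range(100_000)]
later = [income * 1.3 for income in initial]  # everyone gains 30 percent

def middle_class_count(incomes):
    """People whose incomes fall between the fixed lines a and b."""
    return sum(a <= income <= b for income in incomes)

print(middle_class_count(initial))  # roughly 49,000 between a and b
print(middle_class_count(later))    # roughly 30,000, though no one is poorer
```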
The fact that there are now fewer people within the fixed income brackets between a and b that previously defined the middle class does not mean that the middle class is disappearing when the median income increases. Despite the simplicity of this fallacy, people who should know better (and perhaps do know better) have been depicting this reduction in the number of people within fixed income brackets as something dire. Economist Paul Krugman, for example, has said:
By almost any measure the middle class is smaller now than it was in 1973. . . . There is now a pervasive sense that the American Dream has gone astray, that children can expect to live worse than their parents.54
The insinuation is that the statistical distribution of incomes has shifted to the left, when in fact all the evidence shows that it has shifted to the right. Yet Professor Krugman was by no means alone in his depiction of a shrinking middle class. The same theme has been echoed over the years in such prominent publications as the New York Times, the Washington Post, and The Atlantic magazine.55
One of the complications of making income comparisons over time—especially time as long as a generation—is that inflation can move people into higher income brackets without their actual purchasing power or standard of living rising. To avoid that problem, real income (money income adjusted for inflation) can be compared. Using real income data, there is no question that the income distribution has shifted to the right. That is, real income has increased, not just money income. As of 2007, for example, just over half (50.3 percent) of all American households had incomes of $50,000 and up. Back in 1967, on the other hand, just over one-third (33.7 percent) had incomes with that same purchasing power. Moreover, in 1967 most of that third (20.6 percent of all households) had incomes equivalent to between $50,000 and $74,999 in 2007 purchasing power. By 2007, however, more people were concentrated at the top of this range rather than at the bottom.
In 2007, those whose incomes were $100,000 and over were more numerous than those in the $50,000 to $74,999 bracket. In fact the whole income distribution moved to the right—low-income households as well as high-income households. In 1967, 18.3 percent of the households had money income equivalent to less than $15,000 in 2007 purchasing power. But the proportion of the households with incomes that low had shrunk to 13.2 percent by 2007.56 The claim that the rich were getting richer and the poor were getting poorer was simply false. A rising tide had lifted all boats.
Executives’ Pay
The high pay of corporate executives in general, and of chief executive officers in particular, has attracted much popular, media, and political attention—much more so than the similar or higher pay of professional athletes, movie stars, media celebrities, and others in very high income brackets. The median pay of chief executive officers of corporations important enough to be listed in the Standard and Poor’s index in 2006 was $8.3 million a year. While that is obviously many times more than most people make, it is exceeded by the income of women’s golf star Michelle Wie ($12 million), tennis star Maria Sharapova ($26 million), baseball star Alex Rodriguez ($34 million), basketball star Kobe Bryant ($39 million) and golfing great Tiger Woods ($115 million).57 Even the highest paid corporate CEO, earning $71.7 million a year,58 made less than a third of what Oprah Winfrey makes.
Yet it is rare—almost unheard of—to hear criticisms of the incomes of sports, movie, or media stars, much less heated denunciations of them for “greed.” While “greed” is one of the most popular—and most fallacious—explanations of the very high salaries of corporate executives, when your salary depends on what other people are willing to pay you, you can be the greediest person on earth and that will not raise your pay in the slightest. Any serious explanation of corporate executives’ salaries must be based on the reasons for those salaries being offered, not the reasons why the recipients desire them. Anybody can desire anything but that will not cause others to meet those desires. Why then do corporations go so high in their bidding for top executive talent? Supply and demand is probably the quickest short answer—and any fuller answer would probably require the kind of highly specific knowledge and experience possessed only by those corporate officials who make the decisions as to whom to hire and how much pay to offer. Given the billions of dollars at stake in corporate decisions, $8.3 million a year can be a bargain for someone who can reduce mistakes by 10 percent and perhaps save the corporation $100 million.
Some have argued that corporate boards of directors have been overly generous with the stockholders’ money and that this explains the high pay of corporate CEOs. To substantiate this as a general explanation would require more than a few specific examples. The theory could be tested by comparing the pay of CEOs in corporations owned by a large number of stockholders, most of whom are in no position to keep abreast of—much less evaluate—decisions made within these corporations, versus the pay of CEOs of corporations owned and controlled by a few huge financial institutions with both expertise and experience, and spending their own money.
It is precisely these latter corporations which offer the highest pay of all for chief executive officers.59 These giant financial institutions do not have to justify their decisions to public opinion but can base these decisions on far greater specific knowledge and professional experience than that of the public, the media, or politicians. They are the least likely to pay more than they have to—or to be penny-wise and pound-foolish when choosing someone to run a business where billions of dollars of the institutional investors’ own money are at stake. While various activists have urged a larger voice for stockholders in determining the pay of CEOs in publicly held corporations, significantly the mutual funds that invest in such corporations have opposed this,60 just as major financial institutions that invest in privately held corporations are less concerned with corporate executives’ pay than with getting executives who can safeguard their investments and make them profitable.
Although many outsiders have expressed incredulity and non-comprehension at the vast sums of money paid to various people in the corporate world, there is no reason why such observers should be expected to comprehend why A pays B any given sum of money for services rendered. Those services are not rendered to third party observers, most of whom have neither the expertise nor the specific experience required to put a value on such services. Still less is there any reason why they should have a veto over the decisions of those who do have the expertise and experience to assess the value of the services rendered. For example, the director of the company that publishes the Washington Post assessed the recommendations of one member of his board of directors this way: “Mr. Buffett’s recommendations to management have been worth—no question—billions.”61
It is very doubtful whether Mr. Buffett’s compensation from the Washington Post Company alone runs into billions of dollars but it may well run into enough millions to cause third party onlookers to express their incredulity and perhaps moral outrage. The source of moral outrage over corporate compensation is by no means obvious. If it is based on a belief that individuals are overpaid for their contribution to the corporation, then there should be even more outrage toward people who receive hundreds of millions of dollars for doing nothing at all, when they simply inherit fortunes. Yet inheritors of fortunes are seldom resented, much less denounced, the way corporate CEOs are. Three heirs to the Rockefeller fortune, for example, have been elected as popular governors of three states.
Two things seem especially to anger critics of high corporate executive salaries: (1) the belief that their high compensation comes at the expense of consumers, stockholders, and/or employees and (2) the multimillion dollar severance pay package often given to executives who have clearly failed. But, like anybody who is hired anywhere, whether in a high or low position, a corporate CEO is hired precisely because the benefits that the CEO is expected to confer on the employer exceed what the employer offers to pay. If, for example, an $8.3 million a year CEO saves the corporation $100 million as expected, then the stockholders have lost nothing and are in fact better off by more than $90 million. Neither have the consumers nor the employees lost anything. Like most economic transactions, the hiring of a corporate CEO is not a zero-sum transaction. It is intended to make both parties better off.
It would be immediately obvious why the zero-sum view is wrong if someone suggested that money paid to George C. Scott for playing the title role in the movie Patton was a loss to stockholders, moviegoers, or to lower-level employees who performed routine tasks during the making of the movie. Only if we believe that Patton would have made just as much money without George C. Scott can his pay be regarded as a deduction from the money otherwise available to stockholders, moviegoers, and other people employed making the movie. Much has been made of the fact that corporate executives make many times the pay of ordinary workers under them—the number varying according to who is making the claim—but no one would bother to figure out how many times larger George C. Scott’s pay was than that of movie extras or people who handled lights or carried film during the production of Patton.
The most puzzling and most galling aspect of corporate executives’ compensation, for many people, is the multimillion dollar severance payment—the “golden parachute”—paid to CEOs who are clearly being gotten rid of because they failed. For example, in 2007 the chief executive officer of Merrill Lynch received a “retirement” package of “in excess of $160 million,” according to the Wall Street Journal, which called it “an obscenely rich reward for failure,” since Merrill Lynch lost $7.9 billion in mortgage-related transactions under his leadership.62
Since human beings are going to make mistakes, whether hiring an entry-level, unskilled employee or a corporate CEO, the question is: What options are available when it becomes clear that the CEO is a failure and a liability? Speed may be the most important consideration when someone is making decisions which may be losing millions—or even billions—of dollars. Getting that CEO out the door as soon as possible, without either internal battles within the corporation or lawsuits in the courts, may be well worth many millions of dollars. Merrill Lynch’s $160 million “retirement” package for its departing CEO may have been a bargain to prevent losses of another $7.9 billion.
This is not a unique situation, even if the sums of money involved are larger in a multibillion dollar corporation than in other situations that people are more familiar with. Aging university professors who have not kept up with recent developments in their fields may be offered a lucrative early retirement package, in order to replace such professors with people who have mastered the latest advances. Similarly, many a married person has paid very substantial sums of money to get a divorce—perhaps larger in proportion to income than what a corporation pays to bring a bad relationship to an end.
In this and other situations, putting an end to a relationship may be just as valuable as the beginning of that relationship once seemed, or even more so. As with the original hiring decision, neither stockholders nor consumers nor other employees are worse off for the payment of a large severance package, if that cuts losses that would be even bigger if the failed CEO stayed on. Nor need the original hiring decision have been mistaken when it was made. Times change and individuals change over the years, so that a CEO who was perfect for the circumstances that existed at the time of hiring may be out of touch with very different conditions that evolve in later years.
When Sewell Avery was head of U.S. Gypsum from 1905 to 1931 and then head of the Montgomery Ward retail store chain after 1931, he was regarded as one of the premier business leaders in the country. However, during his later years, when conditions in retailing became quite different,63 there were complaints about his leadership of Montgomery Ward, and bitter internal struggles to try to get rid of him. When he finally left, the value of Montgomery Ward stock shot up immediately. It might well have been a bargain for the stockholders, the customers, and the employees to have paid Avery enough to get him to leave earlier, since a badly run company hurts all of these people.
Third party observers may find it galling that some people seem to be rewarded handsomely for failing. But third parties are neither paying their own money nor in a position to know how much it is worth to be rid of someone. When an individual pays dearly to divorce a spouse who is impossible to live with, that too might be seen as rewarding failure. But does any third party presume to say that the decision to divorce was wrong, much less feel entitled to be morally outraged, or to call on government to stop such things?
Social Mobility
We have already noted one kind of economic and social mobility, the movement of people out of the lowest income brackets in the course of their own working lifetimes. A major study at the University of Michigan has followed the same individuals—tens of thousands of them—over a period of decades. Among individuals who were actively in the labor force, only 5 percent of those who were in the bottom 20 percent in income in 1975 were still there in 1991, while 29 percent of those in the bottom quintile in 1975 had risen to the top quintile by 1991.[64] More than half of those in the bottom quintile in 1975 had been in the top quintile at some point during these years.[65] However, as we have also seen, not everyone is working, especially in the lowest income brackets. The rises of those who are working indicate what opportunities there are. How many people take advantage of those opportunities is another question.
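The difference between following income brackets and following people can be made concrete with a small simulation. The sketch below is purely illustrative: all names and parameters are invented, not taken from the Michigan study, and the point is only the mechanism, namely that when individual incomes grow at different rates over a period of years, heavy turnover in the bottom bracket is the normal result.

```python
import random

# Illustrative only: invented parameters, not data from the Michigan study.
random.seed(42)
N = 10_000

# Incomes at the start of the period (a right-skewed distribution,
# as real income distributions are).
start = [random.lognormvariate(10.0, 0.6) for _ in range(N)]

# Incomes sixteen years later: each person's income grows by a factor
# that varies from person to person (experience, promotions, luck).
end = [income * random.uniform(1.0, 4.0) for income in start]

def quintiles(values):
    """Assign each value its quintile (0 = bottom, 4 = top) within values."""
    order = sorted(range(len(values)), key=values.__getitem__)
    q = [0] * len(values)
    for rank, idx in enumerate(order):
        q[idx] = rank * 5 // len(values)
    return q

q_start, q_end = quintiles(start), quintiles(end)
bottom = [i for i in range(N) if q_start[i] == 0]

still_bottom = sum(q_end[i] == 0 for i in bottom)
reached_top = sum(q_end[i] == 4 for i in bottom)

print(f"Of {len(bottom)} people starting in the bottom quintile:")
print(f"  still there at the end: {100 * still_bottom / len(bottom):.0f}%")
print(f"  in the top quintile:    {100 * reached_top / len(bottom):.0f}%")
```

Nothing here is meant to match the study's actual percentages; the sketch only shows why a snapshot of the bottom bracket and the fortunes of the particular people passing through that bracket can tell very different stories.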
There is another kind of socioeconomic mobility that many have written about—the extent to which people born in low-income families rise to higher income or occupational levels than those of their parents. Here a number of things get confused with one another, including the amount of opportunity available versus the amount of opportunity used. Much discussion of social mobility is based on the concept of “life chances”—the likelihood that someone born into given socioeconomic circumstances will grow up to achieve some given economic or occupational level. Sometimes causation is confused with blame, as when any attempt to point out factors in any social group which inhibit their progress is called “blaming the victim,” presumably the victim of “society.”
Many factors, however, involve no blame, and may be due neither to the individual nor to society, but to circumstances. For example, someone born deaf is unlikely to become a musician, even though Beethoven continued to write music after losing his hearing. Physical or mental handicaps beyond the individual’s control may reduce the likelihood of utilizing various opportunities that are otherwise available in a given society. Cultural values, inherited socially rather than biologically, may also reduce the statistical probability of advancing in income or occupations, even when the opportunity to do so is available—and no given individual chooses which culture to be born into. Even sophisticated statistical analyses of the probabilities of people from various groups achieving various income or occupational levels often equate low probabilities with high barriers created by others.
A child raised in a home where physical prowess is valued more than intellectual prowess is unlikely to have the same goals and priorities as a child raised in a home where the reverse is true. Some have seen such circumstances as examples of “barriers” and “privileges.” For example, a New York Times article said it is “harder to move up from one economic class to another” and that this was due to a new kind of privilege:
Merit has replaced the old system of inherited privilege, in which parents to the manner born handed down the manor to their children. But merit, it turns out, is at least partly class-based. Parents with money, education and connections cultivate in their children the habits that the meritocracy rewards. When their children then succeed, their success is seen as earned.[66]
In a similar vein, the head of the Russell Sage Foundation conceded that the “old system of hereditary barriers and clubby barriers has pretty much vanished” but regarded these barriers as now being replaced by “new ways of transmitting advantage.”[67]
Failure to make a distinction between external impediments to individual advancement and internal differences in individual orientation makes attempts to determine or measure empirically the opportunities available an exercise in futility or confusion. For example, when a study shows that “only” 32 percent of sons of fathers in the bottom quarter of income earners reached the top half of income earners by their early thirties,[68] that statistic tells us nothing about whether this was due to external barriers or internal orientations. Moreover, statistics from this widely reported study arbitrarily omit any upward mobility that occurs among men after their early thirties, all upward mobility by women, and any movement upward that does not get as far as the top half. What purpose this serves is open to speculation.
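One way to see why such a statistic is inconclusive: suppose, purely as a hypothetical toy model, that the chance of a bottom-quartile son reaching the top half is the 50 percent it would be under complete randomness, discounted both by an external “barrier” factor and by an internal “orientation” factor. Then very different mixtures of the two factors reproduce the same observed 32 percent. Everything in the sketch below, including the model itself and all the parameter values, is invented for illustration:

```python
# Hypothetical toy model: observed rate = 0.5 * (1 - barrier) * orientation,
# where 0.5 is the rate under complete randomness, `barrier` is the share of
# opportunity removed by external impediments, and `orientation` is the
# fraction of the remaining opportunity actually pursued. Numbers invented.

scenarios = {
    "all external barriers":    (0.36, 1.00),
    "all internal orientation": (0.00, 0.64),
    "an even mixture of both":  (0.20, 0.80),
}

for label, (barrier, orientation) in scenarios.items():
    rate = 0.5 * (1 - barrier) * orientation
    print(f"{label:26} -> predicted rate: {rate:.2f}")

# Every scenario prints 0.32: the observed statistic is consistent with
# all of them, so it cannot by itself identify the cause.
```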
To the extent that blaming “society” is more or less the default setting for explaining differences in social mobility among income classes, ethnic groups, or other segments of society, this itself shifts attention away from internal factors which inhibit many individuals from using opportunities that are available. By reducing awareness of such internal impediments to advancement, this approach reduces the chances of changing those impediments—and thereby reduces the very chances for lower income people to advance that these studies claim to be concerned about.
SUMMARY AND CONCLUSIONS
Some very plain and straightforward facts about income and wealth have been obscured by fallacies based on vague and inconsistent words, garnished with misleading statistics. There is, after all, nothing very mysterious about the fact that inexperienced young people, beginning their working careers, are unlikely to be paid as much as older, more experienced and more skilled people with proven track records. Nor is there anything very hard to understand about the fact that households in which fewer people are working at all are unlikely to receive as much money as households in which people who work full-time and year-round are the norm. Nor should it be surprising that some people are paid millions of dollars when their decisions can affect a corporation’s profit-and-loss statement by billions of dollars.
A hasty leap from statistical categories to economic realities underlies many fallacies about income and wealth. When more than two-thirds of the economic resources available to people in the bottom 20 percent of income earners get left out of income statistics because they are transfers in cash or in kind from government, that is a serious discrepancy between statistics and reality. The same is true when three-quarters of the economic resources available to the elderly do not get counted in statistics on earnings. Nor are these random discrepancies. Almost invariably, such widely publicized statistics overstate poverty and understate standards of living. When income statistics leave out both taxes on people in upper income brackets and transfers to people in lower income brackets, they exaggerate inequalities as of a given time. When they fail to follow given individuals over time, they exaggerate lifetime inequality, as well as enabling observers to speak of people who are transiently in various income brackets as if they were enduring “classes.”
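The arithmetic behind that first discrepancy is worth spelling out. Taking “more than two-thirds” as a round two-thirds for illustration, and letting R stand for the total resources actually available to such a household:

```latex
\[
\text{measured income} = R - \tfrac{2}{3}R = \tfrac{1}{3}R
\quad\Longrightarrow\quad
R = 3 \times \text{measured income}
\]
```

That is, statistics of this kind understate the actual resources of people in the bottom bracket by a factor of three or more.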
To say that some people have less probability of achieving a given income or occupational level is too often automatically equated with saying that “society” puts barriers in their path. This precludes a priori the very possibility that there might be internal reasons for not doing as well economically as some other people. Moreover, this is not just a matter of an abstract judgment. To the extent that there may in fact be internal reasons for not achieving as much as others, directing attention away from those reasons has the practical effect of reducing the likelihood that those reasons will be addressed and the potential for advancement improved. In short, those who are lagging are offered a better public image instead of better prospects.
Claims by some that they cannot understand or justify large income differences (“disparities,” “inequities”) are another version of the presumption that third parties are the best judges—as if people’s incomes, like their housing arrangements, should be judged by the tableau they present to outsiders, rather than by how they reflect the choices and mutual accommodations of those directly involved. Such third-party presumptions are often based on an awareness of being part of a more educated group having, on average, more general knowledge than most other people—and an unawareness that the total knowledge of all the others vastly exceeds theirs, as well as including more of the specific knowledge relevant to the decisions at hand. No third parties can possibly know the values, preferences, priorities, potentialities, circumstances, and constraints of millions of individuals better than those individuals know themselves.
Sometimes the presumptions are moral, rather than intellectual. Third parties who take on the task of deciding who “really” deserves how much income often confuse merit with productivity, quite aside from the question whether they have the competence to judge either. In no society of human beings has everyone had the same probabilities of achieving the same level of productivity. People born into families with every advantage of wealth, education, and social position may be able to achieve a high level of productivity without any great struggle that would indicate individual merit. Conversely, people who have had to struggle to overcome many disadvantages, in order to achieve even a modest level of productivity, may show great individual merit. But an economy is not a moral seminar authorized to hand out badges of merit to deserving people. An economy is a mechanism for generating the material wealth on which the standard of living of millions of people depends.
Pay is not a retrospective reward for merit but a prospective incentive for contributing to production. Given the enormous range of things produced and the complex processes by which they are produced, it is virtually inconceivable that any given individual could be capable of assessing the relative value of the contributions of different people in different industries or sectors of the economy. Few even claim to be able to do that. Instead, they express their bafflement and repugnance at the wide range of income or wealth disparities they see and—implicitly or explicitly—their incredulity that individuals could differ so widely in what they deserve. This approach has a long pedigree. George Bernard Shaw, for example, said:
A division in which one woman gets a shilling and another three thousand shillings for an hour of work has no moral sense in it: it is just something that happens, and that ought not to happen. A child with an interesting face and pretty ways, and some talent for acting, may, by working for the films, earn a hundred times as much as its mother can earn by drudging at an ordinary trade.[69]
Here are encapsulated the crucial elements in most critiques of “income distribution” to this day. First, there is the implicit assumption that wealth is collective and hence must be divided up in order to be dispensed, followed by the assumption that this division currently has no principle involved but “just happens,” and finally the implicit assumption that the effort put forth by the recipient of income is a valid yardstick for gauging the value of what was produced and the appropriateness of the reward. In reality, most income is not distributed, so the fashionable metaphor of “income distribution” is misleading. Most income is earned by the production of goods and services, and how much that production is “really” worth is a question that need not be left for third parties to determine, since those who directly receive the benefits of that production know better than anyone else how much that production is worth to them—and have the most incentives to seek alternative ways of getting that production as inexpensively as possible.
In short, a collective decision for society as a whole is as unnecessary as it is impossible, not to mention presumptuous. It is not a question of rewarding input efforts or merits, but of securing output at values determined by those who use that output, rather than by third party onlookers. If the pleasure gained by millions of moviegoers from watching a child movie star is valued more highly than the product of the drudgery of that child’s mother is valued by the much smaller number of people who buy it, by what right is George Bernard Shaw or anyone else authorized to veto all these people’s choices of what to do with their own money?
Although one person’s income may be a hundred or a thousand times greater than another’s, it is of course very doubtful that one person is a hundred or a thousand times more intelligent or works a hundred or a thousand times as hard. But, again, input is not the measure of value. Results are.
The absence of Tiger Woods from various golf tournaments in the United States for several months, due to a knee operation in 2008, led to declines in television audiences ranging from 36 percent for the World Golf Championship to 55 percent for the PGA Championship.[70]
In a multibillion dollar corporation, one person’s business decisions can easily make a difference of millions—or even billions—of dollars, compared to someone else’s decisions. Those who see paying such a person $10 million or $20 million a year as coming at the expense of consumers or stockholders have implicitly accepted the zero-sum view of economics. If the value of the services rendered exceeds the pay, then both consumers and stockholders are better off, not worse off, whether the person hired is a corporate CEO or a production line employee.
Would anyone say that the pay of an airline pilot comes at the expense of passengers or of the airline’s stockholders, when both are better off as a result of the services rendered? Would anyone even imagine that one pilot is as good as another when it comes to flying a commercial jet airliner with hundreds of people on board, so that getting some crop-duster pilot at lower pay to fly the jet would make the stockholders and the passengers better off? Yet that is the kind of reasoning, or lack of reasoning, that is often applied to the pay of corporate CEOs—and to virtually no one else in any other field, including professional athletes and entertainers who earn similar or higher incomes. Perhaps the most fallacious assumption of all is that third parties with neither experience nor expertise can make better decisions, on the basis of their emotional reactions, than the decisions made by those who have both experience and expertise, as well as a stake in the results.
Despite the popularity of the phrase “income distribution,” most income is earned—not distributed. Even millionaires seldom simply inherited their fortunes.[71] Only a fraction of the income in American society is actually distributed, in such forms as Social Security checks or payments to welfare recipients. Most income is “distributed” only in the figurative statistical sense that the incomes of different people are in varying amounts that can be displayed in a curve on a graph, as in the previous discussion of middle class incomes. But much of the rhetoric surrounding variations in income proceeds as if “society” is collectively deciding how much to hand out to different individuals. From there it is a small step to arguing that, since “society” distributes income with given results today that many do not understand or like, there should be a simple change to distributing income in a different pattern that would be more desirable.
In reality, this would by no means be either a simple or innocuous change. On the contrary, it would mean going from an economic system in which most people are paid by those particular individuals who benefit from their goods and services—at rates of compensation determined by supply and demand involving those consumers, employers, and others who assess the benefits received by themselves—to an economy in which incomes are in fact distributed by “society,” represented by surrogate, third-party decision-makers who determine what everyone “deserves.” Those who think that such a profound change would produce better economic or social results can make the case for such a change. But making such a case explicitly is very different from gliding into a fundamentally different world through verbal sleight of hand about “income distribution.”