Most of the chapters in this book are chock-full of statistics and economically oriented policy analyses. This one is different. It addresses moral matters that are hard to quantify but nonetheless important. The basic question is whether government health care programs like Medicare, Medicaid, and the VHA have compelling moral justifications. We think not. To the contrary, they are subject to obvious moral complaints. Medicare is an intergenerational Ponzi scheme that moves dollars from younger people who are relatively poor to older people who are relatively rich, while making health care more dangerous and expensive for everyone and wasting one-third of the dollars it doles out. How can a moral case be made for a program like that? Medicaid helps the poor, but it does so inefficiently and is needlessly paternalistic. Poor people would fare better if the program were replaced with cash grants. The VHA suffers from similar deficiencies. Veterans should receive cash so they can buy medical services anywhere.
Americans love Robin Hood, the legendary English bandit who took from the rich and gave to the poor. And, like him, we are givers. The vast majority of us donate to charities every year. Total giving exceeds $200 billion a year, with most of the dollars coming from individuals and much of it being earmarked for people in need.1
The impulse to help others is one of the moral pillars of civilization. But it also makes Medicare seem odd, because Medicare is a reverse Robin Hood scheme. Medicare’s dominant tendency is to move dollars from the poor to the rich. Young people, including those who are raising families, saddled with educational debts, and struggling to make ends meet, are taxed to buy medical services for seniors, who tend to be much wealthier. That is the opposite of charity.2
The wealth disparity between young and old is large. In 2013, the median net worth of a family headed by a person ages 35–44 was $46,700. Families whose heads were ages 65–74 had a median worth of $232,100, almost five times as much.3 Young people are much more likely than the elderly to live in poverty too. In 2015, 20 percent of America’s children lived in poverty, as did 12 percent of adults ages 19–64. The comparable figure for Americans 65 years and up was only 9 percent.4
The elderly’s wealth and income advantages are easy to explain. Old people have had their entire working lives to accumulate wealth and build careers, and many of them have done so. They’ve paid off their mortgages and their cars, and many have built retirement savings or stock portfolios. Young people are still early in their working years. Consequently, they tend to have less wealth, lower earnings, and more debt.
That Medicare forces people who are relatively poor to support people who are relatively rich is abhorrent. Every moral theory we know of posits that wealth transfers should run in the opposite direction. It’s easy to see why. When poor young people, especially those with children, acquire houses, cars, education, medical services, and many other things, the welfare gains are enormous because the benefits are large and pay out over many years. But, when an equivalent number of dollars is spent on seniors, the utility gains are small because the improvements in the quality of life are frequently minor and life expectancy is short. This is especially true of intensive medical treatments delivered in the last months of life that cost tens of thousands of dollars.
If children are the future, we shouldn’t force young people to pay for old people’s health care. We should reverse our spending priorities. The ratio of federal benefits to the old versus the young is roughly 3 to 1. A ratio of 1 to 3 would make more sense.
Years ago, one of us asked a health policy expert who supported Medicare to defend the morality of moving money from poorer people to those who are richer. The answer went something like this: Everyone gets old. Medicare is an intergenerational pact that ensures that health care will be available when we’re aged and infirm. There are so many things wrong with this answer that it’s hard to know where to begin.
Consider the premise: “Everyone gets old.” That’s not true. We all know people who died before they turned 65, the eligibility threshold for Medicare. Of every 100,000 baby boomers born between 1949 and 1951, only 67,555 lived to age 65.5 African-American baby boomers born during the same years fared even more poorly. Only 48,649 per 100,000 lived to Medicare’s eligibility age. The unfortunates who died early never received any Medicare benefits. But many did pay Medicare taxes, so they both died young and were taxed to help others whose lives were longer. There are also many people like Tom Petty, the rock star, who die shortly after turning 65. For them, Medicare is also a bad deal.
Medicare isn’t a “pact” either. Tens of millions of people who are paying Medicare taxes today were born after 1965, the year Medicare was created. No one asked them whether they’d rather buy health care for other people or use the money in some other way, such as saving it for their own retirements. Nor, obviously, has anyone asked the permission of the hundreds of millions of yet-to-be-born Americans who will have to pay Medicare taxes in the future, if the program is to survive. Medicare isn’t a “pact.” It is governmentally imposed coercion, and it is intellectually dishonest to obscure this truth by babbling on about a “pact” that doesn’t exist.
The argument that Medicare is an intergenerational pact also assumes that Medicare will last forever. That is far from guaranteed, if only because the demographics are against it. When Medicare was created, there were 4.5 working people for every eligible beneficiary.6 Over time, the ratio has steadily declined. In 2016, there were only 3.1 workers per beneficiary. In 2030, there will be only 2.4.7 As the ratio of workers to beneficiaries falls, taxes per worker will have to rise.
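The arithmetic behind that last sentence can be made explicit. In a stylized pay-as-you-go calculation (ours, not the Medicare trustees’), the tax required of each worker to fund a given benefit per retiree is simply that benefit divided by the worker-to-beneficiary ratio:

\[
\text{tax per worker} \;=\; \frac{\text{benefit per beneficiary} \times \text{number of beneficiaries}}{\text{number of workers}} \;=\; \frac{\text{benefit per beneficiary}}{\text{workers per beneficiary}}
\]

On that simplified assumption, holding the benefit per beneficiary constant, a decline in the ratio from 4.5 workers per beneficiary to 2.4 raises the burden per worker by a factor of roughly 1.9 (4.5 divided by 2.4), and that is before accounting for growth in per-beneficiary spending.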
Will tomorrow’s workers keep paying those taxes? Or will they rise up and demand that Congress repeal Medicare? No one can be certain. But if you’re confident that they will let themselves be fleeced, you might read Boomsday, a novel by Christopher Buckley. He foresees an uprising against Medicare by young workers who stage protests at golf courses and nursing homes. Nor is Buckley alone in predicting a revolt. Phillip Longman, a demographer and investigative journalist, expressed the same concern, writing, “The 75 million members of the Baby Boom generation . . . have good reason to fear desertion by their successors.”8 If you think that’s farfetched, remember how unlikely it seemed that Donald Trump would win the 2016 election. Strange things can happen, and the baby boom generation has imposed an impossible burden on younger Americans.
The truth is that Medicare is a Ponzi scheme in which money from new participants is used to pay earlier participants. The program will last only as long as the government supports it. How much longer that will be is anyone’s guess, but it seems likely that support for the program among taxpaying workers will steadily erode. Even if Medicare survives, Congress may address its rising cost by raising the eligibility age, increasing premiums and copays, capping benefits, excluding services from coverage, or taking other steps that force retirees to shoulder more costs. Aging baby boomers may soon discover that, after paying Medicare taxes throughout their careers, they are the big losers in an intergenerational Ponzi scheme.
In case the defects already pointed out aren’t sufficiently fatal, the claim that Medicare is an intergenerational pact has one more glaring flaw. Its central thrust is that it is fair to tax today’s young people because they will eventually benefit from taxes that tomorrow’s young people will be made to pay. That is utterly wrong. Would you say that it is fair for older kids to steal younger kids’ lunch money because the younger kids will soon grow up and steal from the new kids who come in behind them? Of course not. The fact that a person may benefit from a forced transfer of wealth in the future does not justify forcibly taking wealth from that person today. For the same reason, no one should think it morally desirable to coercively move wealth from young people to old people today simply because dollars will be coercively taken from future generations tomorrow. Everyone should recognize that coercive transfers are presumptively immoral and that their immorality is increased when the victims are poor and the beneficiaries are rich. The possibility that today’s plunderees will be tomorrow’s plunderers shows only that an immoral practice can continue far longer than it should—and that redistributionist schemes will have no shortage of apologists.
The Harvard philosopher Norman Daniels, who has devoted much of his academic career to the subject of justice between generations, rejects most of the points we just made. In Am I My Parents’ Keeper?, he offers a Prudential Lifespan Account of justice between age groups, according to which public institutions like Medicare and Social Security help people improve their lives by budgeting resources prudently over time. These institutions are also equitable, he contends, because when they are applied consistently, everyone benefits from them at the appropriate stage of life. Thus, of Social Security he writes:
[I]f the Social Security system remains stable, young workers will be entitled to claim benefits when they age and retire. There is an inter-generational compact that has the effect of transferring resources from an individual’s working years to his retirement years, insuring that basic needs can be met over the whole lifespan.9
And of Medicare he states:
What is crucial about the health-care system is that we pass through it as we grow older. The system transfers resources from stages of our lives in which we have relatively little need for them into stages in which we do. We pay for health care we do not use in our middle years, but we receive health care we do not pay for in . . . old age. . . . The inflated premium in our adult years is needed to pay for our needs in other stages of our lives. . . . We all benefit from an institution that reallocates health-care resources from stages of our lives in which we have many resources and few needs into those stages in which we have fewer resources and greater needs.10
On Daniels’ view, then, what justifies the Social Security and Medicare programs is not that they redistribute wealth across persons. It is that they help individuals move wealth from fat years to lean years. The programs are redistributive within persons across different stages of life.
This argument suffers from a host of deficiencies, some of which Daniels acknowledges. For example, it works only if one assumes that Social Security and Medicare will last indefinitely, which seems unlikely. But, even if the argument were sound, it would not justify Medicare. It would warrant only a legal requirement or other inducement that causes people to save money for the predictable medical needs of old age.
To see this, imagine that, over the course of a lifetime, all Americans earn enough money to pay for all of their medical needs. But there is a problem: most of the dollars come in during their working years, and many people save too little to pay for medical treatments they will need after they retire. Would Medicare be the only way to solve this problem? No. Any program that encouraged or required people to save more during their working years would do the trick.
There could be another problem, though. Even if we succeeded in getting people to save more during their working years, we might fear that, after retiring, many would spend their accumulated wealth on travel, gambling, or other entertainments instead of designating the money for health care. Even if this were true, though, we still would not need Medicare. We would only have to encourage people to use the money for medical treatments or require them to do so by limiting the uses to which the saved dollars could be put.
To put the point succinctly: if the problem is that people save too little or spend what they do save unwisely, the solution is not Medicare. Medicare neither encourages savings nor discourages profligacy in retirement. It does the opposite. Medicare discourages workers from saving during their income-generating years by assuring them that the government will give them heavily subsidized health care when they retire. And it encourages profligacy in old age by telling seniors that, no matter how they spend their retirement dollars, their access to medical services won’t be imperiled. Medicare is a Rube Goldberg–style mechanism that takes care of the elderly by politicizing the health care economy, driving up health care costs for everyone, wasting hundreds of billions of dollars a year, moving money from the poor to the rich, degrading the quality of care, and discouraging individuals from acting responsibly. And that’s even without considering the overwhelming financial challenges to the long-term viability of the program.
Some readers may be thinking that it is fine in theory to suggest that people should save during their working years for the medical treatments they will need in old age, but the idea will never work in practice. How can anyone be expected to save that much? And what about the poor and less fortunate? Can we seriously think that, when they become seniors, they should pay for their own care too?
Doubters may reconsider when they learn what Singapore does. Singapore, an Asian city-state with a population of roughly 5.6 million, requires residents to put a defined share of their earnings into tax-free “Medisave” accounts, from which funds can be withdrawn only to pay for health care. To prevent Singaporeans from depleting their Medisave accounts too early, the spending rules require them to pay out of pocket for most of the ordinary medical treatments they need. To encourage thrift, the law also allows Singaporeans to bequeath any funds left in their accounts at death to their heirs. Singapore provides for its poorest citizens by topping up their Medisave accounts, by operating a safety net of public hospitals that provide standardized care at bargain-basement prices, and, on rare occasions, by dipping into a fund to help pay some of the bills.11 But treatments are not doled out for free. All Singaporeans must contribute something to the cost of their care.
Singapore’s Medisave is much better than our Medicare. It prevents people from overspending during their fruitful years, encourages prudent purchasing of medical services when illness strikes, and reinforces the ethic of personal responsibility for health and health care.12 Medicare does none of these things. If anything, Medicare discourages saving and encourages irresponsibility by telling people that, when they are old, the government will give them health care for free. Medisave is also less coercive than Medicare. Singaporeans keep both the money they set aside and the interest it earns. They do not regard their forced savings program as a tax because the money is still theirs.
Medisave also has the singular advantage of insulating the health care payment system from government control.13 As Professor David A. Reisman, an economist who lives in Singapore, explains:
[Singapore’s] personal-account, defined-contribution system shelters social security from the rise in dependency [on the government]. There is no pooling, no sharing, no cross-subsidisation [sic] and no redistribution. There is no crediting-in of payments for university students or the unemployed. Retirement balances are sealed off from the electoral cycle and the vote motive. Save-as-you-earn, [Medisave] does not presuppose that pension plans should be augmented out of tax revenues or that there should be an intergenerational promise. For that reason, there is no looming exhaustion of reserves and no pension-driven pressure for a budget deficit. There is no imminent threat that payouts will have to be pruned back because current and future generations will not tolerate the higher tax rates of PAYGO [pay-as-you-go] in an ageing society where the worker-to-pensioner ratio is decreasing.14
In contrast, Medicare is an unfunded, pay-as-you-go system, whose survival depends on a never-ending inflow of new tax dollars and deficit spending. Its future, including its eligibility threshold and the coverage it provides, is subject to the whims of voters and manipulation by scheming politicians. Medisave, by contrast, is fully funded at all times, privately owned and controlled, and resistant to governmental interference. By switching to a program designed along the lines of Medisave, Americans would eliminate both the political corruption of medicine and the medical corruption of politics.
Finally, and perhaps most important, Singapore lets patients decide how their health care dollars are spent. Unlike the Medicare program, which puts health care dollars under the control of bureaucrats, Singapore’s mandated savings program leaves consumers in charge. They decide which services to use and how much to pay for them. This approach encourages providers to compete for business by improving quality of care, cutting their fees, and offering guarantees. First-party payment also makes life much more difficult for fraudsters.
In sum, a Singaporean-style savings program would be far more compatible with the ideals of small government, individual freedom, and personal responsibility than Medicare. But Singapore is a small city-state with only 5.6 million people. And its system of government is, to put it mildly, quite different from our own. Could a similar program take root in the United States, across 50 states and 325 million people?
We think so. Tens of millions of Americans already participate in voluntary retirement plans. Most of these individuals do so via their employers, who withhold contributions from their wages. Others contribute on their own. For these people, a transition to a mandated or tax-incentivized savings program would be nearly invisible. Americans are also accustomed to paying Social Security and Medicare taxes. In 2014, 166 million workers had dollars deducted from their wages in the form of payroll taxes to support these programs.15 Converting these dollars from taxes into savings plan contributions should generate enthusiasm—not resistance—among workers. Finally, 35 million Americans have tax-favored, health-care-targeted flexible spending accounts (FSAs), and 30 million are expected to have similar health savings accounts (HSAs) by 2018.16 People with these accounts would obviously be comfortable with a program that rewarded them for saving for old age.
Medicaid isn’t a Ponzi scheme. It’s a welfare program for the poor, albeit one with a strange design. Poverty is a money problem. Poor people have too little of it, and the solution is to give them more. Medicaid doesn’t do that. Not a single dollar winds up in any poor person’s hands. Instead, Medicaid pays health care providers to treat poor people for free.
The health care industry likes this arrangement, which ensures that its members receive all of the nearly $600 billion that Medicaid pays out each year. But is this the best way to help the poor? We think not.
To see why, ask yourself a question. Across the United States, the average amount spent per Medicaid enrollee per year is about $6,500.17 Suppose that poor people were offered a choice: receive $6,500 in cash and use it to buy an insurance package equivalent to Medicaid or receive $6,500 in cash and use it some other way. If you think that some, many, or most poor people would spend the money some other way, you’re with the majority and you should agree that forcing them to take Medicaid in lieu of cash isn’t the welfare-maximizing approach.
We are not the first to contend that Medicaid could help the poor more by giving them cash. Many economists and commentators with libertarian leanings have made this point. But centrist health policy thinkers don’t agree. After observing that “every developed country, including the United States, subsidize[s] health insurance for the poor,” Harold Pollack, Bill Gardner, and Timothy Jost explain:
Part of the reason is that those countries have broader moral and public-health criteria for thinking about health insurance and poor people’s lives. Universal health care expresses a commitment to the well-being of fellow citizens. Everyone should have access to a decent minimum of care; caring for others in distress is a primary expression of human solidarity.18
These are lofty sentiments, but they do not survive scrutiny.
Consider the assertion, “Universal health care expresses a commitment to the well-being of fellow citizens.” That’s not really what it does. It expresses a commitment to something—access to medical services—that can affect people’s well-being (for better or for worse) and that, in the opinion of mainstream health policy thinkers, matters more than other goods and services that affect people’s well-being and could be bought with the same dollars. This more accurate way of framing the commitment makes two things clear. First, health care does not equal well-being but is only one of many means by which well-being might be maintained or improved. Second, Medicaid policy is fundamentally paternalistic. If the program were actually devoted to the well-being of fellow citizens, it would hand out cash and let people spend it on goods and services that, from their perspective, have the most potential to help. Instead, Medicaid pays only for medical treatments. In effect, Pollack, Gardner, and Jost are saying that they know how poor people should spend money better than poor people do themselves. They want poor people to have what they think the poor need, not what poor people themselves might want.
There is a good reason why paternalism has a bad name, apart from dealing with minors and the mentally incompetent. By giving benefits in-kind rather than in cash, the state expresses the opinion that recipients cannot be trusted
. . . to make what [the state] regards as wise choices about how to spend their money. This, despite the fact that both economic theory and a growing body of empirical evidence suggest that individuals are better off with the freedom of choice that a cash grant brings. In-kind grant programs like SNAP (food stamps) persist in their present form not because they are effective but because they are the product of a classic Bootleggers-and-Baptists coalition: well-meaning members of the public like the idea that welfare recipients have to use their vouchers on food rather than alcohol and cigarettes, and the farm lobby likes that beneficiaries are forced to buy its own products. Poor people, meanwhile, are deprived of the opportunity to save [what] a cash grant would give them, and they are forced to waste time and effort trading what SNAP allows them to buy for what they really want.19
A bootleggers-and-Baptists coalition supports Medicaid too. Well-meaning health policy types such as Pollack, Gardner, and Jost like the idea that poor people have to buy medical services—and so do health care providers. Their touted “commitment to the well-being of fellow citizens” really just amounts to leaving poor people less well-off than they could be, while wasting hundreds of billions of dollars on fraud and health care services of little or no value.
Pollack, Gardner, and Jost also offer a second justification. They argue that Medicaid works, citing a study that, they report, “found that [a] previous expansion of state Medicaid programs significantly reduced all-cause mortality.”20 That claim can be disputed—recall the study of Oregon’s Medicaid expansion, which found negligible health effects—but the argument fails even if they are right. What Pollack, Gardner, and Jost need to provide, but do not, is a study that compares the impact of two different programs: one that doles out Medicaid and one that gives out cash. Without a comparative study, there is no way to establish that poor people benefit more from in-kind awards of access to health care than from cash grants.
Let there be no mistake: cash grants help poor people immensely. The Earned Income Tax Credit and the Child Tax Credit transfer more than $100 billion to the working poor every year.21 These outlays generate documented reductions in poverty and health improvements, especially among women and children.22 These effects are not surprising. Recipients spend the money they receive on necessities like food, housing, clothing, transportation, furniture, and school supplies. “The overall pattern suggests that recipients allocate their refunds carefully, meeting essential needs that they may have difficulty addressing with regular income.”23 We know of no basis upon which Pollack, Gardner, and Jost could reasonably posit that poor people intrinsically derive greater value from government-subsidized medical services than from cash grants of equal size. Indeed, there are strong reasons to believe the opposite—starting with the revealed preferences of the poor when they are spending their own money.
What’s true of Medicaid also applies to Medicare. Instead of paying for seniors’ health care, the federal government could give them money and let them decide how to spend it. Its refusal to do this makes sense from a bootleggers-and-Baptists perspective, but not from a moral one. The vast majority of seniors are competent and make important choices for themselves. They decide whether to sell their homes and move closer to their grandchildren or into retirement centers, whether to draw money out of their retirement accounts, and whether and where to take vacations. There is little reason to fear that they would spend Medicare dollars unwisely if given control of them. However, there is reason to think that many of them would buy things other than medical services with the money, which is why the bootleggers won’t let the federal government give them control.
Some people, elderly, poor, or otherwise, lack the competence needed to handle their own affairs. These people require special arrangements, which typically include judicial proceedings and the appointment of legal guardians. But, even for these people, there is no moral argument for bestowing benefits in-kind rather than in cash. Once a guardian is appointed, a competent decisionmaker exists and the need for governmental paternalism disappears. The government’s proper role is to monitor the guardian’s performance, for example, by policing mismanagement of assets and self-dealing.
We said above that we see no basis for thinking that poor people intrinsically benefit more from health care coverage than from cash grants of equal size. In fact, the opposite is more likely true. Poor people probably derive greater value from things they buy with cash than from the medical services that Medicaid provides at the public’s expense.
The reason is simple. When people spend their own money, they purchase things whose subjective value exceeds the dollars they have to give up. Rather than spend $10 on an item that bestows only $5 worth of value, a person would hold onto the cash. But, when a third party pays for an item or service, a beneficiary will rationally accept it even if the cost exceeds the subjective benefit by far. Who cares if a medical service that costs $10 conveys only $5 worth of value when the government foots the bill? And who cares if the service actually costs $100 or even $1,000? By accepting the service, the recipient would still be $5 better off. The cost to taxpayers is irrelevant.
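The example in the preceding paragraph can be restated as a simple decision rule; the notation is ours and is offered only as an illustration. Let v be the subjective value a recipient places on a service, c its full cost, and p the recipient’s out-of-pocket price:

\[
\text{accept the service} \iff v \ge p
\]

When the recipient pays the full cost, p equals c, so only services worth at least what they cost get purchased. When a third party pays and p is roughly zero, any service with v greater than zero is accepted, so a treatment that costs $100 but delivers only $5 of value is still taken, and the difference falls on taxpayers.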
Medicaid, Medicare, and private health insurance all tempt beneficiaries to use medical services imprudently. That’s a big part of the reason why America is awash in unnecessary treatments. That’s also why switching from Medicaid to cash grants would make poor people better off. Rather than undergo medical treatments with little potential to help, they would buy high-value goods and services of other types.
The study that occasioned the commentary by Pollack, Gardner, and Jost suggested that low-value consumption is a real problem in Medicaid. Looking yet again at Oregon’s health insurance experiment (see Chapter 15), a team of researchers from the Massachusetts Institute of Technology, Harvard, and Dartmouth attempted to measure the value recipients derived from services covered by Medicaid. The authors concluded: “All of our estimates indicate a welfare benefit from Medicaid to recipients that is below the government’s cost of providing Medicaid.”24 In plain English, if we took the money the government is spending on Medicaid, gave it to Medicaid enrollees, and offered them the right to buy into Medicaid, the people who purportedly benefit from this program wouldn’t buy it. Think about that. The people receiving Medicaid don’t think the benefits are worth what the government is spending.
Indeed, we think it is likely that Medicaid recipients derive less than the 40 cents of value the researchers estimated recipients derive from one dollar of Medicaid spending. The basic reason for this conclusion is straightforward. Prior to enrolling in Medicaid, the people who won Oregon’s lottery paid only 20–40 cents on the dollar for the medical services they received. The rest of the cost of their care was either borne by others or absorbed as charity care. Medicaid primarily benefited those other payers and charitable providers by freeing them from having to pitch in.25 Other studies have also found that “Medicaid significantly reduces the provision of uncompensated care by hospitals.”26 No wonder the bootleggers like Medicaid—it exists for them, not the poor.
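To restate the arithmetic behind that conclusion in stylized form (the decomposition is ours, not the researchers’): if an enrollee would otherwise have paid only 20–40 cents out of pocket per dollar of care received, then each dollar of Medicaid spending divides roughly as follows:

\[
\underbrace{\$1.00}_{\text{Medicaid spending}} \;\approx\; \underbrace{\$0.20\text{–}\$0.40}_{\text{out-of-pocket spending the enrollee avoids}} \;+\; \underbrace{\$0.60\text{–}\$0.80}_{\text{relief to other payers and charitable providers}}
\]

On this reading, the direct financial gain to the enrollee is bounded by the first term, which is consistent with the suggestion that recipients derive even less than the researchers’ estimate of roughly 40 cents of value per dollar spent.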
Pollack, Gardner, and Jost do not dispute our conclusion that Medicaid is perceived as a bad deal even by those who benefit from it. “Given the choice between a Medicaid benefit that costs $4,000 and $4,000 in simple cash,” they write, “many or most low-income people might well prefer to take the cash.” But they defend Medicaid against this scathing critique by arguing that poor people’s willingness to pay for medical services is irrelevant:
Voters want to provide for the health of their neighbors and to protect [their] neighbors from the financial ruin that often follows serious illness. When my neighbor requires $50,000 for a life-saving kidney transplant, that need has special urgency, as do other basic necessities such as nutrition and shelter. The moral interest of the community does not hinge on my neighbor’s willingness to pay. We respond to critical needs because we understand the consequences of failing to meet them, not just for the individual in question but for all those whose lives are connected to hers.27
Once again, this argument invokes lofty-sounding sentiments but does not survive scrutiny. To begin with, it is unclear why Pollack, Gardner, and Jost talk about voters’ desires. If it is morally right to protect people from the financial consequences of illnesses, then it is hard to see why voters’ preferences should matter. But, if voters’ preferences do matter, they don’t offer Pollack, Gardner, and Jost much solace. As Obamacare architect Jonathan Gruber confessed (Chapter 21), the entire premise of that law is that voters do not want what Pollack, Gardner, and Jost think they do. Voters had to be fooled because Obamacare would not have been passed if its true potential to transfer wealth to sick people had been understood. Opinion polls further show that most Americans support reductions in Medicaid spending28 and that most also believe that the federal government should play either a minor role in making America’s health care system work well or no role at all.29 And any argument based on voters’ preferences has to deal with the outcome of the 2016 election, which gave control of the federal government to people who are not fond of Medicaid. By resting their defense of Medicaid on voters’ desires, Pollack, Gardner, and Jost have built their cathedral on a foundation made of sand.
One should also ask why “the financial ruin that often follows serious illness” ought to be treated differently from financial calamities with other causes. Millions of people have suffered financially after losing their jobs or on account of stock market declines. Millions of others, most of whom are women and children, have been impoverished by divorces, financial abuses committed by lovers or spouses, or the deaths of breadwinners who had little or no life insurance. Floods, earthquakes, and other natural disasters have taken their toll on millions more. And perhaps the leading cause of poverty is the misfortune of having been born poor, a fact for which no one bears any personal responsibility. Because extreme financial difficulties can occur for many reasons, an argument is needed to explain why sick people are singled out for special treatment. Pollack, Gardner, and Jost do not provide one.
The need to explain why financial difficulties stemming from illnesses are special becomes even more urgent when one remembers that Medicaid spends as much money as all other forms of public assistance combined. In 2016, combined federal and state spending on Medicaid totaled about $575 billion.30 Federal spending on Medicaid alone ($369 billion) exceeded all other federal welfare outlays in 2016.31 Instead of paying for medical services for a selected group, the hundreds of billions spent on Medicaid could have been used to provide a financial cushion for all of America’s poor.
The assertion that Medicaid protects people who become ill from financial ruin is also overstated. Medicaid absorbs only their medical expenses. But many sick people experience financial ruin for other reasons, even when their health care costs are covered. Consider victims who suffer from post-traumatic stress disorder, paralysis, fibromyalgia, back pain, or other maladies that are cheap to treat but that can prevent people from working. Medicaid covers the relatively minor costs of pills and wheelchairs for these people but otherwise leaves them in poverty.
Medicaid also provides long-term care coverage for seniors, and many Medicaid beneficiaries bear little resemblance to the kidney transplant recipient that Pollack, Gardner, and Jost use to make their point. Medicaid is, disproportionately and increasingly, a retirement program that protects people from the predictable consequences of aging, not from the financial costs of severe illnesses that strike people at random.32 Why we are doling out retirement funds inefficiently through Medicaid rather than giving people cash via Social Security or the Earned Income Tax Credit is anybody’s guess.
Thus, even though Medicaid moves money in the right direction (from richer to poorer), there is no clear moral justification for this program either. Medicaid absorbs an enormous fraction of the dollars that are available to provide poverty relief. And, like Medicare, it treats people disrespectfully, by denying them the opportunity to make their own choices. Medicaid also doles out money inefficiently by paying for medical services that benefit providers substantially more than beneficiaries. Poor people would do much better if Medicaid simply handed out cash. Even those who use the money to buy health care or nursing home services would be better off because they would be in the driver’s seat.
Many health policy scholars worry that people would make poor decisions if they were required to pay for medical treatments with their own money. In support, they cite an important study known as the RAND Health Insurance Experiment (HIE). The HIE concluded that people who face greater cost sharing cut back on their use of health care, including treatments that physicians believe are medically necessary.33 The desire to ensure that everyone receives all medically appropriate treatments has led some commentators to conclude that everyone must be supplied with comprehensive health insurance that imposes minimal out-of-pocket costs.
A typical example of the genre is provided by Professor Allison K. Hoffman of the University of Pennsylvania School of Law, who wrote a long article and a short column on the topic.34 Citing the HIE, Hoffman says that “people generally do a poor job of differentiating between effective and ineffective medical care,” and that “poor and sick people . . . fare better when they have access to free or low-cost care.”35 She then observes that, since Obamacare took effect, “people have been using increased levels of preventive care and are complying more with drugs and treatment recommended by their doctors.”36 From there, it is only a short step to the conclusion that we should avoid any and all strategies that force consumers to use their own money to pay for health care. After all, what kind of monster would be interested in policies that might place people’s health and lives at risk?
The deep problem with Hoffman’s approach is that it focuses on health care instead of health. The HIE did not find that people who cut back on medical treatments were less healthy than those who got everything the doctor ordered for free. To the contrary, it found that, even though people who were treated for free used more medical treatments, their health was effectively the same as that of people who had to pay.
The HIE is one of the most important studies ever conducted of the effect of insurance on health care and health. (The studies of Oregon’s Medicaid Experiment, which we discuss in Chapter 15, are its chief rivals.) The HIE included more than 7,700 people belonging to 2,750 families from across the United States. Families were randomly assigned to five different health insurance plans. Some plans offered free medical care, while others required participants to share costs. Among the latter plans, some imposed higher cost-sharing levels than others. Out-of-pocket costs were capped to protect poorer families. Families participated in the HIE for three to five years.
Predictably enough, participants who shared in the cost of medical treatments used less health care than people who received services for free: “cost sharing reduced the use of nearly all health services. . . . [Participants] with cost sharing made one to two fewer physician visits annually and had 20 percent fewer hospitalizations than those with free care. Declines were similar for other types of services as well, including dental visits, prescriptions, and mental health treatment.”37 Usage reductions correlated with the level of cost sharing too. Participants “with 25 percent coinsurance spent 20 percent less than participants with free care, and those with 95 percent coinsurance spent about 30 percent less.”38 The HIE makes it clear that the demand for health care responds to price and that insurance encourages the consumption of health care by making treatments cheaper for patients at the point of service.
Although participants in cost-sharing plans used fewer medical treatments, the HIE found that, “in general, the reduction in services induced by cost sharing had no adverse effect on participants’ health.”39 The only statistically significant differences were improved vision owing to eyeglasses and lower blood pressure. The first of these findings is trivial. Since retailers have moved into optometry, eye exams and eyeglasses have become cheap enough for people to pay for themselves. The second finding is more troubling but could well be the result of data mining. If you study enough dependent variables, you’re likely to stumble across at least a few statistically significant findings.
In sum, Hoffman is on solid ground in contending that, when health care is free, patients are more likely to comply with doctors’ recommendations and to use more preventive treatments. That’s because, when health care is free, everyone uses more of everything. But giving away health care for free has enormous downsides too. It encourages people to use treatments that are ineffective, harmful, and wasteful, and to rely on intensive medical treatments to cure what ails them instead of doing other things that would be cheaper and more efficacious, like exercising, dieting, sleeping more, and cutting back on smoking. The lavish public spending that is needed to make medical treatments free also diverts enormous amounts of money from other public health priorities that deliver more bang for the buck, like sanitation, housing, education, school lunches, and pollution abatement. A person who wanted to maximize the use of medical treatments would make health care free at the point of delivery; a person who wanted to make people healthy would not.
The definition of “effective” medical care involves further complexities. At first glance, the matter might seem like a question to be decided by doctors using data and their clinical judgment—either a treatment is effective or it isn’t. But effectiveness isn’t the only thing that matters. Cost matters too. There are significant problems with ignoring costs when deciding whether people should use medical treatments.
Imagine that you once had a rich uncle (Sam) who gave you a new top-of-the-line Lexus to drive. Uncle Sam is now broke, and you have to buy your own car. Another top-of-the-line Lexus would be an “effective” car for you. It would get you from place to place, in luxury and with the utmost safety. But a much cheaper car—even a used car—would get you where you need to go, albeit with considerably less luxury and probably a little less safety. Which car should you choose from the vast array of options?
The answer depends, at least in part, on the price of each car, the amount you can afford to spend, and your desires and priorities. If your budget is tight, you might reasonably decide to buy something cheaper than a top-of-the-line Lexus because, for you, that car isn’t cost effective. Or you might purchase the Lexus and make room for it in your budget by cutting other expenses. By itself, effectiveness-as-transportation doesn’t tell you what you should do.
Suppose you decide to buy a less expensive car. The fact that you used to drive a Lexus when it was free doesn’t imply that a Lexus provides a valid baseline against which to measure your future transportation choices, particularly now that you have to foot the bill. Nor should we listen to the complaints of those who believe you are making a bad choice by purchasing something other than a Lexus. (Of course, it is no doubt a coincidence that those who make this argument are quite often in the business of making and selling Lexuses.)
The key is that the HIE didn’t ask whether participants in cost-sharing plans made cost-effective choices—decisions that maximized their overall well-being, given the cost of health care and their other needs and priorities. It also didn’t ask whether participants in the free plans made cost-effective choices. It just found that participants in cost-sharing plans used fewer services that doctors ranked as effective. Given the HIE’s failure to find significant health effects, the natural inference to draw is that participants in cost-sharing plans found other uses for their dollars that they rightly thought were better for them.
To be sure, good consumer choices are preferable to bad ones. Ideally, patients would use only cost-effective medical treatments, whether spending their own money or dollars provided by their insurers. But it is first-party payment (and not third-party payment) that encourages patients to learn whether treatments confer benefits that exceed their costs. And only first-party payment encourages patients to seek out alternatives that are better and cheaper.
Some medical treatments are so cost effective that everyone should have them, including people who are so poor or so shortsighted they might not buy them on their own. Vaccinations against childhood diseases fall into this category. Solving this problem doesn’t require third-party payment. Instead, we should identify the few treatments that fit this description and design means of delivering them. Mass inoculations already take place at schools, workplaces, and other locations. Vaccines are available at little or no cost at retail outlets too. If we want to improve population health, that’s the best way to go about doing so.
Several of the moral criticisms we have leveled against Medicare and Medicaid apply with equal or greater force to the VHA, a $65 billion program that provides medical services for almost nine million veterans each year.40 This program, too, is paternalistic. Instead of giving America’s veterans access to health care, we could give them cash and let them choose how to use it. The VHA is also inefficient. If veterans were given money instead of access to health care, many would spend the dollars in other ways that enhance their welfare more.
The VHA has also experienced scandalous quality problems.41 In 2014, it emerged that veterans needing care often experienced long waits, that many died before being seen, and that VHA administrators had deceived their superiors by maintaining secret lists of patients who were in the queue.42 In 2017, investigations of VHA medical centers conducted by USA Today and the GAO found that “the [VHA] has for years concealed medical mistakes and misconduct by health care workers.”43 It failed to open investigations promptly when errors were reported, failed to document investigations properly, failed to file mandatory reports to the National Practitioner Data Bank on providers who committed malpractice, entered into hundreds of secret settlements of malpractice claims, and hired doctors with histories of harming patients.44 Given this dismal record and the VHA’s proven willingness to lie to Congress, why should anyone trust the VHA with the care of our nation’s veterans?
The discussion of moral issues relating to the VHA can be streamlined by asking one question: Why does a separate health care system for veterans exist? Even if one believes, as we do, that veterans who put their lives at risk are entitled to preferential treatment, there is no obvious need for a separate health care system to deliver that treatment. Most American veterans—about 13 million of them—use the same health care providers as everyone else. The veterans who are eligible for treatment in VHA facilities are those who have service-related disabilities or who are poor. They constitute a minority of America’s former uniformed military personnel.
Do the differences between VHA-eligible veterans and other veterans provide a moral justification for the existence of a separate health care system? We think not. Start with veterans who are poor. Rather than have a separate medical system for them, the federal government could put them on Medicaid or enroll them in Medicare. This would enable them to obtain health care via the private sector, just like other people who participate in these programs. Many poor veterans would see this as a plus, especially those who live a considerable distance from the nearest VHA facility.
Insofar as poor veterans are concerned, history provides the only reason for having a separate VHA system: neither Medicaid nor Medicare existed when the VHA was created. But, now that these programs do exist, the need for a separate system for veterans with low incomes has disappeared. And, if (as we recommend) Medicaid and Medicare were both converted to cash-grant programs, poor veterans would have the same access to medical services as everyone else—better access, if (as we would suggest) their grants were significantly topped up in recognition of their past service.
Turning from poor veterans to those with service-related disabilities, we find that the need for a federally run system is not obvious either. The Veterans Access, Choice, and Accountability Act of 2014 (the Choice Act) suggests as much. When waiting times at VHA facilities exceed 30 days or the nearest VHA facility is more than 40 miles away, the Choice Act entitles veterans to see civilian providers who accept Medicare or participate in TRICARE, another federal health program that covers military service members, retirees, and their families. The plain implication would seem to be that all veterans, including those with service-related disabilities, can be treated adequately outside the VHA.
This makes sense. Although veterans are special and those with service-related disabilities may have special needs, hospitals, physicians, therapists, and other civilian providers can deliver all of the medical treatments that are available through the VHA. And, if injured veterans benefit by being in the company of others with service-related disabilities, civilian providers can help with that too. They need only attract veterans in numbers by developing practices that cater to them or by hiring them onto their staffs. In fact, nothing would prevent a private sector hospital company from operating a string of VHA-style facilities, and one would surely do so if veterans wanted a separate health care system and had the means to pay for it.
The Choice Act actually makes clear the difficulty of justifying the existence of the VHA. Why offer the option of using civilian providers only when the delay or distance requirement is met? Why not give veterans this option in all contexts? Many veterans might reasonably think it better to use private providers than to wait even 10 days for appointments or to venture even 10 miles to VHA facilities. Nothing requires civilian patients to encounter such obstacles before seeing providers whose hours and locations are convenient. Many veterans might also envy the freedom that civilians have to use the providers who are best at handling their medical needs. By subjecting veterans who want to use non-VHA providers to these limits, the Choice Act denies them the opportunity to take advantage of the most appealing options while also protecting the VHA from private sector competition that would pressure it to improve. One is inevitably led to the conclusion that the limitations in the Choice Act are there to protect the VHA—and the jobs that VHA facilities provide in congressional districts—rather than to ensure that veterans receive the high-quality care they deserve at convenient locations.
Some writers have defended the VHA by pointing out that, for a time and in certain ways, VHA hospitals provided better care than private facilities.45 In a 2005 article, we also touted the superiority of VHA hospitals, while noting that the impetus to serve veterans better came from Congress in response to decades of high-profile scandals involving quality of care at VHA facilities.46 VHA personnel certainly didn’t seem inclined to take the necessary steps to prevent these problems from exploding into full-blown scandals—raising questions about whether that system should be trusted with the care of veterans at all.
Even so, the argument that VHA facilities are sometimes better than civilian facilities misses the point. Oldsmobiles may have been better cars than Renaults, but the U.S. auto market drove both brands to extinction because both were inferior. The same goes for the civilian medical sector and the VHA. Both could be and should be much better than they are, and both would be better if patients bought medical services directly instead of relying so heavily on third-party payers.