The problem of irresponsibility is endemic to the Benevolent Community. Any system of social benevolence involves pooling the perils of a precarious existence. Because both fate and individual decisions determine who will require the community’s assistance, any instrument of collective obligation—whether founded on the principle of insurance or that of welfare—offers individuals a device for shifting the cost of their own irresponsibility onto others. Knowing this, they may be less inclined to responsible behavior and well-considered choices in the first place. Irresponsibility can be infectious. As the community’s members learn that risks are shared while pleasures are personal, more may be tempted into laxity and recklessness. The result is analogous to the economic gridlock discussed earlier: Without a potent sense of accountability, any system capable of conferring mutual benefits may be exploited. The greater the tendency toward irresponsibility and exploitation, the greater the costs of guarding against it, and the less capable the system is of realizing the ideals of community.
Americans appreciate this basic dilemma, but in a way that appears curiously unbalanced. The potential for irresponsibility inherent in social insurance programs has been largely neglected, with the result that spending on these programs has ballooned. The potential for irresponsibility in programs thought of as welfare, conversely, has come to dominate the debate about poverty, with the result that efforts to eradicate “waste, fraud, and abuse” have gone so far as to deny help to the destitute.
Fear and insecurity focus the mind. People who know that the consequences of their actions will fall on themselves or their families tend toward prudent behavior. Any social device that makes people more secure by protecting them against hardship will in general cause them to relax their vigilance, at least a bit. The certainty that the rest of the community is pledged to keep harm at bay or compensate victims can entice individuals into irresponsibility.
When the Social Security Act was first proposed in 1935, opponents argued that it would induce dependency and sloth. If people knew in advance that they would be insured against unemployment and poverty in old age, so the argument went, they would see no reason to work hard or save money. The National Association of Manufacturers laid out the logic succinctly:
Legislation which from its very nature tends to increase dependency and indigency decreases individual energy and efficiency of individuals in attempting to take care of themselves. It would thereby decrease the sum total of national productive effort in the country, and in the long run thereby decrease the aggregate income available for distribution among the body of citizens, and hence inevitably lower the standard of living.1
A comparable protest emerged a half century later, as some economists argued that Social Security reduced individuals’ incentives to save during the course of their working lives. Americans who had come to expect generous Social Security checks on retirement lost the will to save and invest, they said, because these people considered their payroll taxes a sufficient sum to dedicate to their old age. The problem was that these payroll taxes had simply paid for the previous generation’s retirement. The present generation, in turn, collected far more from the system than its contributions had earned. From society’s standpoint, then, Social Security was no savings system at all. When people acted on the illusion that it was, and cut back their savings as payroll taxes rose, the total amount the economy could draw on for investment shrank. As a result, American industry suffered from a shortage of capital for the investment that would have to provide for the next generation.2
Similarly, the soothing certainty of Medicare might reduce individuals’ inclination to take care of their bodies. Studies revealed that one fifth of Medicare patients accounted for nearly 70 percent of the expenses. This high-cost group was not simply older than other recipients; the same skewed pattern held at every age. But a significant portion of this high-cost group had bad health habits. They smoked, they were alcoholics, or they were obese.3 Without the promise that the rest of us would pay their health-care bills, irresponsible souls like these might be more strongly motivated to forswear their dissolute ways.
The underlying problem plagues all forms of insurance: When the risk of misfortune is related in any way to individual choices, compensating people when bad things happen will increase the incidence of bad things. Any promise of rescue encourages risk taking. If the government offers cheap insurance against flood damage, more people will build houses in low-lying flood plains. Bankers and businessmen who know that federal insurers and the public at large will come to their aid if bankruptcy looms are more likely to go for the long shot. Workers secure in the knowledge that unemployment compensation will keep them solvent may choose jobs less carefully in the first place, and may balk at seeking retraining or moving in search of work. Doctors with insurance against malpractice may be a bit more cavalier in the treatments they prescribe; patients who anticipate liability awards in the event of error may be less careful in choosing and evaluating their doctors.4
How can we enjoy the benefits of mutual responsibility without eroding individual responsibility? Welfare dependency and mushrooming health care and pension costs are part and parcel of the same phenomenon. But the answer is no more to abolish all forms of social insurance than it is to abandon the very poor to hunger and homelessness.
It has been an article of faith in liberal circles that the poor are not really responsible for their plight; society is. They are poor because they have been deprived of good schools, good jobs, adequate diets and medical attention, safe neighborhoods, and all the other things that equip someone to cope. We should attack the root causes of poverty rather than blame its victims. In the meantime, the poor are entitled to a decent standard of living.
This argument has never been particularly compelling to a majority of Americans, who stubbornly cling to the belief that people are, and should be, responsible for themselves. Even at the height of the Great Society in 1967, fully 42 percent of Americans thought that poverty reflected mostly a “lack of effort,” and another 39 percent felt that lack of effort had something to do with it. Only 19 percent concluded that poverty was due to “circumstances beyond the control” of the poor.5
By the 1980s the uneasiness about absolving the poor of all responsibility had grown. Two decades of antipoverty programs seemed to have done little good, and in one respect the poverty problem had become worse. The nation discovered a large and growing population of poor, unmarried teenage mothers living off welfare in our central cities. Many of these teenagers were the children of welfare mothers, and their babies seemed destined to live out their lives on the dole and spawn another generation of dependents.6
Conservative disciplinarians concluded that welfare itself caused this situation. Since the welfare payments a woman collected rose with the number of dependent children in her care, the system encouraged teenage girls to have babies and live off the dole, and teenage fathers to abandon them. The misguided principle of paying girls to have babies made welfare perversely alluring, and once dependent on it, recipients lost all motivation to better themselves. They remained trapped in their own incompetence and irresponsibility. The only solution, by this view, was to get rid of welfare altogether. We should, in the words of one influential commentator, “leave the working-aged person with no recourse whatsoever except the job market, family members, friends, and public or private locally funded services. It is the Alexandrian solution: cut the knot, for there is no way to untie it.”7
The disciplinarians thus invoked a version of the liberals’ old argument: The poor should not be blamed for their situation; it was not a matter of congenitally bad character. The environment they lived in was largely responsible for what had become of them, and we were all at fault for creating that environment. But—and here the conservative story departed sharply from the liberal one—the most damaging part of the environment was welfare itself, which condemned the poor to lives of permanent dependence. The welfare system had allowed people to survive without making any effort to work or to improve themselves.
Liberals took issue with this story on the seldom compelling grounds of empirical fact. Most of the poor, they said, were clearly incapable of taking care of themselves, however willing they might be; fully 40 percent were children, and another 15 percent were old or disabled.8 Liberals pointed out that cash benefits for the nonelderly poor hardly rose at all during the 1970s, when teenage illegitimacy began to rise precipitously. They noted that teenage mothers made up only a relatively small proportion of welfare recipients, and that the highest increases in illegitimacy occurred in states with relatively low welfare payments. The lure of welfare may have been one minor cause of the plight of some of the poor, they conceded, but it was a minor factor within the more important environment of deprivation.
The curious thing about these two stories—the liberal one about deprivation and the conservative one about dependency—was that both were told as if the much larger system of social benevolence did not exist. Surely some poverty was attributable to the lure of welfare. But precisely the same dilemma of personal responsibility plagued social insurance programs. In walling off welfare as a separate system, premised on charity rather than mutual obligation, we emphasized the passivity of the poor and neglected the inevitable dilemma of accountability. Thus liberal conciliators set themselves up to be surprised when the poor behaved like anyone else—like beneficiaries of Social Security, Medicare, or unemployment compensation—and exploited a system with few bulwarks against exploitation. Conservative disciplinarians, equally oblivious of the parallels between welfare and social insurance, could envision no approach to welfare dependency except for eliminating the system. Had we considered welfare as one strand of the network of community support by which we all guard each other against adversity, however, we might have contemplated other ways of encouraging responsibility short of abandoning people to their fates.
One common approach to preserving individual responsibility in an insurance system has been to make support contingent on private behavior. Victims are compensated for hardship only if they have conformed to reasonable standards of prudence. If their plight is mostly due to recklessness, they must live with the consequences.
Examples of this approach can be found in both the public and private sectors. Judges and juries refuse to award damages (or scale down the awards) if an accident victim’s negligence contributed to the mishap. Banks often refuse to lend money (or charge higher interest rates) to people who have defaulted before. Private insurers typically decline to insure (or charge higher premiums to) people who have had several accidents in the past. It has even been suggested that people who are addicted to cigarettes or alcohol, or are chronically obese, should pay higher premiums for health insurance and higher payroll taxes for subsequent Medicare protection.9
The problem with all such attempts to condition help on personal responsibility has come in defining exactly what responsibility means. Alcoholism and gluttony, for example, shade easily into heavy drinking and overeating. Where should the line be drawn? Contributory negligence has been so difficult to determine that courts have all but given up; by the late 1980s many judges and politicians were urging that we abolish personal-injury lawsuits altogether in favor of a system that would automatically compensate accident victims.10 Even private insurance companies have found it too expensive to draw fine categories of risk and responsibility; instead, they employ broad criteria like age and location, which have been found to be rough indicators of the degree of personal responsibility to be expected of the populations they describe.
“Workfare” schemes have been subject to a similar ambiguity. Programs that simply require able-bodied recipients to take any job offered or assigned to them, regardless of the skills it conveys or the salary it pays, are not so much a screen to sort the willing from the shiftless as a simple penalty for needing aid. The purpose of such programs (like the workhouses in nineteenth-century England) is to deter misrepresentation by making welfare less attractive to all recipients. To the extent that beneficiaries are capable of work but aspire to a life of leisure, such requirements might indeed enforce responsibility. But they do nothing to increase people’s capacity to take active responsibility for their fate.11
“Workfare” schemes informed by the ideal of reciprocal responsibility, however, have been organized quite differently. The idea is not to penalize dependency, but to make work a more feasible option—that is, to penalize not low attainment but only willfully low aspiration. Assistance is conditioned upon the recipients taking steps to improve their prospects by, for example, finishing high school or obtaining vocational training. Along the way they might be counseled in how to find a job and perform reliably in it, and assisted with day care for their small children and with transportation to and from work. Regardless of the specific features of the program, the basic logic is to impose requirements that improve recipients’ odds for a more self-sufficient future, and are understood to do so by the recipients themselves.12
In the late 1980s both approaches were being tried. Both were labeled “workfare” and justified as campaigns against chronic dependency. But one was designed to get “them” off the welfare rolls primarily by making welfare harder to obtain. The other was designed to cut the welfare rolls by turning recipients from dependents into productive members of society. The public philosophies underlying the two could not have been more different.
A second broad approach to the problem of reconciling social insurance with personal responsibility has been to rely on the groups in which people live to guide individual behavior. This is hardly a new device; for much of human history, the extended family or ethnic community has been the principal monitor of individual conduct and the cardinal check on careless or dangerous behavior. These traditional social units functioned as rudimentary systems of insurance all their own. Parents took care of children until the children were responsible for themselves, and then the children took care of the parents when the parents could no longer care for themselves. When a fire consumed someone’s house, the community rebuilt it, with the understanding that the beneficiary would respond in kind when the next house succumbed. Sick or temporarily disabled members were cared for, in the expectation that they would care for other victims of misfortune. Those who had saved money lent it to members who aspired to start businesses, with the understanding that the recipients would acknowledge the obligation to foster community enterprise once they gained the means.
Traditional social units—the extended family, the village, the clan—were notably, even notoriously efficient at monitoring behavior and enforcing norms of acceptable conduct. This was of course absolutely required by the premise of mutual responsibility. A member who failed to exercise industry, prudence, or self-discipline imposed a cost on the entire group. He also menaced the tradition of reciprocal obligation itself; if his irresponsibility went unchecked, it could undermine the motivation of others to act responsibly. Each member thus had a personal stake in fostering, and enforcing, every other member’s sense of responsibility.
But with the displacement of the traditional community and family as central social institutions, these older forms of control became less effective. The decline was both a cause and a consequence of the creation of a broader system of social benevolence. It was a cause in the sense that, as these traditional social units disintegrated and many communities and families found themselves unable to care for their members, they sought help from society at large. But the decline was also a consequence, in that the very availability of public support made it less necessary for the communities and families to bear the burden. Once it was clear that the profligate man could rely on Social Security when he retired, for example, his children, siblings, spouse, or neighbors no longer needed to worry that his imprudence might impose a burden on them later on. To this extent, they could ignore his behavior with impunity.
One solution to this problem, in principle, would be to shift authority for monitoring behavior to the new source of support—to have the public at large, through the agency of the government, enforce appropriate conduct among recipients. This is what some conservative disciplinarians have meant by “tightening” welfare eligibility. Welfare agencies in a few states, for example, have been authorized to ensure by whatever means necessary that a beneficiary is unable to work, or that a single mother is not living with an employable man. But such nitpicking surveillance is both inefficient and invasive. Americans would never stomach it for the social insurance programs that benefit the majority. Another approach, equally draconian, would be to concede that the problem of personal responsibility is insoluble, and therefore give up on all forms of social insurance, including Social Security and Medicare, as some disciplinarians have urged we do with regard to welfare. But there is no turning back; the public is unwilling to give up the freedom that social insurance allows. The traditional, tightly knit neighborhood and family exacted unquestioning allegiance and conformity; their members intruded conspicuously on one another’s affairs. Social insurance lifted the yoke.13
There is, in any event, a middle course between jettisoning all forms of social benevolence and imposing strict government surveillance. This is to merge the principle of collective obligation with the advantages of smaller groups as agents for inculcating responsibility among their members. When the smaller group is insured as a whole, rather than the individuals who comprise it, the group becomes an intermediary in the system of social benevolence. The costs of any specific hardship are still pooled; society continues to be the insurer of last resort. But because the other members of the group share a significant portion of the cost, they have a strong incentive to look out for one another and take preventive action.
By the 1980s the American corporation fulfilled this role for a portion of the population. Its willingness to take on this responsibility was not due to any outbreak of benevolence on the part of top management or shareholders. As we have observed, the advantages of tax-free employee benefits had pushed many corporations into the business of insuring their workers and workers’ families against all sorts of adversity. But while American taxpayers funded a portion of these benefits, the corporations financed the rest. The bill could be high. In 1982, for example, over $600 of the cost of a new Chrysler was attributable to payments on employee health insurance.14 Companies that did a better job than their rivals of controlling these costs could gain a competitive advantage; those whose employees led more dissipated lives would end up paying the bill. Corporations were thus motivated to encourage employees to avoid the risk of costly afflictions.
Accordingly, American corporations invested ever larger sums in preventing risks. One company, Johnson & Johnson, encouraged its employees to quit smoking and to eat and exercise properly; after five years the program slashed hospitalization costs by 35 percent and recaptured three times the cost of the effort.15 Other companies provided their employees with counseling for alcoholism and drug abuse, advice on how to deal with financial and legal problems, therapy for coping with family crises and emotional problems, and lessons for improving personal safety and hygiene. The American corporation was fast becoming the community service agency of the 1980s. But unlike the older version, it was an agency with a direct financial stake in helping people to become more responsible for themselves.
Prepaid group health plans, prepaid group legal plans, and other forms of collective insurance generated similar incentives. Because the costs of insurance to the entire group depended upon the responsibility that each member assumed for avoiding large expenses later on, all such plans emphasized prevention as well as compensation. Health maintenance organizations educated their members in good health habits; legal plans offered guidance in avoiding costly litigation; hospitals counseled their doctors in how to reduce carelessness and thereby stem future malpractice claims.
In its own way, the welfare system was groping toward the principle of group liability as well. By the 1980s welfare agencies were imposing financial responsibility on people other than those directly in need—on those who could help prevent hardship from occurring in the first place.
One example was the man (or boy) who might father a child and then abandon it. His cooperation was induced by a simple strategy: Welfare agencies agreed to provide aid to the mother and child on condition that the agency be authorized to seek out the absent father and, if he had a job, dock his pay for child support. This innovation had rather the opposite effect of another rule, still in effect in several states, which denied mothers and children aid if an employable male were found on the premises. Instead of encouraging fathers to leave, the new rule encouraged them to take more responsibility from the start.16
Alternatively, welfare agencies held the parents of the unmarried teenagers financially liable for their children’s plight. The boy’s parents would bear an equal share of the responsibility with the girl’s. (A sponsor of such legislation in one state offered a modest hope that the provision would induce parents to “at least talk about the subject [with their children] before there’s an unwanted pregnancy.”)17 A third, related approach involved assigning partial responsibility to family members when calculating the aid due to the claimants. Children’s assets would be considered, for example, in deciding whether their elderly parents needed welfare.18 The intent and effect here were to induce the family to help its wayward members take greater personal responsibility in advance of any difficulties, and simultaneously to render their insistence on such responsibility highly persuasive.
In all these instances, public welfare remained a last resort. The idea was not simply to cut costs to the public budget—although that was an aspect as well—but rather to tap the potential of the recipient’s family to encourage responsible behavior, and at the same time to affirm these relationships by sharing responsibility. There remained one central problem, however: Those who were asked to share the responsibility for the needy were often in the same desperate straits as the recipients. Runaway fathers, the parents of wayward teenagers, and the children of the elderly poor were apt to be as needy as those whom they were to influence. A group composed primarily of needy people may not have the resources both to ameliorate hardship and to inculcate responsibility. It is differences in condition—the coexistence of prosperity and want within the same community—that make social insurance workable. Thus the principle of group responsibility is of limited utility so long as it is confined to the welfare system.
Many of the poor in America have remained outside the groups in society with resources to help them gain responsibility. Being unemployed, or at least unsalaried, they have had no access to corporate-sponsored prevention programs. Living together in poor communities, they have been too readily shunned by health maintenance organizations or other group insurance plans, which regard them as too risky. So long as “the poor” exist as a separately identifiable population, therefore, they continue to be locked out of the very organizations that might otherwise best help them avoid perpetual dependence.