BY THE END of the ’80s, many women had become bitterly familiar with these “statistical” developments:
• A “man shortage” endangering women’s opportunities for marriage
Source: A famous 1986 marriage study by Harvard and Yale researchers
Findings: A college-educated, unwed woman at thirty has a 20 percent likelihood of marriage, at thirty-five a 5 percent chance, and at forty no more than a 1.3 percent chance.
• A “devastating” plunge in economic status afflicting women who divorce under the new no-fault laws
Source: A 1985 study by a sociologist then at Stanford University
Findings: The average woman suffers a 73 percent drop in her living standard a year after a divorce, while the average man enjoys a 42 percent rise.
• An “infertility epidemic” striking professional women who postpone childbearing
Source: A 1982 study by two French researchers
Findings: Women between thirty-one and thirty-five stand a 39 percent chance of not being able to conceive, a big 13-point jump in infertility over women in their late twenties.
• A “great emotional depression” and “burnout” attacking, respectively, single and career women
Source: Various psychological studies
Findings: No solid figures, just the contention that women’s mental health has never been worse, and is declining in direct proportion to women’s tendency to stay single or devote themselves to careers.
These are the fundamental arguments that have supported the backlash against women’s quest for equality. They have one thing in common: they aren’t true.
That no doubt sounds incredible. We’ve all heard these facts and figures so many times, as they’ve bounced back and forth through the backlash’s echo chamber, that it’s difficult to discount them. How is it possible that so much distorted, faulty, or plain inaccurate information can become so universally accepted? Before turning to these myths, a quick look at the way the media handled two particular statistical studies may help in part to answer that question.
In 1987, the media had the opportunity to critique the work of two social scientists. One of them had exposed hostility to women’s independence; the other had endorsed it.
“The picture that has emerged of Shere Hite in recent weeks is that of a pop-culture demagogue,” the November 23, 1987, issue of Newsweek informed its readers, under the headline MEN AREN’T HER ONLY PROBLEM. Shere Hite had just published the last installment of her national survey on sexuality and relationships, Women and Love: A Cultural Revolution in Progress, a 922-page compendium of the views of 4,500 women. The report’s main finding: Most women are distressed and despairing over the continued resistance of the men in their lives to treating them as equals. Four-fifths of them said they still had to fight for rights and respect at home, and only 20 percent felt they had achieved equal status in their men’s eyes. Their quest for more independence, they reported, had triggered mounting rancor from their mates.
This was not, however, the aspect of the book that the press chose to highlight. The media were too busy attacking Hite personally. Most of the evidence they marshaled against her involved tales that, as Newsweek let slip, “only tangentially involve her work.” Hite was rumored to have punched a cabdriver for calling her “dear” and phoned reporters claiming to be Diana Gregory, Hite’s assistant. Curious behavior, if true, but one that suggests a personality more eccentric than demagogic. Nonetheless, the nation’s major publications pursued tips on the feminist researcher’s peculiarities with uncharacteristic ardor. The Washington Post even brought in a handwriting expert to compare the signatures of Hite and Gregory.
Certainly Hite’s work deserved scrutiny; many valid questions could be raised about her statistical approach. But Hite’s findings were largely held up for ridicule, not inspection. “Characteristically grandiose in scope,” “highly improbable,” “dubious,” and “of limited value” was how Time dismissed Hite’s report in its October 12, 1987, article “Back Off, Buddy”—leading one to wonder why, if the editors felt this way, they devoted the magazine’s cover and six inside pages to the subject. The book is full of “extreme views” from “strident” women who are probably just “malcontents,” the magazine asserted. Whether their views were actually extreme, however, was impossible to determine from Time’s account: the lengthy story squeezed in only two two-sentence quotes from the thousands of women that Hite had polled and quoted extensively. The same article, however, gave plenty of space to Hite’s critics—far more than to Hite herself.
When the media did actually criticize Hite’s statistical methods, their accusations were often wrong or hypocritical. Hite’s findings were “biased” because she distributed her questionnaires through women’s rights groups, some articles complained. But Hite sent her surveys through a wide range of women’s groups, including church societies, social clubs, and senior citizens’ centers. The press charged that she used a small and unrepresentative sample. Yet, as we shall see, the results of many psychological and social science studies that journalists uncritically report are based on much smaller and nonrandom samples. And Hite specifically states in the book that the numbers are not meant to be representative; her goal, she writes, is simply to give as many women as possible a public forum to voice their intimate, and generally silenced, thoughts. The book is actually more a collection of quotations than numbers.
While the media widely characterized these women’s stories about their husbands and lovers as “man-bashing diatribes,” the voices in Hite’s book are far more forlorn than vengeful: “I have given heart and soul of everything I am and have . . . leaving me with nothing and lonely and hurt, and he is still requesting more of me. I am tired, so tired.” “He hides behind a silent wall.” “Most of the time I just feel left out—not his best friend.” “At this point, I doubt that he loves me or wants me. . . . I try to wear more feminine nightgowns and do things to please him.” “In daily life he criticizes me for trivial things, cupboards and doors left open. I don’t like him angry. So I just close the cupboards, close the drawers, switch off the lights, pick up after him, etc., etc., and say nothing.”
From these personal reports, Hite culls some data about women’s attitudes toward relationships, marriage, and monogamy. That the media find this data so threatening to men is a sign of how easily hysteria about female “aggression” ignites under an antifeminist backlash. For instance, should the press really have been infuriated—or even surprised—that the women’s number-one grievance about their men is that they “don’t listen”?
If anything, the media seemed to be bearing out the women’s plaint by turning a deaf ear to their words. Maybe it was easier to flip through Hite’s numerical tables at the back of the book than to digest the hundreds of pages of rich and disturbing personal stories. Or perhaps some journalists just couldn’t stand to hear what these women had to say; the overheated denunciations of Hite’s book suggest an emotion closer to fear than fury—as do the illustrations accompanying Time’s story, which included a woman standing on the chest of a collapsed man, a woman dropping a shark in a man’s bathwater, and a woman wagging a viperish tongue in a frightened male face.
At the same time the press was pillorying Hite for suggesting that male resistance might be partly responsible for women’s grief, it was applauding another social scientist whose theory—that women’s equality was to blame for contemporary women’s anguish—was more consonant with backlash thinking. Psychologist Dr. Srully Blotnick, a Forbes magazine columnist and much quoted media “expert” on women’s career travails, had directed what he called “the largest long-term study of working women ever done in the United States.” His conclusion: success at work “poisons both the professional and personal lives of women.” In his 1985 book, Otherwise Engaged: The Private Lives of Successful Women, Blotnick asserted that his twenty-five-year study of 3,466 women proved that achieving career women are likely to end up without love, and their spinsterly misery would eventually undermine their careers as well. “In fact,” he wrote, “we found that the anxiety, which steadily grows, is the single greatest underlying cause of firing for women in the age range of thirty-five to fifty-five.” He took some swipes at the women’s movement, too, which he called a “smoke screen behind which most of those who were afraid of being labeled egomaniacally grasping and ambitious hid.”
The media received his findings warmly—he was a fixture everywhere from the New York Times to “Donahue”—and national magazines like Forbes and Savvy paid him hundreds of thousands of dollars to produce still more studies about these anxiety-ridden careerists. None doubted his methodology—even though there were some fairly obvious grounds for skepticism.
For starters, Blotnick claimed to have begun his data collection in 1958, a year in which he would have been only seventeen years old. On a shoestring budget, he had somehow personally collected a voluminous data base (“three tons of files, plus twenty-six gigabytes on disk memory,” he boasted in Otherwise Engaged)—more data than the largest federal longitudinal studies with multimillion-dollar funding. And the “Dr.” in his title was similarly bogus; it turned out to be the product of a mail-order degree from an unaccredited correspondence school. When tipped off, the editors at Forbes discreetly dropped the “Dr.” from Blotnick’s by-line—but not his column.
In the mid-’80s, Dan Collins, a reporter at U.S. News & World Report, was assigned a story on that currently all-popular media subject: the misery of the unwed. His editor suggested he call the ever quotable Blotnick, who had just appeared in a similar story on the woes of singles in the Washington Post. After his interview, Collins recalls, he began to wonder why Blotnick had seemed so nervous when he asked for his academic credentials. The reporter looked further into Blotnick’s background and found what he thought was a better story: the career of this national authority was built on sand. Not only was Blotnick not a licensed psychologist, almost nothing on his resume checked out; even the professor that he cited as his current mentor had been dead for fifteen years.
But Collins’s editors at U.S. News had no interest in that story—a spokeswoman explained later that they didn’t have a news “peg” for it—and the article was never published. Finally, a year later, after Collins had moved to the New York Daily News in 1987, he persuaded his new employer to print the piece. Collins’s account prompted the state to launch a criminal fraud investigation against Blotnick, and Forbes discontinued Blotnick’s column the very next day. But the news of Blotnick’s improprieties and implausibilities made few waves in the press; it inspired only a brief news item in Time, nothing in Newsweek. And Blotnick’s publisher, Viking Penguin, went ahead with plans to print a paperback edition of his latest book anyway. As Gerald Howard, then Viking’s executive editor, explained at the time, “Blotnick has some very good insights into the behavior of people in business that I continue to believe have an empirical basis.”
• • •
THE PRESS’S treatment of Hite’s and Blotnick’s findings suggests that the statistics the popular culture chooses to promote most heavily are the very statistics we should view with the most caution. They may well be in wide circulation not because they are true but because they support widely held media preconceptions.
Under the backlash, statistics became prescriptions for expected female behavior, cultural marching orders to women describing only how they should act—and how they would be punished if they failed to heed the call. This “data” was said to reflect simply “the way things are” for women, a bedrock of demographic reality that was impossible to alter; the only choice for women was to accept the numbers and lower their sights to meet them.
As the backlash consensus solidified, statistics on women stopped functioning as social barometers. The data instead became society’s checkpoints, positioned at key intervals in the life course of women, dispatching advisories on the perils of straying from the appointed path. This prescriptive agenda governed the life span of virtually every statistic on women in the ’80s, from initial gathering to final dissemination. In the Reagan administration, U.S. Census Bureau demographers found themselves under increasing pressure to generate data for the government’s war against women’s independence, to produce statistics “proving” the rising threat of infertility, the physical and psychic risks lurking in abortion, the dark side of single parenthood, the ill effects of day care. “People I’ve dealt with in the [Reagan] government seem to want to recreate the fantasy of their own childhood,” Martin O’Connell, chief of the Census Bureau’s fertility statistics branch, says. And results that didn’t fit that fantasy were discarded, like a government study finding that federal affirmative action policies have a positive effect on corporate hiring rates of women and minorities. The Public Health Service censored information on the beneficial health effects of abortion and demoted and fired federal scientists whose findings conflicted with the administration’s so-called pro-family policy.
“Most social research into the family has had an immediate moral purpose—to eliminate deviations like divorce, desertion, illegitimacy, and adultery—rather than a desire to understand the fundamental nature of social institutions,” social scientist Kingsley Davis wrote in his 1948 classic Human Society. More than forty years later, it is one of the few statements by a demographer that has held up.
Valentine’s Day 1986 was coming up, and at the Stamford Advocate, it was reporter Lisa Marie Petersen’s turn to produce that year’s story on Cupid’s slings and arrows. Her “angle,” as she recalls later, would be “Romance: Is It In or Out?” She went down to the Stamford Town Center mall and interviewed a few men shopping for flowers and chocolates. Then she put in a call to the Yale sociology department, “just to get some kind of foundation,” she says. “You know, something to put in the third paragraph.”
She got Neil Bennett on the phone—a thirty-one-year-old unmarried sociologist who had recently completed, with two colleagues, an unpublished study on women’s marriage patterns. Bennett warned her the study wasn’t really finished, but when she pressed him, he told her what he had found: college-educated women who put schooling and careers before their wedding date were going to have a harder time getting married. “The marriage market unfortunately may be falling out from under them,” he told her.
Bennett brought out the numbers: never-married college-educated women at thirty had a 20 percent chance of being wed; by thirty-five their odds were down to 5 percent; by forty, to 1.3 percent. And black women had even lower odds. “My jaw just dropped,” recalls Petersen, who was twenty-seven and single at the time. Petersen never thought to question the figures. “We usually just take anything from good schools. If it’s a study from Yale, we just put it in the paper.”
The Advocate ran the news on the front page. The Associated Press immediately picked up the story and carried it across the nation and eventually around the world. In no time, Bennett was fielding calls from Australia.
In the United States, the marriage news was absorbed by every outlet of mass culture. The statistics received front-page treatment in virtually every major newspaper and top billing on network news programs and talk shows. They wound up in sitcoms from “Designing Women” to “Kate and Allie”; in movies from Crossing Delancey to When Harry Met Sally to Fatal Attraction; in women’s magazines from Mademoiselle to Cosmopolitan; in dozens of self-help manuals, dating-service mailings, night-class courses on relationships, and greeting cards. Even a transit advertising service, “The Street Fare Journal,” plastered the study’s findings on display racks in city buses around the nation, so single straphangers on their way to work could gaze upon a poster of a bereft lass in a bridal veil, posed next to a scorecard listing her miserable nuptial odds.
Bennett and his colleagues, Harvard economist David Bloom and Yale graduate student Patricia Craig, predicted a “marriage crunch” for baby-boom college-educated women for primarily one reason: women marry men an average of between two and three years older. So, they reasoned, women born in the first half of the baby boom between 1946 and 1957, when the birthrate was increasing each year, would have to scrounge for men in the less populated older age brackets. And those education-minded women who decided to get their diplomas before their marriage licenses would wind up worst off, the researchers postulated—on the theory that the early bird gets the worm.
At the very time the study was released, however, the assumption that women marry older men was rapidly becoming outmoded; federal statistics now showed first-time brides marrying grooms an average of only 1.8 years older. But it was impossible to revise the Harvard-Yale figures in light of these changes, or even to examine them—since the study wasn’t published. This evidently did not bother the press, which chose to ignore a published study on the same subject—released only a few months earlier—that came to the opposite conclusion. That study, an October 1985 report by researchers at the University of Illinois, concluded that the marriage crunch in the United States was minimal. Their data, the researchers wrote, “did not support theories which see the marriage squeeze as playing a major role in recent changes in marriage behavior.” (In fact, in their historical and geographic review of marital data, they could find “marriage crunches” only in a few European nations back in the 1900s and in some Third World countries in more modern times.)
In March 1986, Bennett and his co-researchers released an informal “discussion paper” that revealed they had used a “parametric model” to compute women’s marital odds—an unorthodox and untried method for predicting behavior. Princeton professors Ansley Coale and Donald McNeil had originally constructed the parametric model to analyze marital patterns of elderly women who had already completed their marriage cycle. Bennett and Bloom, who had been graduate students under Coale, thought they could use the same method to predict marriage patterns. Coale, asked about it later, was doubtful. “In principle, the model may be applicable to women who haven’t completed their marital history,” he says, “but it is risky to apply it.”
To make matters worse, Bennett, Bloom, and Craig took their sample of women from the 1982 Current Population Survey, an off year in census-data collection that taps a much smaller number of households than the decennial census study. The researchers then broke that sample down into ever smaller subgroups—by age, race, and education—until they were making generalizations based on small unrepresentative samples of women.
As news of the “man shortage” study raced through the media, Jeanne Moorman, a demographer in the U.S. Census Bureau’s marriage and family statistics branch, kept getting calls from reporters seeking comment. She decided to take a closer look at the researchers’ paper. A college-educated woman with a doctoral degree in marital demography, Moorman was herself an example of how individual lives defy demographic pigeonholes: she had married at thirty-two, to a man nearly four years younger.
Moorman sat down at her computer and conducted her own marriage study, using conventional standard life tables instead of the parametric model, and drawing on the 1980 Population Census, which includes 13.4 million households, instead of the 1982 survey that Bennett used, which includes only 60,000 households. The results: At thirty, never-married college-educated women have a 58 to 66 percent chance at marriage—three times the Harvard-Yale study’s predictions. At thirty-five, the odds were 32 to 41 percent, seven times higher than the Harvard-Yale figure. At forty, the odds were 17 to 23 percent, more than thirteen times higher. And she found that a college-educated single woman at thirty would be more likely to marry than her counterpart with only a high school diploma.
In June 1986, Moorman wrote to Bennett with her findings. She pointed out that more recent data also ran counter to his predictions about college-educated women. While the marriage rate has been declining in the general population, the rate has actually risen for women with four or more years of college who marry between ages twenty-five and forty-five. “This seems to indicate delaying rather than forgoing marriage,” she noted.
Moorman’s letter was polite, almost deferential. As a professional colleague, she wrote, she felt obligated to pass along these comments, “which I hope will be well received.” They were received with silence. Two months passed. Then, in August, writer Ben Wattenberg mentioned Moorman’s study in his syndicated newspaper column and noted that it would be presented at the Population Association of America Conference, an important professional gathering for demographers. Moorman’s findings could prove embarrassing to Bennett and Bloom before their colleagues. Suddenly, a letter arrived in Moorman’s mailbox. “I understand from Ben Wattenberg that you will be presenting these results at PAA in the spring,” Bennett wrote; would she send him a copy “as soon as it’s available”? When she didn’t send it off at once, he called and, Moorman recalls, “He was very demanding. It was, ‘You have to do this, you have to do that.’” This was to become a pattern in her dealings with Bennett, she says. “I always got the feeling from him that he was saying, ‘Go away, little girl, I’m a college professor; I’m right and you have no right to question me.’” (Bennett refuses to discuss his dealings with Moorman or any other aspect of the marriage study’s history, asserting that he has been a victim of the over-eager media, which “misinterpreted [the study] more than I had ever anticipated.”)
Meanwhile at the Census Bureau, Moorman recalls, she was running into interference from Reagan administration officials. The head office handed down a directive, ordering her to quit speaking to the press about the marriage study because such critiques were “too controversial.” When a few TV news shows actually invited her to tell the other side of the man-shortage story, she had to turn them down. She was told to concentrate instead on a study that the White House wanted—about how poor unwed mothers abuse the welfare system.
By the winter of 1986, Moorman had put the finishing touches on her marriage report with the more optimistic findings and released it to the press. The media relegated it to the inside pages, when they reported it at all. At the same time, in an op-ed piece printed in the New York Times, the Boston Globe, and Advertising Age, Bennett and Bloom roundly attacked Moorman for issuing her study, which only “further muddled the discussion,” they complained. Moorman and two other Census Bureau statisticians wrote a response to Bennett and Bloom’s op-ed article. But the Census Bureau held up its release for months. “By the time they finished blue-lining it,” Moorman recalls, “it said nothing. We sent it to the New York Times, but by then it was practically the next December and they wouldn’t print it.”
Bennett and Bloom’s essay had criticized Moorman for using the standard life tables, which they labeled a “questionable technique.” So Moorman decided to repeat her study using the Harvard-Yale men’s own parametric model. She took the data down the hall to Robert Fay, a statistician whose specialty is mathematical models. Fay looked over Bennett and Bloom’s computations and immediately spotted a major error. They had forgotten to factor in the different patterns in college- and high school-educated women’s marital histories. (High school-educated women tend to marry in a tight cluster right after graduation, producing a steep, narrow curve concentrated at younger ages. College-educated women tend to spread the age of marriage over a longer and later period, producing a flatter curve shifted toward older ages.) Fay made the adjustments and ran the data again, using Bennett and Bloom’s mathematical model. The results this time were nearly identical to Moorman’s.
So Robert Fay wrote a letter to Bennett. He pointed out the error and its significance. “I believe this reanalysis points up not only the incorrectness of your results,” he wrote, “but also a necessity to return to the rest of the data to examine your assumptions more closely.” Bennett wrote back the next day. “Things have gotten grossly out of hand,” he said. “I think it’s high time that we get together and regain at least some control of the situation.” He blamed the press for their differences and pointedly noted that “David [Bloom] and I decided to stop entirely our dealings with all media,” a hint perhaps that the Census researchers should do the same. But Bennett needn’t have worried about his major error making headlines: Moorman had, in fact, already mentioned it to several reporters, but none were interested.
Still, Bennett and Bloom faced the discomforting possibility that the Census researchers might point out their mistake at the upcoming PAA conference. In what Moorman suspects was an effort to avert this awkward event, Bennett and Bloom suddenly proposed to Moorman that they all “collaborate” on a new study they could submit jointly to the PAA conference—in lieu of Moorman’s. When Bennett and Bloom discovered they had missed the conference deadline for filing such a new paper, Moorman notes, they just as suddenly dropped the collaboration idea.
In the spring of 1987, the demographers flew to Chicago for the PAA conference. The day before the session, Moorman recalls, she got a call from Bloom. He and Bennett were going to try to withdraw their marriage study anyway, he told her—and substitute a paper on fertility instead. But the conference chairman refused to allow the eleventh-hour switch.
When it was time to present the notorious marriage study before their colleagues, Bloom told the assembly that their findings were “preliminary,” gave a few brief remarks and quickly yielded the floor. Moorman was up next. But, thanks to still more interference from her superiors in Washington, there was little she could say. The director of the Census Bureau, looking to avoid further controversy, had ordered her to remove all references to the Harvard-Yale marriage study from her conference speech.
Three and a half years after the Harvard-Yale study made nationwide headlines, the actual study was finally published—without the marriage statistics. Bennett told the New York Times: “We’re not shying away because we have anything to hide.” And the reporter took him at his word. The famous statistics were deleted, the news story concluded, only because the researchers found them “a distraction from their central findings.”
• • •
IN ALL the reportorial enterprise expended on the Harvard-Yale study, the press managed to overlook a basic point: there was no man shortage. As a simple check of the latest Census population charts would have revealed, there were about 1.9 million more bachelors than unwed women between the ages of twenty-five and thirty-four and about a half million more between the ages of thirty-five and fifty-four. If anyone faced a shortage of potential spouses, it was men in the prime marrying years: between the ages of twenty-five and thirty-four, there were 119 single men for every hundred single women.
A glance at past Census charts would also have dispelled the notion that the country was awash in a record glut of single women. The proportion of never-married women, about one in five, was lower than it had been at any time in the 20th century except the ’50s, and even lower than in the mid-to-late 19th century, when one in three women was unwed. If one looks at never-married women aged forty-five to fifty-four (a better indicator of lifelong single status than women in their twenties and thirties, who may simply be postponing marriage), the proportion of unwed women in 1985 was, in fact, smaller than it had ever been—smaller even than in the marriage-crazed ’50s. (Eight percent of these women were single in 1950, compared with 5 percent in 1985.) Indeed, the only place where a “surplus” of unattached women could be said to exist in the ’80s was in retirement communities. What was the median age of women who were living alone in 1986? Sixty-six years old. (The median age of single men, by contrast, was forty-two.)
Conventional press wisdom held that single women of the ’80s were desperate for marriage—a desperation that mounted with every passing unwed year. But surveys of real-life women told a different story. A massive study of women’s attitudes by Battelle Memorial Institute in 1986, which examined fifteen years of national surveys of ten thousand women, found that marriage was no longer the centerpiece of women’s lives and that women in their thirties were not only delaying but actually dodging the wedding bands. The 1985 Virginia Slims poll reported that 70 percent of women believed they could have a “happy and complete” life without a wedding ring. In the 1989 “New Diversity” poll by Langer Associates and Significance Inc., that proportion had jumped to 90 percent. The 1990 Virginia Slims poll found that nearly 60 percent of single women believed they were a lot happier than their married friends and that their lives were “a lot easier.” A 1986 national survey commissioned by Glamour magazine found a rising preference for the single life among women in their twenties and thirties: 90 percent of the never-married women said “the reason they haven’t [married] is that they haven’t wanted to yet.” And a 1989 Louis Harris poll of still older single women—between forty-five and sixty—found that the majority of them said they didn’t want to get married. A review of fourteen years of U.S. National Survey data charted an 11 percent jump in happiness among 1980s-era single women in their twenties and thirties—and a 6.3 percent decline in happiness among married women of the same age. If marriage had ever served to boost personal female happiness, the researchers concluded, then “those effects apparently have waned considerably in the last few years.” A 1985 Woman’s Day survey of sixty thousand women found that only half would marry their husbands again if they had it to do over.
In lieu of marriage, women were choosing to live with their loved ones. The cohabitation rate quadrupled between 1970 and 1985. When the federal government commissioned its first-ever study of single women’s sexual habits in 1986, the researchers found that one-third of them had cohabited at some time in their lives. Other demographic studies calculated that at least one-fourth of the decline in the proportion of married women could be attributed to couples cohabiting.
The more women are paid, the less eager they are to marry. A 1982 study of three thousand singles found that women earning high incomes are almost twice as likely to want to remain unwed as women earning low incomes. “What is going to happen to marriage and child-bearing in a society where women really have equality?” Princeton demographer Charles Westoff wondered in the Wall Street Journal in 1986. “The more economically independent women are, the less attractive marriage becomes.”
Men in the ’80s, on the other hand, were a little more anxious to marry than the press accounts let on. Single men far outnumbered women in dating services, matchmaking clubs, and the personals columns, all of which enjoyed explosive growth in the decade. In the mid-’80s, video dating services were complaining of a three-to-one male-to-female sex ratio in their membership rolls. In fact, it had become common practice for dating services to admit single women at heavily reduced rates, even free memberships, in hopes of remedying the imbalance.
Personal ads were similarly lopsided. In an analysis of 1,200 ads in 1988, sociologist Theresa Montini found that most were placed by thirty-five-year-old heterosexual men and the vast majority “wanted a long-term relationship.” Dating service directors reported that the majority of men they counseled were seeking spouses, not dates. When Great Expectations, the nation’s largest dating service, surveyed its members in 1988, it found that 93 percent of the men wanted, within one year, to have either “a commitment with one person” or marriage. Only 7 percent of the men said they were seeking “lots of dates with different people.” Asked to describe “what concerns you the day after you had sex with a new partner,” only 9 percent of the men checked “Was I good?” while 42 percent said they were wondering whether it could lead to a “committed relationship.”
These men had good cause to pursue nuptials; if there’s one pattern that psychological studies have established, it’s that the institution of marriage has an overwhelmingly salutary effect on men’s mental health. “Being married,” the prominent government demographer Paul Glick once estimated, “is about twice as advantageous to men as to women in terms of continued survival.” Or, as family sociologist Jessie Bernard wrote in 1972:
There are few findings more consistent, less equivocal, [and] more convincing, than the sometimes spectacular and always impressive superiority on almost every index—demographic, psychological, or social—of married over never-married men. Despite all the jokes about marriage in which men indulge, all the complaints they lodge against it, it is one of the greatest boons of their sex.
Bernard’s observation still applies. As Ronald C. Kessler, who tracks changes in men’s mental health at the University of Michigan’s Institute for Social Research, says: “All this business about how hard it is to be a single woman doesn’t make much sense when you look at what’s really going on. It’s single men who have the worst of it. When men marry, their mental health massively increases.”
The mental health data, chronicled in dozens of studies that have looked at marital differences in the last forty years, are consistent and overwhelming: The suicide rate of single men is twice as high as that of married men. Single men suffer from nearly twice as many severe neurotic symptoms and are far more susceptible to nervous breakdowns, depression, even nightmares. And despite the all-American image of the carefree single cowboy, in reality bachelors are far more likely to be morose, passive, and phobic than married men.
When contrasted with single women, unwed men fared no better in mental health studies. Single men suffer from twice as many mental health impairments as single women; they are more depressed, more passive, more likely to experience nervous breakdowns and all the designated symptoms of psychological distress—from fainting to insomnia. In one study, one-third of the single men scored high for severe neurotic symptoms; only 4 percent of the single women did.
If the widespread promotion of the Harvard-Yale marriage study had one effect, it was to transfer much of this bachelor anxiety into single women’s minds. In the Wall Street Journal, a thirty-six-year-old single woman perceptively remarked that being unmarried “didn’t bother me at all” until after the marriage study’s promotion; only then did she begin feeling depressed. A thirty-five-year-old woman told USA Today, “I hadn’t even thought about getting married until I started reading those horror stories” about women who may never wed. In a Los Angeles Times story, therapists reported that after the study’s promotion, single female patients became “obsessed” with marriage, ready to marry men they didn’t even love, just to beat the “odds.” When Great Expectations surveyed its members a year after the study’s promotion, it found that 42 percent of single women said they now brought up marriage on the first date. The Annual Study of Women’s Attitudes, conducted by Mark Clements Research for many women’s magazines, found that the proportion of all single women who feared they would never marry had nearly doubled in that one year after the Harvard-Yale study came out, from 14 to 27 percent, and soared to 39 percent for women twenty-five and older, the group targeted in the study.
The year after the marriage report, news surfaced that women’s age at first marriage had dropped slightly and, reversing a twenty-year trend, the number of family households had grown faster between 1986 and 1987 than the number of nonfamily households. (The increase in family households, however, was a tiny 1.5 percent.) These small changes were immediately hailed as a sign of the comeback of traditional marriage. “A new traditionalism, centered on family life, is in the offing,” Jib Fowles, University of Houston professor of human sciences, cheered in a 1988 opinion piece in the New York Times. Fowles predicted “a resurgence of the conventional family by the year 2000 (father working, mother at home with the children).” This would be good for American industry, he reminded business magnates who might be reading the article. “Romance and courtship will be back in favor, so sales of cut flowers are sure to rise,” he pointed out. And “a return to homemaking will mean a rise in supermarket sales.”
This would also be good news for men, a point that Fowles skirted in print but made plain enough in a later interview: “There’s not even going to have to be a veneer of that ideology of subscribing to feminist thoughts,” he says. “Men are just going to feel more comfortable with the changed conditions. Every sign that I can see is that men feel uncomfortable with the present setup.” He admits to being one of them: “A lot of it has to do with my assumptions of what it is to be a male.”
But will his wife embrace the “new traditionalism” with equal relish? After having recently given birth to their second child, she returned immediately to her post as secondary education coordinator for a large Texas school district. “She’s such a committed person to her job,” Fowles says, sighing. “I don’t think she’d give up her career.”
In the 1970s, many states passed new “no-fault” divorce laws that made the process easier: they eliminated the moralistic grounds required to obtain a divorce and divided up a marriage’s assets based on needs and resources without reference to which party was held responsible for the marriage’s failure. In the 1980s, these “feminist-inspired” laws came under attack: the New Right painted them as schemes to undermine the family, and the media and popular writers portrayed them as inadvertent betrayals of women and children, legal slingshots that “threw thousands of middle-class women,” as a typical chronicler put it, “into impoverished states.”
Perhaps no one person did more to fuel the attack on divorce-law reform in the backlash decade than sociologist Lenore Weitzman, whose 1985 book, The Divorce Revolution: The Unexpected Social and Economic Consequences for Women and Children in America, supplied the numbers quoted by everyone assailing the new laws. From Phyllis Schlafly to Betty Friedan, from the National Review to the “CBS Evening News,” Weitzman’s “devastating” statistics were invoked as proof that women who sought freedom from unhappy marriages were making a big financial mistake: they would wind up poorer under the new laws—worse off than if they had divorced under the older, more “protective” system, or if they had simply stayed married.
If the media latched on to Weitzman’s findings with remarkable fervor, they weren’t solely to blame for the hype. Weitzman wasn’t above blowing her own horn. Until her study came along, she writes in The Divorce Revolution, “No one knew just how devastating divorce had become for women and children.” Her data, she asserts, “took years to collect and analyze” and constituted “the first comprehensive portrait” of the effects of divorce under the new laws.
This is Weitzman’s thesis: “The major economic result of the divorce-law revolution is the systematic impoverishment of divorced women and their children.” Under the old “fault” system, Weitzman writes, the “innocent” party stood to receive more than half the property—an arrangement that she says generally worked to the wronged wife’s benefit. The new system, on the other hand, hurts women because it is too equal—an evenhandedness that is hurting older homemakers most of all, she says. “[T]he legislation of equality actually resulted in a worsened position for women and, by extension, a worsened position for children.”
Weitzman’s work does not say feminists were responsible for the new no-fault laws, but those who promoted her work most often acted as if her book indicted the women’s movement. The Divorce Revolution, Time informed its readers, shows how forty-three states passed no-fault laws “largely in response to feminist demand.” A flurry of anti-no-fault books, most of them knockoffs of Weitzman’s work, blamed the women’s movement for divorced women’s poverty. “The impact of the divorce revolution is a clear example of how an equal-rights orientation has failed women,” Mary Ann Mason writes in The Equality Trap. “[J]udges are receiving the message that feminists are sending.”
Actually, feminists had almost nothing to do with divorce-law reform—as Weitzman herself points out. The 1970 California no-fault law, considered the most radical for its equal-division rule, was drafted by a largely male advisory board. The American Bar Association, not the National Organization for Women, instigated the national “divorce revolution”—which wasn’t even much of a revolution. At the time of Weitzman’s work, half the states still had the traditional “fault” system on their books, with no-fault only as an option. Only eight states had actually passed community property provisions like the California law, and only a few required equal property division.
Weitzman argued that because women and men are differently situated in marriage—that is, the husbands usually make more money and, upon divorce, the wives usually get the kids—treating the spouses equally upon divorce winds up overcompensating the husband and cheating the wife and children. On its face, this argument seems reasonable enough, and Weitzman even had the statistics to prove it: “The research shows that on the average, divorced women and the minor children in their households experience a 73 percent decline in their standard of living in the first year after divorce. Their former husbands, in contrast, experience a 42 percent rise in their standard of living.”
These figures seemed alarming, and the press willingly passed them on—without asking two basic questions: Were Weitzman’s statistics correct? And, even more important, did she actually show that women fared worse under the new divorce laws than the old?
• • •
IN THE summer of 1986, soon after Lenore Weitzman had finished testifying before Congress on the failings of no-fault divorce, she received a letter from Saul Hoffman, an economist at the University of Delaware who specializes in divorce statistics. He wrote that he and his partner, University of Michigan social scientist Greg Duncan, were a little bewildered by her now famous 73 percent statistic. They had been tracking the effect of divorce on income for two decades—through the landmark “5,000 Families” study—and they had found the changes following divorce to be nowhere near as dramatic as she described. They found a much smaller 30 percent decline in women’s living standards in the first year after divorce and a much smaller 10 to 15 percent improvement for men. Moreover, Hoffman observed, they found the lower living standard for many divorced women to be temporary. Five years after divorce, the average woman’s living standard was actually slightly higher than when she was married to her ex-husband.
What baffled Hoffman and Duncan most was that Weitzman claimed in her book to have used their methods to arrive at her 73 percent statistic. Hoffman’s letter wondered if he and Duncan might take a look at her data. No reply. Finally, Hoffman called. Weitzman told him she “didn’t know how to get hold of her data,” Hoffman recalls, because she was at Princeton and her data was at Harvard. The next time he called, he says, Weitzman said she couldn’t give him the information because she had broken her arm on a ski vacation. “It sort of went on and on,” Hoffman says of the next year and a half of letters and calls to Weitzman. “Sometimes she would have an excuse. Sometimes she just wouldn’t respond at all. It was a little strange. Let’s just say, it’s not the way I’m used to a scholar normally behaving.” Finally, after the demographers appealed to the National Science Foundation, which had helped fund her research, Weitzman relented and promised she would put her data tapes on reserve at Radcliffe’s Murray Research Center. But six months later, they still weren’t there. Again, Hoffman appealed to NSF officials. Finally, in late 1990, the library began receiving Weitzman’s data. As of early 1991, the archives’ researchers were still sorting through the files and they weren’t yet in shape to be reviewed.
In the meantime, Duncan and Hoffman tried repeating her calculations using her numbers in the book. But they still came up with a 33 percent, not a 73 percent, decline in women’s standard of living. The two demographers published this finding in Demography. “Weitzman’s highly publicized findings are almost certainly in error,” they wrote. Not only was the 73 percent figure “suspiciously large,” it was “inconsistent with information on changes in income and per capita income that she reports.” The press response? The Wall Street Journal acknowledged Duncan and Hoffman’s article in a brief item in the newspaper’s demography column. No one else picked it up.
Weitzman never responded to Duncan and Hoffman’s critique. “They are just wrong,” she says in a phone interview. “It does compute.” She refuses to answer any additional questions put to her. “You have my position. I’m working on something very different and I just don’t have the time.”
Confirmation of Duncan and Hoffman’s findings came from the U.S. Census Bureau, which issued its study on the economic effects of divorce in March 1991. The results were in line with Duncan and Hoffman’s. “[Weitzman’s] numbers are way too high,” says Suzanne Bianchi, the Census study’s author. “And that seventy-three percent figure that keeps getting thrown around isn’t even consistent with other numbers in [Weitzman’s] work.”
How could Weitzman’s conclusions have been so far off the mark? There are several possible explanations. First, her statistics, unlike Duncan and Hoffman’s, were not based on a national sample, although the press widely represented them as such. She drew the people she interviewed only from Los Angeles County divorce court. Second, her sample was remarkably small—114 divorced women and 114 divorced men. (And her response rate was so low that Duncan and Hoffman and other demographers who reviewed her work questioned whether her sample was even representative of Los Angeles.)
Finally, Weitzman drew her financial information on these divorced couples from a notoriously unreliable source—their own memories. “We were amazed at their ability to recall precisely the appraised value of their house, the amount of the mortgage, the value of the pension plan, etc.,” she writes in her book. Memory, particularly in the emotion-charged realm of divorce, is hardly a reliable source of statistics; one wishes that Weitzman had been a little less “amazed” by the subjects’ instant recall and a little more dogged about referring to the actual records.
To be fair, the 73 percent statistic is only one number in Weitzman’s work. And a 30 percent decline in women’s living standard is hardly ideal, either. Although the media fixed on its sensational implications, the figure has little bearing on her second and more central point—that women are worse off since “the divorce revolution.” This is an important question because it gets to the heart of the backlash argument: women are better off “protected” than equal.
Yet, while Weitzman’s book states repeatedly that the new laws have made life “worse” for women than the old ones, it concludes by recommending that legislators should keep the new divorce laws with a little fine-tuning. And she strongly warns against a return to the old system, which she calls a “charade” of fairness. “[I]t is clear that it would be unwise and inappropriate to suggest that California return to a more traditional system,” she writes.
Needless to say, this conclusion never made it into the press coverage of Weitzman’s study. A closer reading explains why Weitzman had little choice but to abandon her theory on no-fault divorce: she had conducted interviews only with men and women who divorced after the 1970 no-fault law went into effect in California. She had no comparable data on couples who divorced under the old system—and so no way of testing her hypothesis. (A later 1990 study by two law professors reached the opposite conclusion: women and children, they found, were slightly better off economically under the no-fault provisions.)
Nonetheless, Weitzman suggests she had two other types of evidence to show that divorcing women suffered more under no-fault law. Divorcing women, she writes, are less likely to be awarded alimony under the new legislation—a loss most painful to older homemakers who are ill equipped to enter the work force. Second, women are now often forced to sell the family house. Yet Weitzman fails to make the case on either count.
National data collected by the U.S. Census Bureau show that the percentage of women awarded alimony or maintenance payments (all told, a mere 14 percent) is not significantly different from what it was in the 1920s. Weitzman argues that, even so, one group of women—long-married traditional housewives—have been hurt by the new laws, caught in the middle when the rules changed. Yet her own data show that older housewives and long-married women are the only groups of divorced women who actually are being awarded alimony in greater numbers under the new laws than the old. The increase that she reports for housewives married more than ten years is a remarkable 21 percent.
Her other point is that under no-fault “equal division” rules, the couple is increasingly forced to sell the house, whereas under the old laws, she says, the judge traditionally gave it to the wife. But the new divorce laws don’t require house sales and, in fact, the authors of the California law explicitly stated that judges shouldn’t use the law to force single mothers and their children from the home. If more women are being forced to sell the family home, the new laws aren’t to blame.
The example Weitzman gives of a forced house sale is in itself harshly illuminating. A thirty-eight-year-old divorcing housewife wanted to remain in the home where the family had lived for fifteen years. Not only did she want to spare her teenage son further disruption, she couldn’t afford to move—because the child support and alimony payments the judge had granted were so low. In desperation, she offered to sacrifice her portion of her husband’s pension plan, about $85,000, if only he would let her stay in the house. He wouldn’t. She tried next to refinance the house, and pay off her husband that way, but no bank would give her a loan based on spousal support. In court, the judge was no more yielding:
I begged the judge. . . . All I wanted was enough time for Brian [her son] to adjust to the divorce. . . . I broke down and cried on the stand . . . but the judge refused. He gave me three months to move. . . . [M]y husband’s attorney threatened me with contempt if I wasn’t out on time.
The real source of divorced women’s woes can be found not in the fine print of divorce legislation but in the behavior of ex-husbands and judges. Between 1978 and 1985, the average amount of child support that divorced men paid fell nearly 25 percent. Divorced men are now more likely to meet their car payments than their child support obligations—even though, as one study in the early ’80s found, for two-thirds of them, the amount owed their children is less than their monthly auto loan bill.
As of 1985, only half of the 8.8 million single mothers who were supposed to be receiving child support payments from their ex-husbands actually received any money at all, and only half of that half were actually getting the full amount. By 1988, the federal Office of Child Support Enforcement had collected only $5 billion of the $25 billion fathers owed in back child support. And studies on child support collection strategies are finding that only one tactic seems to awaken the moral conscience of negligent fathers: mandatory jail sentences. As sociologist Arlie Hochschild has observed, economic abandonment may be the new method some divorced men have devised for exerting control over their former families: “The ‘new’ oppression outside marriage thus creates a tacit threat to women inside marriage,” she writes. “Patriarchy has not disappeared; it has changed form.”
At the same time, public and judicial officials weren’t setting much of an example. A 1988 federal audit found that thirty-five states weren’t complying with federal child support laws. And judges weren’t even upholding the egalitarian principles of no-fault. Instead, surveys in several states found that judges were willfully misinterpreting the statutes to mean that women should get not one-half but one-third of all assets from the marriage. Weitzman herself reached the conclusion that judicial antagonism to feminism was aggravating the rough treatment of contemporary divorced women. “The concept of ‘equality’ and the sex-neutral language of the law,” she writes, have been “used by some lawyers and judges as a mandate for ‘equal treatment’ with a vengeance, a vengeance that can only be explained as a backlash reaction to women’s demands for equality in the larger society.”
In the end, the most effective way to correct the post-divorce inequities between the sexes is simple: correct pay inequality in the work force. If the wage gap were wiped out between the sexes, a federal advisory council concluded in 1982, one half of female-headed households would be instantly lifted out of poverty. “The dramatic increase in women working is the best kind of insurance against this vulnerability,” Duncan says, observing that women’s access to better-paying jobs saved a lot of divorced women from a far worse living standard. And that access, he points out, “is largely a product of the women’s movement.”
• • •
WHILE THE social scientists whose views were promoted in the ’80s harped on the “devastating consequences” of divorce on women, we heard virtually nothing about its effect on men. This wasn’t for lack of data. In 1984, demographers specializing in divorce statistics at the Institute for Social Research reviewed three decades of national data on men’s mental health, and flatly concluded—in a report that got little notice—the following: “Men suffer more from marital disruption than women.” No matter where they looked on the mental spectrum, divorced men were worse off—from depression to various psychological impairments to nervous breakdowns, from admissions to psychiatric facilities to suicide attempts.
From the start, men are less anxious to untie the knot than women: in national surveys, less than a third of divorced men say they were the spouse who wanted the divorce, while women report they were the ones actively seeking divorce 55 to 66 percent of the time. Men are also more devastated than women by the breakup—and time doesn’t cure the pain or close the gap. A 1982 survey of divorced people a year after the breakup found that 60 percent of the women were happier, compared with only half the men; a majority of the women said they had more self-respect, while only a minority of the men felt that way. The nation’s largest study on the long-term effects of divorce found that five years after divorce, two-thirds of the women were happier with their lives; only 50 percent of the men were. By the ten-year mark, the proportion of men who said their quality of life was no better or even worse had risen from one-half to two-thirds. While 80 percent of women ten years after divorce said it was the right decision, only 50 percent of the ex-husbands agreed. “Indeed, when such regrets [about divorcing] are heard, they come mostly from older men,” the study’s director, Judith Wallerstein, observed.
Nonetheless, in her much-publicized 1989 book, Second Chances: Men, Women and Children a Decade After Divorce—hailed by such New Right groups as The Family in America and promptly showcased on the cover of the New York Times Magazine—Wallerstein chooses to focus instead on her belief that children are worse off when their parents divorce. Her evidence? She doesn’t have any: like Weitzman, she had no comparative data. She had never bothered to test her theory on a control group with intact families. Her three-hundred-page book explains away this fundamental flaw in a single footnote: “Because so little was known about divorce, it was premature to plan a control group,” Wallerstein writes, adding that she figured she would “generate hypotheses” first, then maybe conduct the control-group study at a later date—a shoot-first, ask-questions-later logic that sums up the thinking of many backlash opinion makers.
“It’s not at all clear what a control group would be,” Wallerstein explains later. One would have to control for other factors that might have led to the divorce, like “frigidity and other sexual problems,” she argues. “I think people who are asking for a control group are refusing to understand the whole complexity of what a control group is,” she says. “It would just be foolish.”
By the end of the decade, however, Wallerstein was feeling increasingly queasy about the ways her work was being used—and distorted—by politicians and the press. At a congressional hearing, she was startled when Sen. Christopher Dodd proposed that, given her findings, maybe the government should impose a mandatory delay on all couples seeking a divorce. And then national magazines quoted her work, wrongly, as saying that most children from divorced families become delinquents. “It seems no matter what you say,” she sighs, “it’s misused. It’s a very political field.”
If the campaign against no-fault divorce had no real numbers to make its case, the relentless anti-divorce publicity of the ’80s served as an effective substitute. Americans were finally convinced. Public support for liberalizing divorce laws, which had been rising since 1968, fell 8 percent from its ’70s level. And it was men who contributed most to this downturn; nearly twice as many men as women told pollsters they wanted to make it harder for couples to divorce.
On February 18, 1982, the New England Journal of Medicine reported that women’s chances of conceiving dropped suddenly after age thirty. Women between the ages of thirty-one and thirty-five, the researchers claimed, stood a nearly 40 percent chance of being infertile. This was unprecedented news indeed: virtually every study up until then had found fertility didn’t start truly declining until women reached at least their late thirties or even early forties. The supposedly neutral New England Journal of Medicine didn’t just publish the report. It served up a paternalistic three-page editorial, exhorting women to “reevaluate their goals” and have their babies before they started careers. The New York Times put the news on its front page that day, in a story that extolled the study as “unusually large and rigorous” and “more reliable” than previous efforts. Dozens of other newspapers, magazines, and TV news programs quickly followed suit. By the following year, the statistic had found its way into alarmist books about the “biological clock.” And like the children’s game of Telephone, as the 40 percent figure got passed along, it kept getting larger. A self-help book was soon reporting that women in their thirties now faced a “shocking 68 percent” chance of infertility—and promptly faulted the feminists for failing to advise women of the biological drawbacks of a successful career.
French researchers Daniel Schwartz and M. J. Mayaux had based their study on 2,193 Frenchwomen, all infertility patients at eleven artificial-insemination centers, all run by a federation that sponsored the research—and stood to benefit handsomely from heightened female fears of infertility. The patients in the study were hardly representative of the average woman: they were all married to completely sterile men and were trying to get pregnant through artificial insemination. Frozen sperm, which was used in this case, is far less potent than the naturally delivered, “fresh” variety. In fact, in an earlier study that Schwartz himself had conducted, he found women were more than four times as likely to get pregnant by having sex regularly as by being artificially inseminated.
The French study also declared any woman infertile who had not gotten pregnant after one year of trying. (The twelve-month rule is a recent development, inspired by “infertility specialists” marketing experimental and expensive new reproductive technologies; the definition of infertility used to be set at five years.) The one-year cutoff is widely challenged by demographers who point out that it takes young newlyweds a mean time of eight months to conceive. In fact, only 16 to 21 percent of couples who are defined as infertile under the one-year definition actually prove to be, a congressional study found. Time is the greatest, and certainly the cheapest, cure for infertility. In a British longitudinal survey of more than seventeen thousand women, one of the largest fertility studies ever conducted, 91 percent of the women eventually became pregnant after thirty-nine months.
After the French study was published, many prominent demographers disputed its results in a round of letters and articles in the professional literature. John Bongaarts, senior associate of the Population Council’s Center for Policy Studies, called the study “a poor basis for assessing the risk of female sterility” and largely invalid. Three statisticians from Princeton University’s Office of Population Research also debunked the study and warned it could lead to “needless anxiety” and “costly medical treatment.” Even the French research scientists were backing away from their own study. At a professional conference later that year, they told their colleagues that they never meant their findings to apply to all women. But neither their retreat nor their peers’ disparaging assessments attracted press attention.
Three years later, in February 1985, the U.S. National Center for Health Statistics unveiled the latest results of its nationwide fertility survey of eight thousand women. It found that American women between thirty and thirty-four faced only a 13.6 percent, not 40 percent, chance of being infertile. Women in this age group had a mere 3 percent higher risk of infertility than women in their early twenties. In fact, since 1965, infertility had declined slightly among women in their early to mid-thirties—and even among women in their forties. Overall, the percentage of women unable to have babies had actually fallen—from 11.2 percent in 1965 to 8.5 percent in 1982.
As usual, this news made no media splashes. And in spite of the federal study’s findings, Yale medical professor Dr. Alan DeCherney, the lead author of the New England Journal’s sermonizing editorial, says he stands by his comments. Asked whether he has any second thoughts about the editorial’s message, he chuckles: “No, none at all. The editorial was meant to be provocative. I got a great response. I was on the ‘Today’ show.”
• • •
IN SEEKING the source of the “infertility epidemic,” the media and medical establishment considered only professional women, convinced that the answer was to be found in the rising wealth and independence of a middle-class female population. A New York Times columnist blamed feminism and the careerism it supposedly spawned for creating “the sisterhood of the infertile” among middle-class women. Writer Molly McKaughan admonished fellow career women, herself included, in Working Woman (and, later, in her book The Biological Clock) for the “menacing cloud” of infertility. Thanks largely to the women’s movement, she charged, we made this mistake: “We put our personal fulfillment first.”
At the same time, gynecologists began calling endometriosis, a uterine ailment that can cause infertility, the “career woman’s disease.” It afflicts women who are “intelligent, living with stress [and] determined to succeed at a role other than ‘mother’ early in life,” Niels Lauersen, a New York Medical College obstetrics professor at the time, asserted in the press. (In fact, epidemiologists find endometriosis no more prevalent among professional women than among any other group.) Others warned of high miscarriage rates among career women. (In fact, professional women typically experience the lowest miscarriage rate.) Still others reminded women that if they waited, they would more likely have stillbirths or premature, sick, retarded, or abnormal babies. (In fact, a 1990 study of four thousand women found women over thirty-five no more likely than younger women to have stillbirths or premature or sick newborns; a 1986 study of more than six thousand women reached a similar conclusion. And women under thirty-five now give birth to more children with Down syndrome than women over thirty-five, simply because they bear so many more children overall.)
Exercising the newly gained right to a legal abortion became another favorite “cause” of infertility. Gynecologists warned their middle-class female patients that if they had “too many” abortions, they risked developing infertility problems later, or even becoming sterile. Several state and local governments even enacted laws requiring physicians to advise women that abortions could lead to later miscarriages, premature births, and infertility. Researchers expended an extraordinary amount of energy and federal funds in quest of supporting data. More than 150 epidemiological research efforts in the last twenty years searched for links between abortion and infertility. But, as a research team who conducted a worldwide review and analysis of the research literature concluded in 1983, only ten of these studies used reliable methods, and of those ten, only one found any relation between abortion and later pregnancy problems—and that study looked at a sample of Greek women who had undergone dangerous, illegal abortions. Legal abortion methods, the researchers wrote, “have no adverse effect on a woman’s subsequent ability to conceive.”
In reality, women’s quest for economic and educational equality has only improved reproductive health and fertility. Better education and bigger paychecks breed better nutrition, fitness, and health care, all important contributors to higher fecundity. Federal statistics bear out that college-educated and higher-income women have a lower infertility rate than their high-school-educated and low-income counterparts.
The “infertility epidemic” among middle-class career women over thirty was a political program—and, for infertility specialists, a marketing tool—not a medical problem. The same White House that promoted the infertility threat allocated no funds toward preventing infertility—and, in fact, rebuffed all requests for aid. That the backlash’s spokesmen showed so little interest in the decade’s real infertility epidemics should have been a tipoff. The infertility rates of young black women tripled between 1965 and 1982. The infertility rates of young women of all races in their early twenties more than doubled. By the ’80s, women between twenty and twenty-four had an infertility rate 2 percent higher than that of women nearing thirty. Yet we heard little of this crisis and its causes—which had nothing to do with feminism or yuppie careerists.
This epidemic, in fact, could be traced in large part to the negligence of doctors and government officials, who were shockingly slow to combat the sexually transmitted disease of chlamydia; infection rates rose in the early ’80s and were highest among young women between the ages of fifteen and twenty-four. This illness, in turn, triggered the breakneck spread of pelvic inflammatory disease, which was responsible for a vast proportion of the infertility in the decade and afflicted an additional 1 million women each year. Chlamydia became the number-one sexually transmitted disease in the U.S., afflicting more than 4 million women and men in 1985, causing at least half of the pelvic inflammatory infections, and helping to quadruple life-threatening ectopic pregnancies between 1970 and 1983. By the mid- to late ’80s, as many as one in six young sexually active women were infected; infection rates ran as high as 35 percent in some inner-city clinics.
Yet chlamydia was one of the most poorly publicized, diagnosed, and treated illnesses in the country. Although the medical literature had documented catastrophic chlamydia rates for a decade, and although the disease was costing more than $1.5 billion a year to treat, it wasn’t until 1985 that the federal Centers for Disease Control even discussed drafting policy guidelines. The federal government provided no education programs on chlamydia, no monitoring, and didn’t even require doctors to report the disease. (By contrast, it does require doctors to report gonorrhea, which is half as prevalent.) And although chlamydia was simple to diagnose and easy to cure with basic antibiotics, few gynecologists even bothered to test for it. Nearly three-fourths of the cost of chlamydia infections, in fact, was caused by complications from lack of treatment.
Policymakers and the press in the ’80s also seemed uninterested in signs of another possible infertility epidemic. This one involved men. Men’s sperm count appeared to have dropped by more than half in thirty years, according to the few studies available. (Low sperm count is a principal cause of infertility.) The average man’s count, one researcher reported, had fallen from 200 million sperm per milliliter in the 1930s to between 40 and 70 million by the 1980s. The alarming depletion has many suspected sources: environmental toxins, occupational chemical hazards, excessive X-rays, drugs, tight underwear, even hot tubs. But the causes are murky because research in the area of male infertility is so scant. A 1988 congressional study on infertility concluded that, given the lack of information on male infertility, “efforts on prevention and treatment are largely guesswork.”
The government still does not include men in its national fertility survey. “Why don’t we do men?” William D. Mosher, lead demographer for the federal survey, repeats the question as if it’s the first time he’s heard it. “I don’t know. I mean, that would be another survey. You’d have to raise money for it. Resources aren’t unlimited.”
• • •
IF THE “infertility epidemic” was the first round of fire in the pronatal campaign of the ’80s, then the “birth dearth” was the second. At least the leaders of this campaign were more honest: they denounced liberated women for choosing to have fewer or no children. They didn’t pretend that they were just neutrally reporting statistics; they proudly admitted that they were seeking to manipulate female behavior. “Most of this small book is a speculation and provocation,” Ben Wattenberg freely concedes in his 1987 work, The Birth Dearth. “Will public attitudes change soon, thereby changing fertility behavior?” he asks. “I hope so. It is the root reason for writing this book.”
Instead of hounding women into the maternity ward with now-or-never threats, the birth dearth theorists tried appealing to society’s baser instincts—xenophobia, militarism, and bigotry, to name a few. If white educated middle-class women don’t start reproducing, the birth-dearth men warned, paupers, fools, and foreigners would—and America would soon be out of business. Harvard psychologist Richard Herrnstein predicted that the genius pool would shrink by nearly 60 percent and the population with IQs under seventy would swell by a comparable amount, because the “brighter” women were neglecting their reproductive duties to chase after college degrees and careers—and insisting on using birth control. “Sex comes first, the pains and costs of pregnancy and motherhood later,” he harrumphed. If present trends continue, he grimly advised, “it could swamp the effects of anything else we may do about our economic standing in the world.” The documentation he offered for this trend? Casual comments from some young students at Harvard who seemed “anxious” about having children, grumblings from some friends who wanted more grandchildren, and dialogue from movies like Baby Boom and Three Men and a Baby.
The birth dearth’s creator and chief cheerleader was Ben Wattenberg, a syndicated columnist and senior fellow at the American Enterprise Institute, who first introduced the birth dearth threat in 1986 in the conservative journal Public Opinion—and tirelessly promoted it in an endless round of speeches, radio talks, television appearances, and his own newspaper column.
His inflammatory tactics constituted a notable departure from the levelheaded approach he had advocated a decade earlier in his book The Real America, in which he chided population-boom theorists for spreading “souped-up scare rhetoric” and “alarmist fiction.” The fertility rate, he said, was actually in slow decline, which he saw then as a “quite salutary” trend, promising more jobs and a higher living standard. The birth dearth, he enthused then, “may well prove to be the single most important agent of a massive expansion and a massive economic upgrading” for the middle class.
Just ten years later, the fifty-three-year-old father of four was sounding all the alarms about this “scary” trend. “Will the world backslide?” he gasped in The Birth Dearth. “Could the Third World culture become dominant?” According to Wattenberg’s treatise—subtitled “What Happens When People in Free Countries Don’t Have Enough Babies”—the United States would lose its world power status, millions would be put out of work, multiplying minorities would create “ugly turbulence,” smaller tax bases would diminish the military’s nuclear weapons stockpiles, and a shrinking army would not be able “to deter potential Soviet expansionism.”
When Wattenberg got around to assigning blame, the women’s movement served as the prime scapegoat. For generating what he now characterized as a steep drop in the birthrate to “below replacement level,” he faulted women’s interest in postponing marriage and motherhood, women’s desire for advancing their education and careers, women’s insistence on the legalization of abortion, and “women’s liberation” in general. To solve the problem, he lectures, women should be urged to put their careers off until after they have babies. Nevertheless, he actually maintains, “I believe that The Birth Dearth sets out a substantially pro-feminist view.”
Wattenberg’s birth dearth slogan was quickly adopted by New Right leaders, conservative social theorists, and presidential candidates, who began alluding in ominous—and racist—tones to “cultural suicide” and “genetic suicide.” This threat became the subject of a plank in the political platforms of both Jack Kemp and Pat Robertson, who were also quick to link the fall of the birthrate with the rise in women’s rights. Allan Carlson, president of the conservative Rockford Institute, proposed that the best way to cure the birth dearth was to get rid of the Equal Pay Act and federal laws banning sex discrimination in employment. At a 1985 American Enterprise Institute conference, Edward Luttwak went even further: he proposed that American policymakers might consider reactivating the pronatal initiatives of Vichy France; that Nazi-collaborationist government’s attack on abortion and promotion of total motherhood might have valuable application to today’s recalcitrant women. And at a seminar sponsored by Stanford University’s Hoover Institution, panelists deplored “the independence of women” for lowering the birthrate and charged that women who refused to have many children lacked “values.”
These men were as eager to stop single black women from procreating as they were to see married white women start. The rate of illegitimate births to black women, especially black teenage girls, was reaching “epidemic” proportions, conservative social scientists intoned repeatedly in speeches and press interviews. The pronatalists’ use of the disease metaphor is unintentionally revealing: they considered it an “epidemic” when white women didn’t reproduce or when black women did. In the case of black women, their claims were simply wrong. Illegitimate births to both black women and black teenagers were actually declining in the ’80s; the only increase in out-of-wedlock births was among white women.
The birth dearth theorists were right that women have been choosing to limit family size in record numbers. They were wrong, however, when they said this reproductive restraint has sparked a perilous decline in the nation’s birthrate. The fertility rate has fallen from a high of 3.8 children per woman in 1957 to 1.8 children per woman in the 1980s. But that 1957 peak was the aberration. The national fertility rate has been declining gradually for the last several centuries; the ’80s rate simply marked a return to the status quo. Furthermore, the fertility rate didn’t even fall in the 1980s; it held steady at 1.8 children per woman—where it had been since 1976. And the U.S. population was growing by more than two million people a year—the fastest growth of any industrialized nation.
Wattenberg arrived at his doomsday scenarios by projecting a declining birthrate two centuries into the future. In other words, he was speculating on the number of children of women who weren’t even born—the equivalent of a demographer in preindustrial America theorizing about the reproductive behavior of an ’80s career woman. Projecting the growth rate of a current generation is tricky enough, as post-World War II social scientists discovered. They failed to predict the baby boom—and managed to underestimate that generation’s population by 62 million people.
In the backlash yearbook, two types of women were named most likely to break down: the unmarried and the gainfully employed. According to dozens of news features, advice books, and women’s health manuals, single women were suffering from “record” levels of depression and professional women were succumbing to “burnout”—a syndrome that supposedly caused a wide range of mental and physical illnesses from dizzy spells to heart attacks.
In the mid-’80s, several epidemiological mental health studies noted a rise in mental depression among baby boomers, a phenomenon that soon inspired popular-psychology writers to dub the era “The Age of Melancholy.” Casting about for an explanation for the generation’s gloom, therapists and journalists quickly fastened upon the women’s movement. If baby-boom women hadn’t received their independence, their theory went, then the single ones would be married and the careerists would be home with their children—in both cases, feeling calmer, healthier, and saner.
• • •
THE RISING mental distress of single women “is a phenomenon of this era, it really is,” psychologist Annette Baran asserted in a 1986 Los Angeles Times article, one of many on the subject. “I would suspect,” she said, that single women now represent “the great majority of any psychotherapist’s practice,” precisely “sixty-six percent,” her hunch told her. The author of the article agreed, declaring the “growing number” of single women in psychological torment “an epidemic of sorts.” A 1988 article in New York Woman issued the same verdict: Single women have “stampeded” therapists’ offices, a “virtual epidemic.” The magazine quoted psychologist Janice Lieberman, who said, “These women come into treatment convinced there’s something terribly wrong with them.” And, she assured us, there is: “Being single too long is traumatic.”
In fact, no one knew whether single women were more or less depressed in the ’80s; no epidemiological study had actually tracked changes in single women’s mental health. As psychological researcher Lynn L. Gigy, one of the few in her profession to study single women, has noted, social science still treats unmarried women like “statistical deviants.” They have been “virtually ignored in social theory and research.” But the lack of data hasn’t discouraged advice experts, who have been blaming single women for rising mental illness rates since at least the 19th century, when leading psychiatrists described the typical victim of neurasthenia as “a woman, generally single, or in some way not in a condition for performing her reproductive function.”
As it turns out, social scientists have established only one fact about single women’s mental health: employment improves it. The 1983 landmark “Lifeprints” study found poor employment, not poor marriage prospects, the leading cause of mental distress among single women. Researchers from the Institute for Social Research and the National Center for Health Statistics, reviewing two decades of federal data on women’s health, came up with similar results: “Of the three factors we examined [employment, marriage, children], employment has by far the strongest and most consistent tie to women’s good health.” Single women who worked, they found, were in far better mental and physical shape than married women, with or without children, who stayed home. Finally, in a rare longitudinal study that treated single women as a category, researchers Pauline Sears and Ann Barbee found that of the women they tracked, single women reported the greatest satisfaction with their lives—and single women who had worked most of their lives were the most satisfied of all.
While demographers haven’t charted historical changes in single women’s psychological status, they have collected a vast amount of data comparing the mental health of single and married women. None of it supports the thesis that single women are causing the “age of melancholy”: study after study shows single women enjoying far better mental health than their married sisters (and, in a not unrelated phenomenon, making more money). The warning issued by family sociologist Jessie Bernard in 1972 still holds true: “Marriage may be hazardous to women’s health.”
The psychological indicators are numerous and they all point in the same direction. Married women in these studies report about 20 percent more depression than single women and three times the rate of severe neurosis. Married women have more nervous breakdowns, nervousness, heart palpitations, and inertia. Still other afflictions disproportionately plague married women: insomnia, trembling hands, dizzy spells, nightmares, hypochondria, passivity, agoraphobia and other phobias, unhappiness with their physical appearance, and overwhelming feelings of guilt and shame. A twenty-five-year longitudinal study of college-educated women found that wives had the lowest self-esteem, felt the least attractive, reported the most loneliness, and considered themselves the least competent at almost every task—even child care. A 1980 study found single women were more assertive, independent, and proud of their accomplishments. The Mills Longitudinal Study, which tracked women for more than three decades, reported in 1990 that “traditional” married women ran a higher risk of developing mental and physical ailments in their lifetime than single women—from depression to migraines, from high blood pressure to colitis. A Cosmopolitan survey of 106,000 women found that not only do single women make more money than their married counterparts, they have better health and are more likely to have regular sex. Finally, when noted mental health researchers Gerald Klerman and Myrna Weissman reviewed all the depression literature on women and tested for factors ranging from genetics to PMS to birth control pills, they could find only two prime causes for female depression: low social status and marriage.
• • •
IF MENTALLY imbalanced single women weren’t causing “The Age of Melancholy,” then could it be worn-out career women? Given that employment improves women’s mental health, this would seem unlikely. But the “burnout” experts of the ’80s were ready to make a case for it anyway. “Women’s burnout has come to be a most prevalent condition in our modern culture,” psychologists Herbert Freudenberger and Gail North warned in Women’s Burnout, one of a raft of potboilers on this “ailment” to hit the bookstores in the decade. “More and more, I hear about women pushing themselves to the point of physical and/or psychological collapse,” Marjorie Hansen Shaevitz wrote in The Superwoman Syndrome. “A surprising number of female corporate executives walk around with a bottle of tranquilizers,” Dr. Daniel Crane alerted readers in Savvy. Burnout’s afflictions were legion. As The Type E Woman advised, “Working women are swelling the epidemiological ranks of ulcer cases, drug and alcohol abuse, depression, sexual dysfunction and a score of stress-induced physical ailments, including backache, headache, allergies, and recurrent viral infections and flu.” But that’s not all. Other experts added to this list heart attacks, strokes, hypertension, nervous breakdowns, suicides, and cancer. “Women are freeing themselves up to die like men,” asserted Dr. James Lynch, author of several burnout tomes, pointing to what he claimed was a rise in rates of drinking, smoking, heart disease, and suicide among career women.
The experts provided no evidence, just anecdotes—and periodic jabs at feminism, which they quickly identified as the burnout virus. “The women’s liberation movement started it” with “a full-scale female invasion” of the work force, Women Under Stress maintained, and now many misled women are belatedly discovering that “the toll in stress may not be worth the rewards.” The authors warned, “Sometimes women get so enthused with women’s liberation that they accept jobs for which they are not qualified.”
The message behind all this “advice”? Go home. “Although being a full-time homemaker has its own stresses,” Georgia Witkin-Lanoil wrote in The Female Stress Syndrome, “in some ways it is the easier side of the coin.”
Yet the actual evidence—dozens of comparative studies of working and nonworking women—points the other way. Whether they are professional or blue-collar workers, working women experience less depression than housewives; and the more challenging the career, the better their mental and physical health. Women who have never worked have the highest levels of depression. Working women are less susceptible than housewives to mental disorders big and small—from suicides and nervous breakdowns to insomnia and nightmares. They are less nervous and passive, report less anxiety, and take fewer psychotropic drugs than women who stay home. “Inactivity,” as a study based on the U.S. Health Interview Survey data concludes, “. . . may create the most stress.”
Career women in the ’80s were also not causing a female rise in heart attacks and high blood pressure. In fact, there was no such rise: heart disease deaths among women have dropped 43 percent since 1963, and most of that decline has come since 1972, when women’s labor-force participation rate took off. The hypertension rate among women has likewise declined since the early 1970s. Only the lung cancer rate has increased, and that is the legacy not of feminism but of the massive midcentury ad campaign to hook women on smoking. Since the ’70s, women’s smoking rate has dropped.
The importance of paid work to women’s self-esteem is basic and long-standing. Even in the “feminine mystique” ’50s, when married women were asked what gave them a sense of purpose and self-worth, two-thirds said their jobs; only one-third said homemaking. In the ’80s, 87 percent of women said it was their work that gave them personal satisfaction and a sense of accomplishment. In short, as one large-scale study concludes, “Women’s health is hurt by their lower [my emphasis] labor-force participation rates.”
By helping to widen women’s access to more and better employment, the women’s rights campaign couldn’t help but be beneficial to women’s mental outlook. A U.S. National Sample Survey study, conducted between 1957 and 1976, found vast improvements in women’s mental health, narrowing the gender differences in rates of psychological distress by nearly 40 percent. The famous 1980 Midtown Manhattan Longitudinal Study found that adult women’s rate of mental health impairment had fallen 50 to 60 percent since the early ’50s. Midtown Manhattan project director Leo Srole concluded that women’s increasing autonomy and economic strength had made the difference. The changes, he wrote, “are not mere chance coincidences of the play of history, but reflect a cause-and-effect connection between the partial emancipation of women from their 19th-century status of sexist servitude, and their 20th-century advances in subjective well-being.”
If anything threatened women’s emotional well-being in the ’80s, it was the backlash itself, which worked to undermine women’s social and economic status—the two pillars on which good mental health is built. As even one of the “burnout” manuals concedes, “There is a direct link between sexism and female stress.” How the current counterassault on women’s rights will affect women’s rate of mental illness, however, remains to be seen: because of the time lag in conducting epidemiological studies, we won’t know the actual numbers for some time.
• • •
WHO, THEN, was causing the baby boomers’ “Age of Melancholy”? In 1984, the National Institute of Mental Health unveiled the results of the most comprehensive U.S. mental health survey ever attempted, the Epidemiological Catchment Area (ECA) study, which drew data from five sites around the country. Its key finding, largely ignored in the press: “The overall rates for all disorders for both sexes are now similar.”
Women have historically outnumbered men in their reports of depression by a three-to-one ratio. But the ECA data, collected between 1980 and 1983, indicated that the “depression gap” had shrunk to less than two-to-one. In fact, in some longitudinal reviews now, the depression gap barely even existed. In part, the narrowing depression gap reflected women’s brightening mental picture—but, even more so, it signaled a darkening outlook for men. Epidemiological researchers observed a notable increase especially in depressive disorders among men in their twenties and thirties. While women’s level of anxiety was declining, men’s was rising. While women’s suicide rate had peaked in 1960, men’s was climbing. The rates of attempted suicide for men and women were converging, too, as men’s rate increased more rapidly than women’s.
While the effects of the women’s movement may not have depressed women, they did seem to trouble many men. In a review of three decades of research literature on sex differences in mental health, social scientists Ronald C. Kessler and James A. McRae, Jr., with the University of Michigan’s Institute for Social Research, concluded, “It is likely that men are experiencing more rapidly role-related stresses than are women.” The role changes that women have embraced “are helping to close the male-female mental-health gap largely by increasing the distress of men.” While women’s improving mental health stems from their rising employment rate, the researchers said, at the same time “the increase in distress among men can be attributed, in part, to depression and loss of self-esteem related to the increasing tendency of women to take a job outside the home.” For many men in the ’80s, this effect was exacerbated by that other well-established threat to mental health—loss of economic status—as millions of traditional “male” jobs that once yielded a living wage evaporated under a restructuring economy. Observing the dramatic shifts in the mental-health sex ratios that were occurring in manufacturing communities, Jane Murphy, chief of psychiatric epidemiology at Massachusetts General Hospital, wrote in 1984: “Have changes in the occupational structure of this society created a situation that is, in some ways, better for the goose than for the gander . . .?” In fact, as Kessler says in an interview, researchers who focus on the female side of the mental health equation are likely missing the main event: “In the last thirty years, the sex difference [in mental illness] is getting smaller largely because men are getting worse.”
Numerous mental health reports published in the last decade support this assertion. A 1980 study finds husbands of working women reporting higher levels of depression than husbands of housewives. A 1982 study of 2,440 adults at the University of Michigan’s Survey Research Center finds depression and low self-esteem among married men closely associated with their wives’ employment. A 1986 analysis of the federal Quality of Employment Survey concludes that “dual earning may be experienced as a downward mobility for men and upward mobility for women.” Husbands of working women, the researchers found, had greater psychological distress, lower self-esteem, and greater depression than men wed to homemakers. “There lies behind the facade of egalitarian lifestyle pioneering an anxiety among men that cannot be cured by time alone,” they concluded. The fact is, they wrote, “that conventional standards of manhood remain more important in terms of personal evaluation than contemporary rhetoric of gender equality.”
A 1987 study of role-related stresses, conducted by a team of researchers from the University of Michigan, the University of Illinois, and Cornell University, makes the same connection and observes that men’s psychological well-being appears to be significantly threatened when their wives work. “Given that previous research on changing gender roles has concentrated on women to the neglect of men,” they wrote, “this result suggests that such an emphasis has been misleading and that serious effort is needed to understand the ways changing female roles affect the lives and attitudes of men.” This warning, however, went virtually unheeded in the press. When Newsweek produced its cover story on depression, it put a grim-faced woman on the cover—and, inside, all but two of the nine victims it displayed were female.
The anti-day care headlines practically shrieked in the ’80s: “MOMMY, DON’T LEAVE ME HERE!” THE DAY CARE PARENTS DON’T SEE. DAY CARE CAN BE DANGEROUS TO YOUR CHILD’S HEALTH. WHEN CHILD CARE BECOMES CHILD MOLESTING: IT HAPPENS MORE OFTEN THAN PARENTS LIKE TO THINK. CREEPING CHILD CARE . . . CREEPY.
The spokesmen of the New Right, of course, were most denunciatory, labeling day care “the Thalidomide of the ’80s.” Reagan’s men didn’t mince words either, like the top military official who proclaimed, “American mothers who work and send their children to faceless centers rather than stay home to take care of them are weakening the moral fiber of the Nation.” But the press, more subtly but just as persistently, painted devil’s horns on both the mothers who use day care and the day care workers themselves.
In 1984, a Newsweek feature warned of an “epidemic” of child abuse in child care facilities, based on allegations against directors at a few day care centers—the most celebrated of which were later found innocent in the courts. Just in case the threat had slipped women’s minds, two weeks later Newsweek was busy once more, demanding “What Price Day Care?” in a cover story. The cover picture featured a frightened, saucer-eyed child sucking his thumb. By way of edifying contrast, the eight-page treatment inside showcased a Good Mother—under the title “At Home by Choice.” The former bond seller had dropped her career to be home with her baby and offer wifely assistance to her husband’s career. “I had to admit I couldn’t do [everything],” the mother said, a view that clearly earned an approving nod from Newsweek. Still later, in a special issue devoted to the family, Newsweek ran another article on “the dark side of day care.” That story repeatedly alluded to “more and more evidence that child care may be hazardous to a youngster’s health,” but never got around to providing it. This campaign was one the press managed to conduct all by itself. Researchers were having a tough time linking day care with deviance. So the press circulated some antiquated “research” and ignored the rest.
At a press conference in the spring of 1988, the University of New Hampshire’s Family Research Laboratory released the largest and most comprehensive study ever on sexual abuse in day care centers—a three-year study examining the reported cases of sexual abuse at day care facilities across the country. One would have assumed from the swarm of front-page stories on this apparent threat that the researchers’ findings would rate as an important news event. But the New York Times’s response was typical: it noted the study’s release in a modest article on the same page as the classifieds. (Ironically, it ran on the same page as an even smaller story about a Wisconsin father beating his four-year-old son so brutally that the child had to be institutionalized for the rest of his life for brain injuries.) Why such little interest? The study concluded that there was no epidemic of child abuse at day care centers. In fact, if there was an abuse crisis anywhere, the study pointed out, it was at home—where the risk to children of molestation is almost twice as high as in day care. In 1985, there were nearly 101,000 reported cases of children sexually abused by family members (mostly fathers, stepfathers, or older brothers), compared with about 1,300 cases in day care. Children are far more likely to be beaten, too, at the family hearth, the researchers found; and the physical abuse at home tends to be of a longer duration, more severe and more traumatic than any violence children faced in day care centers. In 1986, 1,500 children died from abuse at home. “Day care is not an inherently high-risk locale for children, despite frightening stories in the media,” the Family Research Laboratory study’s authors concluded. “The risk of abuse is not sufficient reason to avoid day care in general or to justify parents’ withdrawing from the labor force.”
Research over the last two decades has consistently found that if day care has any long-term effect on children, it seems to make children slightly more gregarious and independent. Day care children also appear to be more broad-minded about sex roles; girls interviewed in day care centers are more likely to believe that housework and child rearing should be shared by both parents. A National Academy of Sciences panel in 1982 concluded that children suffer no ill effects in academic, social, or emotional development when mothers work.
Yet the day care “statistics” that received the most press in the ’80s were the ones based more on folklore than research. Illness, for example, was supposedly more pervasive in day care centers than in the home, according to media accounts. Yet, the actual studies on child care and illness indicate that while children in day care are initially prone to more illnesses, they soon build up immunities and actually get sick less often than kids at home. Day care’s threat to bonding between mother and child was another popular myth. But the research offers scant evidence of diminished bonds between mother and child—and suggests that children profit from exposure to a wider range of grown-ups, anyway. (No one ever worries, it seems, about day care’s threat to paternal bonding.)
With no compelling demographic evidence to support an attack on day care for toddlers, critics of day care turned their attention to infants. Three-year-old toddlers may survive day care, they argued, but newborns would surely suffer permanent damage. Their evidence, however, came from studies conducted on European children in wartime orphanages and war refugee camps—environments that were hardly the equivalent of contemporary day care centers, even the worst variety. One of the most commonly quoted studies in the press wasn’t even conducted on human beings. Psychologist Harry Harlow found that “infants” in day care suffer severe emotional distress. His subjects, however, were baby monkeys. And his “day care workers” weren’t even surrogate adult monkeys: the researchers used wire-mesh dummies.
Finally in 1986, it looked as if day care critics had some hard data they could use. Pennsylvania State University psychologist and social researcher Jay Belsky, a prominent supporter of day care, expressed some reservations about day care for infants. Up until this point, Belsky had said that his reviews of the child development literature yielded few if any significant differences between children raised at home and in day care. Then, in the September 1986 issue of the child care newsletter Zero to Three, Belsky proposed that placing children in day care for more than twenty hours a week in their first year of life may pose a “risk factor” that could lead to an “insecure” attachment to their mothers. The press and conservative politicians hurried to the scene. Soon Belsky found himself making the network rounds—“Today,” “CBS Morning News,” and “Donahue”—and fielding dozens of press calls a month. And, much to the liberal Belsky’s discomfort, “conservatives embraced me.” Right-wing scholars cited his findings. Conservative politicians sought out his Congressional testimony at child care hearings—and got furious when he failed to spout “what they wanted me to say.”
Belsky peppered his report on infant day care with qualifications, strongly cautioned against overreaction, and advised that he had only a “trickle,” “not a flood,” of evidence. He wrote that only a “relatively persuasive circumstantial [all italics are his] case can be made that early infant care may be associated with increased avoidance of mother, possibly to the point of greater insecurity in the attachment relationship.” And he added, “I cannot state strongly enough that there is sufficient evidence to lead a judicious scientist to doubt this line of reasoning.” Finally, in every press interview, as he recalls later, he stressed the many caveats and emphasized that his findings underscored the need for better funding and standards for child care centers, not grounds for eliminating day care. “I was not saying we shouldn’t have day care,” he says. “I was saying that we need good day care. Quality matters.” But his words “fell on deaf ears.” And once the misrepresentations of his work passed into the media, it seemed impossible to root them out. “What amazed me was the journalists just plagiarized each other’s newspaper stories. Very few of them actually read my article.”
What also got less attention in the press was the actual evidence Belsky used to support his tentative reassessment. He focused on four studies—any of which, as he himself conceded, “could be dismissed for a variety of scientific reasons.” The first study was based on one center that mostly served poor welfare mothers with unplanned pregnancies—and so it was impossible to say whether the children were having trouble because they went to day care or because they had such grim and impecunious home lives. Belsky said he had evidence from more middle-class populations, too, but the authors of the two key studies he used later maintained that he had misread their data. University of North Carolina psychologist Ron Haskins, author of one of the studies on the effects of day care on aggression, flatly stated in a subsequent issue of Zero to Three that “my results will not support these conclusions.” Belsky alluded to a final study to support his position that infants in day care might be “less compliant” when they get older. But he failed to mention the study’s follow-up review, in which the authors rather drastically revised their assessment. Later behavioral problems, the researchers wrote, “were not predicted by whether the toddler had been in day care or at home” after all. In response, Belsky says that it all depends on how one chooses to read the data in that study. Like so many of the “findings” in this politically charged field of research, he says, “It is all a question of, is the glass half full or half empty?”
Social scientists could supply plenty of research to show that one member of the American family, at least, is happier and better adjusted when mom stays home and minds the children. But that person is dad—a finding of limited use to backlash publicists. Anyway, by the end of the decade the press was no longer even demanding hard data to make its case. By then the public was so steeped in the lore of the backlash that its spokesmen rarely bothered to round up the usual statistics. Who needed proof? Everybody already believed that the myths about ’80s women were true.