IF THE 1980s saw the traditional custodians of caloric intake—parents, school, church, and society—go on an extended vacation, the period also witnessed another change. This one took place among those charged with stimulating the nation's physical culture—those whose job it was to promote caloric expenditure. The reasons were, as the experts liked to say, "multifactorial," ranging from taxpayer parsimony to denial to outright political dogma. One thing was certain. Increasingly, when it came to public physical fitness—the kind of national venture that JFK so successfully advocated in the early 1960s—the nation was (often literally) out to lunch. Physical fitness? That was an individual pursuit, hardly something on which to waste scarce public resources, let alone actively promote.
As chairman of the President's Council on Physical Fitness and Sports, Arnold Schwarzenegger had found this out the hard way. Appointed in 1990 by President George Bush, the action movie star had taken up his charge, initially, with all of the fervor and passion one might expect from a man who'd literally remade himself in a cramped California gym only fifteen years before. Spending enormous amounts of time and his own money, Schwarzenegger had undertaken one of the most ambitious efforts ever at the council: the creation of a national youth fitness consortium. State by state, the Terminator had met with governors and local leaders to found small chapters of "fitness activists," who would then advocate for increased support for state and local physical fitness programs. It was, as Hollywood agents like to say when they get a star to agree to a project, a "way to put a motor in it," the "it" in this case being state-by-state fitness reforms. To highlight the achievements of his state councils, Schwarzenegger had also lobbied for a national fitness day, with colorful festivities planned at state and national parks. On his better days at council meetings Schwarzenegger would even lapse into gemütlich reverie, recalling his summers spent at outdoor sporting camps in his native Austria.
But as his term wore on, the gemütlich moments grew rarer and rarer, until the Terminator was a notably un-glücklich man. In almost every endeavor at the council he had encountered not just resistance to change but often outright insubordination. "I just don't understand and will not accept that it will take you six months to get me the minutes of the last meeting!" he barked at staff members at one board meeting. "What is it here?" There were other troubles. He suspected the American Alliance for Health, Physical Education, Recreation and Dance (AAHPERD), which the council paid to promote its famous fitness test, of not doing its job on the information front. And on his notion for a national sports festival he had got nothing but static. As John Cates, Schwarzenegger's PR adviser at the time, recalls, "We had thirteen lawyers telling us why we could not have a Great American Workout! The Forest Service people moaned about having so many people tromp around in the bushes. And the Parks people couldn't talk about anything but insurance liability." In other words, Washington, D.C., was treating the Terminator just as it would anyone else.
The last indignity came one day in late 1991. Schwarzenegger had successfully signed up governor after governor in his youth fitness campaign, and had come to Little Rock, Arkansas, he hoped, to sign up one more. As a warm-up he had arranged to address members of the Arkansas state assembly on the issue. The speech had gone well. Later that day he was scheduled to meet with the state's young governor to get him to join the crusade. So Schwarzenegger and his entourage arrived at the governor's suite, and began to ... wait. And wait. Soon Schwarzenegger, recalls Cates, "was really getting anxious." After an hour or so Lieutenant Governor Jim Guy Tucker appeared and promised that the governor would be there in "just ten minutes." But ten minutes came and ten minutes went. Much later, and after watching a parade of lesser politicos walk upstairs into the governor's office, Schwarzenegger left. Arkansas and Bill Clinton would be the only blank spot on Arnold's list of gubernatorial fitness czars.
Clinton's snub perfectly mirrored his own generation's feelings about public fitness programs. To them, PE, even when one was thinkin' about tomorrow, was just not that important.
Nowhere was this more apparent than in California, once considered the leader in physical fitness programs. From early on, the cult of the body was codified in the Golden State. In 1866, not long after the former Spanish colony became a state, the new legislature directed all public schools to pay "due attention ... to such physical exercise for the pupils as may be conducive to health and vigor of the body as well as the mind." In 1917 California became one of only a handful of states to require daily physical fitness education. In 1928 the State Board of Education was one of the first such bodies to require four years of PE for graduation, and for the next three decades, even as many school districts in other states waned in their support for public fitness, California went on extending its requirements to elementary and junior college students. With its predictably endless drills of running and jumping about in the brilliant sunlight, PE became a spartan rite of passage—a warm-up for the various other cults of the body that the typical Californian would inevitably encounter later in life.
Although the conventional history dates the decline of PE in California to Proposition 13, the law that no one voted for, its origins lay in two forces that emerged at least five years prior to passage of the 1978 measure. Both were emblematic of the baby boom, its values, and its priorities. The first was Title IX of the federal Education Amendments, passed in 1972. These much-needed provisions held that "no person ... shall, on the basis of sex, be excluded from participation in, be denied the benefits [of], or be subjected to discrimination under any education program or activity receiving federal financial assistance." Applied to physical education, where classes had long been conducted separately, and where boys' sports received the majority of support, the law was transformative. A year after its implementation, physical education in California went co-ed. Programs once reserved for boys now opened to girls. To accommodate them, PE staffs were split up and reassigned. And because equal facilities were mandated by the law, existing sports fields, equipment, and locker rooms had to be reapportioned as well. By 1978, when all schools were expected to be in complete compliance with the new law, remarkable progress had been made, with girls finally receiving many of the resources long denied them on the basis of gender alone.
The progress came with costs, too. In boom times, those costs might have been absorbed by ever-expanding school budgets. But the late 1970s were not exactly booming. Government spending was limited not just by economic trepidation, but by the "small is beautiful" philosophy of the state's brainy new governor, a former Jesuit seminarian named Jerry Brown. Brown was the New Age opposite of his father, Pat, who as governor in the early 1960s had put the government—and its huge budget—center stage in the education field. Now the inclination was to contract. Or, as the younger Brown put it, "to focus resources on small projects that might bring fundamental change." The state legislature began retrenching. In 1976, trying to save money while complying with Title IX, the state allowed individual school districts to exempt juniors and seniors from PE requirements. By 1977 a departmental survey found that almost all public schools had done so. It also reported a dangerous trend: "Staffing has been reduced and teaching methods changed as a direct result of the new programs." Such was the first of a hundred ultimately fatal paper cuts on the corpus of modern PE.
Betty Hennessy, a veteran PE teacher and later adviser to the Los Angeles County Office of Education, noticed the on-the-ground changes almost immediately. Where, in the past, unequal but abundant economic resources had allowed physical education instructors to get equipment and personnel support from district headquarters, now "teachers were on their own." With large classes, and the necessity of getting students in, suited up, and then showered and out the gym door, "the PE teacher became little more than a scorekeeper," she says. To relieve the strain, the state legislature passed a bill allowing non-PE teachers to coach. Although this was a bit like having a PE teacher with no science background tutor kids in chemistry, no one, at least at the state level, seemed to notice. By 1980 another Department of Education survey reported that only half of juniors and seniors were taking PE, and that while many schools were "successfully" implementing Title IX, "about 40 percent of schools ... perceive that it has caused program quality to decline."
The second pre–Proposition 13 trend at work was more personal: the so-called fitness boom of the '70s. Originating in the popularity of aerobics—long, slow running, mainly—and its various health benefits, personal fitness had been taken up passionately by many California adults. The rise of private gyms and celebrity exercise videos was a natural outgrowth. But unlike the old PE, where group participation and peak performance were goals, the underlying premises of the new fitness boom were individualistic and medical. One exercised for specific ends. Many were, of course, purely cosmetic ends. Others were health-based—one exercised to "reduce health risks," or to "feel better about oneself." Fitness, the new acolytes believed, was about self-empowerment, about autonomy, about self-definition. It was, like the Reagan revolution it took place within, all about throwing off the old bourgeois liberalism of the past, particularly its idealistic but—and everyone knew this—highly unattainable group goals.
John Cates, then on the physical education faculty at the University of California at San Diego, saw the writing on the wall—but the problem, as he saw it, was not the fitness boom, it was the PE establishment itself. He was particularly frustrated by organizations like the American Alliance for Health, Physical Education, Recreation and Dance (AAHPERD). "We as physical educators were not savvy enough to deal with the change politically," he says. "We had numbers and results, tied to reducing absenteeism and all that, but that case was never marshaled. We were not politically savvy people. I mean, Jane Fonda? Richard Simmons? Give me a break. What a joke. We—AAHPERD—could have made those [fitness] tapes. But the leadership said, no, let other people do it."
With public fitness now seriously endangered, the effects of Proposition 13 cut ever deeper. Overnight the new law sliced $6.8 billion from the state budget. Even with remedial legislation meant to soften Proposition 13's short-term impact, that meant a 25 percent reduction in the schools' share of taxes. Local school boards were now charged with making do. That usually meant making more cuts. By 1980 average PE class sizes had doubled. The percentage of seniors taking PE dipped again, to 43 percent. Enrollment in sports teams dropped in 88 percent of schools, and almost half of all schools eliminated at least one team entirely. In 1983 the legislature codified what had been a reality for nearly half a decade: Students now had to pass only two years of physical education. To this there was little parental opposition.
Which was understandable. For one, many boomers did not exactly harbor the fondest memories of PE, California style. Many recalled it as a time, perhaps the last, when they were unfavorably compared to other people—as in the time they were the last to be chosen to play on the popular kids' team, or the time when, flagging under the scorching sun, they had pooped out only halfway through the calisthenics, or the time they had clearly not won the Presidential Fitness Award, despite really trying at the pull-up bar. No, that wasn't for their child.
And fitness wasn't an important task for schools to perform anyway, was it? After all, there were more important priorities, especially in a nation that had now fallen behind Japan in productivity growth and job creation. Such was the general sentiment, especially after the 1983 report "A Nation at Risk." The study, which emphasized American children's lack of adequate science and math training and its impact on economic opportunity, had become a mantra for the back to basics movement, and in California (and, eventually, around the nation) that had meant anything but physical education. As a 1984 study by the California Department of Education concluded, "In a time of financial strain, declining academic test scores and strong pressure to go 'back to basics,' local school boards appear to have decided to reduce physical education.... PE teachers, budgets, enrollment and class size have been sacrificed in favor of 'higher' priorities." The sentiment was clearly not limited to the Golden State. By decade's end, Illinois was the only state to require daily physical fitness education.
What fitness opportunities remained for children grew increasingly class-based. In the nation's more affluent suburbs, where private gym membership by adults had been soaring, a new force emerged: sports clubs for children. Modeled in part on the old Pop Warner and Little League programs of the 1950s, the new clubs, most notably soccer, added a new twist: the notion that "everyone plays." To the boomer parent, psychically singed by the old PE, this was the place for Junior. The new soccer leagues were driven by the enlightened founders and executives of the American Youth Soccer Organization, or AYSO. Founded at an impromptu get-together at the Beverly Hilton in West Los Angeles, AYSO grew by leaps and bounds during the 1980s; between 1974 and 1989 membership increased from 35,000 to 500,000. Because it was essentially driven by parents who had the free time to cart kids to twice-weekly sessions, and who also had the free time to "volunteer" to referee and coach, AYSO was "essentially a suburban movement," as Lollie Keyes, its current communications director, says. "It really wasn't until later that we focused and found the support for inner-city leagues."
The same could be said for almost every other category of youth sports, or, for that matter, for any other opportunity to play—period: Wherever a chance to freely expend calories appeared, it was likely to be contingent upon parental time and money. Or parental residence: Parks and streets tended to be safer in suburbs. And inner cities, increasingly filled with less politically savvy new immigrants, were often shortchanged in parks and recreation spending, not to mention adequate neighborhood policing. All of this was reflected in a 1999 survey by the Daniel Yankelovich consultancy, in which respondents cited lack of sidewalks and unsafe neighborhoods as "major barriers to fitness." The new unspoken truth was simple: In America, fitness was to be purchased, even if you were a child.
The realities and values of inner-city immigrant life also militated against investments in fitness. In Los Angeles, the Ellis Island of postwar America, new Latin American immigrants proved to be not much different from previous generations of poor immigrants to the United States (save, perhaps, that their proximity to the border rendered them economically more vulnerable to wave upon wave of wage-undermining newcomers). For one, most of their time was spent simply making ends meet, a process often made more draining by a lack of adequate public transportation, affordable housing, and health care. For another, they were not urbanites but rather urban villagers; like the 1950s generation of Italian Americans, they were and are likely to act upon Old World ideas about exercise and health—in essence, the less the better. Studies by University of Pennsylvania epidemiologists under Professor Shiriki Kumanyika, for example, showed that when new immigrants were asked whether rest was more important or better for health than exercise, a large portion "always says yes." The attitude was doubly corrosive: Among immigrant groups at the highest risk for hypertension and diabetes (see chapter 6), many respondents said that exercise "has the potential to do more harm than good."
Such attitudes have been reinforced by a growing knowledge gap about health matters. Consider a 2000 study of 1,929 Americans by American Sports Data, Inc. Researchers asked interviewees to agree or disagree with the statement "There are so many conflicting reports, I don't know if exercise is good or bad for me." Thirty-seven percent of those earning under $25,000 agreed, compared with about 14 percent of those making $50,000 to $75,000 and 12 percent of those making $75,000 or more. Only 46 percent of those with earnings under $25,000 agreed with the statement "I would definitely exercise more if I had the time," compared with 68 percent of those making $50,000 to $75,000 and 67 percent of those making $75,000 and up.
Still, in the 1980s, perhaps more than any other decade, the working class, the middle class, and the affluent shared one inclination: the willingness to use television as their predominant personal leisure time activity. Of course, the observation that Americans watch lots of TV instead of doing other things is hardly novel. But, truth be told, a scientifically rigorous study of that pattern was almost nonexistent until the late 1980s and early 1990s. It was then that Larry Tucker, an exercise physiologist at Brigham Young University, decided to study three interrelated trends. One, that most Americans are sedentary; two, that many feel they do not have the time to exercise; and three, that the average adult watches about four hours of television a day. To find out the extent of that association, and to see whether one caused the other, Tucker studied the association between TV viewing duration and weekly exercise for 8,825 men and women. The results indicated an even stronger association than experts had previously suspected: TV time was strongly and inversely associated with duration of weekly exercise. One example: Among those who reported little or no regular exercise, 9.4 percent viewed less than an hour of TV a day while 33 percent—the largest single group—reported watching three to four hours a day. As Tucker dryly concluded, "Adults who perceive they have too little time to exercise may be able to overcome this problem by watching less television."
But adults were doing more than just kicking back and having a laugh with the Cheers gang; more than ever before, they were using the tube as a baby-sitter. It was, after all, an increasingly child-oriented medium. With cable expanding and VCR use almost universal, entertainment firms entered the children's "edutainment" niche with a vengeance, marketing a torrent of children's programs, videos, and games. So did McDonald's, which in 1985 initiated its so-called "tweens" advertising strategy to reach older kids and adolescents (see chapter 5). In the United States, all of this seemed quite natural—in a free market, new needs are created and then new needs are filled. TV was a pragmatic solution to the harried lives that so many new working couples faced. And who was to judge? Only a cynical, and rare, European would be willing to prick the happy bubble, as was the case in 1999, when the French exercise scholar Jean-François Gautier put it this way: "Children are naturally very active, but their parents are restraining them. Children are only allowed to be physically active if adults decide it is appropriate."
More than anything, though, American TV-viewing merged parent and child into one seamless inactivity bubble—a bubble filled with billion-dollar cues to eat, even when one was not hungry. You could argue whether that was morally right or heinously perverted, as did the occasional public TV special. But you couldn't deny the reality as experienced by every American family worth its potato chips. "Kids and dads watching twenty-three to twenty-eight hours a week of TV—that's a lot of sitting," Brigham Young's Larry Tucker observed. "And where there's a lot of sitting, there's lots of snacking."
And a lot of fat children. To find out what the pattern Tucker detected in adults was doing to their offspring, the Centers for Disease Control in 1994 studied the exercise, television-viewing, and weight gain patterns of 4,063 children aged eight to fifteen. The results were stunning (if, in hindsight, predictable). Whether grouped by age, sex, or ethnicity, exercise rates were inversely correlated with TV time, which was in turn positively correlated with increasing body fat percentages. The more TV a child watched, the less she exercised and the more likely she was to be either overweight or obese.
What was surprising, though, was the pronounced class and ethnic bent of the numbers. The poor, the black, and the brown not only tended to view more TV than their white counterparts, they also tended to exercise far less. The greatest disparity was between white girls and black girls; where 77.1 percent of the former reported at least three sessions per week of "playing or exercising enough to make me sweat or breathe hard," only 69.4 percent of black girls reported doing the same, with 72.6 percent of Mexican American girls so reporting. When it came to television-viewing, the numbers were even more disquieting. The percentage of white girls who reported watching four or more hours of TV a day was 15.6. The percentage of black girls: 43.1. Of Mexican American girls: 28.3.
Why was that? The CDC surveyed parents. Beyond the usual concerns about time, money, and the "need to rest," one rationale emerged among all parental groups: the concern about crime and how it acted as a barrier to some children becoming more physically active. About 46 percent of all U.S. adults believed that their neighborhoods were unsafe. Among the middle class, such sentiments fueled the growing inclinations to "bubble-wrap" all childhood activity, doubling up on safety precautions and limiting spontaneous play. And parents in minority neighborhoods were twice as likely as white parents to report that their neighborhoods were dangerous. What the surveyed parents were implying was utterly reasonable: TV-viewing may be bad, but at least my kid won't get shot, molested, kidnapped, or jumped into a gang while doing it.
Yet the greater the TV time, the fatter the child. Cross-indexing the TV numbers from its 1994 study with skinfold tests (for body fatness) and calculations of body mass index, the CDC found that "boys and girls who watched four or more hours of television per day had the highest skinfold thicknesses and the highest BMIs; conversely, children who watched less than one hour of television a day had the lowest BMIs." This, the study concluded, was a "worrisome trend." No wonder that, between 1966 and 1994, obesity prevalence among youth jumped from 7 percent to 22 percent. Worse, there were huge increases in the percentage of fat children defined as morbidly obese or super-obese—bigger than 95 percent of their peers.
But what did that mean? For one thing, it meant the beginning of a lifetime of medical problems. Study after study had unequivocally indicated that becoming overweight and sedentary as a child or adolescent predicted being obese as an adult. Fat children became fat adults. Moreover, the risks of obesity in adulthood appear to be greater in persons who were overweight in childhood or adolescence. It also meant reduced physical fitness, particularly when it came to cardiovascular fitness. A study by the Amateur Athletic Union of the 1980–1989 period found large increases in the amount of time it took the average child to complete a standardized endurance run. The greatest slowdowns were found in the eight-to-nine-year-old category—exactly the age at which the young are more likely to gain weight anyway.
Numbers, studies, reports, and surveys. By the mid-1990s, they were all saying the same thing: Children were getting fatter, exercising less, eating more (and more often), and watching TV and playing Nintendo in ever greater amounts. Did any of this come as a great shock? No. But what did come as a shock—first in small awarenesses, then in still greater ones—was just how disabled—and just how socially disenfranchised—the young could become from being fat.
In California, the onetime model of physical culture in America, fat abounded. By the late 1990s, only one out of five students in public schools could pass the minimum standards in the state's physical fitness tests. And everywhere—but especially in the Mexican American community—childhood diabetes rates were soaring. So were the rates for a wide variety of other weight-related diseases, among them coronary heart disease, cardiovascular disease, bone disease, and a wide range of endocrine and metabolic disorders. The Spanish-language standard-bearer La Opinión took to blaring headlines like DIABETES, EPIDEMIA EN LATINOS! (diabetes, an epidemic among Latinos).
There were other, less predictable consequences too. A growing number of Latino children were showing up at their school nurse's office, either for their daily shot of insulin or for a number of other blood sugar regulating medications—so many that the L.A. school administration tried to talk the nurses into letting school secretaries administer the doses in the nurses' absence. (The nurses said no.) There were the victims of the newest form of childhood cruelty—the fat kids who would end up as the target of the daily dodgeball game, wherein balls were hurled so hard as to cause black eyes and bruised midsections, not to mention deep cuts in self-esteem.
And there were the changes occurring outside of school. Strange, weird, tragicomic changes. One of them was taking place in the sport of surfing, long the hallmark of the state's international image as a kingdom of perfect bodies. The change dawned on Steve Pezman, the longtime publisher of Surfer magazine, one summer day in the late 1980s. Pezman had headed out to Santa Monica Beach, the most popular of L.A.'s beaches and the destination for thousands of downtown and east side families every weekend. As was his custom, when he got there he sat down and took in the scene. Out on the ocean bobbed the usual lineup of young men and women, waiting for the best wave. They were very white, very lean, and, for the most part, blond. Far closer to the beach floated another lineup, also waiting for waves—essentially for the waves that had been too small or too unformed for the blond kids farther out. The young people in this lineup were very brown and, more often than not, rounder than their white counterparts.
As Pezman sat and watched, he realized what he was really seeing. One, that those in the group closer to shore were not just a little chubby, but downright fat. The other thing he noticed was that many of them could not—or did not—swim very well. Instead, they relied heavily on what had become a standard piece of equipment: the surf leash. The leash attaches the board to the surfer's ankle, so as to prevent the board from getting away after a wipeout and causing the surfer to have to swim after it. Wow, Pezman thought. "Not only had the sport segregated itself according to ability, like any other sport," he recalls. "But it had also segregated itself by body type and by reliance on a laborsaving technology. Unfit surfers! Fat surfers!
"Who would have thought?"
Who would have thought? Well, for one, the National Institutes of Health, which had, between 1977 and 1985, issued not one, not two, but three warnings about obesity and its unhealthful effects on both children and adults. To each of these warnings the nation had responded with a yawn. Fortune magazine, in a typical screed, proclaimed that the real problem was not obesity but, rather, the NIH, which it alleged was using the issue as yet one more way for the government to intrude into "private" affairs. What could one do about such a problem anyway? And who might do it?
More than any one organization, the President's Council on Physical Fitness and Sports might. Founded in 1956 by President Dwight D. Eisenhower after a series of reports detailed the poor fitness of many American troops, the council had long occupied the national bully pulpit on all things PE. Under JFK, it had even asserted a hold on popular culture. Kennedy himself took a leading role on the issue, championing much-publicized fifty-mile hikes and writing articles on the subject for such popular magazines as Life and Look. At its core, fitness was to Kennedy a matter of national survival, both literally—a number of studies had shown that Soviet boys had pulled far ahead of American boys in many tests of strength and agility—and metaphorically. "All of us must consider our own responsibilities for the physical vigor of our children and of the young men and women of our communities," he once wrote. "We do not want our children to become a nation of spectators. Rather, we want each of them to be a participant in the vigorous life."
To motivate the nation, the council had turned to the tools of the newly emerging complex of public relations specialists, sports celebrities, and Madison Avenue survey takers. There were specialized fitness magazines for girls (Vim) and boys (Vigor). There was a council theme song, by Music Man composer Meredith Willson; its refrain was "Go You Chicken Fat Go!" It became a national hit. There was the council's charismatic chairman, the baseball great "Stan the Man" Musial. And there were the tests—those annual rites of pull-ups, sit-ups, shuttle runs, and long jumps so loved (or dreaded) by schoolchildren from Sacramento to Poughkeepsie.
All of this seemed to work—or at least to give the impression that it did. In a 1965 survey entitled "Closing the Muscle Gap," Musial wrote: "In 1958 the average 15-year-old American boy could run 600 yards in 2 minutes 19 seconds and do 45 sit-ups. Today's average 15-year-old can run 600 yards in 2 minutes and do 73 sit-ups." Although academics might quibble with the basis of such testing, few could argue with the basic upbeat message, not to mention the council's overall mission. Here were the denizens of Camelot, doing jumping jacks in the sun.
By 1985, however, when it confronted the results of its third national survey, the council had evolved into a more complicated beast. At its head sat its celebrity chairman, the football coach George Allen. A longtime friend of Republican presidents, Allen was obsessed with one idea: the creation of a national academy of fitness. Characteristically, he had thrown all of his energy, charm, and connections into the task, and spent most of his time on the road, raising funds, locating possible sites for the campus, and enlisting old gridiron buddies in the cause.
This left the running of the council increasingly to its executive director, a former San Diego PE teacher and fitness expert named Ash Hayes. Tall, rangy, and physically striking, Hayes was the embodiment of the postwar California fitness buff. Born and raised in Iowa, he had moved to California after a stint in the army during World War II. "My parents had moved there, and once I saw no one ever shoveled snow there, I decided to get my BA in San Diego," he recalled. "I turned into a beach lover." And a fitness lover. As an administrator in the San Diego school district, Hayes had found himself drawn to the growing field of physical fitness. He began to teach the subject. Then to coach. "The need for fitness was always very clear to me, really instinctive," he says. "Even as an undergraduate and as a young farmer and soldier before that, everything told me that the body was designed to be physically active. It was common sense." By 1981 Hayes was head of the health and physical education department of the San Diego City School District, and president of a number of national fitness and sports organizations.
By then he was also a regular in California Republican party politics. So when Casey Conrad, a longtime GOP activist friend and then the council's executive director, called on him for assistance, Hayes said yes. He first served as the council's state coordinator, then as co-director. In 1985 Conrad retired. Hayes assumed the directorship.
He also assumed a headache. Just that year the council had released its third national fitness survey. As with the one it had conducted in 1975, the results were lackluster. Studying the fitness abilities of eighteen thousand American boys and girls, the survey had concluded not only that there had been little general improvement in overall fitness levels, but that there also had been slippage, particularly among young girls. About 50 percent of them (compared to 30 percent of boys) could not run a mile in less than ten minutes. The same held true of other test items, from the 50-yard dash to the flexed arm hang. Among girls in particular, the rate of improvement was flat, with some declines in individual age group performance.
But for Hayes, that wasn't the worst of it. At almost the same time the results of a parallel study were published, this one conducted by the U.S. Public Health Service. The report looked at the ability of 8,800 students to perform rigorous physical tests designed to assess overall health and fitness. The results showed that about half of American children were not getting enough exercise to develop healthy hearts and lungs. Even more alarming was what the service found out about fatness and American youth. Median skinfold sums were 2 to 3 millimeters thicker than in a sample taken by the PHS in the mid-1960s. Kids were getting fatter.
Yet if his mission was growing larger by the day, Hayes's resources were shrinking, partly from Reagan-era budget cuts, partly from pure lack of interest by other arms of government. "My total budget was $1.5 million," Hayes recalls. "That, basically, was zero. We were constantly with our hat in hand, trying to find corporate sponsors for various projects." Some of those sponsors eventually included such unlikely partners as 7-Up and McDonald's, "but we never thought of that as a conflict of interest, because we never let them use the council seal in their advertising."
Despite the lack of resources, when it came to communicating the issue of fitness, the council already owned one potent weapon: its annual Presidential Fitness Awards and the tests that went with them. True, there were better, more scientific assessments of fitness. But none had the standing or reach into the average home and school in quite the way that the council's did. By 1985 the award had been won by some 8 million American children.
Since its inception, the test had been administered by the American Alliance for Health, Physical Education, Recreation and Dance (AAHPERD), the nation's leading organization of fitness professionals. It had been designed at a meeting of its research council in February 1957, when members—responding to Eisenhower's plea to develop some way to measure American youth's declining fitness level—came up with eight tests. These were the pull-up (for boys), the modified pull-up (for girls), the sit-up, the standing broad jump, the shuttle run, the 50-yard dash, the softball throw, and the 600-yard run. Over the next decade there would be minor tweaks to the regimen; in 1964 the modified pull-up for girls was replaced by the flexed arm hang, the softball throw was eliminated, and modified sit-ups (with flexed knees) were substituted for the conventional straight-legged ones.
But by and large the test proved amazingly durable. By 1975 some 65 million pupils had been tested. The president's council adopted the test as a basis for its own Presidential Fitness Award, given to any child who scored in the top 15 percent of his or her age group. It awarded a lucrative contract to AAHPERD to conduct the test. To recoup some of its costs, the council sold its award patches to individual school districts.
Yet even within AAHPERD, the test was controversial. Many of its own members had argued that pull-ups and 50-yard dashes and standing broad jumps had little to do with fitness and everything to do with measuring performance. This, they argued, was because the battery had been designed with largely military concerns in mind. In that context, the pull-up made sense—every soldier ought to be able to pull himself out of a foxhole. So did the broad jump, the flexed arm hang, the softball throw, and the 50-yard dash. A good soldier should be able to jump quickly out of harm's way, lob a grenade, hang from a window, or sprint toward the engagement line.
But what did those abilities have to do with fitness for everyday life? For two decades AAHPERD, despite the rising chorus of dissent from its own members, sidestepped the question, at least when it came to the tests for the council's Presidential Fitness Awards. If this was what the client wanted, that was what they would give the client. Such was the thinking until two trends came to undo it.
The first was the ascendance of aerobic exercise—jogging, as experienced by most Americans of the 1970s. The concept had been popularized by Kenneth Cooper, a former military physician who, after almost dying of a heart attack, had restored himself to top condition through what he liked to call "LSD—long slow distance" running. At the core of Cooper's exercise prescription—which he detailed in his 1968 bestseller Aerobics—was one key fact: that improvements in cardiovascular abilities—the ability to use and expend oxygen—came mainly from moderate increases in energy expenditure. In the past, conventional wisdom had held just the opposite. The old notion was that one would have to exercise at over 60 percent of one's "VO2 max"—maximal oxygen uptake, measured in milliliters of oxygen consumed per kilogram of body weight per minute—in order to force the body and its muscles to get stronger and to endure more. Using a treadmill, Cooper had demonstrated that such "training effects" actually happened at a much lower level of exercise, at, say, 40 percent of VO2 max. In other words, it was better (and more efficient) to jog slowly for a half-hour than to run swiftly for ten minutes. Cooper also showed that body composition, specifically percentage of fat, played a big role in cardiovascular health, regardless of whether or not you could sprint 50 yards or jump 8 feet. In this light AAHPERD's—and the council's—preference for things like 600-yard runs looked highly unscientific, if not downright archaic.
The second force was the rise of a new generation of exercise physiologists, scientists who study exactly how the body responds to various forms of physical activity. Of these men and women, none was more aggressive—and intellectually pugnacious—than Charles "Chuck" Corbin. A Cooper protégé and a professor at Arizona State University, Corbin had long harbored reservations about the AAHPERD fitness test. As a young PE teacher he had administered it himself, year after year, only to observe something that he found deeply disquieting. "Basically, the same kids won it year after year," he recalls. "And eventually some of us began to say, 'Hey, wait a minute—the so-called fit kids under this test are the ones who already got the award patch, and the kids who aren't fit under this test, well, they just give up.' So kids came to hate PE. I became convinced that this was a bad thing—I saw that they grew up to be parents who hated PE too."
Then, in a series of studies conducted with his co-author Bob Pangrazi, also of ASU, Corbin was able to show that almost every item on the AAHPERD test battery correlated not with one's fitness levels, but with one's hereditary and environmental advantages. Moreover, the decades-long notion that only children who tested better than 85 percent of their peers deserved an achievement award didn't make scientific sense either. Corbin and Pangrazi saw significant fitness improvements in those who scored as low as the 50th percentile. Combined with Cooper's growing body of work documenting the importance of longer distances run more slowly, these findings helped a new consensus emerge among younger AAHPERD members. "Many of us started to say, 'Why is activity important? What kinds of activity are important to adulthood that we were not teaching in schools?' We started to say, 'Hey, what's important to lifelong fitness? What things really made a difference?' We were saying that not everybody could be a sports star. And that, to be candid, was very counter to the ideals of the founders [of the council], who all thought that every kid would be able to get into the 85th percentile."
By 1980 Corbin and Pangrazi had convinced AAHPERD to develop a new test, this one designed to assess health-related fitness skills. The new one was Corbin and Cooper, distilled. Gone was almost every item on the traditional test. Instead, there were distance runs (one for nine minutes, one for twelve minutes, and one for the mile) for cardiovascular health, a sit-and-reach test to measure flexibility, modified sit-ups for trunk strength, and, most radical of all, skinfold measurements for body fatness. Developed with Cooper at his Institute for Aerobics Research in Dallas, Texas, the new test would be called the "Fitnessgram."
Right away, the new test rankled the old-timers, who viewed it as a form of "dumbing down." As Hayes saw it, "if you ask less of people you will always get less from people—everything in my background told me that. Why was a pull-up so important? Ask any soldier who had to pull himself out of a foxhole, or any fireman who had to hang from the window of a burning building." John Cates, then an assistant to George Allen, also worried about the dumbing-down effect, but also says, "There was a whole self-esteem issue here, and not just of the variety of 'making it easier' so kids don't feel bad about not getting an award. The financial cutbacks in the schools had also done something else to PE classes—it forced very dissimilar kids, kids of very different physical abilities, into one class, where comparisons were inevitable." He adds: "But one thing was clear: The kids were getting worse and worse. Do you water this down? Do you lower standards or do you keep the old ones and possibly stigmatize some kids?" To this Corbin again responded with new data that showed—convincingly—that far from inspiring children to work harder, the old standards simply turned them off to exercise altogether. "The basic response of most kids was 'why bother?'"
After a few years, it was clear that the new test was becoming troublesome in another way: It began to confuse school districts, which could not distinguish it from the more traditional council test. In effect, AAHPERD had created a competitor to one of its most important clients.
To reconcile the two tests, AAHPERD and the council set up an advisory committee. For three years the two camps battled over one key bone of contention: the council's insistence on maintaining the 85th percentile. Though it was something both sides felt deeply about, the science clearly favored the reformers. And so, by 1986, as the annual AAHPERD conference approached, the two groups reached a consensus: the new test would contain elements of both the old battery and the new. They also agreed to consider a new awards scheme—one, as Corbin saw it, "that was based on science, and that rewarded process and improvement rather than the end product."
Then, just as both groups prepared to announce the new test, the council, led by Hayes, threw up a new barrier: the body composition test. It was, he said, "inappropriate." It might hurt kids' feelings. And it might make parents mad that someone was touching their child. Anyway, Hayes said, the council "wasn't ever in the business of making weight an issue."
But for the council, such opposition to the "weight issue" was something new. Ever since its founding, body weight had been a key element of its agenda. The council's theme song, after all, had been quite explicit about the goal of exercise—"Go You Chicken Fat Go!" In the 1970s the council had further focused on the issue of body weight and health by sponsoring a highly visible ad campaign, designed by Young and Rubicam and placed in leading general interest magazines. In one ad depicting a chubby boy eating an ice cream and surrounded by TVs, radios, telescopes, and model airplanes—symbols of sedentary behavior—the copy read: "We're so overdeveloped we're underdeveloped." Another showed a giant marshmallow and declared "Hey kid! If you see yourself in this picture, you need help," only to go on to say, "There's a little marshmallow in all of us. A little blob. A little cream puff. A little jelly belly." Another simply proclaimed: "There's no such thing as stylishly stout."
Although such ads were hardly the way to approach the issue now, the reformers, led by Cooper and Corbin, insisted that weight was more relevant to health than ever. There was a direct and well-documented link between excess body fat and all manner of heart disease, not to mention various bone and endocrine disorders. Knowing one's body composition was key to knowing how healthy one was and what one had to do to become healthier. "As we saw it—and as we presented it—body fat testing was no different than, say, getting a mammogram or a prostate exam," Corbin recalls. "What test isn't a little embarrassing?"
To this Hayes's allies countered with what was—for them, at least—an unlikely refrain. Putting too much emphasis on body weight, they said, could inadvertently promote anorexia. In an era when the disease had become a talk show staple, with fashion advertising the leading enemy, it was an emotional and effective debate tactic. But it was also scientifically dubious. Even the most generous epidemiological estimates put anorexia far down on the list of mainstream teenage woes.
By April 1986, when the two sides were to meet at the AAHPERD annual meeting, what was once a relatively collegial process had broken down. The council and its supporters had come out against the skinfold test and for inclusion of the shuttle run—two decisions the reformers could not abide. "We were basically shut out," Corbin recalls. "The council showed up and began handing out copies of the printed test, with our name on it! We never got a chance to present the new test." Cooper was furious. Learning of the shutout, he walked out of the conference, taking the Fitnessgram with him. As Corbin saw it: "We blew a great chance to put parents on notice that body composition should be one thing they ought to consider when they assess their kids' health."
And they had blown it during a decade when the caloric environment for children was growing ever more toxic.
Hayes and the old guard may have been wrong about the body fat test, but on the subject of health-based exercise prescriptions they may have been more on target than many in today's fitness establishment would like to admit. If they feared that expecting less from children would lead to diminished caloric expenditure—in effect a tacit endorsement of sloth—that fear was arguably even more justified when it came to a new set of exercise recommendations for adults that began appearing in the early 1990s, at exactly the time when sedentary behavior was at an all-time high and supersized portions and snacking were becoming the norm.
To understand how radical the new recommendations were, consider what had been the standard "exercise Rx" prior to the 1990s. Until then, almost every organization that was in the business of making public health recommendations agreed on several things. One, that exercise should consist of sustained activity—"15–60 minutes of continuous aerobic activity," at least three to five days a week, as the American College of Sports Medicine (ACSM) put it in its formal 1978 position paper. Implicit in this was a key assumption: that the more one exercised, the more benefits one got from that exercise. This the experts called the "dose-response effect." The second point of accord related to exercise intensity. To be effective, exercise should be moderate to vigorous, performed at 50 to 85 percent of VO2 max—maximal oxygen uptake—with a preference for the upper end of that range. As J. N. Morris, the reigning dean of exercise physiology, put it in 1980, "Adequate exercise means vigorous exercise."
But all through the 1980s, exercise levels among adults, which had been on the rise since the 1960s, began to plateau, then to fall. Experts attributed this to the modern lifestyle. Americans were working more hours, spending more time commuting, and, increasingly, working jobs that were not "sweat-friendly." The new American workers toiled not in factories or even giant corporations, but, rather, in the rising field of professional services. They had to dress nicely, or at least semi-nicely. They hardly had the time to change, exercise, shower, and get back to their desks at lunchtime. Or so the experts said.
Of course, there was another way to look at the American worker, and that was as a person with an increasingly flexible, project-oriented job. Certainly the personal computer had created vast new groups of people who telecommuted, worked from home, or—especially in the enterprising 1980s, when new business formation was at an all-time high—actually struck out on their own. One thing was certain: The typical American worker still had time for four hours of television every night.
Nevertheless, confronted with declining exercise numbers and alarming rates of cardiovascular disease, public health officials felt that something had to be done about the traditional exercise recommendations. Studies had shown that adults, like children, reacted badly to high expectations. In this view, the plateauing of their exercise rates was one big "why bother?" As in the field of physical education, a new consensus emerged among America's leading adult exercise scholars. It was time to "be more realistic" about exercise prescriptions.
Like Cooper and Corbin, these reformers—many of them using data from Cooper's own studies of adults who came to his clinic—had new science on their side. Big studies were showing two interesting trends that supported their views. One, that moderate, not vigorous, activity was key to reducing major health risks like heart attack and stroke. And two, that total accumulated caloric expenditure was just as good as sustained continuous exercise when it came to reducing risk of heart attack, stroke, hypertension, and a wide range of chronic diseases. On both counts, the two most important studies—one of Harvard alumni, the other of white business executives who had come to Cooper's clinic in Dallas for fitness consultation—agreed.
In the Harvard study, men who had the highest total number of minutes of daily activity—walking to work, taking the stairs instead of the elevator, participating in light activities like bowling and gardening—also tended to have the lowest rates of these diseases. (In fact—and this would tickle many a fat boy who hated sports in school—ex-varsity athletes retained a lower-risk profile only if they maintained a high physical activity rate as adults.) From the point of view of public policy—and of people who wanted a more palatable exercise prescription—the Harvard study provided two key debating points. The first was this: The biggest improvements in health risk reduction occurred not at the higher end of the activity curve, but at the lower—in other words, not among alums who expended, say, 5,000 calories a week, but among those who exercised off 1,000 calories a week. (Those who expended fewer than 500 calories a week still carried the highest risk.) The second observation was quiet, but, in a way, revolutionary. In the past experts had believed that exercise and fitness were linked in a dose-response effect—more exercise meant more fitness—but the Harvard study showed a leveling-off effect. Beyond about 2,500 calories of activity per week, the risk of heart attack did not continue to fall. "If there is a causal relationship," its authors concluded, "the figure depicts a plateau of benefit rather than a continuing gradient benefit." If you are trying to reduce your risk of heart attack—and only that—expending more than 2,500 calories a week on moderate exercise is a waste of time. In fact, if you are sedentary, you can reduce risk with much less exercise than you thought you could. As the Stanford cardiology professor Robert DeBusk would conclude in a separate 1990 paper in the influential American Journal of Cardiology: "High intensity exercise affords little additional benefit."
The Cooper studies provided the reformers with more ammo. Looking at the treadmill tests of 13,000 largely upper-middle-class white men and women, researchers were able to pinpoint exactly where the greatest health benefits, or risk reductions, occurred. Once again, it was at the low end. People who exercised just a little more than people who didn't exercise at all got a bigger percentage reduction in health risks than did people who were already exercising moderately and who then began exercising vigorously. "A brisk walk of 30–60 minutes a day will be sufficient," the authors concluded.
A brisk walk of 30 to 60 minutes a day, however, was still challenging to most Americans. But now two key terms—moderate and cumulative—took on a life of their own. Increasingly, as medical and professional organizations revised their standards to "be more realistic," and as the government began funding studies of "user-friendly" (read: easier) exercise prescriptions, such organizations looked not to the initial recommendations of the Harvard and Cooper authors. Instead, they looked to the key revelations of those studies—or, rather, to their own institutional interpretation of those revelations.
The American Heart Association was the first to jump on the bandwagon. In a lengthy 1990 special report, it downplayed its old recommendations of 2000+ calories a week and instead called for a minimum of "700 calories a week on three or more nonconsecutive days." Walking, the authors advised, "appears to be just as beneficial as more vigorous activities." And more: "Some benefit is apparently derived from as little as 20 minutes of low-intensity exercise three times a week." They then went out of their way to denigrate more vigorous activity, saying simply that "there is no evidence of a health benefit at more than 2000 calories a week."
This was a slightly different line than the evidence suggested, but now intellectual parricide—not scientific precision—drove the effort. Morris be damned. Vigor be damned. Even the authors of the Cooper and Harvard studies joined in, in 1992 proclaiming that "the response to exercise training is primarily, if not exclusively, dependent upon the total energy expended in exercise and not intensity."
Enter now the American College of Sports Medicine. For years it had warily watched the reform movement, only slowly lowering its recommended exercise intensity from 70 percent of VO2 max to 50 percent in 1986, and then to a range of 40 to 60 percent in 1991. During that time the ACSM's composition had changed; more and more of its most vocal members were not people primarily concerned with exercise as a way to better one's performance in everyday life—the foxhole-digging, window-hanging cadres of the postwar period—but rather scholars who were mainly interested in reducing the risk of chronic diseases, particularly heart disease. In 1993 they seized control, and in a bold and highly publicized paper, backed by the CDC, they issued a new exercise manifesto: "Every American adult should accumulate 30 minutes or more of moderate intensity physical activity over the course of most days of the week ... Activities that can contribute to the 30-minute total include walking upstairs (instead of taking the elevator), gardening, raking leaves, dancing, and walking part or all of the way to or from work ... One specific way is to walk two miles briskly."
This, of course, was a far cry from Morris's old "adequate exercise means vigorous exercise." But it was also a long trek from the reformers' original recommendation of 30 to 60 minutes a day of brisk walking. And it hardly grew from a sense of scientific certainty. After all, only six months before, one of the CDC-ACSM panel's most influential members, Stanford's Ralph Paffenbarger, Jr., had concluded that "what kinds of physical activity should be prescribed, how much, how intense, and for whom if optimal health and longevity are to be achieved [emphasis mine] remain unanswered questions that require further clarification."
What had happened? For one, the CDC, as one panel member recalls, "was under tremendous pressure to come up with more palatable recommendations." Behind this was the surgeon general's own Healthy People 2000 Initiative, which had set ambitious public health goals but had few means of measuring progress toward them. A new set of recommendations was critical to the initiative's success, or at least its bureaucratic success. If one defined minimum fitness too high—if one set the bar so as automatically to make the majority of Americans appear too sedentary—how could there be even the possibility of success by millennium's end, when the surgeon general hoped to have dramatically changed American health habits?
The other force majeure lay in the new consensus among the ACSM majority. Many of these men and women had spent the better part of their academic lives studying what they called "exercise compliance"—how, why, and under what circumstances people tended to stick with a regimen. Although these studies tended to focus on specific (and often very small) high-risk groups, their conclusions were nearly unanimous: People tended to stick with regimens that they did not see as overly demanding. In short, the issuers of the new doctrine of "moderate, accumulated" activity believed that the average American would react to their new standards with one great "Wow! I can do that!"
When it comes to exercise, however, human beings are, in general, not a very "Wow! I can do that!" bunch. They are, after all, genetically programmed to conserve energy, to find every opportunity they can to ... sit on their duffs. Moreover, the new recommendations came at a time of unparalleled opportunity both to be sedentary and to consume huge amounts of fatty calories on the cheap. In the early 1990s supersize had met Super Mario with a vengeance; the price of both had dropped so much as to induce price wars.
Considering such a context, two questions seem appropriate. One, was telling people they could get by with less exercise a good idea? And two, was it true, or at least were the assumptions behind the advice true? On both counts, the evidence suggests an answer of no.
One way to gauge the response of the average American to the new guidelines is to look at the way they were presented by the media. True, the media (thank God) do not exactly represent the way the average Jane thinks, but the modern media are nothing if not absolutely addicted to the latest health manifestos. Skepticism about such manifestos is hardly their lot; their acceptance rests largely on ignorance and wishful thinking. To paraphrase Mr. Dooley, the newspaper bosses like to sit around and eat a Big Mac too.
Consider what they wrote in the aftermath of the 1993 guidelines: "Still don't exercise? No sweat. A little at a time now called enough" (Chicago Tribune); "Gym workout? U.S. says walking, gardening will do, too" (Boston Globe); "Study says you don't have to sweat fitness routine" (Los Angeles Times); "If you can't run for health, a walk will do, experts say" (New York Times); and "A walk is as good as a workout" (Atlanta Constitution). TV, as usual, trumped print. In one famous piece by a Los Angeles network affiliate, viewers were told that "even seriously hunting for the channel changer can count toward your daily thirty!"
No one can say exactly how the average American interpreted such drivel, but what, given the permissiveness of the overall culture, would be the better bet: that they would use it as an excuse, as a way to get off the hook, or that they would say, "Wow! I can do that"? The point is that no one on the reform committee seems to have understood the way American culture digests any form of reduced expectations. In this case, the media had transformed what was once a mere prescription to reduce a lazy man's chance of getting a heart attack into a national prescription for fitness. In the wishful-thinking, reality-denying, boundary-hating world of modern America, this was manna from heaven—a Whopper with cheese from the CDC!
For all of which the reformers can be forgiven. No one, after all, can ever truly gauge what the popular media will do with any given piece of scientific information. Yet science does answer to its own. And in that respect the reformers have much to reconsider.
How wise was it to base a recommendation for all Americans on the experience of the rich? That, essentially, is what both of the key studies drew upon. These populations of lawyers and business executives may have looked much like average Americans; their body weights, rates of various diseases, and dietary patterns may have been not that different from those of a lineman for the telephone company or a data processor for an insurance company. But their total life experiences were very different. The protection that thirty minutes of accumulated activity afforded against heart disease for the rich—with their already highly buffered existence—would likely, one might surmise, be much more dilute for the middle class, and more dilute still for the poor. This is because when the rich garden, even briskly, they are doing so with all the other health advantages that come with being rich. Their mini-dose of exercise is amplified by socioeconomics. Not so with the middle class, let alone the poor.
And what about moderate exercise over vigorous exercise, or accumulated activity over sustained, and the idea that most benefits accrue at the low end of activity increases? To what degree did those notions hold up? New reviews are increasingly calling them all into question.
Perhaps the most vexing arena of controversy involves what was the most radical part of the new recommendations—the notion that accumulated activity is as beneficial as sustained activity. This was the element of the reform plan that engendered such creative interpretations as "doing a few minutes of housework" or "intensely bowling." In the scientific literature it is known as "fractionalization of physical activity." To date, evidence for fractionalization as an exercise prescription (rather than as an observation of the activity patterns of the rich) rests largely on a single study involving thirty-six subjects. In it, eighteen healthy men completing thirty minutes of exercise training a day were compared with eighteen men completing three daily ten-minute bouts. Both exercised at a moderate rate—at about 65 to 75 percent of their peak treadmill test heart rate. The authors, led by Stanford's Robert F. DeBusk, concluded that "multiple short bouts of moderate intensity exercise training significantly increase peak oxygen intake," thereby implying that multiple rounds were as good as sustained rounds.
But closer reading leaves one wondering about that conclusion. This is because, at each and every point of comparison, the sustained group performed better than the fractionated group. Peak oxygen intake of the sustained group improved by 4.4 points, as opposed to the fractionated group, which improved by only 2.4 points. Adherence to each respective regimen was the same—thus undermining another supposed advantage of the multiple-session doctrine. The sustained group tended to complete its training session more often than did the fractionated group. As the authors themselves stated, "multiple short bouts of exercise increased peak oxygen uptake 57 percent as much as a single long bout." In other words, a bit more than half as much as the sustained group. These were, to be sure, small differences, and it was clear that the fractionated exercisers were getting more benefit than previously thought. But that was a long way from emphasizing, as the authors did, that "high intensity exercise affords little additional benefit."
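A quick check of the arithmetic using the rounded figures quoted above (the authors' 57 percent presumably rests on unrounded values) tells the same story:

\[
\frac{\Delta \dot{V}\mathrm{O}_{2,\ \text{fractionated}}}{\Delta \dot{V}\mathrm{O}_{2,\ \text{sustained}}} = \frac{2.4}{4.4} \approx 0.55
\]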
What about the general notion behind the recommendations—that health benefits of physical activity are linked principally to the total amount of activity performed? Again, the latest data suggest otherwise. In a more recent work, Paffenbarger found that Harvard alumni who took up moderately vigorous sports activity significantly reduced their mortality risk from all causes compared with those who did not engage in such activity. "In contrast," the scholar Paul Williams has noted in a recent Archives of Internal Medicine report, which reanalyzed Paffenbarger's data, "increasing the overall daily activity had no significant impact on overall mortality." Intensity trumped total accumulated activity.
Even more recently, Williams, of the Lawrence Berkeley National Laboratory, produced a stunning series of papers that, in toto, undermine every single assumption of the 1993 recommendations. In a study of 8283 male recreational runners, he revived the old, rejected notion of a dose-response effect. As he put it, "Our data suggest that substantial health benefits occur at exercise levels that exceed current minimum guidelines and do not exhibit a point of diminishing return." In July 2000 Williams eviscerated the new doctrine again, this time by performing a meta-analysis of twenty-three fitness studies representing 1,325,004 person-years of follow-up. The result showed that the risks of heart disease decreased linearly with increasing amounts of physical activity—a clear dose-response effect. "Formulating physical activity recommendations on the basis of fitness studies," like the Cooper and Harvard projects, he concluded in the ACSM's own journal, "may inappropriately demote the status of physical fitness as a risk factor while exaggerating the public health benefits of moderate amounts of physical activity [Williams's emphasis]."
It was, of course, easy to dismiss a lone voice in the wind, which is how Williams has been greeted by the reformers. But the snickering turned stone serious in mid-2001, when the ACSM published the findings of a symposium on the subject of "dose response issues concerning physical activity and health." Looking at a number of its own studies over the years, the panel found that "overall, there is a consistent inverse dose-response relationship between physical activity and both the incidence and mortality rates from all cardiovascular and coronary heart disease." It also noted that the dose-response relationship held true for prevention of type 2 diabetes, colon cancer, and obesity.
Slowly—and quietly—the reformers have begun to recognize their errors. The ACSM itself recently published a third position statement calling for a larger volume of activity performed at higher intensities than the 1993 statement. An even more recent study, this one on diet, lifestyle, and type 2 diabetes by the Harvard School of Public Health, goes the extra distance as well. Noting that "current strategies have not been very successful, and the prevalence of obesity continues to increase," the study repeatedly clarifies and amplifies what is meant by adequate physical activity—"vigorous sports, jogging, brisk walking, heavy gardening, heavy housework—vigorous enough to build up a sweat."
The most recent round of "Dietary Guidelines" meetings also called the conventional wisdom to account. Noting that members of the Weight Registry, the only large database that tracks people who have lost weight and kept it off for three years or more, average 2825 calories of exercise a week, compared with the current 1000-calorie recommendation of the American College of Sports Medicine, one prominent member asked, "Are we being aggressive enough or are we simply setting guidelines that we hope will be more appealing to people who have not been successful?"
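To put that gap in rough, concrete terms: assuming a ballpark figure of about 100 calories burned per mile of walking (an assumption for illustration only, not a figure from the registry), the two prescriptions work out to roughly

\[
\frac{2825\ \text{kcal/week}}{\sim 100\ \text{kcal/mile}} \approx 28\ \text{miles of walking a week}
\qquad \text{versus} \qquad
\frac{1000\ \text{kcal/week}}{\sim 100\ \text{kcal/mile}} \approx 10\ \text{miles.}
\]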
Canadian health authorities, which have long followed the U.S. lead, have been bolder; adults, they advise, should get sixty minutes of physical activity every day. Why? Because, as the ACSM's journal put it, "the assumption [is] that most people interpret the public health message in terms of predominantly light intensity activities, thus the necessity to recommend a larger daily volume."
Translation: People are lazy, so it does not pay to give them an out when it comes to exercise. It is better to ask for more—not less.
Yet asking for more has become anathema to health policy makers in the realm of fitness. Consider, for example, the strange story of the nation's weight control guidelines.
The guidelines are promulgated every five years by a small, elite group of nutrition scholars who meet at the USDA's South Agriculture building on Independence Avenue. There they discuss what might be the government's single most important public health action—issuance of the agency's twice-a-decade "Dietary Guidelines for Americans." With spiffy graphics and a multimillion-dollar publicity budget, the guidelines are supposed to communicate the state of the art in nutritional science and health recommendations. Functionally, the guidelines also serve as something else. Almost shamanically they act as the national conscience on matters of food, exercise, and weight control. Their incessant repetition on TV, on radio, in schools, and in popular fitness forums sets the mood for the nation on such issues, ratcheting up and down the guilt levels on various dietary behaviors.
By 1990 the weight control recommendations of the Dietary Guidelines Committee had already been loosened once. In 1980 the guidelines had advised Americans to "maintain an ideal weight"—a clear, unequivocal message that anyone who could read one of those omnipresent weight-for-height charts could understand. By 1985, in the middle of the supersize revolution, the advice was altered to the more vague "maintain a desirable weight," the better not to impose unrealistic goals upon an increasingly touchy populace. In 1990, even as obesity rates spiraled upward, the committee wanted not only to loosen the weight guidelines again but also to do something it had never done before. It wanted to tell Americans that it was okay to gain significant amounts of weight as they got older.
The impetus had come from Dr. Reubin Andres, a remarkable man with a peculiar agenda. The chief of the metabolism section of the National Institute on Aging, Andres had a long and deep track record in the area of gerontology, diabetes research, and public health policy. (By the mid-1990s he would also enter the annals of medical history as the inventor of the euglycemic clamp, to date the best way to measure insulin secretion and sensitivity in human beings.) In the 1980s Andres had become obsessed with the issue of weight guidelines for the elderly. For four decades, he argued, the nation had hewn to an unnecessarily strict weight-for-height chart set down by the Metropolitan Life Insurance Company. Those standards, he said, were not only unrealistic but also unscientific; they reflected only the experience of people who could afford life insurance, a largely white, affluent, and middle-aged cohort that no longer represented the increasingly diverse country.
To prove his point, Andres performed a statistical reanalysis of what was then Metropolitan Life's most recent data, published in 1979. Also known as the Build Study, the data had become the basis for new weight-for-height recommendations issued in 1983. Andres turned a new lens on the data: What if one broke the data up into age groupings, then asked, essentially, "At what weight-for-height ranges does minimum mortality occur in each age bracket?" Andres and a few colleagues used the question to guide a reworking of the Met Life numbers. They came up with a surprising revelation. As Andres read the revised data, the Metropolitan recommendations were "too low." It was better—that is, less risky—to be fatter—up to fifteen pounds fatter—once one turned forty. This was because "the Metropolitan Life tables have erred, apparently in an effort to simplify the weight recommendations, by not entering age as a variable."
Andres then assessed twenty-three weight studies of other populations, ranging from Japanese men in Hawaii to American Indians. "We compared the body mass indices associated with the lowest mortality from these studies with the body mass indices of the Metropolitan tables," he wrote. Again, the results seemed revolutionary. Higher weights were associated with lower rates of death, particularly among persons over age forty. The recommended weights were thus "too restrictive." A forty-year-old could be up to 20 percent fatter than previously thought and still be at minimal added risk of weight-related death. In December 1985 Andres published his findings in the prestigious Annals of Internal Medicine. He then embarked on an extensive lobbying effort to change the USDA's weight guidelines, speaking frequently at gatherings of public health experts, advocates for the elderly, and various special-interest groups.
Andres was certainly on to something. The goal of crafting weight guidelines that more closely reflect America's racial and ethnic diversity was a righteous one. For decades, even conservative scholars of actuarial data knew that their subject pool, like Cooper's data on rich executive exercisers, was unrepresentative of the national experience. The problem was getting good data on those populations—data that were both statistically and medically sound. Unfortunately, Andres had erred on both those counts—erred badly. Yet for two years he went unchallenged, his conclusions slowly but surely taking hold in the national consciousness. Anti-diet and fat rights groups cited him regularly in discussions of why being fat wasn't really a problem. Feminists concerned about anorexia took heart in the notion that the good fight was not against fat but against "unrealistic leanness." It was okay to gain weight.
Then in 1987 four scholars from Harvard University's School of Public Health, led by Walter Willett and Meir Stampfer, dropped a bomb on Andres's research. Trained in epidemiology as well as diabetes and obesity, the quartet closely examined twenty-five of the major prospective studies on body weight and longevity, including the 1979 Metropolitan Life Build Study, the cornerstone of Andres's work. If Andres had been surprised by his reworking of the numbers, the Harvardites were downright frightened by their own. In each and every study they found biases that were so severe and substantial that "failure to address any one issue will lead to an underestimate of excess mortality associated with being overweight." The biases led, they concluded in the Journal of the American Medical Association, to a "systematic underestimate of the impact of obesity on premature mortality." It was not okay to gain weight as one aged.
These were fighting words. But Willett and Stampfer had done their homework. Perhaps the most egregious flaw in most of the studies was their authors' failure to control for cigarette smoking, which is an independent risk factor that is more prevalent among the lean than the fat. (Smoking inhibits caloric intake and increases metabolic rates and energy expenditure.) Thus, to get an accurate picture of the added risk of premature death from excess weight, one must "deduct" the effect of smoking. If the statistician does not do this, Willett and Stampfer argued, one comes away with an artificially high mortality rate in lean subjects. That makes being heavier look less risky when it is actually more so.
This was not mere academic nitpicking. Controlling for independent risk factors is a widely accepted—indeed required—protocol in modern epidemiology. Willett and Stampfer had done just that. The results: "After controlling for smoking," they wrote, "the risk of death ... increased by two percent for each pound of excess weight for ages 50 to 62, and by one percent per extra pound for ages 30 to 49." The same conclusion was reached after reanalyzing an American Cancer Society survey of 750,000 men and women: There was no basis for recommending more lenient weight guidelines. In fact, the numbers suggested just the opposite: Weight guidelines needed to be stricter. Stating the obvious in the face of denial and wishful thinking, Willett and Stampfer noted that "few in the general U.S. population are at an increased risk of death from excessive leanness."
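Read linearly (a simplification for illustration; the published analysis may model the relationship differently), those per-pound figures imply, for a hypothetical twenty pounds of excess weight:

\[
20\ \text{lb} \times 2\%/\text{lb} = 40\%\ \text{added risk (ages 50–62)},
\qquad
20\ \text{lb} \times 1\%/\text{lb} = 20\%\ \text{added risk (ages 30–49).}
\]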
By the time Willett and Stampfer had published their work, however, the "Andres thesis," as it became known, had gathered speed and weight. The notion that excessive leanness was the problem and that overly severe weight guidelines were unfair played to the decade's overwrought identity politics, to concerns about gender, race, ethnicity, and age. In the academy and on the street, people heard what they wanted to hear, and what they wanted to hear was that it was okay to be fatter. And by the time the USDA's Dietary Guidelines Committee met in 1989, what the people wanted to hear had fused with the professional agenda of some of the nation's leading public health scholars.
The personification of that fusion was Dr. C. Wayne Callaway, an esteemed Washington, D.C., physician and public health expert. Callaway had been appointed by the Dietary Guidelines Committee to spearhead an inquiry into weight guidelines. Easy-going, witty, charming, and agile in the logic department, Callaway quickly made his charge a forum for his own inclinations on a wide range of weight-related issues. This was not unusual; most appointees to most public bodies do the same thing, sometimes overtly, sometimes not so. And for the most part, Callaway was on target. It was Callaway who argued for and won one of the most important changes in the guidelines—the inclusion of fat distribution patterns as a key determinant of weight-related risk. As he liked to say, "I can line up ten people, all of the same height and weight, and the fat deposition patterns will be all over the place. What the science shows is that the ones who look more like a pear—who carry their excess weight on their hips—are not as unhealthy as those who look like an apple—the ones who carry the excess fat on their belly." For the first time, Americans were instructed exactly how to calculate their waist-to-hip ratio—an important piece of information when determining whether one should lose weight or not.
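The ratio itself is simple arithmetic; the measurements below are hypothetical, purely for illustration:

\[
\text{waist-to-hip ratio} = \frac{\text{waist circumference}}{\text{hip circumference}},
\qquad \text{e.g.,}\quad \frac{36\ \text{in}}{40\ \text{in}} = 0.90.
\]

The higher the ratio (the more apple than pear), the greater the weight-related risk.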
But the waist-to-hip ratio also illustrated Callaway's one weakness: a tendency to want to salve too many special constituencies. The ratio was not only medically important, he argued in committee meetings. It was also socially just. Using simplistic weight-for-height tables, he said, "lets men off the hook too easily" (because they carry their excess weight in their belly) while simultaneously discriminating against women (who tend to carry their excess weight in their hips). To make them both use the same table caused women to worry too much and gave men "too much balm." Callaway's understanding of women as a group needing, in some areas, its own health guidelines was sound, but from here his tendency to placate constituencies began to separate him from the data.
There was, for instance, his concern about excessive dieting, a legitimate (albeit epidemiologically small) issue that colored other, more substantive concerns about obesity. In discussing one section about weight control, he interjected, to the surprise of his colleagues, that the committee should leave out the statement "One thing is definite. To lose weight, you must take in fewer calories than you burn." To Callaway, such a statement was "authoritarian." He went on: "What is hidden in that is blaming the victim. There are thousands and thousands of people who are chronically dieting, and if they take in fewer calories, it doesn't help them." At this, even his usually sympathetic colleague, the University of California at Davis nutrition scholar Barbara Schneeman, interjected: "But that concept still has to be conveyed to people, that ultimately it is caloric balance that will determine weight loss!"
Anorexia and bulimia, likewise legitimate (and likewise epidemiologically small) health issues, were accorded undue emphasis as well. "Because if we look at certain subsegments of the population," Callaway went on, "as has been done for instance in affluent suburban school systems ... fourth-grade girls are already dieting and defining themselves as being overweight. So if we come back to this thing about the potential for harm, I think we really need to balance that and almost give it equal balance."
Then came the issue of age. As Callaway saw it, "By the time a woman gets to age sixty-five, only about 10 percent of women are at the quote, ideal body weight." Rather than seeing this as more evidence that Americans were growing fatter, Callaway declared it an issue of inequality. "So, again, we have this age discrimination," he said. "So again, we're using a standard which doesn't make sense to the elderly population." The answer, he said, was to revise the weight guidelines upward—a historic first in the annals of the committee.
But unlike his advocacy of waist-to-hip ratios, Callaway's age-adjusted tables rested on a single—and very shaky—leg: Reubin Andres's 1985 study. Willett and Stampfer tried to get their concerns across, but since they were not members of the committee, "our views did not get a fair shake," Stampfer says. Willett recalls the situation somewhat more bitterly. "As far as I am concerned," he says, that decision "was one of the worst cases of backroom dealing that I have ever seen." The committee, he says, refused to look at the smoking data, despite the then growing evidence that Andres had been wrong.
Instead, in November of 1990, the committee announced its new guidelines. As the New York Times put it, "The guideline on weight suggests that people over thirty-five can be heavier than young adults without risk to health." Andres and Callaway had triumphed. It was okay to get fatter as one aged.
For the next five years, Willett, Stampfer, and a broad swath of the nutrition community labored for better data on the subject of age and weight. Other groups in the United States and abroad, appalled by the committee's action, published new data on the age-weight link as well. Almost all came to similar conclusions: For healthy people, male or female, it is almost always better to avoid weight gain—at any age, for any reason. So convincing was the evidence that, when the committee reconvened in 1995 (sans Callaway), it unanimously voted to rescind the age guidelines. "Based on published data, there appears to be no justification for the establishment of a cut point that increases with age," the new committee wrote in a terse note. "Although the nadir of mortality curves increases with age in several studies, these studies have failed to control for a history of smoking, which appears to affect mortality at all ages." Again, it was not okay to gain weight as one got older.
Yet for five years, such was the governmental advice that Americans, experiencing the biggest increases in obesity rates ever, seem very much to have taken to heart. And waist.
Given the debacle of the early 1990s reforms, one would imagine that the American exercise establishment might think twice about proclaiming new public health messages that sanctioned sloth, gluttony, or denial. But about that one would be wrong; they did not think twice. Instead, the brightest lights of their leadership embarked upon another crusade, this one to convince the American public that they should not focus on fat at all—that they should forget about dieting and losing weight and instead learn how to be "fit and fat."
The gladiator of the crusade is Steven Blair, a brilliant Texas epidemiologist, director of the Cooper Institute, and himself a leading proponent of the health-based fitness recommendations of the 1980s. For two decades, Blair has been at the primordial center of the debate about fitness. It has also been something of a personal issue for him. He is, as he likes to say, "Fat, fit, and bald — and none of those things are likely to change."
For years Blair did try to change; in the 1980s he followed a strict diet—the one recommended by the AMA—but to little avail. Like some obese people, he is in thrall to a stronger-than-average genetic inclination to retain excess weight. Unlike most obese people, Blair has responded to his birthright by getting tougher: he is a marathoner, triathlete, and vigorous sportsman. He has run, by his own estimate, more than 80,000 miles over the past thirty years. With his confident, engaging manner, mile-long vita, and persuasive debate style, Blair is his own best advertisement for his fit and fat campaign. And campaign he does. "We've got to get rid of this focus on weight," he likes to say at every media interview. "There's a misdirected focus on weight and weight loss—the focus is all wrong. It's fitness that's the key." Or: "Let's throw away all the scales. Let's stop talking about weight." At times he goes even further, proclaiming that "you can stay overweight and obese if you are fit and be just as healthy, in terms of mortality risk, as a lean fit person."
As usual, the scientific basis for Blair's case rests on studies of clients of the Cooper Institute—white, affluent, male professionals who had come to the center for a medical exam between 1970 and 1989. In Blair's clinical measure of their health, the key variable was fitness as assessed on a treadmill test. The test starts at a speed of eighty-eight meters a minute at zero elevation, which is increased to 2 percent elevation for the second minute, then 1 percent each minute to twenty-five minutes; after that researchers turn up the speed every minute until the subject becomes exhausted and the test is halted. Since total time on a treadmill correlates strongly with individual fitness levels, Blair was then able to assign participants to different, age-group-specific fitness categories—low fit, moderate fit, and high fit. He next factored in BMI—a weight-for-height index based on health outcomes rather than on "what is normal"—thereby creating three weight groups, each subdivided by fitness, to study: normal-weight men with low, moderate, and high fitness; overweight men with low, moderate, and high fitness; and obese men with low, moderate, and high fitness.
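For reference, the standard formula behind the BMI figures cited here and below expresses weight relative to the square of height; by convention, 25 and over counts as overweight and 30 and over as obese:

\[
\text{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} = 703 \times \frac{\text{weight (lb)}}{\text{height (in)}^2}
\]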
To find out what all this meant, Blair then figured in the rate of death and related risks that took place within this group over the years. What he found was important. Death rates were inversely related to fitness status. While it wasn't too surprising that high-fit, normal-weight men had death rates 61 percent lower than low-fit men, it was notable that the risk reduction held up when applied to fat men who were in the fit—or high treadmill time—category. The conclusion, in its invariably unexciting academese, was that "inverse gradients of mortality across fitness groups were similar for obese and non-obese men." Blair, however, spun it like this: Fat men who were fit lived longer than slim men who were not fit.
This Blair used to attack U.S. weight guidelines, which he regarded as too restrictive. (The 1995 guidelines suggested that Americans maintain a healthy weight, preferably a BMI of 25 or under.) To sharpen his epidemiological blade, Blair did another study. This time he factored in risk from cardiovascular disease. The results were again revealing. Fit but overweight men displayed a mortality rate similar to that of physically fit men of normal weight. Almost as important, he proclaimed, fit overweight men had a lower risk for cardiovascular disease than unfit normal-weight men. This, of course, was a bit like comparing apples and oranges (or, more aptly, apples and bananas); no one, after all, had ever said that being unfit and skinny was a good public health goal. But Blair saw fit to spin it like this: "The health benefits of normal weights appear to be limited to men who have moderate or high levels of cardiorespiratory fitness. These data suggest that the 1995 U.S. weight guidelines may be misleading...." And again: "We do emphasize that increasing fitness may be more important than maintaining healthy weights."
The media translation was predictable. As the Associated Press (and many others) slugged it: "Study finds obese exercisers outlive thin people who don't." A book came out that was entitled You Don't Have to Be Thin to Win. The New York Times even went so far as to say that Blair had "dispelled" a "myth" that fat people could not be fit.
But that was never really the myth, and that was certainly not why body weight guidelines promoted leanness. Body weight guidelines—and the entire infrastructure of promoting weight loss—rested on long, deep, and convincing science showing that body weight is inversely related to health. Over and over, studies show: The fatter you are, the more likely you are to be sick, feel sick, and die young. Blair's own data are a case in point. Taking out the fitness variable and looking at body weight only, Blair admitted: "Men with a BMI of >30 were generally less physically fit and had more unfavorable risk factors than men in the lower BMI groups." Lower weight men had higher good cholesterol, lower bad cholesterol, and longer treadmill times than fatter men. "The highest death rate," he added, "was observed among those men in the highest BMI category and correspondingly lower death rates were observed in each subsequently lower BMI category." And when one looks at the difference between low-fit men in all weight categories—which one might think would be useful, since most obese people are not fit—Blair's upbeat message fades: Normal-weight unfit men had an age-adjusted death rate (a standardized rate of deaths in the studied group) of 52.1; unfit fat men had the higher rate of 62.1. More: Unfit lean men were half as likely to have a history of hypertension as unfit fat men. In the real world, even according to Blairism, the fat are more likely to die early—and to live precariously—than the lean.
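On the figures just quoted, the gap between the two unfit groups amounts to roughly a fifth:

\[
\frac{62.1}{52.1} \approx 1.19,
\]

that is, an age-adjusted death rate about 19 percent higher for unfit fat men than for unfit lean men.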
Now look at the fat fit vs. the lean fit in Blair's population. In almost every category, it is better—far better—to be lean. Consider treadmill time. The data are unequivocal: As a person gets fatter, even if he is getting technically fitter, he is also less likely to perform as well on the treadmill test as his leaner brothers. Blair admitted: "Men who were normal weight and physically fit had the longest average treadmill time."
But the single most important fact—again detailed by Blair himself—is this: The fat are always less likely to be fit than are the lean. The absolute numbers bear this out. Of Blair's total universe of people, 8100 of the lean were fit, 6000 of the overweight were fit, and only 3307 of the obese were fit. But that did not make the editorial—or cultural—cut either. Neither did the fact that those 3307 almost certainly had to work a lot harder to get that fitness. They could likely do that only because they were, as a group, much richer than most Americans. Remember, Blair's real message, almost always lost on its readers, is largely one of class: Yes, you can be fit and fat if you are rich, white, and male. As, again, were all members of the Cooper population.
It would take a hard heart to say it is wrong to tell fat people that they can become fitter by exercising more. They can become fitter. There is also nothing wrong—and everything right—with preaching a doctrine of self-acceptance to go along with that advice. One should not hate oneself because one is fat. But one should not be led to delusion. Weight matters. It always matters. If one is obese, losing weight is key to obtaining optimal health.
There may, however, be something downright cruel about implying that "anyone" can be fit and fat, especially when the principal examples of that are rich white people who have the time, money, and energy to train—not for ten minutes three times a day—but for marathons. Marathons!
Consider a typical case, inevitably trotted out by Blair for some poor general-interest reporter who needs an example of how one can be fit and fat.
His name is David Alexander. Over the past seventeen years he has finished 276 triathlons in 37 countries. He trains so much that, to fit it all in, he sleeps only four and a half hours a night. In a week, Alexander will swim 5 miles, run 30, and cycle 200, and on top of that might compete in not one but two triathlons. Alexander is also, at 5'8" and 260 pounds, "a big boy," he likes to say, "and I'm always going to be big, but I'm healthy." Only much later in the story do we find out why he is healthy. Alexander is the co-owner of an oil company. There he inhabits an office, we are told, where he sits "surrounded by the antique maps he collects." As is the case for most Americans, for Dave Alexander fitness is purchased.
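By the standard formula given earlier, Alexander's reported height (68 inches) and weight put him well past the conventional obesity cutoff of 30:

\[
\text{BMI} = 703 \times \frac{260}{68^2} \approx 39.5
\]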
But is he really fit? What of the illnesses that derive from fatness and have nothing to do with cardiovascular health? What about type 2 diabetes? On that count the most recent scientific literature is sobering and clear: Alexander is much more likely to get it than he would be were he leaner. As a 2001 study by the Departments of Epidemiology and Nutrition at the Harvard School of Public Health concluded: "The most important risk factor for type 2 diabetes was the body mass index ... Even a body mass index at the high end of the normal range was associated with a substantially higher risk [than a lower body mass]." How substantial? "More than 61 percent of all cases could be directly attributed to overweight." Although some studies have shown that exercise can somewhat mitigate those risks in fat people, the overwhelming consensus among diabetes experts is perhaps best summed up by a quotation from the director of a New York medical program trying to treat the disease. "Bring me a fat man," this physician told the Times, "and I'll show you a diabetic, or someone who will become one."
Excess abdominal fat cells are troublesome in and of themselves for another reason: They are, metabolically, the laboratory of so-called Syndrome X. The syndrome, first identified by the Stanford endocrinologist Gerald Reaven, acts as the precursor to type 2 diabetes and, eventually, to full-blown insulin-dependent diabetes. Excess weight is implicated in its progression. This is because, in at least 30 percent of all Americans, insulin-resistant fat cells in the gut produce excess fatty acids, which wreak havoc by attacking the body's vital sugar- and fat-processing functions. The more insulin-resistant fat cells, the more destructive fatty acids. This in turn results in everything from hyperinsulinemia (leading to diabetes) to excess blood fats (leading to artery-clogging) to constricted blood flow (leading to hypertension). Although particularly rampant among the poor and recently modernized peoples of the world (and among those groups in the United States, as described more fully in chapter 6), the syndrome knows no economic barrier when it comes to fat. Fat cells are its engine, fuel-maker, and distribution network.
As a person ages, excess weight becomes problematic for another reason: bone and joint disease, which it can both cause and complicate. Osteoarthritis of the knee is a case in point. Being heavy drives the progression of this painful disease. A pound of extra body weight places from two to four pounds of extra stress on the knees and hips, even during routine movement, let alone under the stress of marathon-like exercise regimens. In the arthritic knee, which takes the majority of the pounding, that stress causes the cartilage to wear away, letting exposed bone surfaces grind against one another. That brings even more swelling, pain, and difficulty in moving about in general.
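Taking the text's two-to-four multiplier at face value, a hypothetical 30 pounds of excess weight works out to

\[
30\ \text{lb} \times 2 = 60\ \text{lb}
\quad\text{to}\quad
30\ \text{lb} \times 4 = 120\ \text{lb}
\]

of added stress on the knees and hips during routine movement.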
And that, however the epidemiologists cut it, just ain't fit.
But then, by millennium's end, most Americans were not fit. They were exercising less, eating more, and—thanks to the permissive culture they had created—not feeling very bad about it, thank you very much. It was, after all, a comfortable world, one where a bit of housework sufficed for exercise, where it was okay to gain weight as one aged, where it was healthy to be fat, where the medical consequences of their behavior seemed remote. Even though those consequences were exploding right under their noses.