We must resist old age, Scipio and Laelius; we must compensate for its deficiencies by careful planning. We must take up defenses against old age as against a disease, taking due thought for good health, following a program of moderate exercise, eating and drinking enough to rebuild our bodies, but not to overload them.
—CICERO
“Rubbish!”
Silence.
“Rubbish, I say!”
Silence.
The speaker, Josh Mitteldorf, having quietly taken in the one-word critique issuing from somewhere in the hotel conference room, commenced again. He was trying to convince the members of the Caloric Restriction Society, gathered for their second international conference in Tucson, Arizona, that death wasn’t really such a bad thing, that nature evolved the genetic program for apoptosis—programmed cell death—for a reason: to keep the planet from getting overcrowded. “I kept asking myself,” said Mitteldorf, a slim, tousle-haired environmental scientist, yoga teacher, and philosopher, “why would evolution create an organism with a program to kill cells? And a lot of the answer came from looking around at the world today—at obesity, overpopulation, pollution, consumerism—and that all led me to this quest to understand why nature made us to die.” He had recently published a paper in Science about it, and …
“Rubbish!”
This time there was an audible ripple of protest through the crowd, directed toward the source of this indecorous behavior, a tall, skinny man wearing a T-shirt with the words END AGING NOW! His name was Michael Rae; only twenty-nine, he and his equally young girlfriend, April Smith, were quickly emerging as the new poster children for the CR Society, an organization long saddled with the dour image of some of its older practitioners. Rae, who had already charmed everyone by openly flossing his teeth during an earlier presentation—the better to reduce his inflammatory burden—was having none of Mitteldorf’s ideas, and instead of waiting for the formal question-and-answer period, bounded up to the podium, cranked the microphone his way, and began to recite data to the contrary. “Can someone let me use their computer—I want to look up a reference?” he said. He then went on, with a barrage of … data. “So how can you possibly make this argument?” Rae said, winding up, to an apparently unperturbed Mitteldorf. (There had been a yoga class that morning.) “How can you say that?”
I had come to Tucson at the invitation of Lisa Walford, a former president of the Caloric Restriction (CR) Society and the daughter of one of the organization’s founders, the late Roy Walford. Tucson held some special significance both for the society and for Walford. About an hour’s ride out into the desert stood one of the first, albeit unintentional, sites of human caloric restriction—Biosphere 2. The biosphere was an attempt to live for two years in a fully contained, self-supporting environment. It ended in a rash of controversy, partly because the dome failed to produce enough food or air for its eight inhabitants. Roy Walford, one of them, had never been the same after, and many quietly suspected that his recent death, at age seventy-nine, of Lou Gehrig’s disease, was a direct result of the biosphere experience. The scientific program of the CR conference included a number of highly placed scientists who had been friends with Walford, here partly out of respect, partly because, when you think about it, CR people are the only lab animals that can talk.
“You can learn all kinds of things from them,” said Edward Masoro of the University of Texas, who had been studying CR in rats for decades, “but there is nothing like this conference.” “These are people with a huge commitment—think about it, they are willfully suffering, in my mind, for a payoff we don’t really know will work in humans,” Steve Spindler, a cell biologist from UC Riverside, told me later. “Usually they are eating thirty to forty percent less than they normally would—every day. These are not crazy cult members either. Yes, they get passionate, but they are the most scientifically well-versed people I have ever met. They are amazing.”
Luigi Fontana, a professor of medicine at Washington University School of Medicine, chimed in over a bland meal of couscous and steamed vegetables later: “These are wonderful people—beautiful people!” He smiled and went on to outline his plans to draw their blood and study their arteries and brains. “I mean, where—where!—can a medical scientist ever hope to have anything better!”
And where, I thought, can a journalist hope to sight more fish in the same barrel? For it is true that, among CR people, there are plenty of odd characters: the vitamin marketer who eats so much beta-carotene that his palms look slightly orange; the mathematics professor who tries to get so many of his limited calories from raw vegetables that a meal with him takes two hours and a dozen trips to the salad bar; the expatriate electronics executive who normally lives in Japan and who prefers to eat his dinner off a pharmacist’s scale, gram by gram; the perfectly healthy advertising executive who measures blood sugar before and after every meal; his wife, who believes that eating after two in the afternoon is “something we should just get over.” There are plenty of odd events as well: the six a.m. meditation breakfast, in which one meditates on a repast of five blueberries and three potato chips; the sometimes mind-numbing scientific panels on coenzyme Q10, l-carnitine, PBN, and, of course, protein versus carbohydrate consumption. All of this had led, in the past few years, to a rash of magazine and newspaper articles, inevitably describing the strange aspects of the society and, not surprisingly, using it to vent any variant of modern identity politics, from feminism (there were almost no female members) to fatty-ism (it was a “pro-anorexia front”) to foodie-ism. “I don’t know what was worse, the accusation that we were somehow for anorexia or that we didn’t know anything about cooking,” April Smith said, a little sarcastically. “I mean, I take some pride in my halibut!”
Yet the more time I spent with CR people, the more I came to see them as utterly logical (perhaps supralogical, given the consumption-oriented culture we all live in today). For mainly a series of unremarkable reasons—usually set in motion by a health crisis and an early realization that they are mortal—they each have evolved a kind of longevity phenotype—a set of traits—of their own. In a sense, they represent one version of what people might be like in a world of extended life spans and ratcheted-down consumption. They are, literally, cool: Shake a hand and you’ll notice. They can be a little grumpy, too, but that is just those who are relatively new to it. The rest, as Lisa Walford, who grew up in the household of the world’s most famous calorie restrictor and is now one of LA’s top yoga teachers, explained, “just have a kind of damped-down affect. It is not depressed. It is just … different.” She paused for a moment. “You’ll want to compare it to the mouse, of course, but what I find is people who are just not as revved up as we expect people to be these days.”
They are also not very sexual, a subject to which we will return later.
All of which invites the question: Where did we get CR in the first place?
Not long ago, Steven N. Austad, one of the nation’s foremost experts on the evolutionary biology of aging, was rummaging around in the dusty archives of Clive McCay, the Cornell University scholar credited with discovering, in 1935, the well-known antiaging effects of caloric restriction. Among the yellowing photos, old laboratory notes, and studies on nutrition was McCay’s journal. On several pages McCay had jotted down tips for healthful eating, a few aphorisms, and a piece of recommended reading: a book called La Vita Sobria: How to Live a Long Life, by a sixteenth-century Italian, Alvise “Luigi” Cornaro.
Alvise “Luigi” Cornaro, sixteenth-century author of La Vita Sobria, which influenced modern antiaging science.
Right away Austad, whom I’d interviewed and who knew of my own interests, sent me a copy. “Here it is,” Steve wrote. “I see he’s reading your friend here.” In his usual low-key way, Austad had made an important find. He had linked the most important science about life extension of the twentieth century to its distant philosophical origins in the sixteenth.
Cornaro, a Renaissance humanist and businessman little known outside of academic circles, had been on my mind for nearly a decade. I first came across his treatise while researching a book about obesity and was struck by the modern arc of his health narrative. Here was the classic case of a man falling ill because of his wayward behavior, then regaining his health through renewed attention to diet and vigorous living, and then—the modern part—prescribing his regimen to every friend who would listen, first with incessant chatter at the dinner table, then via a self-help book. Cornaro was our kind of guy: the original billionaire philanthropist turned health nut.
Or was there more? As I slowly picked through translated Italian documents and visited his home in Padua and his farmlands just outside Venice, something else began to emerge. The more I looked, the more I saw that Cornaro’s circumstances, in terms of bodily aging, were very much like our own. True, they included madrigals and commedia and busty wenches and ermine stoles, the Renaissance version of MTV and Internet porn, but if you parsed his physiological world through modern scientific lenses, you came away with a prototype for contemporary aging—and for contemporary antiaging.
Born in Venice in 1484, Cornaro spent much of his early life apprenticed to a wealthy uncle in Padua, who put him through school and college and, perhaps more important, taught him how to work the ropes of the Venetian bureaucracy for financial succor. At the time, Venice and her empire had entered a period of almost constant warfare with the Holy Roman Empire; her preeminence in international trade suffered huge blows. Cornaro—young, ambitious, a fast study—took this all in. The future, he came to believe, lay not in overseas conquest, but in radical agricultural investment at home. Inheriting some fetid marshland on the mainland from his uncle, young Luigi drained it and planted rice, wheat, and vegetables. He must have done something right. The swamps burst with plenty, and he made a fortune—a fortune that launched him into a round of partying that didn’t end until he turned thirty-five. He came to his physician debauched and depressed; he was told he’d likely not make it to his next birthday.
His symptoms were telling. Cornaro had “a continuous unquenchable thirst.” He could not tolerate sweet fruits. Or pastry. He was also gouty—probably from too much fatty meat. Today, we would likely say that he had type 2 diabetes. (His case pre-dates that of J. S. Bach, long thought, albeit uncertainly, to be the first named type 2 sufferer.) Cornaro also complained of seasonal fevers, likely from low-grade malaria contracted while surveying his swamplands. In the Galenic parlance of his day, Cornaro considered himself “choleric”—eternally hot and bothered.
His physician, perhaps having parsed some of the period’s reissued classical medicine texts, gave him an unusual prescription. He had to learn to eat what agreed with him, but—most important—to stop eating before he was satisfied. Much of the advice was couched in moral terms, with great emphasis put on the notions of temperance and sobriety, the divine counters to gluttony and the way of all evil. (This was before postmodernism, when you could still speak about such matters.) But as Cornaro himself discerned it, the advice was the essence of reason as well: “I accustomed myself to the habit of never fully satisfying my appetite, either with eating or drinking—always leaving the table well able to take more. In this I acted according to the old proverb: ‘Not to satiate oneself with food is the science of health.’ ”
Within a few days he felt better, and he began to experiment. Fruit did not agree with him, and despite the prestige accorded to melon and cherries and apples and peaches on the rich man’s Renaissance table, he simply stopped eating them. The same with pastries. Meats like goat and mutton, in small amounts, he found agreeable. Pane padovano, the coarse, whole-grain country bread of the Paduan countryside, he found a delight. But Cornaro’s mainstay was his “dear” panatella, or, as it is known today, panado—a brothy soup, usually derived from capon and cooked with small bits of Paduan bread and, sometimes, an egg. With it he drank two glasses of wine— “the milk of the aged,” he called it—a day, bringing his total daily intake to somewhere between 1,500 and 1,700 calories. Thus did a man who never heard of blood sugar or insulin, let alone diabetes, put together a relatively low-sugar, high-protein, calorie-light diet. Cornaro’s Way might suit many modern type 2 diabetics just fine.
The results of his regimen were striking. “In less than a year, I was entirely freed from all the ills which had been so deeply rooted in my system as to have become almost incurable.” A man transformed, Cornaro could now mount his horse “and other things” unaided. He took up anew his vast designs for transforming Venice (he devised an influential plan to block off the lagoon from the ocean and build an island theater across from St. Mark’s, which later critics considered heresy), and he began writing influential tracts on architecture, water management, and farming. His own garden became the scene of one of the liveliest and most influential salons of the northern Renaissance. The great language master (and literary inamorato of Lucrezia Borgia) Cardinal Bembo partied, apparently soberly, with him. Jacopo Tintoretto came and painted Cornaro’s portrait (which now hangs in the Pitti Palace). Giorgio Vasari interviewed Cornaro. Even the young Andrea Palladio, later the preeminent architect of the northern Renaissance, absorbed Cornaro’s enthusiasm for classical architecture. The old man’s Casa and Loggia Cornaro—filled with racy murals and tableaus right out of Nero’s Golden House—anticipated Palladio’s revival of classical Roman architecture by more than a dozen years.
Moreover, Luigi Cornaro was aging with grace. “I am healthy, cheerful and contented,” he wrote in 1558. “My sleep is sweet and peaceful and moreover, all my faculties are in a condition as perfect as they ever were. My mind is more than ever keen and clear, my judgment sound, my memory tenacious, my heart full of life.” He was happy, enjoying his days spent with his eleven grandchildren, who lived in his home. With them he often sang aloud. “My voice—that which is wont to be the first thing in man to fail—is so strong and sonorous that, in consequence, I am obliged to sing aloud my morning and evening prayers, which I had been accustomed to say in a low and hushed tone. Oh, how glorious will have been this life of mine! O divine temperance!” When he wrote those words he was seventy-six—more than twice the age he was when given his death sentence forty years before.
In terms of life extension and aging, what exactly had Cornaro done? Clearly he had stumbled onto something. Today we would call it calorie restriction, or, simply, CR, which we know works in extending the life span of mice, rats, fruit flies, earthworms, yeast, and, many hope, humans. But how does it work? Of all the proposed mechanisms now in vogue, Cornaro’s Way goes to the heart of the two most vibrant theories. One holds that CR works by constantly, but mildly, stressing the body with faminelike cues, thereby pushing cells to stop putting limited energy into reproduction and growth and instead invest that energy in maintenance and repair, usually repair of damage from routine metabolism; you can think of this as the “super-repair” pathway. The other main theory holds that CR works by ratcheting down the amount of insulin and sugar that runs around in the bloodstream, thereby preventing diabetes—itself a form of accelerated aging—and cardiovascular diseases and a number of age-associated cancers. Think of this as the “sugar signal” pathway. These routes are far from mutually exclusive, with one often affecting the other. Cornaro himself had his own idea about why it worked; in Galenic terms, “eating little” stanched the slow but inevitable loss of “radical moisture,” something that every person possessed in but a fixed portion.
Yet, ultimately, it did not matter why CR worked. Cornaro, surrounded by beautiful singing grandchildren, a garden full of trendy intellectuals drinking divine milk, had tripped headfirst down a physiological pathway only now being deliberately parsed by the leading minds of American science. O divine temperance!
And so: in 1558, he published La Vita Sobria, with new editions following in 1562 and 1564. Contemporaries wrote paeans to him and to it; his friend and language scholar Sperone Speroni even wrote an Erasmus-like gloss on it, “In Praise of Gluttony.” (If it all took place now, Cornaro’s home would probably be featured in something like Simply Rinascimento! magazine.) And then, in 1566, at the age of eighty-three, Alvise Cornaro died; according to one friend, there was no pain, no regret, no tears. The old man, sitting in his little bed, placidly drifted off, “as serene as a beautiful sunset on an unclouded day.”
In short, his survival curve had been rectangularized.
But what of Cornaro’s Way? Its author had departed, but the book—it had stuck. La Vita Sobria quickly traveled the corridors of power in Europe—to France, Holland, Spain, Germany, and England. And then across the Atlantic. In 1793, the Reverend Parson Weems, arguably George Washington’s most important hagiographer (he literally invented the myth of the cherry tree) and an enterprising upstart publisher, packaged a translation of La Vita with an essay on health from Benjamin Franklin; it became a postcolonial must-read when he convinced the new president to blurb the book, now retitled The Immortal Mentor. So did Cornaro’s Way enter the health literature of the New World.
In 1903, Thomas Edison blurbed a tony, gold-edged reissue.
In 1920, Henry Ford, the ultimate American millionaire health nut, gave it to friends as a Christmas present.
Just outside Burlington, Connecticut, flows the Farmington River, which sends its burbling brooks coursing through the bucolic New England countryside, and with them, the aptly named brook trout, or Salvelinus fontinalis. They are a beautiful fish, legendary for a good fight and, after a cold morning spent in hip waders, an even better fry. They were also, scientifically speaking, America’s first, albeit involuntary, practitioners of caloric restriction. Today, you can get pretty close to these longevity pioneers: In noisy 2008, you can still stand in the exact spot where modern antiaging science was born, the place Cornaro met the Connecticut Yankee. That spot, just off Route 4, is the Burlington Trout Hatchery. Wind your way through the circular pools and on into the wooden hatchery building, and you will find a series of troughs, meant for processing young trout, or fingerlings, before release. It was here, sometime in August 1926, that the biochemist Clive McCay and his assistant, F. C. Bing, reveled in the wonders of nature, science, trout farming—and life extension.
McCay was a young man, having just earned his PhD from UC Berkeley, but already he was a piece of work. An Indiana farm boy, orphaned as a teenager, McCay came to college with calloused hands, having worked summers as an itinerant wheat shearer. Long, tall, craggy faced, and handsome in a Nordic way, he was also the classic fast study. As a boy he inhaled the natural sciences, and one day, having gotten his hands on a government bulletin about nutrition, “I learned about calories,” he recalled later. “My sister says there was never a calm meal thereafter because I always sat down and counted the calories in the potatoes and the bread.” McCay spoke quickly and was always moving—“Indian-like,” as one friend put it—hiking ridges and hollows, botanizing strange and exotic stands of grass and weeds, and fording dark, mysterious streams. And there were McCay’s omnipresent companions—terriers! beagles! retrievers!—which seemed to follow him, playfully nipping at his heels, partly in pure canine joy, partly in stoic mammalian reverence for the master, who seemed always to know the way home.
He could well have become a veterinarian, were it not for the fact that, in the mid-1920s, one of the hottest subjects in the sciences was nutrition. As a postdoc, Clive McCay took a fellowship at Yale to study animal nutrition with the great L. B. Mendel, credited with isolating vitamins A and B, among other things. There, in 1926, McCay met F. C. Bing, a young Yalie fascinated with, as he liked to call it, “the vitamin sciences.” The vitamin sciences were popular because, after years of frustration, there was a tangible payoff to it all. That same year, the medical researchers Minot and Murphy discovered that you could cure pernicious anemia, a seemingly intractable problem, by having a patient eat large amounts of liver. The result of the discovery was an explosion in nutrition research—and a huge increase in demand for raw liver, traditionally part of the feed for hatchery trout. McCay’s job as a postdoc was to find alternate, cheaper feed mixtures for Connecticut’s nascent attempt to restock its barren streams with trout. Liver was simply too dear.
Right away, as Bing later recounted, “the principal observation we made was that brook trout, in addition to vitamins, require a substance which is present in raw liver and to some small extent in dried skim milk.” McCay, attuned to the need for a vitamin-era name, dubbed it Factor H. But how much was enough? He experimented with various mixtures: 10 percent liver, he found, would keep the little trout alive, but 15 percent was needed for them to reach their maximum size. There was another thing. Reporting his results to McCay, Bing one day noted an odd pattern of death among the fingerlings. “Most of the fish in the group receiving only 10 percent protein, an amount that prevented them from growing, were still alive. They seemed to be in fairly good condition. On the other hand, most of the fish that received more protein and had grown steadily all summer, had died off.” McCay was intrigued. Growth … and death.
“What do you think caused this?” he asked Bing.
Sitting on the ground outside the hatchery, the yellow-breasted chats chirping away, the pair chewed over their latest book learning: Rats stunted in one famous experiment appeared notably youthful; in their own mentor Mendel’s lab, stunted rats on deficient diets for a long time could resume almost normal growth upon commencement of regular feeding; in a competitor’s lab, slowly growing rats on a “poor diet” lived longer and healthier than well-fed rats on “state-of-the art” diets. Everything the young men knew about life span and nutrition seemed in flux. They fell to philosophizing and speculation. Hadn’t one German philosopher compared a newborn’s metabolism and growth with that of a wound-up spring? “This force is continuously used up as the spring unwinds and when the spring at last runs down, death occurs.” The pair came back to the fingerlings unwinding in the hatchery. “They behave,” McCay and Bing later wrote in the Journal of Nutrition, “as if there is something in them which is gradually consumed during growth so that if the animals are kept from growing, they live longer.” Growth—it somehow hastened death. In McCay, the agile young faun of Jazz Age vitamin science, the observation prompted a radical thesis: Retarded growth and development were fundamentally tied to the life-extending effects of dietary restriction.
That afternoon, McCay told Bing that he knew what he was going to do with his career. He “was going to work on longevity,” and he was going to do it with … rats.
Rats were something that Cornell University had, and McCay, taking a post there in 1927, immediately fell to playing with their diets. The dominant lab animal of the time—mice would not topple them for another thirty years—was the white rat. It was a reliable animal model, fecund, predictable, and pedigreed; a scientist could know its lineage. But as McCay began to peruse twenty years of background data about rattus, he discovered an amazing fact: almost no one knew how long they lived in the lab, let alone outdoors. After any experiment, leftover animals were simply killed, or “disposed.” About this, McCay complained and complained, until he was shut down with a challenge by Mendel: “You’re young—you do the life span studies!” Commencing in the early 1930s, McCay did just that.
Looking at old data first, McCay calculated the average life span of the lab rat as somewhere between five hundred and six hundred days. He then designed a series of experiments to see how dietary restriction would alter growth, maturity, and life span. In his most important experiment, he took 106 weaned rats and divided them into three groups. Group I was fed all the food they desired. Group II was calorically restricted and forced to mature slowly, with feed that contained all essential vitamins and minerals. Group III was allowed to grow normally for two weeks after weaning, then forced to develop slowly with the restricted diet. By splitting the groups so, McCay hoped to figure out one of the big concerns of Great Depression nutritionists: Could growth, interrupted by lowered caloric intake, be refueled later in life? Would a slow-growing mammal—rat, cow, pig, perhaps even a human—ever reach the same size as one that matured quickly?
Although the answers to McCay’s questions were somewhat predictable—slow growers never reached the size of fast growers—the health and longevity data were counterintuitive. The restricted animals lived longer—lots longer—and they had a reduced rate of disease. In reporting this, McCay became a crafty, modern communicator. Even before his scholarly journal articles on the experiment appeared, he was spinning it in the science monthlies. Writing in the September 1934 issue of Scientific Monthly, McCay noted: “Earlier experiments in our laboratory and [at] Columbia have shown that the mean life of a male rat is between five and six hundred days: if the average man lives fifty to sixty years, about ten days in the life of a male rat equals a year in the life of a man. In the present experiment the mean length of life was 509 days for the normally-growing male rats, equivalent to a mean age of 51 years for man. The rats of the two retarded-growth groups have exceeded mean ages of 780 and 870 days, respectively, equivalent to 80 to 90 years for man.” Then, almost as an afterthought, he added: “These data indicate that the potential life span of an animal is unknown and may be far greater than we anticipated.” The potential life span of an animal is unknown and may be far greater than expected. Again and again for nearly a decade, McCay returned to these words. Maximum life span—in human terms always thought to be somewhere around the Bible’s 120 years—was not fixed. So what was the maximum life span? Why, by 1935 there were rats in his lab that had lived more than thirteen hundred days, and everyone knew what that meant in human years …
The insight seemed to transform McCay. The man once known for his nature boy profile began wearing a red bow tie, and referring to himself, as was quietly fashionable at the time, as “a bit of a Bolshevik.” He was constantly speaking at this scientific conference or that, his tuft of unruly hair and handsomely rough visage looming over his peers, who enjoyed the upstart’s gentle laugh and out-of-the-ordinary observations. His “growth versus longevity” thesis also dovetailed nicely with the era’s prevailing theory of aging, the “rate of living” thesis. To promote his line of thinking—this in the dead of the Great Depression—McCay gave interviews to popular magazines and took up the new medium of radio. His weekly show went into all kinds of detail about diet, health, and aging. He talked about how studies of nineteenth-century British prisoners showed that those who had been given the more meager rations tended to live the longest; about obesity, fat, and sugar and their effect on health. He began quoting the wisdom of the ancients and near ancients, bringing to his Cornell-area audience the words of people like the mystical thirteenth-century monk Roger Bacon, an early longevity advocate, and, of course, Luigi Cornaro.
But the more McCay talked, the more he realized his true interests lay not in animals, but in humans. (He liked to quote Bacon’s injunction that “the cure of diseases requires temporary medicines but longevity is to be procured by diet.”) Life span was not fixed by some law of nature, and “therefore,” he said in one radio show, “a bright future belongs to the man who directs his efforts to stretching life span. We have learned to keep most of our children from dying but we have not made much progress toward giving men and women a healthier middle age. We do not wish to prolong the suffering that goes with feeble old age; we want to extend the prime of life when most of us live and enjoy living. I believe it can be done if we give this problem a sufficient amount of thought.”
In the Hollywood script version of McCay’s later years, he’d have likely pursued this semi-immortalist dream, slowly working his way up the mammalian chain, testing his ideas about dietary restriction on dogs and pigs and sheep and monkeys, parsing its effects and refining his theories until one day, in the great scientific breakthrough that changes the world and leaves its discoverer revered (and comfortable in tweed), he triumphs. But life rarely unfolds like a Hollywood script (and even when it does, it is more likely Antonioni than Capra). So it was for Clive McCay, who spent much of World War II studying not life span extension but the nutritional requirements for American overseas troops. It was a fruitful endeavor in itself, and when he returned, he had a list of research questions: Is drinking coffee and soda pop good for you? What kind of nutrition builds the best bone? Should drinking water be treated with fluoride? These would keep him ever busy in the lab and, more and more, in the test kitchen.
McCay’s obsession in later life was building the perfect loaf of bread, an endeavor that succeeded remarkably well. By the mid-1960s, when he died, you could see that success inside the nation’s small but proliferating congeries of health food stores. There, perhaps offered by a vivacious grandmother in a pink smock and sensible shoes, was a loaf of something called Cornell Bread. It was a tasty, if dense, loaf, full of whey and soy and wheat germ and, of course, powdered milk. If you wanted, you could easily make it yourself, because the recipe was inside a little book, usually on the rack next to something by Gypsy Boots or Jack LaLanne or Euell Gibbons.
The book was by a man named Clive McCay and his wife, Jeanette. If you opened it, you would encounter a strange thing—strange, at least, for a cookbook. For there, right on the fourth page, popped a big photo—of a congenial-looking, craggy-faced man in a lab coat, merrily tending two furry white animals known to most people as … rats!
One of the consequences of McCay’s abandoned dietary restriction trials was to leave behind a deeply incomplete understanding of how CR worked to prolong life. McCay’s idea—that CR involved something having to do with a mysterious Factor H that burned up during growth but was preserved by slowing down growth—served to attach the theory at once to the concrete and the mystic. If McCay had known what science would eventually come to know about B12, the element in raw liver and dried milk that was essential for growth, well, then, he would have changed his theory; it was not just some nutrient that prolonged life span, or else everybody who got adequate vitamins would become a centenarian. On the other hand, if CR’s effect was somehow tied to the slowing down of growth and development, well, that process was so fundamentally governed as to be unfathomable; he didn’t have any way to locate genetic mechanisms, as we do today. And so CR, tied to McCay’s ideas about its workings, was stuck. Growth—somehow it led to death. Such was the notion that professional gerontology inherited in its most formative years in America.
Gerontology—the modern study of the aging process—and geriatrics—the treatment of age-related diseases—also bore a burden. It was the burden of quackery. Early twentieth-century medicine, like late twentieth-century medicine, was filled with endless snake oil nostrums for aging and longevity. When McCay did his first CR experiments on rats, for example, entrepreneurs were still hawking rejuvenation compounds based on ground-up goat testicles and pituitaries from human cadavers. That legacy, rightly, birthed a culture of caution in the world of gerontology. You could see it whenever a professor or a scientist began talking “longevity” or “life span extension.” Discussions of tenure suddenly grew terse. The invitations to fancy conferences dried up. The department chair—he was no longer introducing you to his lovely wife, who was, come to think of it, far too interested in all that rejuvenation talk.
Instead, the next thirty years saw gerontology focus almost completely on theories of aging. Specifically, why do we age? Among them were elaborations of the “rate of living” theory, itself proposed as early as 1928; it originally held that humans were born with a fixed amount of vital energy that was not replaceable and was burned up in the normal metabolic processes of living. (The theory’s real value lay not in its restatement of Cornaro’s radical moisture idea but in the connection between life span and metabolism.) There was the “mutation accumulation” theory (1952), which proposed an evolutionary basis for aging; here the British scholar Peter Medawar asserted that aging occurs because natural selection exerts its greatest force on the young organism—its only goal being reproduction—and, consequently, lets slide many unhealthy genetic mutations that do not show up until later in life. When organisms, from fruit flies to man, began living in safer environments (homes, glass tubes, cages) that allowed them to avoid predation and to grow older chronologically, these deferred or accumulated mutations kicked in and caused what we call aging. A theory that pushed that observation further was known as “antagonistic pleiotropy,” which made you sound smart just by pronouncing it; it asserted that the same evolutionary adaptations that made a young organism fit to grow and reproduce could become unhealthy later in life. Think, for example, of a gene that allows you to store fat, helping you survive a famine but, in an environment of plenty, giving you type 2 diabetes. The “disposable soma” theory, another powerful evolutionary model, held that, because organisms evolved to favor early reproduction, they tended to apportion energy to fertility and growth and shorted the maintenance and repair needed later in life; hence, aging.
One more theory is the “glycation-glyco-oxidation thesis.” An important permutation of the old “rate of living,” or “wear and tear,” model, the glycation thesis held that the very process of glucose and protein metabolism that takes place in normal cell tissue caused aging by creating stiff molecular “cross-links” in everything from eye lenses to arteries. Many call it the “caramelization” theory, after the observable manifestation one can see when braising, say, a pot roast, although you might think twice before telling dining companions that you are serving glycated bovine tissue tonight.
Two theories, however, had another, unintended effect: they pushed CR off the research agenda by the sheer force of their proponents.
Perhaps the most practically influential theory arrived from a man inspired not by rats or diet or aging or even biology, but by basic chemical processes. His name was Denham Harman, the founder of the free radical theory of aging. Today, as antioxidants are touted by everyone from Colgate-Palmolive to Coca-Cola, it is difficult to believe that there was ever a time when science didn’t “know” that free radicals—atoms or molecules with an unpaired electron, which set off damaging chain reactions with other atoms and molecules—were the prime source of stress on the human cell and hence the engine of aging and disease. But the free radical theory of aging is a relatively new arrival. Harman came to it because, as he told it, he’d always been fascinated with the chemical side of medical science, and by chemistry in general. As a UC Berkeley undergraduate he spent several summers working for Shell Oil during World War II. There he landed in a fairly small department of the petroleum behemoth, the Reaction Kinetics Department, which studied reactive agents in organic chemicals. Reaction Kinetics wanted to put new compounds together and see if the reaction was commercially usable. One of Harman’s discoveries would become the Shell No-Pest Strip. “I became interested in biology in part because I did not understand why some compounds with similar structures were effective, and others were not.”
At about the same time, aging struck him as a possible specialty. An older student, thirty-one, applying to medical school, he was startled at being told, by admissions people, that he was already “too old—we want people who can practice for a long time.” He recalls coming home one day, to his apartment in Berkeley, and seeing his vivacious eighty-year-old neighbor carry on with her business as if she were thirty years younger. He wondered: What really constitutes oldness? Gerontology, the medical subject, was in the air, as was life extension. Once, he recalled, his wife gave him an article on the subject in the Ladies’ Home Journal, titled “Tomorrow You May Be Younger.” It was written by the science editor of the New York Times. “I thought it was extremely interesting and that chemists could be useful in medicine.” Stanford Medical School, perhaps impressed with his pluck and drive, admitted him.
In 1954, Harman was working at the Donner Laboratory at UC Berkeley. He had just completed his MD, and, for a short period as a postdoc, he had a relatively undemanding schedule. He could, if he wanted, just sit at his desk and read journal articles all day. What he now knew about the human body began to come up against what he knew about chemistry. He contemplated aging as one primary flashpoint between the two. How did chemistry and aging interact? He began to make notes. As he did, his mind, as he recalled it, spun in two directions. Moving one way, he contemplated what he knew about the evolution of life, how it arose spontaneously 4 billion years ago when ionizing radiation from the primordial sun turned water and methane and hydrogen and ammonia into amino acids and nucleotides—the basic building blocks of life—through molecular and atomic reactions. Then he spun in the other direction: Everything eventually dies. Could there be a common mechanism? “I felt there had to be some common, some basic cause which is killing everything. We all go through this cycle of birth, aging and death. Everything. Not just you and I and other human beings. Bacteria, everything—nothing lasts forever. I figured there had to be some basic cause that was subject to genetic changes, because we all know that both genetics and environment have some influence.”
If Harman had been at Berkeley ten years later, of course, he would have simply fired up another doobie and waited for all the heavy thinking to go away, but this was 1954, and a young postdoc with a young family had no such diverting options. There was just the older neighbor lady, who somehow was aging really well, and the experience from Shell Oil, where he’d had his first exposure to reactive molecules. Then, around November 9, 1954, after four months of rumination and reading, Harman had an insight: “All of the sudden, the phrase ‘free radicals’ crossed my mind. You know, just out of the blue…. And what I had was a profound sense of relief. I saw something finally.” He jotted it all down in his journal.
What Harman saw that day, and what he propounded and reiterated over a fifty-year career (at this writing, he is ninety-four and still publishing), was this: “Free radicals cause random damage, and, depending on the type of radical, they can cause all kinds of damage from day one.” Free radicals were simply the unpaired, rogue electrons that formed when cells burned fuel. They could cause the physical deterioration that accompanies passing chronological time. Of course, it would all be much more complex than that—there were all kinds of agents the human cell produced on its own to counter such damage (it makes its own antioxidants, for one)—but the basic idea was there: When cells burn energy, their “exhaust” harms human tissue, and that is aging. Through the 1960s Harman documented the basic molecular reactions—the long, dreary bench science that goes with all good theorizing—but there it was, free radicals. In the early 1970s, Harman refined the theory to take into account the free radicals produced by every cell’s powerhouse, the mitochondria, when they convert nutrients into ATP, the basic energy currency of cellular activity; such free radicals could cause damage to DNA inside the mitochondria, and that could cause aging too—not to mention cancer and all manner of mutation disorders. Harman showed how consumption of antioxidants by lab animals increased life span. An industry was born.
But Harman was not a great believer in caloric restriction. Something about it did not feel right to him. He had nothing against expansion of maximum human life span. It was just that humans could not do so via CR, he reasoned, because, while lowered caloric intake likely led to reduced free radicals, it also led to less ATP, and that left humans far too hungry to voluntarily stick to such a diet. Instead, Harman focused almost exclusively on dietary antioxidant supplements. A number appeared promising and seemed to work in rodents. Humans have been taking them for decades now, with little evidence of their effect, either positive or negative. Harman still does not subscribe to CR.
Leonard Hayflick, a driven young microbiologist, conjured the other great aging theory of the postwar period. Hayflick was—and is—a blustery fellow, full of a sense of mission and proud of his penchant for “taking on orthodoxy,” as he put it later. He was also talented, especially with the petri dish. His passion was bacteria in cell cultures. In the late 1950s, it was a red-hot discipline. Cell culture science—the ability to sample and propagate living cells in a medium of glucose and nutrients—had been around since the late nineteenth century, but as Hayflick rose through the ranks of young scientists at Pennsylvania’s famed Wistar Institute, cell culture techniques grew ever more refined. The search for vaccines, which used those techniques extensively, flourished.
In that pursuit, Hayflick tripped on a central scientific dogma of the period. This was the belief that all in vitro cells—cells grown under glass—were, technically, immortal. If they were kept in the correct nutritional and environmental conditions, they would continue to divide and repopulate ad infinitum. Hayflick, playing around with various cultured cell lines, detected a flaw in that theory when he began actually counting the number of times a population of cells doubled. There seemed to be a limit—fifty doublings—no matter how good the conditions were. He tried to publish the results, but no one believed him. Then, in an act of laboratory chutzpah that would become legend, Hayflick played a kind of scientific trick: to all of the doubters of his findings—some of the biggest players in cell biology—he sent copies of the same line of cells and simply asked the recipients to telephone him when and if the line began to die after fifty divisions. Every single one grudgingly did, and so was born what is now known as the Hayflick limit. “I knew I had something then.”
But why did the cells stop dividing? Fortunately for Hayflick, by now the normal human chromosome number had been determined, and cytogenetics—the ability to analyze a cell’s chromosome patterns—was rising. Hayflick had his research partner perform the analysis, and the finding was remarkable. As Hayflick recalled, “The cells had stopped dividing not because of any accident or ignorance about the [petri dish] culture media, but because of some internal clock.” That internal clock, he would eventually deduce, came from the slow but steady wearing away, cell division after cell division, of the ends of chromosomes, dubbed telomeres. Telomeres are a kind of protective cap for chromosomes and, as Hayflick’s theory evolved, came to be viewed as a kind of timekeeper of healthy aging. If you could somehow prevent their erosion, you would stanch the inevitable molecular disharmony that flowed from an uncapped chromosome—aka, disease, impairment, death. Telomere shortening came to be viewed as a cell’s internal meter, a signpost on its way to death.
For cell biologists studying aging, the Hayflick limit and telomere shortening were powerful theoretical tools. They helped shift much of the search for causes of aging from extracellular events—things that happened outside the cell—to intracellular events, things that happened within the cell itself. It was a profound change, but it also led to some serious dead ends. Hayflick’s observations were, after all, in vitro—in glass—not in vivo—in a living organism. Also: not all tissue cells in the human body divide—those of the heart and the brain, for example—yet the heart and the brain still age. How did you explain that? There were huge holes in the theory; although the limit was upheld in many in vitro tests, there were also lots of cells that did not stop dividing at fifty; some stopped at two hundred; others showed little sign of ever slowing down.
There was also the philosophical impact of Hayflickism. If, as Hayflick told it, all cells were fated to age and die, how could anyone in their right mind propose any way to extend maximum life span or slow aging? “We wrongly assume that aging is a disease,” he liked to say. “Aging is just a series of changes that make the body more susceptible to disease.” In this, Hayflick became the scientific ideologue for much of postwar gerontology. If calling aging a disease represented ageism to the social scientists and “gray power” activists, it was a scientific heresy to Hayflick, a throwback to “the dark arts,” as he put it. And if he closed his eyes and imagined what the world would look like with a pill to end aging, or even to slow it down? Why, he would explain, it was not pretty. The rich would dominate not only economically but demographically. Tyrants, he wrote, would live forever. And, perhaps most symbolic of his generation, Hayflick believed that a longevity pill would deprive people of the “joys of aging,” of following the sun in one’s RV and not having to worry about the kids. And the kids! He wondered: What if children decided to age normally while parents did not? “Humans’ ability to tamper with the aging process will produce societal dislocations and effects on human institutions that will be monumental.”
Of course, almost every one of Len Hayflick’s critiques rang with some truth, but to some scientists interested in longevity, Hayflick’s limit began to feel, well, limiting. “I don’t know why Len and that gang are so intent on stopping people from talking about things like life span extension and other strategies,” said Steve Spindler, a cell biologist working on CR at UC Riverside. “It’s like he had one good idea and spent the rest of his life defending it. What good does it do?” It all began to grate. “I found it to be an unworthy hypothesis. Studying cells in a culture cannot give you an organismal [or whole organism] understanding,” said Edward Masoro, a professor of physiology at the University of Texas. “Hayflick was going around implying that his theory was, essentially, aging under glass. I rejected it as a hypothesis.”
In science, of course, it is one thing to reject someone else’s hypothesis, and quite another to propose and test one of your own.
Edward Masoro embodied the quintessence of modern laboratory science. Or at least the popular image of it. A short, compact man with a bristly crew cut and horn-rimmed glasses that dominated his scrunched-up visage, Masoro was a master of mammalian physiology—specifically of the rat and rat metabolism. He knew everything about Rattus, and with such specificity that he once wrote an entire book about how the little rodent metabolized fats. He had done his graduate work in physiology at UC Berkeley in the 1950s, and by the late 1960s, when he first began encountering “gerontology types” at various scientific conferences, he had finally found a home at the University of Texas in San Antonio. There, he became known for his ever-evolving courses on mammalian physiology, and for his excitable teaching style; Masoro once got so caught up in a lecture about rat metabolism that he fell off the stage while talking. At the end of a lecture, people were always unwinding him from his microphone cord. In other forums, he had no qualms about openly challenging colleagues on their theories. His pet peeve was often framed by some form of the exclamation: “But that has almost nothing to do with the actual animal’s physiology!”
The gerontology bug bit when Masoro heard a man named Morris Ross talk about caloric restriction and the retardation of aging. He was impressed with the careful quality of Ross’s work, and he grew intrigued with the subject—or, more precisely, the general ignorance of the subject. “The more I heard, the more it became apparent to me that neither I nor anyone else knew anything about aging.” He also liked the fact that he might bring a fresh perspective—that of metabolism—to what seemed like a settled, or at least ignored, subject. “If you went to the Gerontology Society of America conventions and talked caloric restriction, everyone put it down because ‘everybody knew’ that it was from delayed development, and of course you couldn’t do that to people,” Masoro recalled. “And all the nutritionists ‘knew’ that the CR effect was from the loss of fat.”
To deduce exactly how CR worked to extend life span, Masoro devised a meticulous series of experiments to test a variety of proposed CR mechanisms. They were not the kind of sexy, big-picture experimental pieces that lay readers of twenty-first-century science have come to expect, but, rather, the tedious, step-by-step work of a master laboratory scientist whose research only gets looked at when everyone else is trying to figure out how to design their own experiments. First Masoro set out to define how a normal rat ages: How did skeletal muscle change with age? What happened to fat deposits with aging? How did aging affect the rat’s digestive process and breathing? Having established normal aging parameters for the animal, he then began a series of classic caloric restriction experiments. One group of rats in any given experiment would get to eat as much as they wanted—usually referred to as ad lib—and one group would be restricted to 60 percent of that usual intake. A key component of the regimen was to ensure that the restricted rats received all essential nutrients and vitamins, the goal always being “undernutrition without malnutrition.” They were, in a sense, Cornaro rats.
Right away, the tests began revealing all kinds of health and longevity benefits. The rats on CR not only displayed a longer average life span—by up to 40 percent—but they also had a longer maximum life span compared with controls. This was a fundamental point. As McCay had noted forty-five years before, CR was not merely raising average life expectancy; it was altering something fundamental—maximum life span, long thought to be fixed. CR in Masoro’s rats also consistently delayed or prevented the onset of almost all of the classic diseases of aging, from kidney and liver disease to heart disease and sarcopenia, or loss of muscle mass. CR prevented the age-related decline in insulin sensitivity and glucose metabolism, the cause of adult-onset diabetes.
In rapid succession, Masoro’s tests destroyed the old assumptions about how and why CR extended life span. Was it because CR reduced an animal’s overall metabolic rate—the number of calories burned per unit of body mass—thereby reducing wear and tear on the body? No. Instead, Masoro showed that “the food-restricted rats consumed a greater number of calories per gram of body weight during their lifetimes than did the rats fed ad lib, yet they lived longer.” (It was hard here not to think of one of Cornaro’s more pithy aphorisms: “To eat for long, eat little.”) Was it because CR, as McCay had held, interrupted their growth and somehow slowed down the aging process? No. Masoro showed this by initiating CR in adult rats, then comparing them with controls and with rats restricted right after weaning. The results were unequivocal: The adult restricted rats benefited almost as much as the rats restricted from youth. Was it because the CR rats lost weight? No; in fact, the CR rats that lost the least weight lived the longest. Was it because of some dietary change—in the percentage of fat or protein or minerals in their feed? No. Again, the key finding was that CR’s life-span-extending effects—and one couldn’t help but recall Cornaro’s most basic injunction—came purely from reduced calories, or energy, consumed during the lifetime of the animal. It wasn’t what they ate, it was how much.
So, if it wasn’t any of that, what was it? Between the lines, Masoro detected two consistent trends. One was CR’s impact on sugar metabolism and insulin signaling. Animals on CR showed little of the traditional decline in insulin sensitivity associated with aging. They maintained much lower blood sugar levels. Perhaps as a result of that, they also showed far less glyco-oxidation in vital tissues—the “caramelization” that led to eye disease and arterial stiffening. But what was the mechanism behind that? A growing suspect was stress. Somehow, Masoro and his colleague Steven Austad theorized, the mild daily stress of CR triggered what biologists have dubbed “hormesis”—a beneficial action from something usually considered detrimental. In this case, the mild stress from a voluntary lack of food shifts an organism’s energy use to repair and maintenance of tissue and away from growth and reproduction—a kind of famine response—thereby slowing down the onset of aging. It was all about how the body used fuel. Some called it “energetics.” As if to further underscore the stress thesis, Masoro showed that CR was consistently associated with moderate daily increases in stress hormones called glucocorticoids.
As Masoro digested his data, he came to some interesting metaconclusions, something he generally didn’t do. The more he looked at aging, disease, and CR, the more he came to disdain the old ideological dividing line over “aging as a disease versus aging as a distinct and discrete process.” At bottom, it was all totally arbitrary, he said. After all, a neoplasia, or tumor growth, can also be seen as a loss of homeostasis, or physiological balance. Bone loss is a universal process in all mammals, but when it becomes intense, it is called osteoporosis. “It is totally arbitrary,” Masoro said. But is calling aging a disease a form of ageism? He had no truck with that line, either. “Aging is something that is bad. There is no such thing as successful aging. It is all aging and it is all bad. You cannot mix up science and sociology.”
But science and sociology are fundamentally linked, especially in a society where medicine and consumerism have fused. The question for Masoro soon became: What do you do with all that CR data? On that, Masoro remained an intransigent rodent physiologist. You could not extrapolate it to humans. “I always thought CR was a bad idea for humans,” he said. “Tell them to do that, and they might do anything.”
Roy Walford, the era’s other great figure in CR research, had few qualms about caloric restriction for humans. Like Masoro, he too had parsed CR’s various effects and mechanisms. But there any resemblance ended.
Walford was an imposing figure, with a Fu Manchu mustache and a perennially shaved head long before it was fashionable. He was constantly traveling to faraway places. Once, he traversed some of the remotest regions of India, carrying with him a bag full of rectal thermometers and temperature gauges to measure the subnormal temperatures of long-living mountain yogis. He was outwardly political, covering the 1968 Paris student riots for the Los Angeles Free Press. He had a penchant for theatre—he was a member of a mime troupe—and for art—he painted and sketched and even did video art. He rode a motorcycle and wore a leather jacket and could be seen, any given Sunday, roller-skating along the Venice Beach boardwalk, not far from where he lived, his bald pate shining in the bright California sun.
Walford was obsessed with aging and death. From an early age, he took to quoting from the story of Faust; his daughter, Lisa, recalls that “anyone who knew my father knew what the central theme was: Faust, and the unfair bargain between life and death.” As early as 1941, at age seventeen, Walford was protesting it all. In an article in his high school’s literary magazine he titled “Conquest of the Future,” he complained that “elders have received positively no gain from science concerning expectant life span … but death is not a necessary adjunct of living matter.” It was hardly surprising that he became a physician and a research pathologist. He wanted to know why people got old and died—and how it could be stopped, or substantially delayed.
His research palette mirrored his personality. Walford’s core interest lay in the role of the immune system in aging, but he seemed to know few bounds for exploring the topic. If Masoro was married to the rat, Walford was always dating new and exotic research species. There was his work on “life span, chronologic disease patterns, and age-related changes in relative spleen weights for the Mongolian gerbil”; his articles about the “effect of temperature-transfer on growth of laboratory populations of a South American annual fish Cynolebias bellottii”; and, of course, the family-fun favorite, “life span and lymphoma-incidence of mice injected at birth with spleen cells across a weak histocompatibility locus.” As a longtime colleague put it, “Roy was intensely creative in his explorations—he didn’t know limits—or at least he was not afraid of them.”
Beyond the imposing outward demeanor—perhaps made more imposing by his subdued manner—one other thing was clear: Roy Walford was an outstanding scientist. In a brilliant series of experiments using genetically uniform mice, Walford, usually teamed with fellow UCLA professor Richard Weindruch, fleshed out a broad range of CR effects: how, even when begun in mouse adulthood, CR slowed down or prevented age-related cancers; how it prevented the loss of immune function; how it preserved liver cells and even increased the ability of such cells to function; how it slowed down the loss of gamma crystallins in the mouse eye lens, a key cause of age-related vision problems in humans as well; how CR improved learning and motor skills in aged mice; and, strangely, how it failed to prevent age-related neurochemical buildup. The more Walford looked, the more he saw the connection to human aging. It was exciting. In 1982, writing in the journal Clinical Geriatric Medicine about life extension in mice, he took the proverbial leap out of his old Faustian universe: “With a fairly high order of probability, the same might be obtained in humans. There is no reason to insist that maximum life span in humans is irretrievably fixed.”
He took up CR himself, and began experimenting—cautiously—with vitamin supplements. He assembled an “optimal nutrient” diet and menu for humans who practiced CR, including, among other things, “the perfect CR muffin.” He wrote a book titled Maximum Life Span. He wrote another called The 120-Year Diet. There were experiments on himself and friends; he practiced a one-day-fast, one-day-moderate-eating form of CR. (One of his best friends, the prominent USC gerontologist Caleb Finch, once joked that “whenever I saw Roy, it just so happened that that day was his eating day.”) His consumption of marijuana was legendary, which he might have justified by his interest in its hypothermic, antiaging effects, though one hopes it was mainly because he had a lot of fun with it. He immersed himself in yoga. All of this Walford did while maintaining a vigorous, competitive laboratory known worldwide for creativity and scientific rigor. Increasingly he cleaved to “systems biology,” the study of large-scale interrelationships and ecological dynamics. And so, in 1992, it was hardly surprising that, when a rich Texas oilman proposed to build a self-sustaining sealed environment called Biosphere 2 in the Arizona desert, Roy Walford would be one of the “terranauts.” To Walford, Biosphere 2 was akin to sailing on Darwin’s Beagle. You did not know what you were going to find out, he later said, “because nature was going to ask the questions.”
Although huge sums of money were spent on the three-acre complex to make it self-sustaining, Biosphere 2 never really worked. Life inside was often nasty, brutish, and bitchy. Coaxing the environment to produce enough food for the crew of eight required seventy-hour workweeks, and even then it was not enough. The crew was forced to practice CR. Worse, Biosphere’s artificial lung, down in its vast steel-and-concrete basement, did not produce enough air; the crew was chronically short of oxygen, or hypoxic. They lost weight fast, something Walford knew was bad—it released a flood of toxins from quickly shrinking fat cells. No one was having a particularly good time, despite the media’s portrayal of it as a kind of Shangri-la in the desert.
When the crew emerged in 1994, it soon became clear that something had changed in Roy Walford. He seemed … wound up. “They started calling him ‘the barking Pekinese of med sciences,’” a former UCLA colleague recalls. At science conferences, he was antagonistic to those who disagreed with him. “It was very unpleasant when we would meet, because I inevitably was the guy the media called to question whether CR should be done by humans, and of course I didn’t agree with him. He was always very angry and confrontational with me,” recalls Ed Masoro. “We ended up hating each other.” A few years later, Walford himself began noticing other changes. His way of walking seemed stilted, labored. He grew disoriented. Eventually he got himself checked out. The diagnosis was devastating. He had all the symptoms of amyotrophic lateral sclerosis (ALS), the motor neuron disease known as Lou Gehrig’s disease.
In the four years preceding his death, Walford vigorously milked the Biosphere data for everything he could. It had been, after all, a completely unplanned CR experiment on humans. Studying the crew’s blood tests, he found that the low-fat, nutrient-dense diet in Biosphere 2 “significantly lowered blood glucose, total leukocyte count, cholesterol, and blood pressure.” Just as in his mice. He also detected strange blood reactions to the chronic hypoxia, or low oxygen levels, that the crew experienced; their blood was akin to that displayed by animals that go into hibernation. This, he theorized, may have been because of the crew’s low-calorie diet. Walford was so absorbed by the Biosphere data, and produced so much from it, that, even after he died in 2004 of respiratory failure, publications continued to print new work by him. One of his last articles was in the journal Movement Disorders. Its concern was reflected in its title, “Atypical Parkinsonism and Motor Neuron Syndrome in a Biosphere 2 Participant: A Possible Complication of Chronic Hypoxia and Carbon Monoxide Toxicity?”
The next year, at the CR Society conference in Tucson, Arizona, a number of members drove out to see Biosphere 2, the experiment that may have killed the man who paved their way to a longer, healthier life.
They found that it had been transformed into a tourist attraction.
The next time I saw Michael Rae, he was not spouting “rubbish” to the speaker at the podium, but, rather, embracing his girlfriend, April Smith, also a CR practitioner. We were at CR Three, held in the small banquet room of a Mexican restaurant in downtown San Antonio. Smith had just given a presentation about how the media had come to portray the CR Society—negatively—and she was intent on spinning it all the other way. She took to talking about one of her favorite subjects—Michael Rae. “I mean, I was attracted to Michael for a very basic, superficial reason,” she told the audience. “I think skinny guys are just so hot. But the more I got into CR, and Michael’s experience of it, the more I saw it had a lot of unexpected benefits. Like the end of a monthly menstrual period—what is wrong with that? And also something that Michael talks to me about all the time, which is that, before CR, when he met a girl, the sum total of his thoughts were, you know, ‘I wonder if she’ll sleep with me? I want to have sex with her. Will she have sex with me? Sex with her would be great I bet.’ And ad infinitum. But that, after CR, his thinking is totally different. More along the lines of, you know, ‘she might be an interesting person.’ There’s no comparison, you know.” Had Rae shifted himself into a state of hormesis, in which the low but constant voluntary stress of CR makes you less likely to copulate and more likely to age slowly?
The audience in the banquet room stirred. These are the kinds of things CR people love to hear about, because they are bits of a puzzle about whether the science of CR in mice matches the experience of CR in people. It is a kind of mouse-o-centrism, or, for lack of a better term, “mousomorphism.” Mice on CR have greatly reduced fertility, which seems to track with most CR practitioners’ experience of reduced libido; it is a basic evolutionary, or life history, trade-off. (“It is just not that important anymore,” one longtime restrictor told me.) Mice on CR have slightly impaired wound healing, which seems to hold true for many CR people, although usually only in the beginning of their restriction. The mice have a lowered body temperature; ditto CR people. CR people often complain about a sensitive bum because of the lack of padding down there; mice do not complain. One of the things that does not seem to track, as far as I have seen, is outward display of energy. A calorie-restricted mouse at two years of age, compared with a non-CR mouse of the same age, displays tons of movement and energy. CR people, who claim huge boosts in energy reserves, do not show it. If anything, they are, as Lisa Walford told me, dampened down.
David Fisher, a British member of CR, presents a classic case. I had met him in Arizona and had found him, as most Americans find the British, charming and engaged, even a little wry. But that was only after hanging around him for some time. To the outsider he would appear just as Lisa Walford said—dampened down. And, like so many members of CR, he evinced a slightly naive quality when it came to discussing his early awareness of mortality and aging. “I became aware of aging and mortality as a child and began to dread them. I was always fascinated by science and assumed that one day aging would be cured, but that this may be in hundreds of years, perhaps when the body could be rebuilt molecule by molecule.” He went on: Sometime during the late 1980s, he read about McCay’s experiments and, seeing that the aging process was more immediately malleable, took up CR. “People overemphasize how hard it is,” he told me when asked about it. “It’s only the first five years that are uncomfortable.” He did not smile when he said that. He currently practices a kind of caveman CR, he said. Lots of nuts and berries and some fish and meat. He is about fifty-three—and looks, perhaps, five years younger, albeit a somewhat strained five years younger.
There is also the sense among some that CR is a refuge from a world spinning out of control—a place where one can feel some sense of certainty. One man in his thirties, whom I will call Kevin because he does not like to appear in the media, explained his conversion to CR this way: “It eventually dawned on me that if I could live long enough for the technology to be developed, I could escape this life of quiet desperation in a way that didn’t involve dying. I could have my new brain programmed to avoid, for example, libido—a source of much suffering; and I could have time to gather enough resources to not be dependent on, say, having a job. Thus I decided to try a little bit harder to eat less. I still never said, ‘OK, now I’m on CR.’ I just tried to prepare the most nutrient dense foods I could, and then eat less of them.” So the religious mind comes into play as well. Of overt religiosity there is little among CR people, although Michael Rae practices a form of ancient, bare-bones Christianity that requires dedication to a larger mission in life.
Mainly the cult of science prevails. Stick with any CR conference long enough and you will likely hear about every objection to CR that you can imagine—this because the CR Society seems to invite debate and challenge. It seemed to me a healthy inclination, one that, say, the AMA might try occasionally. On just one afternoon the Society members were told, among other things, that “the CR peace dividend,” or benefit in humans, will be small—six to seven years at most—if one extrapolates from the mouse data to two known human caloric extremes, Okinawan war survivors and sumo wrestlers, then plots that line on a graph of caloric intake per unit of body weight (to which the main murmured response was, “I’ll take it!”); that CR’s much-vaunted effect on free radical damage might actually not matter very much when it comes to actual aging rates, which is sort of like being told you really didn’t have to eat all that spinach in the first place; that studies of CR members’ carotid arteries showed a much slower progression of arterial stiffening than in non-CR people, which had everybody vaguely fingering their throats and smiling; that the growing knowledge of how mice and men differed on CR—CR people, for example, do not show reduced IGF-1 signaling, while CR mice do—was “a troubling finding,” as John Holloszy, perhaps the dean of human CR research, said; and that gene studies of livers from mice that started CR very late in life showed activation of several known extended life span genes—which made me look around for Michael Rae and April Smith, to see if they had heard that and run out to gorge on chips and boink like rabbits. To all of that, the audience listened intently, quietly, and then followed with detailed, nuanced, and utterly bloodless questions. They wanted to know the truth. It was impressive.
But once in a while, a strange thing happens among some CR folk. They begin to think that, besides standing in for lab animals, they can also stand in for scientists.
I caught a glimpse of this during an endless weekend spent at the Tarrytown Sheraton in the summer of 2007. I was there at the invitation of Paul McGlothin, the chief scientific officer of the CR Society, and his wife, Meredith Averill. The meeting, McGlothin told me, would be about “the future of CR.” I had a hint of the agenda. McGlothin had been sending out flyers on the Internet, advertising a “Glucose Control Workshop.” He sent me some suggested reading references as well. The list was heavy on two basic ideas: hormesis, and the growing body of knowledge linking it with insulin signaling. The topic was doubly hot because two scientists, Leonard Guarente and David Sinclair, had shown that CR triggered sirtuins, gene products that enhanced tissue maintenance and, consequently, extended life span. In yeast. The pair had also identified a compound found in red wine, called resveratrol, that seemed to do what CR does. It seemed to work for obese mice on a high-fat diet, a model—let’s face it—for a third of the U.S. population. Maybe one could get the life extension action without giving up food and sex and a round ass. So the talk in the CR community was all about “CR mimetics”—of the long hoped-for CR pill. All of this had led McGlothin to proclaim to me, in one triumphal phone call, that “the future of CR is all about cell signaling!” That the science is far from the try-this-at-home conclusiveness that the FDA, let alone your own doctor, might accept? McGlothin had an answer: “You just have to find the right doctor!”
A former professional clarinet player turned ad man, McGlothin, whose voice and elastic features conjure a kind of low-key Don Knotts, had been practicing CR for fourteen years when, he said, he’d grown weary of “the old image, you know, the guy in the dark corner office with his wheat germ who’ll live a hundred years and hate every minute of it.” As he spoke, he began to physically “perk himself up” in front of me, as though to prove what he was saying. “That’s not us! That’s not me! I mean, I run a superedgy ad agency, superedgy, and a lot of my employees practice CR too. I am positive that a fifty-nine-year-old guy like me could never be performing so well without it. And that’s the way with most people in CR. The image is out of whack, big-time!” (As David Harrison, one of the leading mouse longevity researchers, later noted to me on my blog, “At 59, he should have energy, whether he is on CR or not! He is only 59!”) There was something stilted and poorly rehearsed about McGlothin’s little diatribe, but I let it go.
As if to further prove his point, McGlothin had thrown the workshop net wide to recruit new members. It seemed to have worked. Although about half of the attendees were classic, pencil-thin restrictors, including charming David Fisher and dark libido “Kevin,” the rest were Joe and Jane Averages, several with the requisite doughy American physiognomy. Two had outright paunches. They were newcomers. There was a tall fellow named Eddy from Brooklyn, who was there because he had “just finished chemo, and I’m gonna rebuild myself from the ground up, and this seems like a way to do it.” There was a gauzily clad woman named Julia from Seattle who had suffered a “total digestive collapse” after picking up a parasite in India. And there was a tiny expatriate Sony executive named Dave from Tokyo who explained, as he ate his dinner, gram by gram from a portable scale, that “I just had a close relative die of cancer, and if there’s anything I can do to prevent it happening to me, then so be it. I’ve been doing this for a year now and thought I’d come to get a full indoctrination of state-of-the-art CR.”
On Saturday morning, the indoctrination, McGlothin style, commenced. He focused on two elements. The first was what he called tight glycemic control. “You’ve got to control your blood sugar and insulin,” he said, encouraging everyone at the conference table to pick up their new (drug-company-provided) glucose meters, which came in their conference goody bags. “You’ve got to keep your blood sugar from slamming up”—he tapped the desk and pointed to a chart projected on a screen—“and down! You’ve got to keep it in a narrow range, because if you don’t, you start sending a message to your body to make too much of a lot of bad actors, things like IGF-1 and TNF, things that are markers of aging and chronic disease.” And as if to leave the Atkins option slammed closed, he advocated limiting the amount of meat we eat. It was vegans, after all, who had the lowest IGF-1 counts.
It was morning, and breakfast (boiled sweet potatoes and green bean mash and fruit) awaited, but before “feasting,” McGlothin insisted that everyone first take a blood sugar reading to get a “baseline.” This resulted in lots of fumbling with the finger-pricking equipment (“Oww! Three hits and still none. I must be anemic!” “Fuck! Baseline this!”). After having everyone record the number that appeared on their meters, McGlothin instructed the group to eat a “tease meal”—a few chunks of sweet potato only—to provoke an insulin response; the idea was to have your insulin up and ready to “smooth out” your blood sugar when you finally eat your “real” breakfast a while later.
Then, just as several attendees looked as if they were about to sprinkle salt and pepper on their chair cushions and dig in, McGlothin and Averill popped up out of their seats. “Time for a walk,” he chirped, donning a floppy canvas hat. “It’s a way to make sure you’re setting yourself up for good cell signaling. Sometimes Merrill just jumps rope with weights on her back for a few minutes, but we can just walk.” Everyone walked, then ate, kind of.
Lifestyle makes up the second element of McGlothin’s “New CR Way.” We were shown the correct way to prepare for sleep—at least three hours in dim light before sleeping—meditation techniques, supplements that raise cognitive performance. There was a long discussion of resveratrol, and, totally straight, “how do you do your own home blood testing?” But none of us was quite prepared for what McGlothin proclaimed as “the next phase” of CR’s evolution. After the Sunday morning breakfast “tease,” we found out. “I call it the CR daily fast,” McGlothin proudly told a few slightly puzzled newbies, one of whom muttered, “I thought we were already fasting!” The essence of it, McGlothin went on, was simple: People should consider going for a one-hour walk instead of dinner every day. That, he said as he displayed another round of slides, would result in all kinds of positive cell signaling benefits, ranging from memory improvements to, of course, better glucose control. “Merrill and I got the idea a few years ago, when we went for a long walk after our main meal at lunchtime and then just didn’t eat the last meal of the day. It felt great. And we sort of looked at each other and said, gee, you mean for forty years we ate dinner instead of doing this? Why did we do that?”
The room was silent.
McGlothin and Averill beamed idiotically.
Suddenly, I couldn’t take it anymore. Because you are an effing human being! I wanted to yell. Sitting there, I wondered if McGlothin had gone completely bonkers, if someone had stolen a couple of cards from his deck, if his elevator didn’t go to the top floor anymore. Perhaps it was because I knew that a number of the things he was saying were just wrong, or so equivocal as to be effectively so. He was recommending a low-protein, as well as low-cal and low-carb, diet. This he justified on the basis that protein restriction was found to be better than caloric restriction in reducing insulin-like growth factor 1 levels in Chinese famine survivors, which everybody just knew was the way to go because transgenic mice with low IGF-1 live the longest. Besides the fact that such a regimen might make you wish you were in a Chinese famine, the advice runs counter to just about every single CR experiment to date. People, like regular mice, need adequate and regular protein; it is better to eat a little more than too little of it. Similarly, humans, unlike mice, need IGF-1 for a broad range of things, maintaining heart and brain cells, for one, and fighting infection and injury, for another. In fact, in the limited data that exist on CR in humans, hadn’t we learned that IGF-1 is not lowered? What did he have to say about that? Why, McGlothin didn’t even drink wine, the divine milk of the aged, for God’s sake. I caught him by the arm on the way back from another tease-walk, or whatever, and asked him about that. “It’s not worth it,” he said, his rubbery countenance a little tighter. “I had a neighbor who practiced CR. His only vice was booze. He died at age seventy-nine from pancreatic cancer.”
Oh.
Now McGlothin brought the room to a hush and asked everyone to take off their shoes to “feel rooted in now.” He explained how meditation was the fourth new component of his CR Way, and then led everyone, eyes shut, in a guided visualization of their internal body. “And now, let’s just thank our pancreas,” he intoned solemnly, “because with the tease, the walk, and the meditation, it has been producing insulin and getting it to the right level.” He went on to the other organs. Each … and … every … organ.
Sitting there, thanking my colon, I couldn’t help but wonder if this is what Roy Walford, a cosmopolitan Renaissance man with a great sense of conviviality, had in mind when he started CR. It couldn’t be. Had Ed Masoro been right when he declared that CR was not for people, who “if you tell them to do that … they’ll do anything”? I thanked my left ventricle and rubbed my eyes, praying for an escape. There was silence.
Then, from outside the conference room, came sounds of life. The non-CR world had roused itself. Food platters banged and juice glasses tinkled. The smell of bacon wafted about. At the Sheraton Tarrytown, Sunday brunch had commenced.
“Morty?” drifted in one Brooklyn voice. “Do you want the French toast or what?”
And that, really, is the question, isn’t it? Do you want the French toast, or not? Do you want your extended life to be a life, or not? There had to be a better way than the cold way. The hungry way. The flat-ass no-sex way. And sure enough, American medicine and American business were cooking up something really hot.
[Illustration: The Six Stages of the Life Course—Silhouette de l’homme régressif; Silhouette de l’homme progressif]