In a 2016 interview about his memoir, Bruce Springsteen, age sixty-six, was asked by fifty-six-year-old New Yorker editor David Remnick, “Why now?”
Springsteen let out a long breath, making an “oof” sound, and chuckled. “I wanted to do it before I forgot everything, you know.”
Remnick laughed heartily. The audience watching the live interview cheered and clapped.
“So it’s getting a little edgy with some of that,” Springsteen added, “so I thought now was the time.”
When that interview took place, Springsteen was coming off a sold-out tour, playing exceptionally long sets—over three hours of continuous, highly physical singing and cavorting. Night after night, he played in cities around the world. When his book came out a month later, it topped bestseller lists, and Springsteen launched a new tour, or rather two tours: one book, one concert. By virtue of either his age or stamina, a case could be made for Springsteen as still decidedly middle-aged, but the artist himself clearly felt that whatever old was had begun for him, and he saw, or thought he could see, where it was headed. Somehow neither he nor the New Yorker editor, ten years his junior, recognized the irony of positioning Springsteen partway down a nefarious spiral when the career details they were discussing suggested not just a new high point but a remarkable addition to his artistic skill set. After decades of renown as a musician, he was now also recognized as a talented writer, a fact that introduced new options and opportunities for his future.
A writer doesn’t have to jump up and down or dance along a stage and into an adoring crowd. Then again, not all musicians do that either. Springsteen could sit at a piano, or on a chair cradling his guitar, or with just a microphone and a small spotlight, the audience’s entire focus on his face, words, song. That would not be a traditional Springsteen concert, but would it be worse, or just different? Would it tarnish his legacy and shrink his audience, or expand it, showing range and adaptability? He’s had ballad albums before (Tunnel of Love). The point is Springsteen has options, as many people do, though his are significantly different from most people’s. A different sort of concert, perhaps playing a modified or different sort of music, is just one of Springsteen’s options. He also could sit at home with a mouse and keyboard, or a pen and paper, or a voice recorder, or an assistant taking dictation, and he could write. Such transitions are often framed as devolution, but that’s only the case if the frame is constructed from static expectations. Build it instead with an understanding of the human life cycle, and it looks more like evolution: a gradual process in which something develops into a different form.
If not quite three score and ten, Springsteen was certainly within the long-accepted territory of “old.” For two thousand to three thousand years, from the time of Socrates and the Athenian Empire in the west, and much earlier in the Middle East and Asia, old age has been defined as beginning around age sixty or seventy. In the United States, sixty-five became the federal demarcation line between middle and old age with the launch of the Social Security program in 1935. The group that developed the program, the President’s Committee on Economic Security, chose sixty-five partly because it was consistent with data on prevailing retirement ages at the time and partly because it was the age already selected by half the existing state pension systems (the other half used seventy). Although retirement norms, longevity, and actuarial outcomes have changed since the 1930s, sixty-five has endured in many minds as either a strict divide or a marker of having entered the transition zone headed toward old.
For most people, early, middle, and advanced old age are significantly different. In our current conceptualization of old, physical degradations and lost options are its sine qua non. That’s why, until those things become overwhelming, many people don’t think of themselves as old, even when most younger people would swiftly and definitively put them in that category. When people arrive at the stereotypical version of old, they sometimes no longer feel like themselves, although for most of us the transition to old happens gradually over decades beginning at age twenty. The changes are both positive and negative, though we tend to focus on the latter. Those losses and diminutions are imperceptible at first, then easy to disregard, then possible to work around, and, finally, blatant.
Springsteen signaled that he was aware of the negative changes in his own mind and body. Once you reach a certain age, it’s hard not to ask: Will my mind go first, or my body? Will they both go, or will I get lucky? When will it happen and how quickly?
Aging begins at birth. In childhood, the changes are dramatic. In those early decades, the fact that living and aging are synonymous is lost, couched first in the language of child development and then forgotten in the busyness and social milestones of young adulthood. After a friend moved to another state, I didn’t see her infant for nine months, at which point I found myself with a toddler, not a baby. Stages of child development are predictable and universal across cultures, except in cases of grave illness or disability. As we move through the life span, the boundaries between stages get blurred. Although people debate whether life begins at conception or birth, childhood starts with a big breath upon emergence from the womb, its beginning uniform. Its end is less clear. A ten-year-old is always a child, but eighteen-year-olds can be teenagers or young adults, depending on their behavior. Some people achieve physical, emotional, and intellectual maturity in their teens, others in their twenties. Females tend to get there before males. Still, most people become adults in the same several-years-long window of time.
With the arrival of the twenties, development seems to abate, taking on the imperceptible pace of hair growth or melting glaciers. The changes that defined us as we moved from infant to kid and teen to adult appear to stop. But unseen or not noticed are not the same as not happening. Changing continues throughout life—physically, functionally, and psychologically. At some point, we cross into the territory of “middle age” and discover aging isn’t just a characteristic of that mythical land called old. Sometimes the evolution is welcome, bringing a greater comfort with self, a deep-seated confidence and greater security about what is and has been. At the same time, accumulating physical changes collude in ways that can complicate, distress, and impoverish. A person’s identity can feel challenged.
Even in the decades when change seems slow, almost irrelevant, it is present, significant, ongoing. In my thirties, I had the straight white teeth of a person fortunate enough to have had braces in her teens and dentistry throughout life. By my early forties, my little front bottom teeth began to overlap as if so much time had elapsed that they’d forgotten their training at the hands of metal braces, headgear, neck gear, rubber bands, and retainers. As they overlapped, I saw along their edges the imprimatur of decades of morning coffee, the occasional glass of red wine, and the erosion of daily eating and drinking. Yet my dentist says my teeth look great. She can tell I faithfully brush and floss. What she really means, I know, is that they look great for someone in her fifties, not that they look as good as they once did or great in the absolute sense. At some point the caveat, the conditional clause, goes unspoken.
At the age of apparent aging, the once distant land called “old” no longer seems foreign or exotic to me. Daily, my joints offer protests. Sometimes one has a solo; more often there’s a noisy blur of voices, the new background music accompanying my every movement. I regularly switch among my three pairs of multifocal glasses, each with a different function. I have a faulty gene, a history of cancer, and seven visible surgical scars, and am now missing several nonessential body parts. These days, when something goes wrong in my body, I don’t just consider how it might be fixed; I worry that fixing it won’t be possible and that my new debility will not only endure but beget a cascade of injuries and additional disabilities. In my head, I hear the childhood song about how the foot bone’s connected to the leg bone, the leg bone’s connected to the hip bone, and so on. Although it’s not yet clear how it will go down, I can now imagine me = old, even if I still sometimes register my relentless progress toward citizenship in that vast territory with surprise.
Those physical changes are real but tell only part of the story. For me, the rest of the saga goes something like this: though I have yet to take up permanent residency in old age, I have acquired an intimate familiarity with its culture and customs, and I’m looking forward to it. I imagine its early years, and if I’m lucky, decades, much like the best parts of midlife: the solid sense of who I am and how I want to spend my time, the decreased volume of the sorts of ambitions easily confused with the hollow vanity of social recognition, the greater time and energy for generosity and attention to others, the confidence to stick to my convictions, the exciting new goals and profound sense of life satisfaction. Similar sentiments1 are found with aging the world over.
It may be that after the great celebrations of childhood milestones, we feel surprised and uneasy about our quieter progression through later turning points. A friend in his late thirties thought it absurd that his peers didn’t want him to refer to himself as middle-aged when he so obviously was. I looked at him and agreed; he’s far from old, and also clearly no longer young—he’s somewhere in between. Out the other end of adulthood, my mother says aging isn’t really that bad until you hit eighty, then there’s a nosedive. She said this as we had dinner at the assisted living facility where she moved because of my now-deceased father’s needs, not her own. Seconds later, frustrated because we hadn’t been brought water, she jumped up, grabbed our glasses, and darted across the dining room to fill them. She wasn’t her old self, but she didn’t seem to me like a person in a steep downward plunge. Yet, to her, a threshold had been passed into a territory of greater risk and vulnerability.
By far, the least fixed dividing line separates adulthood and old age. With good health and good luck, some people don’t seem to be or see themselves as making that transition until their late seventies and occasionally later still. By contrast, major stressors such as homelessness, poverty, or incarceration can cause accelerated aging, making others “old” in their fifties, with cellular changes and risks of chronic disease and death akin to those of more fortunate people many decades their senior. And still, use of the word old for fifty-somethings requires quotation marks. We define age sometimes as a definite place in life’s chronology, other times as a bio-psycho-social state, and mostly as an amalgam of the two. Using that logic, a frail seventy-two-year-old is called old while a marathon-running seventy-two-year-old executive is not. In reality, both are old, and even if the executive continues her current activities in her eighties, she’ll be “old.”
Because aging is a long, stealthy process, a person’s arrival at old age is less a switch thrown than a series of ill-defined thresholds crossed, the transition often first noted by others. Most people over thirty, and certainly those forty and older, will recall the first time “mister” or “lady,” “ma’am” or “sir,” meant them. As our third decade of life cedes to our fourth, aging seems to accelerate. By the time the fifth decade fades into the sixth, the resulting accumulation of physical changes that define adult development transitions from inconsequential to subtly manifest: the crow’s-feet or balding pate and tricky right knee, the friends with cancer, the talk among your peers about ill and dying older relatives. By the ebbing of the sixth decade, if not sooner, the changes are undeniable. Not long after that, they transition to conspicuous, each decade seemingly more profoundly marked than its predecessor. On a daily basis, nothing seems to change, but look back a year or five or ten, and the transformation is pronounced.
There have always been old people. Egyptian hieroglyphs from 2800 B.C. depict a bent person leaning on a staff. For over nine hundred years beginning in 775 B.C., the Greeks put forth an array of theories about aging. As their ruins attest, the ancient Greeks had systems, roadways, and efficient processes for sewage removal. Hygiene was good and most hard labor was done by slaves. Aristotle may well have noticed that the slaves, with their long hours of physical work, poor access to food, and constant exposure to the elements, aged more quickly than the citizens in his circle. He suggested that aging occurred because of the loss of pneuma, an internal heat or vital spirit that was gradually consumed over time. Because there was a finite amount, older people had less, which made them more vulnerable to disease, and although slaves spent theirs more quickly than scholars, everyone eventually ran out.
For most of human history, people didn’t expect to grow old,2 and those who did often outlived their children. Because old people made up just a small fraction of the population in societies rich in children and younger adults, there was little point in considering them when building houses, making laws, designing cities, developing a workforce, or training doctors. Now most people born in developed countries can expect to be old, and there are more old people than at any time in human history. Old age also lasts longer and includes many more healthy years. Unprecedented numbers of us are or will be doing in old age most of the things younger people do, though sometimes in different ways, as well as many other things that aren’t possible earlier in life or in shorter life spans.
In societies that identified themselves by their traditions, their past, and their religion, “the elderly, closer by birth to the sacred past and by death to divine and ancestral sources3 of power,” had prestige and a clear, important social position. Today, when the past is viewed as irrelevant and death is more often seen as an ending or abyss than a chance to be with God, being old lacks both those charms. Even middle age is dreaded. Lydia Davis captured this sentiment perfectly in a one-line short story4 called “Fear of Ageing”:
At 28,
she longs to be 24 again.
Meanwhile, in my fifties, I find the idea of returning to twenty-four horrific. I don’t miss the stress or insecurity or posturing, all those things that at the time often felt—deceptively—like potential and strength and opportunity.
Old age has boundaries and landmarks that are both real and subject to interpretation. We reach “no longer young” decades before becoming old, and what different people and cultures count as a long time varies widely.
As with pornography, we know advanced old age when we see it. But the exact inflection point between middle age and old age is hard to pin down. It might even be impossible, both in an individual life and for our species, given the plethora of biological markers and their unpredictable behavior and interactions. Nor is culture the only other notable piece in that elusive equation. The traits that signal emergence from the liminal zone where adult gives way to old vary in the eyes of beholders. Diagnosed with cancer at age sixty, my mother resigned herself to dying, saying it was okay because she was old and had had a good life. A quarter century later, she looks back on her thoughts then, amazed at how both her perspective and old age itself have changed in the intervening decades.
PERVERSIONS
Stocky and not quite six feet tall, Clarence Williams Sr. was a recently retired seventy-two-year-old attorney who always had a book in his hand or lap. One week he’d been active and healthy, and the next he was my patient on our hospital’s cancer service. Although he didn’t have the worst kind of cancer, in 1992 all the treatments we had on offer earned the word brutal as one of their descriptors.
I looked forward to seeing Clarence on morning rounds and in the afternoons when I needed to give him test results or check how he was handling his many treatments and their side effects. He was brave and kind in many small but important ways, not the least of which was his attitude toward me, one simultaneously avuncular and respectful, even though I was a novice doctor, young, and female—three states that put off some patients. I’d like to think his generosity of character is why I remember him so well all these years later, but I suspect it’s because of what we put him through.
The oncologists started Clarence on chemotherapy within hours of his arrival. I gave him medications for nausea and pain, antibiotics to protect him from infections, and diuretics to remove the excess fluid that built up all over his body. Most days, after his labs came back, I ordered infusions of potassium and phosphorus; some of the treatment drugs, as well as some of the medications for the side effects of the treatment drugs, led to loss of essential elements. The kidneys, those small, paired organs tucked under the rib cage on the lower back, serve as the body’s garbage disposal system. When properly functioning, they remove toxins and waste from the blood and excrete them in the urine, sending cleaned blood back into the body. If you imagine the kidneys as filters, the effect of the chemo was to enlarge the holes in the mesh such that certain molecules like potassium slid through. With low potassium levels, people have fatigue and painful muscle cramps, and their heart can slow to a life-threatening rate. With lethal potential, various minerals poured out of him, and I poured them back in, trying to keep up.
Meanwhile, ulcers formed in Clarence’s mouth and intestines, causing them to bleed. Despite the medications, he had nausea, diarrhea, and pain. His skin blistered and peeled. Antinausea drugs weren’t as good then as they are now, and he vomited so often that we used intravenous fluids to keep him hydrated. As days turned into weeks, his eyes dulled, his glasses became smudged, and his skin looked more tan-gray than brown-black. Despite his bloated body, the bed seemed to engulf him.
That was when the cancer doctors decided he needed a colonoscopy. They wanted to see how his intestinal lining was holding up and how much more chemotherapy they could give him. As reasons for ordering a test go, this was a pretty good one, since it would provide information to guide our next steps. As the team intern, my job was to make the test happen. The problem was that by this point in his treatment Clarence had trouble sitting up, and he wasn’t eating or drinking much of anything. He needed the help of an aide or two to get to the bathroom only five yards from his bed. To clean out his colon for the test, he would need to drink four liters of a liquid that looked clear but made people gag, then endure hours of running to the toilet. I looked at him, and I looked at the huge plastic container of bowel cleanser, and I thought: This won’t work.
The oncology fellow came to the same conclusion. His solution was to order a feeding tube through which the liquid could be injected directly into Clarence’s stomach. On the surface, this was a good plan. Usually a patient had to drink sixteen eight-ounce glasses of the prep liquid, one every ten minutes for nearly three hours. Clarence sometimes took a few sips of juice or bites of soft food, but even as the worst of the side effects from his first round of chemo subsided, his decimated appetite and ongoing throat discomfort made drinking large quantities of anything impossible. With the tube, we could put the bowel cleanser straight into his body without him having to drink it. Such tubes are used fairly routinely in hospitals. I had already inserted several and cared for many patients who had them. I understood their uses and benefits, and I hated them. To get to the stomach, the long, hollow cylinder of flexible plastic first had to be inserted up through Clarence’s nostril, then make a 180-degree turn and drop down the back of his throat. There, it would need to enter the right opening, the one for his esophagus, rather than the adjacent one that led to his trachea. In Clarence’s case, this was particularly important, since the lungs are a place where you definitely do not want to put four liters of fluid. Most people find both getting the tube in and the reality of having it in their nose and throat quite uncomfortable. Still, sometimes it goes in quickly and easily enough.
Sometimes. Not surprisingly, many patients also hate these tubes. During insertion, the tube doesn’t know it’s supposed to make a downward turn and often tries to keep going up, digging into the soft tissues in the back of the patient’s nose and throat. Even when it goes in smoothly, people often gag or hover on the brink of gagging. People who are confused just pull it out … unless their arms are tied to the bedrails, in which case their existence is reduced to something that looks suspiciously like torture. There is, after all, a plastic rod where it doesn’t belong, so the body says: No. That was Clarence’s body’s reaction and also mine when I saw what the tube was doing to him.
Worse still, the tube insertion was just the beginning. Clarence’s nostril itched, swelled, bled, and dripped. His eyes watered. He had a choking sensation and searing pain as the tube pushed against the chemo-ravaged back of his throat. He was torn between wanting to swallow the irritant and not wanting to swallow ever again because it hurt so much. When a nurse began pushing the fluid through the tube to his stomach, his belly bloated and churned. His nausea worsened. He retched. She slowed down but kept going. An hour later, the fecal urgency began. If you’re fairly healthy and recently turned fifty, earning yourself a screening colonoscopy, such urgency is manageable. But when you’re seventy-two and have been in the hospital for weeks, when your cells have been under siege from chemotherapy, when your muscles have shrunk from disuse, and when, because of all this and more, your cancer and its treatment are getting the best of you, well, then, even getting to a commode placed beside your bed can seem about as possible as running a marathon.
When the pressure built inside Clarence’s belly, he pressed his call button. When no one came, he called out with his weakened voice. And then? Rarely does anyone come quickly in a hospital. Everyone has too much to do, and nurses and aides can’t abandon whomever they are caring for at that moment unless someone else’s life is in jeopardy.
Clarence knew what was about to happen, and he hated it. He considered getting up, but knew he would fall, and also that if he hurt himself badly enough by falling, the chemo and all his suffering would have been for nothing. So instead he lay there as a warmth seeped around his lower torso. His chemo-raw skin stung. He closed his eyes, though less from the discomfort than from his shame. Only a small child shits himself, he kept thinking, and if that was where he was at, then he had come full circle. At least, that was how it looked to me when I checked on him. His eyes and expression said he knew his life was almost over, and this was how it was going to end—alone, miserable, and undignified.
In medicine, a colonoscopy is a “minor” procedure. Although major procedures are major for everyone, the same is not true of those labeled minor. That designation, based on the procedure’s difficulty for both doctor and patient, does not take into consideration the particulars of the patient receiving it. It also encourages the use of such procedures in people and circumstances where they do more harm than good,5 most often in the very sick or very old. When doctors do discuss the risks and benefits with patients, too often it’s to meet a legal requirement rather than to inform, inquire, and collaboratively conclude. The focus, other than getting it over and done with, tends to be on side effects and adverse incidents. We don’t have language or record-keeping mechanisms for the experiential facts of procedures, the in-the-moment trauma and its subsequent distress.
Clarence Williams entered the hospital as a “young-old” person who clearly enjoyed his life. In his few weeks on our cancer service, he became a prototypical old man, sickly and frail, his coming death writ on both body and spirit. Witnessing this horrified me, but like so many of us I said nothing and continued doing my job. Sometimes he and I would simply look at each other for a few moments after completing our usual tasks and conversations. In those moments, we discussed all that was never said aloud in a wordless universal language unrecognized by my profession.
That month, I watched the oncologists save many lives. All ages and cancer types. I also watched them ruin many lives. All ages and cancer types. That is medicine, and that is life.
Clarence’s existential suffering never came up on rounds. Instead, we talked about his chemo cycles and potassium level, his symptoms and next procedure. Eventually, we began talking about when he might leave the hospital, headed most likely to a nursing home, given his weakness and poor prognosis. We hoped that he would gain weeks or months of life from the chemo—an unlikely scenario, seeing how sick it had made him, though we couldn’t know for sure, since the regimen hadn’t been studied in patients his age. We didn’t discuss the obvious: that he was unlikely to spend any gained time feeling well and doing the things he enjoyed. Maybe he would have wanted the treatment anyway, as some people do and in case it helped. This was years before we knew that some people, particularly older adults, live both longer and better without treatment for certain clearly fatal cancers. Or maybe Clarence wanted the chemo and then, once it was clear how bad it would be for him, changed his mind but couldn’t find a way to get off the high-speed treatment train. Perhaps it didn’t occur to him to ask about other options.
It didn’t occur to me to offer any. Partly this was because it wasn’t my place; the oncologist was his primary doctor and I was his lowest assistant. But the real reason was more fundamental. I didn’t know what the alternatives were or how to arrange them. On the cancer service, we learned only about chemo and radiotherapy. Although age is the greatest risk factor for cancer, and cancer is the second most common cause of death, geriatrics and palliative care were not part of oncology training programs. With few exceptions, they still aren’t. A doctor is unlikely to make assessments and recommendations she doesn’t know how or when to make.
It is our right as Americans to demand care that makes no sense. To insist that our bodies be crushed, disfigured, and disrespected, that what once was sacrosanct be intentionally and systematically desecrated. That American right asks doctors to do the impossible, the ugly, the appalling. Some enjoy the sport of their procedures and expertise; for others, motivated by a desire to heal and help, doing such things erodes the softness of them, leaving wounds they cover with callus and corruption. It is the war zones of our body politic and the vast wastelands of our health care system that allow us to commit such travesties and label them care.
The woman behind me in exercise class was pretty, maybe even beautiful, one of those women whose face looks better at eighty than mine did at twenty. Yes, I could tell she was well into old age—dyed hair, plastic surgery, and makeup notwithstanding. But she looked good, and I became even more impressed as she lifted weights and did planks and push-ups, squats and crunches. I noticed she couldn’t fully straighten her arms or legs and thought: She’s so fit and still she has contractures. The youngest she could be, I decided, was late seventies; more likely, she was in her early eighties. I wondered if she’d taken up exercise late, or whether tendons hardened and shortened in some people despite regular exercise. But I also had a moment of shock about forty minutes into the workout when we lay down on our mats. As her hair flowed away from her forehead, pulled by gravity toward the floor beneath, the contrast between the lustrous blonde-brown of it and her translucent skin seemed wrong. Worse than wrong, it was disturbing. Without the hair’s protective framing, I saw where her skin had been pulled and tucked and how it fought with itself, surgical residua pulling one way and gravity another. Suddenly, she didn’t look pretty. She looked like a mannequin in a horror film. At some point, when you take one thing and try to make it another, you run the risk of the grotesque. Probably they didn’t tell her about this risk; maybe she didn’t care. Almost everyone values the present more than the future.
An Internet search of the term anti-aging yields over forty-six million hits. Among the first of the many items that come up are lists of tips, secrets, and routines (some “recommended by doctors”), beauty products, and clinics that promise to help minimize the impact of aging on skin, body, and mind. The most frequently used words include prevent, reversing, and corrective, followed by age spots, hormones, and wrinkles, though younger-looking, refreshed, lively, and robust are also popular. Much of this language is borrowed from science—smart marketing that lends legitimacy and an aura of truth, rigor, and objectivity to what is mostly cosmetics. It also reinforces, overtly and insidiously, the idea that aging—even though we are all doing it all our lives—is bad, that old is ugly, and that evolution over a lifetime is evidence of failure. They offer the hope6 of an old age absent all that leaves us feeling unattractive and all that we fear.
In the twenty-first century, many scientists have concluded that tackling human health one disease at a time makes little sense. Incredibly, even if we cured all of today’s big killers—cancer, heart disease, dementia, and diabetes, to name just a few—we would only gain a few extra years of life. Our parts would still wear down and out. (As Oedipus said, “Only for the gods / Is there never old age or death! / All other things almighty time confounds.”) According to this relatively new “geroscience hypothesis,”7 since aging is so closely linked to illness, debility, and death, the best way to address those problems is by interrupting the aging process8 itself. That approach could allow simultaneous prevention (or, more likely, delay) and treatment of multiple aging-related diseases and functional impairments, from osteoporosis to diabetes, heart dysfunction, and frailty. In the pipeline already are treatments like resilience therapies for high-risk older patients, making them less frail and vulnerable to disease, and medications that would remove inflammatory protein-producing cells that harm nearby tissues. The goal of most such treatments is to increase our healthy years, or “health span,” rather than our life span. Of course, some people would like to do both.
The search for eternal youth dates back at least to 3000 B.C. in Babylon, when Gilgamesh stated that a long life could be achieved by pleasing the gods with prayer, heroism, and sacrifice. Ancient Chinese emperors sought an elixir of youth, and ancient Hindu writings, the Vedas, hinted at alchemy that offered not just the promise of ongoing vitality but an actual return to youthfulness. In Europe, the idea waxed and waned in popularity over centuries. In the fifth century B.C. Herodotus wrote of a people who all lived to 120 years old9 and claimed their secret was bathing in a particular fountain. In medieval times, a Golden Age or Place of eternal youth was sometimes presented as having once existed and other times as still existing but hidden, so it or the secret to it needed to be discovered.
Others focused less on youth and more on longevity. In thirteenth-century England, Roger Bacon drew on ancient texts and Christian beliefs in the natural immortality of humans before the Fall to posit that proper behavior could extend the human life span to 150 years. If future generations continued the same beneficial practices, he also suggested, human lives might reach three, four, or five hundred years. The same themes recurred over time: seeking youth, living longer, and restoring (sexual) “vitality.” Approaches often echoed earlier periods and beliefs about aging. A long-standing view, derived from Galen’s waning vital force theory, held that an element or humor—breath, blood, semen—from the young could be used to improve the health, energy, or beauty of the old. Invoking that reasoning, some recommended living or sleeping with the young to draw warmth from their proximal bodies. (The latter option may have been popular for reasons other than health …)
In 1888 Charles-Édouard Brown-Séquard, a famed French physician, claimed to have rejuvenated his septuagenarian self with injection of extracts of animal testes. Around the same time, Élie Metchnikoff, a Nobel Prize–winning Russian father of modern immunology, believed hormone injections were among the essentials for the prolongation of life. In 1907 his book The Prolongation of Life made hormone injections popular10 in many countries, most prominently Germany and the United States. Serge Voronoff, while disdained by his French medical colleagues in the first half of the twentieth century, achieved great public popularity for his work on glandular grafts and injections of monkey hormones for rejuvenation of older people.
In the twenty-first century, it’s the specific techniques we plan to use to achieve longevity that have changed, not the goal itself or even many of the scientific strategies.
In organisms from yeast and roundworms to mice and nonhuman primates, caloric restriction11 has markedly improved health and extended the life span. It lowers body fat, delays immune system changes, improves DNA repair capacity, and much more. In one article, in addition to the usual graphs, there are photographs of two sets of monkeys, both twenty-seven years old. The monkeys fed a usual diet look old, with wrinkles, sunken faces, and lost muscle mass and hair, while their calorie-restricted peers appear young and healthy. The restricted ones also had better blood sugar and cholesterol levels, and lived longer. At age thirty, fewer than a quarter of the control monkeys were alive, compared with 70 percent of the calorie-restricted ones. The long-lived (human) Okinawans, with their twelve-hundred-calorie-a-day diet, suggest something similar happens in humans. Some people are giving it a try. An international calorie restriction society claims thousands of members, although its physician founder died at age seventy-nine. Preliminary human studies haven’t lasted long enough to affect longevity but show positive hormonal changes,12 such as lower insulin levels and higher maintenance levels of the steroid hormone DHEA, similar to those seen in the calorie-restricted monkeys.
That’s great news, except most of us have trouble restricting ourselves to so-called normal amounts of food. Most Americans are overweight, and many in normal weight ranges still regularly eat more calories than they need, just for the fun of it—I do, and I love it, even though I know I shouldn’t and even as I live to regret it. Most scientists also like to eat, so they began looking for the biological mechanisms of calorie restriction. Maybe, they hypothesized, it worked via a molecule that they could copy, manipulate, or manufacture so people could get the benefits of calorie restriction without having to deprive themselves so drastically.
Enter resveratrol, a plant-derived compound that activates sirtuins,13 a class of intracellular proteins that regulate important biological pathways related to aging and to other processes that influence it, including inflammation, energy efficiency, and stress resistance. Resveratrol induces cellular changes associated with longer life spans, extends the life span of multiple lower species, including fruit flies and fish, and improves both health and survival in mice on a high-calorie diet. It can also be credited with the increased popularity of red wine. People are most likely to adopt dietary changes they enjoy.
Scientists are also investigating other molecules that the body makes in response to caloric restriction, such as the ketone body beta-hydroxybutyric acid (BHB) created when people eat a “ketogenic diet” high in fat and low in protein and carbohydrates. A recent study in aging mammals demonstrated positive effects of BHB on memory and life span. The results suggested that BHB affected gene expression. As the senior scientist involved in the project stated, “We’re looking for drug targets.14 The ultimate goal is to find a way for humans to benefit from BHBs without having to go on a restrictive diet.” Those who want the benefits now can exercise, a natural way to create ketone bodies. In fact, ketogenesis may be why exercise improves brain function, health span, and life span.
There are many different paths through which scientists believe they can affect aging, health, and possibly longevity. Cell-based strategies include therapies such as “senolytics,”15 clearing senescent cells with certain aging-associated markers.16 Other therapies under investigation to slow or stop aging include antioxidant supplementation and a compound called rapamycin, which was first discovered oozing from bacteria on Easter Island. Rapamycin influences the immune system (it’s already used in transplant medicine) and has been shown to prolong life in flies,17 worms, and rodents. Last but not least, and putting a modern twist on the humoral approach, several start-ups now replace the blood of older people with blood from young volunteers, hoping to transfer a variety of youth-related compounds all at once.
Some therapies, however widely touted and seemingly sound, aren’t remotely ready for human trials. Stem cells,18 for example: although they have proven uses in regeneration, as of 2018 there is no evidence that they extend longevity.
The language and arguments of “anti-aging” have evolved, but the underlying message isn’t new. Nor is the participation of physicians in the anti-aging business. Throughout history, some have entered that arena with the intent of improving human lives, while others have exploited people’s endless appetite for self-deception and false hopes. Market-driven manipulators have invoked the same militaristic terms used by medicine in reference to cancer, drug abuse, and AIDS, simultaneously suggesting that not to “fight,” “battle,” or “defy” aging is foolhardy, and that to do so is to avail oneself of the full armamentarium of modern medical science. Never mind that only a tiny minority of these products and procedures are considered medical enough to warrant the investigation, impartial review, and safety and efficacy oversight we accord actual medical products and devices. And the field is made confusing by the overlap between real and pseudoscience in the use of hormones,19 blood, and other bodily substances that were no less popular in the 1880s than they are today. It’s also gendered. Men aim for ongoing sexual vigor and, among the supremely wealthy and powerful, more time in which to enjoy their money and power. Women strive for beauty and all that feminine beauty carries with it in our society—namely, visibility, relevance, allure, and worth.
In scientific circles, anti-aging usually refers to efforts to delay or “cure” old age, not to the multitude of discriminatory beliefs and policies related to aging. In coining that term, proponents hoped to align it with words like antibiotics, one of the most significant medical advances in human history. But this anti- is primarily used in relation to aging as it is used in the words antiestablishment or anti-immigration, meaning opposed to or against part of the natural life cycle. Worse still, it’s a tiny leap from that usage of anti-aging to being against aging people and traits.
The American Academy of Anti-Aging Medicine, unlike most medical organizations, has a .com and not a .org address—the difference being a profit goal versus a mission goal. In 2002 fifty-two of aging’s most prominent scientists—including Leonard Hayflick, who showed the finitude of cell divisions,20 “the Hayflick limit,” and Jay Olshansky, who has worked on discovering the upper limits of longevity—issued a statement that “the business of what has become known as anti-aging medicine has grown in recent years in the United States and abroad into a multimillion-dollar industry. The products being sold have no scientifically demonstrated efficacy, in some cases, they may be harmful,21 and those selling them often misrepresent the science upon which they are based.”
For at least the last century and a half, humans have had great faith in our ability to affect aging in ways superior to those of our predecessors. In 1905 the immunologist Arthur E. McFarlane wrote in “Prolonging the Prime of Life” that science would bring fitness and health to old age. Over a hundred years later, science has yet to deliver on that prediction. The leading researchers say the prospects are promising, although that has been said by leading researchers for centuries. That they haven’t succeeded yet doesn’t necessarily mean the concept is flawed; perhaps the failures come from the methods rather than the goal. (Never mind the various nondisease issues, including overpopulation, climate change, pseudo foods, social policies, and tech use that have negative impacts on human health and longevity.) For many, science and technology have become the only hope, the only way. As a result, clear and present suffering gets ignored, as do the many noncurative strategies that might diminish or alleviate it.
Also largely ignored are the late effects of cures. Fixing one problem often creates new ones that might be avoided or mitigated if only all people and problems counted and we were willing to invest in the full range of tools and skills at our disposal. For instance, survive your cancer and you’re liable to develop delayed side effects ranging from other cancers to diseases of any organ in the body. Survive your heart attack or infection and you’re likely to become old. That will mostly be a good thing, except that the longer we keep you alive, the more likely you are to reach the phase of old age when most of modern society and medicine have nothing for you, not even dignity or compassion.
Innovation comes with trade-offs. Our ingenuity and technical skills have nearly doubled the human life span, but now people who previously would have died shortly after birth, or after devastating war injuries, or in extreme old age, remain alive. Draw the line too close, and lives are unnecessarily sacrificed; draw it too far, and we cause systematic suffering. To further complicate matters, looking at the same set of circumstances, people draw the line in different places. Generally, we tend to err toward life. Some of this is likely instinct, but part may be learned, a sociocultural habit adopted in the early years of modern medicine when antibiotics and surgeries offered what would have seemed like miracles in earlier eras. Our current advances have very different consequences than the ones of earlier generations—consequences that millions must live with but that are barely recognized or addressed by the medical institutions that produce them, especially if the current best approaches to improving human health and lives don’t require science or technology so much as shifts in attitudes, priorities, and values.
Someday, iterations of one or more of the “anti-aging” approaches are likely to succeed, maybe not in reversing aging altogether but in eliminating some of its downsides. Meanwhile, there are two paths we can take that would be transformative in the near future: justice in policy and kindness of attitude.
GAPS
Before clinic one day, our administrator informed me that my new patient was ninety-eight years old and went by the name of Kid. I began reviewing his old records with a smile on my face and high expectations for our first meeting.
Seconds later, I stared at my computer in disbelief. Although by and large American health care puts its money and efforts into treatment, prevention is unequivocally the better approach economically, medically, and morally, since it keeps people from getting sick and needing medical care in the first place. Usually, I am all for prevention. But the most recent entry into Kid’s electronic medical record was a note from a neurologist prescribing daily aspirin for stroke prevention, and I was not at all sure about aspirin for Kid.
Aspirin has risks that increase considerably with age and include internal bleeding, hospitalization, and death. A 2011 study found it was one of the top four drugs22 associated with emergency hospital visits in people over age sixty-five.
Kid had been older than sixty-five for over thirty years. What does prevention mean, I wondered, when a person has already outlived 99.99 percent of his fellow humans?
Trying to take good care of Kid, my neurologist colleague had applied the only evidence she had—evidence from younger patients—in deciding on a plan of care. That calculation had two important flaws. First, we do not know whether aspirin prevents stroke in ninety-eight-year-olds, as that has never been studied. Second, we do know from outcomes data, common sense, and scientific studies that the body’s response to drugs changes and the risk of drug side effects increases in old age. Basically, we could not know that aspirin would benefit Kid, but could be confident that the medication put him at significant risk for internal bleeding, kidney failure, and other adverse effects.
The routine prescription of medications with proven benefits in younger adults and only proven harms in old people happens with all kinds of drugs. Old people, excluded from the trials that show benefit, are prescribed the drug, and sooner or later the reports of adverse events start coming in—except, of course, when patients, families, nurses, and doctors attribute the symptoms to disease, age, or the relentless decline they expect in a sick old person.
On call one sunny, spring weekend, I received a message from the caregiver grandson of a patient in her nineties with atrial fibrillation. Her cardiologist had started her on a newly approved blood thinner that had been shown to be safer and easier to manage than the one she was on. The potential benefits to this mostly homebound nonagenarian were huge. She wouldn’t need to have her blood drawn to check drug levels. Getting this very old woman into the lab for blood tests was an ordeal, and finding veins was hard, causing bruising. Equally important, since what and how much she ate varied widely and certain foods interfered with her past blood thinner, she had been at risk for having blood that was either too thin, which could lead to dangerous bleeding, or not thin enough, increasing her chances of stroke. That risk would be eliminated on the new drug.
On the phone, her grandson said that she was confused. She didn’t seem sick or different in any other way. While there might have been an illness brewing, medications commonly cause confusion in old people, and the timing was just right for a reaction to the new pill. I stopped it, and she got better. On Monday, the cardiologist said the medication didn’t cause delirium and restarted it. On Tuesday evening, she was confused again. We again stopped it, and she got better. Here’s the worst part of this story: it’s mostly older people who have atrial fibrillation, yet there were no requirements to include them23 in trials of the drugs to treat that or other age-related conditions. (The Inclusion Across the Lifespan Policy starts in 2019.) Even when not excluded based on their age, old people are frequently rejected from studies because of their lab results, organ function, or chronic diseases. Once the studies are published, other older patients with the same conditions are prescribed the “trial proven” medications and told they are safe and helpful.
In clinical medicine, we are supposed to look for “Occam’s razor,” or a single unifying diagnosis that explains all a patient’s symptoms, physical exam features, and test results. This strategy often works well in young or mostly healthy people. In older age groups, it’s more often the exception than the rule—as it is for younger and middle-aged people with multiple chronic diseases. And still most guidelines, “standards of care,” and quality metrics are developed one disease at a time. Relatively few guidelines address what happens in the real world24: people who have two, three, or multiple conditions. For them, guidelines can offer contradictory advice or lead to so many recommendations that an inordinate amount of a person’s time, effort, and money goes to their medications and health behaviors. In that situation, their risk of adverse consequences is high. Medications interact with one another, leading to numerous or synergistic side effects, or the regimen becomes impossible, undesirable, or unaffordable. Maybe the person stops the most expensive drug or the one that makes them feel bad, not knowing until it’s too late whether it was a minor one or a critical one.
Age changes the organs that clear medications (mostly the kidney and liver), and old people are particularly susceptible to adverse reactions that can affect anyone. Older bodies also have reactions that younger bodies generally don’t. An old person taking more than four medications has a significantly increased fall risk, one factor that puts falls in the top tier of problems that cause illness, disability, and death in old age.
In real life, what’s going on with a person’s heart or lungs or mood never occurs in isolation. In science, you need to isolate what you want to study to ensure your results are relevant to your topic. Because medicine leads with science, we have organ and disease specialists, and they form professional groups and societies that produce guidelines about the care of their organ or disease. In an article for the Journal of the American Medical Association, doctors illustrated what a guideline-adherent hypothetical seventy-nine-year-old patient would have to do if she had diabetes, hypertension, arthritis, osteoporosis, and chronic obstructive pulmonary disease, conditions that commonly coexist.25 Following guidelines, this person would take twelve medications in nineteen doses at five different times of day on average. She also would receive fourteen to twenty-four (depending on how you count) daily recommendations for diet and exercise. Her grand total of twenty-six to thirty-six health activities per day would constitute a near full-time job and put her in jeopardy of many interactions and adverse events. If she failed to do those activities, she would run the risk of being labeled “a non-compliant patient.”
The exclusion of old people from studies26 is ridiculous. Osteoporosis—and the sometimes treatable, sometimes debilitating fractures it causes—is largely a disease of old people,27 with the majority of cases in both men and women occurring in people in their late seventies or eighties. Yet a study looking at all the randomized controlled trials on the management of osteoporosis entered in the rigorous Cochrane Library Database28 found that the mean age of participants was sixty-four. For this condition with a mean age near eighty-five, a quarter of all trials excluded patients on the basis of age. This is like studying menopause in thirty-year-old women.
High-quality care for the decades of life beginning at age sixty-five often requires different approaches and metrics than those developed for younger adults. Paradigms based not on chronological age but on more dynamic variables that also include illness burden, functional status (a real-world marker for physiological fitness), health goals, and life expectancy have been proposed in areas of medical care from cancer screening29 to surgery.30
Currently, we don’t know enough about how the substages of old age differ biologically, immunologically, or in health risks because we haven’t studied them the way we’ve studied subphases of childhood and adulthood. In part, that’s because in the past the relative rarity of old people made it hard to enroll enough of them in trials and meant the results would be of use to fewer people. But our population has been aging for over a century, so that’s not the whole story.
Studying very old people poses unique practical challenges, from demands that are more onerous for older participants to the impossibility of getting informed consent from people with dementia. It also can be hard to distinguish age effects from those of the many diseases and medications most of us acquire in our later years. Last but not least, many people have argued that studying old people is a less-good use of resources than studying younger populations. But life is rarely a zero-sum game. Most Americans are or will become old, and all of us benefit from a healthier populace. Frequently, research on older adults helps younger people, too. In one recent study, young and middle-aged adults who received more aggressive treatments for colon cancer fared worse than the older adults who received what some clinicians would call “less care.” You can’t distinguish age from disease or better from worse if you don’t look at all the options. And you can’t safely prescribe medications for people if you haven’t studied those medications in those people’s bodies.
I met Arturo seven months after he had been hospitalized for diverticulitis and then, two weeks later, for pneumonia. When he was finally home and healthy again, his dementia worsened, and he had trouble sleeping. Part of this was likely the dementia, but another part was situational. Except when the physical therapist came and helped him out of bed, he lay on the same mattress in the same room, day in and day out. There was a window and a TV and his daughter, Teresa, who brought food or sat with him chatting when she got home from work, but come bedtime, he was in the same place and position he’d been all day and sleep eluded him.
In the weeks after he came home, Teresa did everything she could think of to help him sleep. She gave him warm milk. It didn’t help. He suggested a slug of bourbon might be more effective, but she thought that would be a mistake with his medications and the forgetfulness and confusion that had gotten so much worse during his weeks in the hospital.
The less Arturo slept at night, the more he dozed during the day. By nighttime, he was wide-awake. Sometimes he hallucinated, calling out to people who weren’t there. The apartment was small, so if he was up talking or watching TV, Teresa slept poorly. She’d already missed days of work when he was sick, and now she was showing up exhausted and impaired. She reasoned that if he could sleep at night, they’d both be in better shape.
On her way home one Wednesday, Teresa stopped at the pharmacy and found an entire aisle devoted to sleeping medications. Always dutiful, she read the warnings. The cautions were mostly about things her father no longer did: driving a car or operating heavy machinery. Many had cautions about use in children or pregnant women as well. At home, she gave him the pills, and they seemed to help a little.
In the months that followed, Arturo complained that his vision was getting worse. His family assumed that it was old age. His grandson bought him a larger television. Then one day he couldn’t pee and ended up back in the hospital. They said he had an enlarged prostate blocking his urine and putting him into kidney failure. They put in a urinary catheter and told Teresa he would need to have a catheter for the rest of his life.
I met him the next month. He’d actually been referred to us after the first two hospitalizations, but we always had a long wait list. Sometimes people died or worsened before we could get to them. There weren’t enough of us, and since people who are homebound and their 24/7 caregivers tend not to have the time or wherewithal to make a fuss, either no one noticed how many old San Franciscans had inadequate access to the health care they needed even when they had perfectly good insurance, or it wasn’t the sort of thing anyone cared about enough to act on.
Four to six million older Americans are homebound, and homebound old people have 22 percent more emergency department visits and 57 percent more hospital admissions than nonhomebound old people. Housecalls reduces those numbers, sometimes considerably, saving the health system money, and patients and families much pain and hardship. This is partly because the cost of one emergency visit equals the cost of ten housecalls—numbers that demonstrate our system’s skewed and counterproductive reimbursement and priorities.
During my history and physical exam of Arturo, Teresa said something about the blindness and urine problems being new and fairly sudden. That made me look up from my computer. I asked her a question I had asked earlier, but this time I had her show me any medication she had given him over the last few months.
The culprit was on top of the refrigerator, not with the official medications, the ones with labels from the pharmacy as a result of a doctor’s prescription. Like most people’s, Arturo’s prescription medications were neatly organized in a pillbox, with separate compartments for each day’s morning, noon, afternoon, and nighttime doses. Like most people, Teresa had assumed that medications she could get over the counter without a prescription were safe if taken as directed and weaker than those that required a prescription.
“Why are so many medicines so easy to buy if they’re so dangerous?” she later asked me, as the pieces came together.
It’s a good question. Her father’s sleeping medication was sold over the counter, and its side effects and toxicities weren’t mentioned on the packaging. Because of Arturo’s prostate disease, the pills shut down his urinary tract, leading to kidney failure. They worsened his glaucoma. And while the medicine helped him sleep at first, eventually it worsened his confusion, making him more likely to hallucinate, so neither Arturo nor Teresa was getting much sleep.
At the hospital, when asked about medications, she had even mentioned the sleeping pills. Yet no one had said anything. Maybe they, like Teresa, assumed the medication was safe.
“We used to give patients that all the time,” said a friend of mine who’s a retired nurse. “I only stopped taking it myself when my daughter read online that it was dangerous.”
Over-the-counter medications routinely harm older adults, and the warnings on their labels value certain lives over others. When an old person gets sick, we assume it’s par for the course. A quote often attributed to Hippocrates offers sound geriatric advice on this topic: “Leave your drugs in the chemist’s pot if you can cure the patient with food.”
CHOICES
In the third year of medical school, doctors-in-training move out of the classroom and into hospitals, spending two to eight weeks in each of the core medical specialties. At its start, the only thing I knew for sure was that I wouldn’t be a surgeon. I was born nearly blind in my left eye, so I lack depth perception, and no one wants a surgeon who can’t tell for sure where her scalpel is in relation to their colon or artery.
Of course, my first rotation was surgery. Within hours of arrival at the hospital, I watched with fascination as a resident and senior surgeon opened a patient’s abdomen, removed its faulty parts, made additional improvements, and—after hours of concentrated, painstaking work—closed it again. Later that evening, the patient woke up, sore, groggy, and considerably healthier. It was amazing.
On the second day, I scrubbed in on several cases, one surgery with each of the three residents on my team, all male, all over six feet tall. By midday, I realized that we all look more or less the same inside, and the slow process of cutting, cauterizing, and reattaching, while terribly important, wasn’t very interesting to me. “It’s way better when you’re doing it yourself,” explained the kindest member of my supervising trio. In the final days of my rotation, Ahmad would guide me through an amputation, the rare sort of surgery where my lack of depth perception didn’t matter, and I would realize that he was right.
In the interim, I learned many things about surgery that made it clear that even with perfect vision it wasn’t for me. Most mornings, my team of residents would discuss our patients in the cafeteria. These sessions, often the only times in fifteen or forty hours that we ate, were also where I learned in person what some men sound like when women aren’t around. As a female medical student not going into surgery, I achieved a unique state of being simultaneously present and invisible. They “pimped” me about my cases, aggressively testing my knowledge while listening to my patient updates. The rest of the time, they spoke in ways I’ve otherwise only ever overheard. By the end of the first month, I could pretty accurately guess the rating, on a scale of one to ten, that each resident would assign any unsuspecting woman who passed through their sightlines. I assume that had my own looks warranted a higher rating, this might not have happened. I also assume that Ahmad’s kindness stemmed in part from his having suffered similar insults. Further, I realized that making repairs on a sleeping patient wasn’t so different from making them on a hard drive or a vacuum cleaner. While cognitive skills are essential, much of the actual work is physical and technical. I wanted a specialty where the challenges were more intellectual and relational.
Pediatrics remained at the top of my list for the first five months of my third year. In the sixth month, I did my required rotation on the toddler ward at Children’s Hospital. Very quickly, I realized that sick kids saw doctors as mean and scary—and that included me. Like the nursing staff, some doctors had long-standing, affectionate relationships with families, but theirs were more formal and skewed by a power differential. They also spent less time with the children. These little patients had heartbreaking stories of genetic bad luck, parental abuse, or horrific misfortune. At every procedure, they cried and screamed, too young to understand what was being done to them. Meanwhile, my classmates on our outpatient rotation reported that in clinics most kids were healthy, and therefore medically “uninteresting.” By winter break, I knew I would not become a pediatrician.
Over that vacation, I began reading books on mental illness. Psychiatry was my next rotation, and I liked the idea of a specialty where talking to patients was paramount. I was eager to learn the medical take on such human basics as mood, behavior, identity, and sanity, and gain the skills to translate these concepts into better lives for patients. On my first day, after an orientation, I was told to join a therapy group already in progress. I entered a room of young to middle-aged adults arranged in a large circle.
Trying not to be disruptive, I scanned the room to pick out one of my supervisors. When I couldn’t, I moved quickly toward one of the two empty chairs and tried to figure it out from the discussion. On an inpatient psych ward, I reasoned, that should be easy. Thirty minutes later, other than the doctor leading the session, I still wasn’t sure who was who. More disturbing still was that when I began tending the sickest patients, I would sometimes have thoughts like This person is completely crazy. That could have been funny, considering where I was, if it weren’t so clearly morally reprehensible. By the first week’s end, I had to admit that I didn’t have what it took to be a good psychiatrist. To my surprise, I found that although I considered myself less science-oriented than the average doctor, I wanted a specialty that would allow me to use more of my newfound biological knowledge and technical skills.
Next came neurology, then obstetrics and gynecology. Those weren’t for me either. I began to worry that the years and substantial expense of my medical education had been wasted. I was running out of both time and specialties, and concerned that I’d made a terrible mistake. Maybe I didn’t want to be a doctor. I considered my remaining options. I was drawn to fields that recognized people as more than the sum of their parts and ailments, valued considerations of context and culture, and acknowledged life’s inherent ambiguities. Those preferences ruled out dermatology, pathology, radiology, and anesthesia, with their focus on a single organ, cells, images, and machines, respectively.
That left two options: family or internal medicine. Over spring break, I read John McPhee’s Heirs of General Practice and John Berger’s A Fortunate Man. I loved the idea of family medicine, but, neurotic as I was about ever knowing enough medicine to do right by my patients, I worried that a specialty requiring expertise in the care of both kids and adults, spanning medical, surgical, and obstetric care, might leave me in a state of perpetual anxiety and insecurity. I wanted breadth, but maybe not quite that much of it.
Internal medicine was my last rotation. Right away, I knew I’d found my niche. It included everything about the care of adults except major surgery. Patients could have physical or mental illness, or both. They could be eighteen, one hundred, or anywhere in between. You could talk to them, and most could choose the treatment course that would work best for them. I loved its range and possibilities. Also, the internists I worked with that month were smart, thoughtful, and kind to each other and to their patients—traits not present on many of the teams I’d worked with in other fields.
Internal medicine let me hone specialized skills while offering an array of career opportunities: primary or intensive care, hospital or clinic, global, preventive, or occupational health. I would know how to take care of most internal organs and diseases and many different populations. For me, this was the perfect option.