1. RISE OF THE TEEN AGE

If a society is to preserve its stability and a degree of continuity, it must know how to keep its adolescents from imposing their tastes, attitudes, values, and fantasies on everyday life.

—ERIC HOFFER, 19731

 

Once, there was a world without teenagers. Literally. “Teenager,” the word itself, doesn’t pop into the lexicon much before 1941. This speaks volumes about the last few millennia. In all those many centuries, nobody thought to mention “teenagers” because there was nothing, apparently, to think of mentioning.

In considering what I like to call “the death of the grown-up,” it’s important to keep a fix on this fact: that for all but this most recent episode of human history, there were children and there were adults. Children in their teen years aspired to adulthood; significantly, they didn’t aspire to adolescence. Certainly, adults didn’t aspire to remain teenagers.

That doesn’t mean youth hasn’t always been a source of adult interest: Just think in five hundred years what Shakespeare, Dickens, the Brontës, Mark Twain, Booth Tarkington, Eugene O’Neill, and Leonard Bernstein have done with teen material. But something has changed. Actually, a lot of things have changed. For one thing, turning thirteen, instead of bringing children closer to an adult world, now launches them into a teen universe. For another, due to the permanent hold our culture has placed on the maturation process, that’s where they’re likely to find most adults.

This generational intersection yields plenty of statistics. More adults, ages eighteen to forty-nine, watch the Cartoon Network than watch CNN.2 Readers as old as twenty-five are buying “young adult” fiction written expressly for teens.3 The average video gamester was eighteen in 1990; now he’s going on thirty.4 And no wonder: In 2002, the National Academy of Sciences redefined adolescence as the period extending from the onset of puberty, around twelve, to age thirty.5 The MacArthur Foundation has gone further still, funding a major research project that argues that the “transition to adulthood” doesn’t end until age thirty-four.6

This long, drawn-out “transition” jibes perfectly with two British surveys showing that 27 percent of adult children striking out on their own return home to live at least once; and that 46 percent of adult couples regard their parents’ houses as their “real” homes.7 Over in Italy, nearly one in three thirty-somethings never leave that “real” home in the first place.8 Neither have 25 percent of American men, ages eighteen to thirty.9 Maybe this helps explain why about one-third of the fifty-six million Americans sitting down to watch SpongeBob SquarePants on Nickelodeon each month in 2002 were between the ages of eighteen and forty-nine.10 (Nickelodeon’s core demographic group is between the ages of six and eleven.) These are grown-ups who haven’t left childhood. Then again, why should they? As movie producer and former Universal marketing executive Kathy Jones put it, “There isn’t any clear demarcation of what’s for parents and what’s for kids. We like the same music, we dress similarly.”11

How did this happen? When did this happen? And why? More than a little cultural detective work is required to answer these questions. It’s one thing to sift through the decades looking for clues; it’s quite another to evaluate them from a distance that is more than merely temporal. We have changed. Our conceptions of life have changed. Just as we may read with a detached noncomprehension how man lived under the divine right of monarchs, for example, it may be that difficult to relate to a time when the adolescent wasn’t king.

About a hundred years ago, Booth Tarkington wrote Seventeen, probably the first novel about adolescence. Set in small-town America, the plot hinges on seventeen-year-old William Baxter’s ability to borrow, on the sly, his father’s dinner jacket, which the teenager wants to wear to impress the new girl in town. In other words, it’s not a pierced tongue or a tattoo that wins the babe: it’s a tuxedo. William dons the ceremonial guise of adulthood to stand out—favorably—from the other boys.

That was then. These days, of course, father and son dress more or less alike, from message-emblazoned T-shirts to chunky athletic shoes, both equally at ease in the baggy rumple of eternal summer camp. In the mature male, these trappings of adolescence have become more than a matter of comfort or style; they reveal a state of mind, a reflection of a personality that hasn’t fully developed, and doesn’t want to—or worse, doesn’t know how.

By now, the ubiquity of the mind-set provides cover, making it unremarkable, indeed, the norm. But there is something jarring in the everyday, ordinary sight of adults, full-grown men and women both, outfitted in crop tops and flip-flops, spandex and fanny packs, T-shirts, hip-huggers, sweatpants, and running shoes. And what’s with the captain of industry (Bill Gates), the movie mogul (Steven Spielberg), the president (Bill Clinton), the financier (Warren Buffett), all being as likely to walk out the door in a baseball cap as the Beave? The leading man (Leonardo DiCaprio) even wears it backward. “Though he will leave the hotel later with a baseball cap turned backwards … he is not so much the boy anymore,” The Washington Post observes of the thirty-year-old actor. No, not so much.12

If you’ve grown up with—or just grown with—the perpetual adolescent, you see nothing amiss in these familiar images. It is the mature look of men from Joe DiMaggio to FDR—the camel hair coats, the double-breasted suits, the fedoras—that seems only slightly less fantastic to the modern eye than lace-collared Elizabethan dandies. The image of man, particularly as it has been made indelible on the movie screen, has changed from when Cary Grant starred in The Philadelphia Story, or William Powell starred in anything. In an essay called “The Children Who Won’t Grow Up,” British sociology professor Frank Furedi sums up the difference.

John Travolta nearly bust a gut being cute in Look Who’s Talking, while Robin Williams demonstrated he was adorable as Peter Pan in Hook. Tom Hanks is always cute—a child trapped in a man’s body in Big, and then Forrest Gump, the child-man that personifies the new virtues of infantilism.13

Such virtues require little effort besides dodging maturity. “I’m not old enough to be a ‘mister,’” goes the middle-aged refrain, a reflexive denial of the difference between old and young. This plaintive little protest is no throwaway line. Rather, it’s a motto, even a prayer, that attests to our civilization’s near-religious devotion to perpetual adolescence.

Such devotion is quickly caricatured in the adulation of the craggy rock star, age sixty-three, still singing “(I Can’t Get No) Satisfaction.” But the desiccated oldster cavorting like the restless youngster is hardly the end of the phenomenon. In a world where distinctions between child and adult have eroded, giving rise to a universal mode of behavior more infantile than mature, Old Micks are no more prevalent than Baby Britneys—which is as good a name as any for the artless five- or six-year-olds taught to orgasmo-writhe (à la poppette du jour), belly bare and buttocks wrapped like sausages. At one time, so sexually charged a display by a child would have appalled the adults around her; now, Baby Britneys—and they are legion—delight their elders, winning from them praise, Halloween candy, even Girl Scout music badges.

What caused the change? Even now, the Baby Boom figures into any explanation of our cultural mentality. But before the first Boomers came of age, a tectonic shift in sensibilities was already taking place that the multitudes of adolescents in the making would later magnify, accelerate, and institutionalize.

To make a snapshot case, consider the respective images of two screen goddesses that took shape on either side of World War II: Jean Harlow, the archetypal platinum blonde of the 1930s, and Marilyn Monroe, the definitive 1950s sexpot. While both women’s lives ended prematurely, Harlow from illness, Monroe from suicide, it is Monroe who lives on as the “icon” everlasting, the symbol of an industry to which her contributions are surprisingly limited. The salient point is this: Prewar Harlow, who began her career at age nineteen (and died at age twenty-six), never played anything but womanly roles. Prostitute, stage star, executive secretary, or social climber, she always projected an adult sensibility. Postwar Monroe, on the other hand, made a career out of exuding a breathy, helpless sexuality that, in spite of her mature age (she died at thirty-five), was consistently and relentlessly childlike. There’s a reason movie audiences were willing to redirect their screen idolatry from the younger femme fatale (emphasis on “femme”) to the older sex kitten (emphasis on “kitten”). That is, the sequential popularity of such actresses reflects more than a simple variation on a theme of blonde loveliness. Rather, it reflects a changing paradigm of womanhood itself, a shift that signifies, to borrow a phrase from the late Senator Moynihan, the dumbing down of sexuality, a force at the crux of the infantilizing process—and the sexual revolution to come.

It came. Instead of sifting through the rubble of the old social structure—blasted to bits, of course, as new sexual behaviors and attitudes volcanically emerged—let’s look at Baby Britney again. Some three decades after the sexual revolution, she rises from the ruins to symbolize the extent to which sexuality, particularly female sexuality, has been snatched from its traditional time and place in human development—as a rite of passage to adulthood, to marriage, to having children—and grafted onto girlhood, even toddlerhood, much to the regret of those among us who persist in costuming our wee ones on Halloween as cats, princesses, and cowgirls.

Why the regret? Because not everyone has gone along with the new order. A sizable segment of the population still resists the pressure to transfer the milestones of maturity—including, besides sexuality, a large chunk of financial and other freedoms—to the very young. These are people who instinctively acknowledge differences between adults and children, and who harbor, maybe secretly, a nostalgic appreciation for the old-fashioned maturation process. Even as age has been eliminated from the aging process, they have a hunch that society has stamped out more than gray hair, smile lines, and cellulite. What has also disappeared is an appreciation for what goes along with maturity: forbearance and honor, patience and responsibility, perspective and wisdom, sobriety, decorum, and manners—and the wisdom to know what is “appropriate,” and when.

This is not to say that gray lives and blue noses offer the only anchors against the hedonistic currents of the times. There is a wide and complex range of experience—emotional, aesthetic, physical, mental, and spiritual—for which only the maturing human being is even eligible. Of this, of course, the immortal bards were well aware; today’s artists are numb to it. Etched onto our consciousnesses, in the universal shorthand of Hollywood and Madison Avenue, is the notion that life is either wild or boring; cool or uncool; unzipped or straitlaced; at least secretly licentious or just plain dead. And framing these stark and paltry choices for us is the same kind of black-and-white sermonizing that once preached milk-and-honey visions of heaven and fire-and-brimstone visions of hell. Instead of eternal salvation, of course, we now seek instant fulfillment; instead of damnation, we do anything it takes to avoid the deep, dark rut of middle-class convention. Or so we claim.

That’s why, between the Very Beginning and Journey’s End, an important aspect of Middle Age has gone missing—the prime of “making a life.” The phrase belongs to Lionel Trilling, the esteemed critic and English professor, who, in the shank of the 1960s, saw that this work of making a life, “once salient in Western culture,” as he put it, was effectively over. This act of conceiving of human existence, one’s own or another’s, as if it were a work of art that could be judged by established criteria, he wrote, “was what virtually all novels used to be about; how you were born, reared, and shaped, and then how you took over and managed for yourself as best you could. And cognate with the idea of making a life, a nicely proportioned one, with a beginning, a middle, and an end, was the idea of making a self, a good self.”14

We still, of course, have the beginning, but the middle only stretches on in a graceless vector that stops, one day, at an endpoint. Such a life is not, in Trilling’s words, nicely proportioned, but it is, as he shrewdly thought, propelled by a new cultural taboo against admitting personal limitation—one of the tribal beliefs that sets Baby Boomers apart from their parents. As Trilling could see, “If you set yourself to shaping a self, a life, you limit yourself to that self and that life.” And “limitation,” particularly to the perpetual adolescent, is bad.

You close out other options, other possibilities which might have been yours. Such limitation, once acceptable, now goes against the cultural grain—it is almost as if the fluidity of the contemporary world demands an analogous limitlessness in our personal perspective. Any doctrine, that of the family, religion, the school, that does not sustain this increasingly felt need for a multiplicity of options and instead offers an ideal of a shaped life, a formed life, has the sign on it of a retrograde and depriving authority, which, it is felt, must be resisted.15

Trilling was writing in what his wife Diana Trilling has chronicled as the personally painful aftermath of the 1968 student sacking of Columbia, his beloved alma mater, where he had taught English for four decades. But if the raging turmoil around him destroyed the “shaped life” he admired, it inaugurated a new way of life defined by its very shapelessness: being without becoming; process without culmination; journey without end; indeed, the state of perpetual adolescence that is a way of life to this day.

Theories abound to explain why this happened, ranging from a high incidence of second marriages, which presumably inspires childish behavior, to a low incidence of deprivation, which presumably inspires childish behavior. “Permissive society” is always a choice culprit; ditto the warm and enveloping cushion of affluence. Anxiety about societal change is another possible rationale. “Nostalgia for childhood” is what Professor Furedi called it when he came across a knot of college students clustered around a television showing Teletubbies. His diagnosis? “Profound insecurity about the future.”16

Of course, that’s what they were saying half a century ago. If profound insecurity about the future really were the cause, profound insecurity about the future—sea monsters, starvation, wild savages—would have worried rock-of-ages Pilgrim Fathers into awkward-age Pilgrim Sons. Or the Black Death would have left Europe in a unified fetal position rather than on the brink of the Renaissance. Or, to look into the more recent past, the Great Depression would have driven jobless, hungry Americans into a mass-cultural second childhood. Instead, of course, the early twentieth-century flowering of popular arts in music and theater and film was uninterrupted by such insecurity—insecurity not only about the future, but also about that day’s supper—culminating in jazz, swing, the American popular song, the American musical comedy, Hemingway, and the golden age of Hollywood, not to mention an astonishingly widespread appreciation of it all, and definitely not Teletubbies. In fact, it was during the period of peace, prosperity, and bright futures that followed World War II that the adult began to ape the adolescent. Something else triggered the evolutionary tailspin.

It’s no coincidence that the cultural dive became most vividly noticeable about the same time the popular culture, particularly the new medium of television, settled into its rut of portraying age as “square,” and youth as “hip.” For something like fifty years, media culture, from Hollywood to journalism to music to Madison Avenue, has increasingly idealized youth even as it has increasingly lampooned adulthood, particularly fatherhood. But the culture culprit, too, is not a satisfactory answer. After all, there’s plenty of “old” out there these days, from sixty-five-year-old Paul McCartney to eighty-two-year-old Paul Newman, that the media culture still celebrates as ever “hip” and never “square.” Assorted AARP-members—from seventy-year-old Jack Nicholson to seventy-nine-year-old Maya Angelou—still swim in the cultural mainstream.

Caveats against trusting anyone over thirty aside, senior citizenship doesn’t invalidate the casual, anti-Establishment pose of an old Jack Nicholson, or the stick-it-to-The-Man edge of an elderly Maya Angelou. That’s because media culture is as anti-authority as it is anti-adult. So long as there is that requisite whiff of subversion, that pro forma slap at the boogey bourgeoisie, even advanced age is irrelevant to cultural currency, which explains a lot about the backward ballcap of the middle-aged midlevel manager and the acid-washed hip-huggers of the car pool queen. These props of youth are also props of “edgy” attitude—a determined bid to embody “hip,” not “square,” and always “unconventional.” A way to smack, at least, of smacking at the bourgeoisie even while maneuvering the Chevy Suburban into the mega-mall parking lot.

The better part of a century ago, George Orwell found himself chafing under similar, artificial constraints. In a review of The Rock Pool, a forgettable British novel of the 1930s, Orwell bemoaned this same blinkered philosophy presented by the author, Cyril Connolly. He seems to suggest, Orwell wrote, “there are only two alternatives” in life—degradation (good) or respectability (bad)—an either-or dilemma Orwell found “false and unnecessarily depressing.”

Of course, the future author of 1984 didn’t know the half of it, writing at a time when the more countercultural behaviors described in The Rock Pool—“drinking, cadging and lechering”—were still largely confined to, or at least accepted by, only an elite margin of society. Orwell rapped Connolly’s admiration for his antiheroes—the main character ends up prizing his “present degradation” over “respectable life in England”—and interpreted it as a sign of the novelist’s “spiritual inadequacy.” Orwell went on,

For it is clear that Mr. Connolly prefers [his antiheroes] to the polite and sheeplike Englishmen; he even compares them, in their ceaseless war against decency, to heroic savages struggling against Western civilization. But this, you see, only amounts to a distaste for normal life and common decency.… The fact to which we have got to cling, as to a lifebelt, is that it is possible to be a normal decent person and yet to be fully alive.17

Alas, the great man sounds a little desperate. Maybe he foresaw, in that characteristically prescient way of his, that society was preparing to amputate notions of “normal” and “decent” from anything connected to being “fully alive.” The operation, as it turned out, was a big success. “Decency” has become a euphemism for narrowness and even bigotry, while “normal” is a sarcastically loaded term of opprobrium set off not by a scarlet letter exactly, but certainly by scarlet quotation marks. Being “fully alive” in today’s culture has little to do with Orwell’s “lifebelt,” that life lived according to a traditional assortment of “normal” and “decent” experiences. Rather, it is more directly tied to a tally of one’s abnormal or indecent life experiences—outright vices that include destructive drug use, dicey sexual couplings, or prankishly criminal behavior.

This is an adolescent attitude, but it is at least semiuniversal among adults. Something funny happened, though, on its way to the masses. Flouting convention—or simply appearing to—has become as conventional as it gets. When the JC Penney catalog, purveyor to the heartland, can mass-market an erstwhile symbol of social subversion—the black vinyl motorcycle jacket with metal studs and a matching Brando cap—to the family pooch, loyal retainer of home and hearth, it is time to acknowledge how very square “hip” has become. And how very old—literally and figuratively—“youthful rebellion” has become.

Until the second half of the last century, there existed a state of tension, hostility even, between the middle class and the avant garde—a pair of cultural combatants variously and overlappingly known as the bourgeoisie and the art world; the Establishment and the counterculture; the silent majority and the protest generation; squares and cool people; Us and Them. In 1965, Lionel Trilling coined the phrase “adversary culture” to describe the avant garde slice of life, the breakaway movement or class born of modernism that had detached itself from the habits and thoughts of the larger mainstream culture to judge, condemn, and—as time would tell—ultimately subsume that larger culture.

While the distinction itself wasn’t brand new, by 1965 it was suddenly more significant. “There are a great many more people who adopt the adversary program than there formerly were,” Trilling noted. And a great many more of those great many people were youngsters. The Baby Boom, the seventy-nine million children born between 1946 and 1964, meant vastly more children; the expansion of higher education following World War II meant vastly more of those children went to college. In 1945, there were 1.675 million college students taught by 165,000 faculty. By 1970, there were between seven and eight million college students taught by 500,000 faculty. This represented a mass elite—something new under the sun.18

Irving Kristol assessed the change in 1968.

So long as the “adversary culture” was restricted to an avant-garde elite, the social and political consequences of this state of affairs were minimal.… The prevailing popular culture, however artistically deficient, accepted the moral and social conventions—or deviated from them in conventionally accepted ways. But in the 1960’s the avant-garde culture made a successful takeover bid, so to speak, and has now become our popular culture as well. Perhaps this is, once again, simply the cumulative impact of a long process; perhaps—almost surely—it has something to do with the expansion of higher education in our times. In any case, it has unambiguously happened: the most “daring” and self-styled “subversive” or “pornographic” texts of modern literature, once the precious possession of a happy few, are now read as a matter of course—are read in required courses—by youngsters in junior colleges all over the country. The avant-garde has become a popular cultural militia. [Emphasis added.]19

One consequence is a steep decline in quality of the antibourgeois arts—a body of work which, in a provocative parenthetical aside, Kristol called “one of the great achievements of bourgeois civilization.” The paradoxical fact is, this antibourgeois body of work—from Impressionism to Expressionism, from Strindberg to Shaw to Pirandello, from Debussy to Stravinsky to Ravel, from Yeats to Pound—couldn’t have exploded onto the cultural scene without having burst from a bourgeois bottle. And the tighter and more airless that bottle was, the better those arts tended to be. In other words, no (bourgeois) pressure, no (antibourgeois) pop.

Little wonder, then, that as the ranks of the avant garde shock troops swelled, the cutting edge became the shapeless middle. Suddenly, everyone was a Bohemian. Such popularity worried leftist intellectual Michael Harrington. “Bohemia could not survive the passing of its polar opposite and precondition, middle-class morality,” he wrote in 1972. “Free love and all-night drinking and art for art’s sake were consequences of a single stern imperative: thou shalt not be bourgeois. But once the bourgeoisie itself became decadent—once businessmen started hanging non-objective art in the boardroom—Bohemia was deprived of the stifling atmosphere without which it could not breathe.”20

But it could change, as could the middle class. And while Bohemia as we know it now—an all-encompassing state of middle-class mind—may not be choking on its freedom, its reflexively tiresome shouts of defiance sound more than a little hollow. When U2’s Bono promises Grammy night fans to keep “f——ing up the mainstream,” as critic Mark Steyn has noted, Bono fails to see—or admit—that he is the mainstream, a bonanza to corporate stockholders, and well fit to perform at the official, ribbon-cutting opening of a presidential library in Little Rock.

The fact is, since the death of the grown-up, “f——ing up the mainstream” has become a mainstream occupation. Maybe the best way into an understanding of the phenomenon is to revisit a time, maybe the last, when the wall between the mainstream and the counterculture was still in place, if crumbling. That would be back in 1970, on one of the last mornings of December, when Elvis Presley arrived in Washington, D.C., to meet Richard Nixon. Or so the thirty-five-year-old rock idol hoped.

In a five-page letter to the president scrawled on American Airlines stationery, Presley introduced himself: “I am Elvis Presley and admire you and have great respect for your office.” While Presley’s sincerity may have grounded the letter’s more atmospheric flights of fancy (“I have done an in-depth study of drug abuse and Communist brain-washing techniques.…”), what stands out years later is his instinctive awareness of his own tenuous relationship to both the bourgeois culture and the counterculture, specifically the culture of rock music—still two distinctly separate realms. Two decades later, of course, counterculture idol Bob “The Times They Are a-Changin’” Dylan would entertain at the first inauguration of the forty-second president. (And three decades later, profane rapper Kid “F—— You Blind” Rock would be invited—then disinvited—to entertain at the second inauguration of the forty-third president.) In 1970, though, during the first term of the thirty-seventh president, the times hadn’t quite all the way a-changed.

As Presley put it in his letter to Nixon, “the drug culture, the hippie elements, the SDS, Black Panthers, etc., do not consider me as their enemy, or, as they call it, The Establishment. I call it America and I love it.” Hoping for some sort of “federal credentials” to add to his collection of law-enforcement badges, Presley offered to help with the nation’s drug problem “just so long as it is kept very Private.”

There was no need to explain Presley’s reticence. To meet with the president, Presley knew he had to jump the then-unbreached wall between the antibourgeois rock culture and the antirock bourgeoisie. “If the rock ’n’ roll world had known of this letter’s contents,” chides Patricia Jobe Pierce in The Ultimate Elvis, one of numerous volumes of packaged Presleyana, “it would have felt deeply betrayed.”21 And so “Jon Burroughs” checked into the Hotel Washington to await the president’s pleasure, incognito if resplendent, in purple cape and amber shades emblazoned with the initials “EP.”

Across the cultural divide, Richard Nixon, too, was content to keep the rock star “drop by” confidential. No flash-popping, full-press Rose Garden photo op for this president, no matter how many American citizens were loyal Elvis fans. The president seemed to know he and Presley made a joltingly odd couple, one that would be unacceptable to both men’s still largely separate constituencies. In fact, during the thirty-five-minute Oval Office chat, as recorded in a slim picture book on the meeting by former White House assistant Egil “Bud” Krogh, Nixon repeatedly emphasized the importance of Presley’s maintaining his “credibility”—i.e., independence from Establishment links. This, Krogh speculates, underscored Nixon’s awareness of the hazards of guilt-by-association for both the king of rock ’n’ roll and the anointed leader of the Silent Majority.22

Such a wall now looks medieval, particularly after the Clinton era, which began, arguably, with candidate Clinton’s 1992 appearance on The Arsenio Hall Show, where he donned Ray-Bans and trotted out his saxophone to honk “Heartbreak Hotel”—Elvis Presley’s first number one hit. Not only was candidate Clinton not hiding his attachment to Elvis—indeed, his Secret Service code name would be Elvis—he was trying to broadcast it to as many millions of American voters as possible.

Maybe this is just a particularly sharp illustration of how times have changed. But it is also an illustration of how a people changes. This transformation is so thoroughly complete that we, as a people, no longer see it—or its implications. Thirty-odd years ago, it was clear to Michael Harrington that Bohemia was losing its identity; today, no one notices the mainstream has lost its Bohemia. What was once perceived as a threat by most Americans is now an icon. That certainly goes for Elvis, once known (and it wasn’t a compliment) as “the pelvis,” but he was just the beginning. A powerful brand more than twenty-five years after his death—he consistently tops Forbes’s macabre top-earning dead celebrities list—Elvis was just a simple pioneer overtaken by the legions who followed and, in Harrington’s words, made the bourgeoisie decadent, a progression that made the mainstreaming of countercultural behavior possible.

This mainstreaming of countercultural behavior is probably the most significant marker of our own stretch of civilization. To be sure, decadence has ebbed and flowed through the ages, but it is only in our own time that it washes over us all like some giant fountain of youth that is not only hard to resist, but impossible to avoid. In essence, countercultural behavior—so pithily summed up by Bono as “f——ing up the mainstream”—is juvenile behavior: a range of indulgent actions taken to bug Mom, enrage Dad, and satisfy sophomoric appetites for sex, drugs, rock ’n’ roll, and their variants, or even just appear to do so.

It is the pretense—faux-f——ing up that mainstream—that marks the perpetual adolescent. He may not be a punk, but he’ll talk that way; she may not be a slut, but she’ll dress that way. It reveals the choice, to borrow that phrase again, to define one’s self down, to identify with the attitudes and behaviors of a countercultural youth movement grown not up, but old. And it has its roots—its real roots—not in the 1960s, or 1970s, but in the more placid decade before, the one, paradoxically, that conservative culture critics often invoke as the last days of American Eden. It was in the 1950s that the adult was pushed aside, before most Baby Boomers were even out of diapers.

The vast numbers of babies arriving during the Boom certainly riveted society’s attention on the young, but it’s worth noting, in a curious counterpoint, that the median age of the American population in 1960, 29.5, was still older than the median age had been in 1900 (22.9), 1910 (24.1), 1920 (25.3), 1930 (26.5), and 1940 (29.0). This means the new emphasis on youth we associate with the 1950s came along even as the general population had been steadily aging for decades. Indeed, after dipping down to 28.1 in 1970, our median age has been rising ever since, even as the behavioral age of our society has plummeted. (Remember the thirty-year-old video gamester.) In 2000, the median age in the United States was 35.3 years old—more than five years older than it was in 1950.23 By striking contrast, in 1800, shortly after our Founding Fathers’ work was done, the median age of newly independent Americans was just sixteen.24

Of course, something about the American sixteen-year-old had become radically different by the 1950s—not just from the previous century, but from the previous decade. To begin with, there was much more money in his pocket, and it didn’t end up in the family kitty as it had, for example, during the Great Depression. The newly flush teen, born before the Baby Boom, was allowed, then expected, to buy himself a measure of freedom unavailable in the past, staking his place in a new subculture that arose beyond parental design and control, a place where teen taste and desire were king. So, too, not incidentally, were Elvis and scores of other less enduring pop wonders. But there was even more to it than that.

As early as 1958, Dwight MacDonald took a crack at explaining the American teenager, who was not long on the scene but already the subject of intensive scrutiny. “Probably more books dealing with teenagers have been published in the last fifteen years than in the preceding fifteen centuries,” MacDonald observed in The New Yorker. Such books, which represented a new genre of guidebooks for parents, included: Facts of Life and Love for Teen-Agers, Milestones for Modern Teens, Understanding Teenagers, Do You Know Your Daughter?, and How to Live with Your Teenager. He continued: “The list goes on and on, and it includes many titles that would have been puzzling even in fairly recent times, because their subject matter is not the duty of children toward their parents but precisely the opposite. [emphasis added]”25

It is hard to overstate the significance of this change. To say the tide had turned is to imply a temporary, cyclical shift. What had occurred—a cultural whiplash that twisted a child’s duty to his parent into a parent’s duty to his child—has turned out to be permanent. This is not to suggest that parental duty did not exist before; it did, of course. In fact, maybe “duty” isn’t precisely the right word to describe the phenomenon MacDonald noted. What changed was a sense of priorities. Long before the Baby Boom crested, adults—parents—were abdicating their rights and privileges by deferring to the convenience and entertainment of the young. Rather suddenly, adults were orbiting around their children, rather than the other way around.

This mid-century switch now seems not only irreversible but somehow eternal, as though there had never been another way. The newness of this human reconfiguration may escape us, but it should become clear that without this parent-child role reversal, without this human power shift, the structural failures that permitted the behavioral revolutions of the 1960s to go forward unimpeded simply could not have occurred.

Was the child-parent switch inevitable? It may seem so, given other changes. A few years after MacDonald’s essay, culture critics Grace and Fred M. Hechinger, authors of a 1963 book, Teen-Age Tyranny, remarked on another significant and related shift that had taken place within the family. “In the old days of an agrarian economy, children were an economic boon to the family. The more children there were, the more ‘hands’ in the field or behind the store counter. Today, the greater the number of children, the lower must be the standard of living for the parents.”26 This interesting, if icy, thought is quite incontestable, as any parents struggling to pay for their children’s upbringings well know. But once children—teenagers, in particular—became an expense even as their role in maintaining the well-being of the family had disappeared, maybe some new degree of generational friction was unavoidable.

And there was something else to consider: While the postindustrial-age child doesn’t assist his family economically, the postindustrial family doesn’t train—can’t train—Junior to assist himself economically in society, either. This condition may be changing with the advent of homeschooling, but in the middle of the last century, the family role was shrinking. Long ago, youngsters learned trades or farming from their parents, but twentieth-century industrial society required kinds of training that institutions, from nursery school to graduate school, were set up to provide. In 1961, sociologist James S. Coleman took note of the implications.

The family becomes less and less an economic unit in society, and the husband-wife pair sheds its appendages: the grandparents maintain a home of their own, often far away, and the children are ensconced more and more in institutions, from nursery school to college.

This age-segregation is only one consequence of specialization: another is that the child’s training period is longer.… This setting-apart of our children in schools—which take on ever more functions, ever more ‘extracurricular activities’—and for an ever longer period of training has a singular impact on the child of high-school age. He is ‘cut-off’ from the rest of society, forced inward toward his own age group, made to carry out his whole social life with others his own age. With his fellows, he comes to constitute a small society, one that has most of its important interactions within itself, and maintains only a few threads of connection with the outside adult society.27

These changes were still new when Coleman’s book was published, part of the postwar reorganization of society around youth to the exclusion, or at least the marginalization, of adults. World War II had long receded into the Dark Ages, its epic heroes having vanished into civilian life as quickly as they had emerged. It is no exaggeration to say that their supreme triumph was also, in a very important sense, their final curtain. The so-called Greatest Generation, so dubbed in their dotage, had left the cultural stage as young men.

The impact of this exit was considerable, if not altogether understood. Something was palpably different, but what? Surveying beloved boy heroes in American fiction in 1958, the New Yorker’s Dwight MacDonald held up a literary model to illustrate the cultural change. “When Tom Sawyer and Penrod reached thirteen,” he wrote, “they did not become teenagers but remained children, who accepted the control of grownups as something they could no more escape than they could the weather (though they could sometimes put up an umbrella).” Even in the case of the independent operator Huck Finn, MacDonald continued, “It was small-town respectability that stifled him, not adult life in general.” And like Huck, Willie Baxter, from Booth Tarkington’s presciently titled Seventeen, aspired to manhood rather than teenhood.

His footling masquerades, his opalescent daydreams were all directed toward persuading others and himself that he was an authentic, full-grown Man. The typical pre-teenage-era adolescent, in short, was part of the family, formed by adult values even when he was challenging the grownups who held them, perhaps most so then. [Emphasis added.]28

This—to contemporary ears—is a farfetched thought: that teen rebellion could ever take place within an adult context. But such a concept isn’t just fiction. As MacDonald also noted, in both Middletown and Middletown in Transition, Robert S. Lynd and Helen Merrill Lynd’s landmark sociological studies of life in a typical American city during the 1920s and 1930s, there was no mention of teenage problems even in the sections devoted to education and child rearing. That was because, he wrote, “the very concept was unknown.”29 Which is itself quite a concept. But then so is the whole notion of adolescence before the Teen Age—a halcyon time, no doubt, in which “hooking up” would be a crochet stitch and Britney Spears would peak as a Mouseketeer.

That was the time of the teen who moped at the moon, seeking solitude—not sex, not homies. In the words of psychiatrist Dr. Robert M. Lindner—he who coined the title (and sold it to the movies) “rebel without a cause”—such solitude was the “trademark of adolescence and the source of its deepest despairs and of its dubious ecstasies.” By 1954, Dr. Lindner was already noticing that young people had “abandoned solitude in favor of pack-running, of predatory assembly, of great collectives that bury, if they do not destroy, individuality.… In the crowd, herd or gang, it is a mass mind that operates—without subtlety, without compassion, uncivilized.”30

I doubt the good doctor would have been a big fan of Friends, which, after 240 episodes, has ossified notions of perpetual pack-running adolescence into ever-thus rigidity. Of course, long before Friends, the peer group had developed to upset a balance of power that had traditionally been anchored by the family. With the peer group attenuating, if not replacing, parental force in adolescent life, a new dynamic was emerging, one that helps explain why the parent lost his place at home and in the popular culture. By the 1950s, “Coming, Mother”—the familiar tagline from the Henry Aldrich radio serial, hugely popular in the 1930s and 1940s—was just an echo; what MacDonald described as the “dignified, wise, awesome father” of the equally popular Andy Hardy movie series, starring Mickey Rooney and Lewis Stone, was just a ghost. And so, too, was the character of Andy Hardy—the teenage boy who was always “innocent, lighthearted, and, whenever it came to a showdown, firmly under the parental thumb.”31

Along with that “parental thumb,” the dignified father character vanished like a dinosaur right about the time his real-life models came home by the thousands in triumph from Europe and the Pacific, having just won World War II and saved the world from the Nazi death machine. You would think that would have meant salad days for Dad. Not really. It was happy days for Junior.

And why not? It was a kiddie world. Offering a novel theory on the rise of Elvis, rock ’n’ roll historian Phillip H. Ennis adds something quite illuminating about the demise of Dad. Presley, Ennis writes, was too young to have seen action in either World War II or Korea. As a result, he gained prominence as a peacetime idol independent of “the adults who guided the nation through the great war.” This may have deprived the early rock ’n’ roll star of the formative experience of the age—or, rather, what quickly became the previous age—but it also gave him a connection with the younger generation of children, kids whose fathers and older brothers had gone to war. Many of these youngsters, Ennis continues, had experienced the war as a period of uprootedness: “Shepherded by women, they moved through strange cities and new schools, with only their teenage scenes in which to make sense of the world.” Elvis, he explains, would become “a lightning rod for all that dislocation and urgent need for identification.” In the period of peace and prosperity that followed—the period in which not only Elvis, but all of rock ’n’ roll evolved—“it is not too extreme an assertion to say that Elvis delegitimated the adults’ command over these kids by making any authority conferred by World War II irrelevant.”32

He certainly represented something new, something that took nothing from the era that had passed, leaving Benny Goodman, among others, washed up as an innovator and industry leader by age forty. All of which may help explain one of the great unsolved mysteries of the last century: why the World War II generation petered out as a cultural influence, handily overpowered by the lithe likes of pop music’s Elvis Presley, fiction’s Holden Caulfield, and movieland’s James Dean.

Maybe more important than these three anti-adult Musketeers is the fact that the postwar period was a time when authority in general, including good, old-fashioned leadership, had a noxiously bad rap. At enormous cost, we had just vanquished the divine monarchy of Imperial Japan and the Nazi killing machine of the Third Reich—enemies characterized by their robotically vicious authoritarianism. In a sense, domestic trends toward more “democratic” modes of child rearing—including the relaxed discipline popularized by Dr. Spock, and “child-centered” schools that canceled the traditional diktat of parent and teacher—were as much the fruits of our victory as was budding democracy in Japan and Germany.33

And then there was the perhaps inevitable progression of modern liberalism itself. Judge Robert H. Bork identifies two leading characteristics of modern liberalism. The first is radical egalitarianism, which he defines as “the equality of outcomes rather than of opportunities.” The second is radical individualism, which he defines as “the drastic reduction of limits to personal gratification.”34 Both creeds shaped the educational doctrines that first emerged in the 1920s and dominated our schools by the 1950s. The doctrine that nonjudgmentally prizes all forms of self-expression equally is perfectly compatible with the judge’s radical egalitarianism: If it’s all supposed to be wonderful, then it all must be wonderful—as if all self-expression is created equal. Meanwhile, child-centered education quickly becomes an exercise devoted to fulfilling the tastes and desires of the young. From this it is no large leap to Judge Bork’s radical individualism, with its emphasis on personal gratification.

As early as 1962, the Hechingers saw the problem with this approach, a genuine “root cause,” if ever there was one. “Trouble came when the sound idea of the ‘child-centered’ school was combined with the permissive doctrine of extreme self-expression,” they wrote. Sounds like radical individualism combined with radical egalitarianism.

From it follows the equation of individualism with selfishness. It is one thing to say that the purpose of the school is to teach the child, but quite another to let the child dominate the school and the curriculum. The early progressives insisted that the curriculum make sense to the child, and that the content of education be adjusted to age, maturity, and comprehension of the pupil. The perversion of this sound doctrine came when this was equated with the child’s likes and dislikes.35

Crew cuts and pressed jeans notwithstanding, the youngsters the Hechingers had in mind were raised to rebel—long before anyone rocked around the clock, said they wanted a revolution, or was born to run. These children may not have meant to rebel, but their parents had left them to the devices of a system that almost completely segregated them from adult influence and guidance, from maturing lessons and the example of restraint, patience, and wisdom.

The impact of junior high school is a case in point. The institution was created for social rather than educational reasons: its founders argued that because early teens have social problems that differ from those of both grammar school children and high school students, they should be separated—protected—from both. This isolated the youngsters, not only exaggerating but also perpetuating their concerns.

The Hechingers write:

In many ways, this is typical of the American interpretation of the teen-age problem. Instead of making adolescence a transition period, necessary and potentially even valuable (if often slightly comical), it began to turn it into a separate way of life to be catered to, exaggerated and extended far beyond its biological duration. Eventually it became a way of life imitated by young and not-so-young adults.

This normalized an abnormality. It gave teen-age an air, not of matter-of-fact necessity, but of special privilege and admiration. Instead of giving teen-agers a sense of growing up, it created the impression that the rest of society had a duty to adjust its way and its standards to teen-culture.36

Again, this was published in 1963, still several years ahead of the quasi-official “youth movement.” The fact is, even before the first clenched fist or painted peace sign, youth had already moved, lock, stock, and barrel, to a place far from Henry Aldrich, Andy Hardy, and their parents, somewhere that ran—that was allowed to run—according to rules adults didn’t make, couldn’t understand, but were increasingly bound by. Early on, anthropologist Margaret Mead picked up on the change—the abdication of the adult. In a book published in 1949, she was quoted as saying: “When mothers cease to say, ‘When I was a girl, I was not allowed…’ and substitute the question, ‘What are the other girls doing?’ something fundamental has happened to the culture.”37 That fundamental happening was that adults, having been launched into their new orbit around children, were suddenly looking to those same children not only for guidance but also approval. In other words, long before the 1960s crack-up, American culture was no longer being driven by the adult behind the wheel; it was being taken for a ride by the kids in the backseat.

Where to? Seventeen magazine, celebrating its seventeenth birthday in 1961, gave its readers a general idea:

When Seventeen was born in 1944, we made one birthday wish: that this magazine would give stature to the teenage years, give teen-agers a sense of identity, of purpose, of belonging. In what kind of world did we make our wish? A world in which teen-agers were the forgotten, the ignored generation. In stores, teen-agers shopped for clothes in adults’ or children’s departments, settling for fashions too old or too young.… They suffered the hundred pains and uncertainties of adolescence in silence.… In 1961, as we blow out the candles on our seventeenth birthday cake, the accent everywhere is on youth. The needs, the wants, even the whims of teen-agers are catered to by almost every major industry. But what is more important, teens themselves have found a sense of direction in a very difficult world.… Around the entire world, they are exerting powerful moral and political pressures. When a girl celebrates her thirteenth birthday today, she knows who she is. She’s a teen-ager—and proud of it.38

Oh, brother. Just wait till she heads off to college, a member of the graduating class of 1970. Of course, what marks this editorial—no doubt written by no teenager—is the laughably spoiled brattiness of it all. The idea that 1944, the year of D-day, could be recalled as a time of forgotten, ignored American teenagers who languished in department stores reveals, as nothing else quite does, just how tightly focused and limited adolescent horizons had become—or, better, how tightly focused and limited adults had allowed them to become.

Meanwhile, just for the record, quite a number of teens, circa 1944, including my dad, had a perfectly healthy “sense of belonging,” all right—to the United States Army. But there is something else that is equally striking about this absurd little teenybopper manifesto: the staggering truth of it. By 1961, the accent everywhere was indeed on youth. It still is. And it’s time to say it’s getting old.