2. THE TWIST
I can hear it now: “It was ever thus.”
As in: The young have always rebelled against the old; it was ever thus. The old have always resented the rebellion of the young; that was ever thus, too. There’s great comfort in such statements. They reassure us that we’re part of one big happy continuum, motes in the sweep of generations, hurtling forward on the force of young rebellion … and there’s absolutely nothing anyone can do about it. Any conceivable action to rein in or modify that rebellion, the theory goes, is as pointless as trying to stop waves on a beach. After all, time and tide wait for no teenager. Another thing about the ever-thus argument: It absolves adults, of generations past and present, of all responsibility for a calamitous situation.
In a review of Huck’s Raft: A History of American Childhood by Steven Mintz, Washington Post book review editor Michael Dirda does a nice job of laying out the ever-thus school of thought when he writes that Mintz’s history reveals both “how much childhood has changed over the centuries and how much some things never change.”
Dirda notes that Cornelia A. P. Comer, a Harvard professor’s wife, complained in The Atlantic Monthly that
today’s youth were selfish, discourteous, lazy, and self-indulgent. Lacking respect for their elders or for common decency, the young were hedonistic, ‘shallow, amusement-seeking creatures,’ whose tastes had been ‘formed by the colored supplements of the Sunday paper,’ and the ‘moving-picture shows.’ The boys were feeble, flippant, and ‘soft’ intellectually, spiritually, and physically. Even worse were the girls, who were brash, loud, and promiscuous with young men. This was published in 1911, but it could be—old-fashioned diction aside—Tom Wolfe inveighing against college freshmen in 2004. I suppose every generation of adults tends to feel, when regarding the young people around them, that the barbarians are at the gates. But really, there’s nothing for us to worry about: One day our children will have children of their own.1
On its face, the ever-thus argument is fairly overwhelming. Like Mrs. Prof. above, adults have long thrown up their hands at “young people today”—“today” being any time since about 1400. But the outrage is gone, and certainly the shock. Adults might throw up their hands reflexively at the berserk doings of young people, but the overriding emotion today is an impotent sense of resignation because, after all, that’s just the way things are and always have been. End of circular argument.
Meanwhile, Tom Wolfe, whom Dirda singles out as the ever-thus voice of adult objection, is just a solitary scold whose roar is drowned out by a legion of critical voices.2 These inveigh not so much against the vacuous promiscuity and drenching vulgarity of the college campus that Wolfe brought to light in I Am Charlotte Simmons, but against Wolfe himself for bringing such things to light in the first place; that is, for making such a big deal about the numbing degradation and pointlessness of dormitory dalliances. Some denounce his reportage as a distortion of the facts; others, as a hackneyed reality. Either way, it is Wolfe’s alarmist judgmentalism that is today deemed antisocial, not the retrograde behaviors that inspire it. Besides, most critics add, it was ever thus.
But was it really? The grab bag of offenses held up at arm’s length in 1911 by the indignant faculty wife sounds familiar nearly a century later, but it also sounds unremarkable, its contents having become acceptable through common practice. Certainly no one from elite-seat Harvard is generating rip-roaring screeds about hedonistic youth these days. In fact—and this would really rock Mrs. Comer—in 2004 Harvard College extended official recognition to an undergraduate sex magazine featuring students posing in the nude. In our time, faculty, and no doubt faculty wives as well, quite expect and even encourage such behaviors as a First Amendment exercise. They endorse them, condone them, and never, ever decry or—horrors—judge them.
This, in and of itself, indicates a clear-cut change between then (and by “then” I mean some point before the middle of the twentieth century) and now. Where there was once mainly indignation, there is now mainly abdication. What remains more or less unchanged is the younger generation’s impulse to break with the past and take charge. But the older generation’s reaction—the essential social counterbalance once provided by adult society—has been knocked completely out of whack.
In 1921, the morality of the “younger generation”—whose few surviving members are centenarians today—was the subject of an extensive survey by The Literary Digest, a leading periodical of the day. Only ten years had passed since Cornelia Comer’s complaint in The Atlantic Monthly, but it had been a decade during which the world had shrunk, shaken, and accelerated through a global war and revolution, poison gas and the assembly line, air mail service, women’s suffrage, and Prohibition. Bringing the machine gun into nations’ arsenals turned international conflict into a new kind of slaughter; removing the corset from women’s wardrobes gave sexuality a new kind of public intimacy. Long skirts vaulted to the kneecap, and long hair was bobbed to the chin. Women flouted convention by coloring their lips with lipstick they carried in handbags, and men flouted the law by sweetening their drinks with liquor from hip flasks. Revolution created the Communist Soviet Union in 1918, and inspired the founding of the Communist Party USA in Chicago the following year. A sense of anarchism was in the air. Soon, the casual lawlessness of an illegal cocktail would spread a newly criminal code of behavior that enervated, rather than salved, a war-burned generation.
“The older generation had certainly pretty well ruined the world before passing it on to us,” wrote John F. Carter, sounding off in 1920, also in The Atlantic Monthly. “They give us this thing, knocked to pieces, leaky, red-hot, threatening to blow up; and then they are surprised that we don’t accept it with the same attitude of pretty, decorous enthusiasm with which they received it, way back in the ’eighties.”3 Such were the sentiments Carter shared with some large number of his contemporaries, more than a million of whom had returned home from European killing fields. Pretty, decorous enthusiasm was out, along with a slew of customs and restraints that had once preserved it, or at least tried to do so. Little wonder The Literary Digest in 1921 was moved to wonder, “Is the Younger Generation in Peril?”
Its answer was yes. Morality among the young was decidedly in decline, this according to a vote of 107–81 by college and university heads and student newspaper editors. Adding in the votes of religious newspaper editors, the tally became a landslide for the existence of moral decline, 202–102.4 More significant than the findings, though, was what the survey indicated about the breadth of concern in the rest of society. Today, morality as a public good—and by “morality” I mean communal notions of decency in relationships and comportment—is valued mainly by conservative religious groups and institutions directly at odds with what we think of as the Establishment, or mainstream culture. By contrast, in 1921, morality was still a concern of groups and institutions that made up the Establishment, or mainstream culture. Big difference. Maybe this, as much as anything else, helps explain why the social explosions that put the roar in the Roaring Twenties did not also blow traditional society to smithereens. It would take the cultural minefields of the 1960s, laid in the 1950s, a decade of even wider and deeper social and cultural change, to do that.
In other words, it’s no surprise that at a time when The Literary Digest was tallying its votes on youth and morality, the Catholic archbishop of the Ohio diocese was warning his flock against “bare female shoulders,” not to mention the shimmy. But it’s a revelation to learn about an Ivy League student editor (at Brown, which would later in the century earn an ultra-flaky left reputation) who entered a protest against “girls who wear too few clothes and require too much ‘petting.’” Or about modesty-in-dressing campaigns launched by college women from the University of Nebraska to Smith College.5 From New York University News in downtown Manhattan came the following moralistic manifesto:
Overlooking the physiological aspects of women’s clothing, there is a strong moral aspect to this laxity of dress. When every dancing step discloses the entire contour of the dancer, it is small wonder that moralists are becoming alarmed. The materials, also, from which women’s evening dresses are made are generally of transparent cobweb. There is a minimum of clothes, and a maximum of cosmetics, head-decorations, fans, and jewelry. It is, indeed, an alarming situation when our twentieth-century debutante comes out arrayed like a South Sea Island savage.6
Young fogies of the world unite.
But it wasn’t just young fogies. A New York Times fashion writer of the day was moved to report that “the American woman … has lifted her skirts far beyond any modest limitation.”7 In Philadelphia, the Dress-Reform Committee, made up of “prominent citizens,” decided to determine what constituted a “moral gown” once and for all. How? The committee consulted 1,160 clergymen of all denominations in and around Philadelphia via questionnaire. In the tradition of dog bites man, “there was far from a unanimous verdict,” The Digest reported. Hemline furor was such that a “group of Episcopal church-women in New York proposed an organization to discourage fashions involving an ‘excess of nudity’ and ‘improper ways of dancing.’” And not just any “churchwomen”; this highly select group included Mrs. J. Pierpont Morgan, Mrs. Borden Harriman, Mrs. Henry Phipps, Mrs. James Roosevelt, and Mrs. E. H. Harriman—Manhattan matrons with money and power, or, as they used to say, “wealth and social position.”8
Such as Caroline Kennedy Schlossberg, Aerin Lauder, and Blaine Trump in our day. Imagine Caroline, Aerin, and Blaine banding together to denounce MTV fashions, rap music, and freak dancing. It’s utterly inconceivable. Society women still raise money for charity—and hooray for them—but they don’t raise standards for anything. It is physical ills alone—cancer, AIDS, battered women, the environment—that drive charitable work in our time. This isn’t to suggest there’s anything wrong with raising money to cure illness and ease penury. Rather, it’s to note a telling difference: unlike in the past, charity work today isn’t driven by moral concerns about modesty, sexual exhibitionism, or the value of marriage.
Meanwhile, it doesn’t appear that Mesdames Morgan, Harriman, et al., pulled skirts back down to earth on their own. As Vogue reported in 1929, it took “Patou of Paris,” who was tired of “ridiculously short dresses,” along with the general gravitational pull of the stock market crash to do that.9 But in their attempts, these pillars of society helped support and maintain the essential social tension that preserves and promotes what was once known as decent, or conventional, behavior. Such efforts actually have the bonus effect of making unconventional behavior that much more thrilling. But that’s another story.
The point is, whenever a sneaking sense of moral decline clutches at us today with the newest assault on the most current notion of propriety, there’s no such buoyant social tension to fall back on; nor is there any agreement on what constitutes propriety, or conventional behavior. We live with a social weightlessness that induces a state of moral freefall or paralysis at any new challenge. Consider the adult reaction to the fad for high school “freak dancing,” that rhythmic, boy-pelvis-to-girl-buttocks pseudointercourse to “music” that makes the shimmy look like a minuet.
Forty miles from Washington, D.C., is the burgeoning if semirural region of Loudoun County, where things went smash, as they did in so many high schools across the country of late, when freak dancing at the prom, according to The Washington Post, left “some chaperones … so offended that they refused to take part again.”
These days, the reaction is commonplace, and it says more about the chaperones than it does about the “dance.” The adults in charge were not so offended by the spectacle of their daughters simulating rear-entry sexual intercourse with their sons that they felt compelled to (1) pull the DJ’s plug, (2) flip on the house lights, and (3) call the town council into emergency session. They were only so offended that they decided to stay home and pull down the shades next time. By withdrawing in protest, they not only ceded their power, they vacated their rightful place in their children’s lives. Just as bad, they did nothing to restore their own dignity, let alone their children’s.
Where were Mrs. Morgan, Harriman, Phipps, et al., when you needed them? Suddenly, the school’s principal was all but alone. In the run-up to the next dance, he asked his students to sign a pledge against bringing drugs, alcohol, and freak dancing into the campus-sponsored event. Mild enough. He didn’t institute a dress code, which would have been appropriate; nor did he prohibit four-letter sex talk on the turntable, which would have made sense. He just asked attendees to observe the nation’s controlled-substance laws, and to “face each other.”
So what happened? The no-freak pledge, as The Post reported, “sparked a student-led protest about freedom and self-expression,” with more than three hundred students signing a petition against this supposed abridgment of student rights. And this made some parents proud of their sons and daughters of liberty. “Civil rights are falling by the wayside every second,” said Laura George, a mother who actually encouraged her daughter—her daughter!—and her classmates to stand up for their “civil rights” to in effect simulate sexual intercourse on the gymnasium floor. “I’ve got to take a stand here for my kids. I’ve got to teach them that you question authority when authority’s gone mad.”10
Here we have society’s guardians: The I-quit chaperone and the Right-to-freak mom; but it’s “Authority” that’s gone mad. Meanwhile, the Freak Three Hundred are learning not to “question authority,” but to defy it as an obstacle to personal gratification—the ultimate goal, as Grace and Fred M. Hechinger wrote forty years ago, of child-centered education. In the lessons of Loudoun Valley High we may see the ABCs of modern liberalism as defined by Robert Bork—radical individualism and radical egalitarianism. The radical individualists of Loudoun Valley High School reject any limits to their desire to freak dance in the name of a radical egalitarian notion that equates sexual exhibitionism with freedom of speech.
Meanwhile, poor, pathetic Mr. Authority-gone-mad—the principal—became so insecure and defensive about his dance instructions that he took to the school’s public address system to reassure the student body that “we’re not going to be the Gestapo” when it came to enforcing them.11 By invoking nothing less barbaric and repugnant than Hitler’s storm troopers, the principal reinforced a corrosive caricature of legitimate authority—in this case, grown-ups who wanted to uphold a minimal standard of decent behavior at a school dance. The kids and the parents cry “First Amendment!,” the principal denies he’s Hitler, and the terms of what passes for modern-day debate are set. Meanwhile, according to The Post coverage, no group of parents emerged to oppose freak dancing; no newspapers editorialized against the grossness of it all; not even a church group spoke out. Which raises the question: Where have all the grown-ups gone? It is a strange and, I would argue, new state of affairs when rebels without a cause face off against reactionaries without a reaction.
Such is the unbearable lightness of being a grown-up today, a condition that belies the ever-thus canard that our social order is no different from the one our parents inherited. It may be natural to cling to this comfy thread of imagined continuity; after all, “it was ever thus” is a whole lot more comforting than “this is something new to the species.” But our society, our “establishment,” no longer has a solid foundation, and rests on random pillars that sway or even fall for lack of communal support. This is not so much a matter of saying, “It was better when…” as of saying, “It was different because…” And it was different because adults used to be repositories of cultural tradition—such as the pretty basic taboo on sexual exhibitionism in the high school gym. As pillars of propriety—even sobriety—today’s adults are shaky at best, wholly unequipped to hold down their end of the once-mighty generation gap.
“Ever-thus” lore has it that such a gap has always isolated teens, permitting them to live a practically tribal life, virtually undisturbed, according to their own rules and appetites. In fact, it has never been “ever thus”—at least not until after World War II. In the wake of war and depression, an unprecedented swell of affluence swamped the family structure, providing children the means to live a social life in common with their peers. This marooned them, in effect, in a world apart from their parents and other adults. The common compass of the past—the urge to grow up and into long pants; to be old enough to dance at the ball (amazingly enough, to the music adults danced to); to assume one’s rights and responsibilities—completely disappeared.
As consumerism became the postwar pastime, and as consumption, particularly consumption of entertainment, became driven by the yearnings of adolescents, the influence of the adult on taste and behavior rapidly diminished. At first, the focus on adolescence seemed to signal the emergence of a new, largely conventional subculture—a club of sorts, with the strictest limits on membership (age), that would eject its expired membership into the adult mainstream. But this did not prove to be the case. How could it? That mainstream had all but dried up.
But first, the fork in the river. In 1944, a little-known grandmother named Helen Valentine effectively launched adolescence with the creation of Seventeen magazine. A marketing visionary convinced of the unexploited bounty of the youth market, Valentine persuaded retailers and manufacturers to target teenage girls for the first time. The significance of this shift is profound. In 2003, the teenage market was a $70 billion-a-year industry, but half a century ago it didn’t even exist. Seventeen gave it life. Profits came quickly; the revolution came later. The fact is, Grandma Valentine’s Seventeen didn’t make many waves. It promoted a vision of youthful desire grounded in personal responsibility, as Grace Palladino explained in her book Teenagers: An American History. By the 1950s, however, a younger generation of businessmen was ready to ditch the personal responsibility baggage in order to concentrate on the youthful-desire part.12
As Palladino pointed out, Eugene Gilbert, one of the more enterprising if unsung young men of the last century, made such desires his business—literally, inventing lucrative marketing strategies that relied on exhaustive and never-before-attempted surveys of teen opinion. In so doing, Gilbert found the key to understanding the adolescent. In a 1959 Harper’s article entitled “Why Today’s Teen-agers Seem So Different,” Gilbert identified the adolescent essence: namely, the “role where he is most distinctly himself as a consumer.”13
Eureka. Fixing the lens of the market on adolescence transformed society’s perception of the teen age, magnifying its social importance to match its financial potential. As the teenager was finding his voice through his pocketbook—almost literally, with the advent of rock ’n’ roll—the adult was losing his predominance and, even more significant, his confidence. Worth noting is that the first generation to lose its collective nerve this way and cede control of the mainstream to up-and-coming “youth culture” was the so-called Greatest Generation, the one that had just won World War II. There is a certain poignance, and even mystery, to the fact that these victors in an epic world war returned home to lose a domestic culture war that would climax in the 1960s. That is, these many millions of men may have returned from Europe and the Pacific to head traditional households and drive postwar prosperity, but they were also put out to pasture culturally in no time. I think of my own dad, just turning forty when the Beatles arrived in 1964, balking at being directed to “nostalgia” bins at Tower Records on Sunset Boulevard in L.A. for any non–rock ’n’ roll popular record albums.
But nostalgia—nostalgia of any kind—is not the issue. Generational power shifted, marking maybe not the end of a way of life, but certainly the end of a way of living—the end of growing up. The shift in confidence, obscured by victory in war and prosperity in peace, is key. For as British art historian Kenneth Clark has noted, it is confidence, above all, from which civilizations rise.14
What kind of civilization, then, has arisen with adolescent confidence? The impact of rock music—the most vivid reflection of teen ascendance—can hardly be overstated. More than the simple addition of a new musical form, the development of rock ’n’ roll (and varied offshoots) has both reflected and stimulated brand-new behaviors, attitudes, and aspirations.
Consider the following turn of events. In 1965, after a Beatles concert ended in mayhem, Cleveland’s mayor banned all rock concerts from public venues, declaring that rock music didn’t contribute to the culture of the city and tended to incite riots. By the 1990s, of course, another Cleveland mayor decided that not only did rock ’n’ roll contribute to the culture of the city, it was the culture of the city; hence Cleveland’s campaign to make itself the site of the new rock “museum.” The shift was tectonic.
Designed by I. M. Pei, the $92-million complex was built to enshrine the memorabilia of a musical movement that, to put it bluntly, brought free love, getting high on drugs, and a reflexive anti-Americanism to the masses. While such practices are not usually associated with civic boosterism, the civic boosters were there on opening day in 1995, government officials and assorted Babbitts, to honor—and to be honored by—Yoko Ono, Little Richard, Jann Wenner, and other oligarchs of the antibourgeoisie.
Everyone smiled. Everyone clapped. Everyone stood for a Woodstock recording of Jimi Hendrix’s amplified assault on “The Star-Spangled Banner” as Marine Corps Harrier jets performed a flyby salute. Middle America cheered, oblivious to the stupendous irony of the moment as the resounding clash of symbols—Amerika-bashers meet America-boosters—failed to arch any eyebrows.
The question is, how did we get from Cleveland, circa 1965, to Cleveland, circa 1995? What happened to that earlier mayor—not to mention the voters who supported him? Many, undoubtedly, simply changed their minds. Those who didn’t found themselves in a cultural no-man’s-land as the battle line between socially acceptable and socially unacceptable shifted and disappeared. Harvard’s Harvey Mansfield has pointed out that the sixties revolution was a revolution in style more than in substance; in taste and manners more than in politics.15 We see the fruits of that revolution, then, not in new political boundaries, but in new cultural boundaries—and perhaps especially in the lack thereof.
The choice of Paul McCartney to win one for the zipper at Super Bowl 2005 is revealing. What adult, back in, say, 1965, could have imagined, as Beatlemania was approaching its anti-Establishment crescendo, that the day would come—and easily within his lifetime—when the American people would applaud Beatle Paul for providing “decent half-time entertainment,” fulfilling a virtual “guarantee he’ll be innocuous,” while not minding “his role as the Super Bowl’s atonement for past excess”?16 (“Past excess” refers to Janet Jackson’s notorious “wardrobe malfunction” of the previous year.) Once, “decent,” “innocuous,” and “atonement” were not the first words associated with young Paul, John, George, or Ringo. Forty years ago, the Fab Four were still combustibly controversial to the barely prevailing middle-class culture. They were still seen as the flying wedge of a rock culture that sundered families and propelled generations along separate tracks. Indeed, the Beatles were rather more likely to be banned from major venues (as they were in Cleveland) than credited with raising the moral tone inside them.
What would help 2005 explain to 1965 the transformation of Paul McCartney from barbarian at the gate to defender of the faith? Even after taking a gander at the 2004 “wardrobe malfunction,” 1965 still wouldn’t really understand 2005’s decision to append the appearance of the Beatle to the appearance of the breast. That is, it’s not enough to shine by comparison. But even if the People We Used to Be acknowledged that the People We Have Become regard Paul McCartney as mainstream-wholesome, it remains very hard to explain why this is so according to the ordinary logic of progress. Sure, at age sixty-five, Paul McCartney is older, and Queen Elizabeth has made him a knight. But it’s worth noting that the songs Sir Paul played to be innocuous and decent early in the twenty-first century were the songs he played to be groovy and cool in the middle of the twentieth. In the 1960s, the reference to “California grass” in “Get Back” was unsavory at best, and downright dangerous, prefiguring the explosion in drug use that destroyed thousands of lives. In 2005, it’s “decent,” which suggests only one thing: He didn’t change; we did. Which has left us at odds with our past.
This is probably where James Dean should come in. Or, better, “the James Dean legend,” the screen persona that typifies adolescent angst—the brooding, surly, feckless animus toward the powers that be, parents, society, whatever. Amazingly enough, it is this hormonal moodiness that revolutionized the culture, undermining not only adult authority but also adult confidence in adult authority.
The legend was born the day Dean died in 1955, age twenty-four, in a car wreck, a few days before the October opening of Rebel Without a Cause, the movie that would give American juvenile delinquency “social significance.” It also gave American psychiatry a poster boy for “troubled youth.” (As Dwight Macdonald would put it in 1958, troubled youth “is never to blame; he just has Problems.”17)
Dean’s was a separatist legend, the story of generational divide that later became known as the “generation gap.” Fifty years later, with the overlapping of adolescent and adult aspirations and interests, that gap has closed—which probably helps explain the surge in parental popularity that shows up in recent polls of young people’s opinions—but at the time, the rift was starting to resemble an open wound.
Aspirations and interests aside, it was money that first cracked things apart. After World War II, Dad was less likely to be the sole breadwinner, an economic development that diminished what it meant to be the head of the family. Many onetime stay-at-home moms who had taken jobs in the war industries—leaving kids home alone where they established newly all-important peer groups—kept right on working in peacetime. Indeed, with the rising cost of 1950s living, both parents often found themselves moonlighting, even as more teenagers were entering the job market. But—and this is significant—no longer did these teens feed their wages into the family kitty as young people in earlier days had done; postwar teens pocketed the money. As their families moved up the economic ladder, postwar teens received hefty allowances. By 1961, Seventeen magazine would describe teenagers as “the most powerful, influential, affluential chunk of the population today.” Glossy hype aside, this was something new, as Seventeen pointed out, writing: “’Twasn’t always thus.”18
A GNP’s worth of new products suddenly appeared on the store shelf, items that had never before existed—45-rpm discs, hairspray, portable record players, transistor radios, Princess extension phones—all from manufacturers busy retooling their industries both to stimulate and to meet the demands of the “youth market.” Such a market was new; the term itself wasn’t heard much before 1947.19
All this new stuff not only satisfied passing teen tastes, it validated and entrenched such immature tastes. It worked like this: If Western Electric manufactured Princess extension phones in “dreamy” colors, then, of course, teens should want, and should have, Princess extension phones in “dreamy” colors. The retail relationship between consumer teens and their consumer dreams effectively derailed the adolescent trajectory toward adulthood, stalling and even blocking the transition to more mature tastes and interests.20
It also broke open the family circle, almost literally. Take the portable record player. Something like six million “kiddie players” were sold in 1952 alone, which meant that something like six million kiddies left Dad’s hi-fi in the family room behind for the kiddie-player privacy of their own rooms.21 There, they could play their own forty-five-cent singles beyond parental earshot and objection. Same goes for that dreamy Princess phone. A phone of her own gave “Teena,” as Seventeen dubbed the prototypical youngster it pitched to advertisers, the privacy and ease with which to conduct a social life of her own. Like the portable record player, the extension phone promoted a kind of generational isolation and freedom that had never before existed when the family phone stood in the front hall or the kitchen, ejecting the young from the parental sphere into a world populated by peers.
Even family-friendly settings became teen-only turf. The drive-in movie theater, originally envisioned as a means by which the whole family could see a picture without hiring a babysitter or disturbing other moviegoers, further segregated the generations as it became dating heaven for unchaperoned youngsters.22 Books for teens, too, reflected—promoted?—the growing solipsism of youth. “Instead of focusing on an adolescent’s future,” Grace Palladino observed, books “began concentrating on the teenager’s everyday life. By the early 1950s, the social world of dating, popularity, and the never-ending search for a boyfriend had replaced more long-term, adult goals like discovering talents and choosing a career.”23
Of course, none of this would have been possible without adult aid and acquiescence. After all, who wrote the books and movies? Who designed and sold the phones? And who really footed the bills? Seventeen magazine hammered the point home to its advertisers: “Teena has money of her own to spend … and what her allowance and pin-money earnings won’t buy, her parents can be counted on to supply. For our girl Teena won’t take no for an answer.…”24
Dear, dear Teena. I guess Seventeen had gone to her head—or Dig, Teen, Teen World, 16 Magazine, Modern Teen, Teen Times, Confidential Teen Romances, or any of the other adolescent mags that sprouted in the 1950s, brimming with advice about grooming, dating, and Mom and Dad. (“God’s plenty in the age of specialization” was Esquire’s comment on the teen-mag explosion.25) More significant than the advice itself, though, was the unprecedented role these magazines had suddenly assumed: “They replaced, tout court, the advice that parents themselves were once expected to give their adolescents,” explains Marcel Danesi, a University of Toronto professor of anthropology and semiotics, in his book Forever Young.26 Little wonder parents were having a hard time making “no” stick; their voices were lost in a roar of competing authorities. Between Dewey and Freud, not to mention Spock, the traditional parameters of child-rearing and education were being redrawn, with command-and-control functions being ceded to institutions, authorities, and theorists outside the home. The 1950s may be remembered as being especially family-oriented, Steven Mintz notes, but “the most important development was the growing influence of ‘extra-familial institutions’ such as schools, media, and the marketplace.” These “fostered separate worlds of childhood and youth … from which parents, and even older siblings, were excluded.”27
Not surprisingly, so were their rules, values, and experience. The counsel of Ingenue magazine is typical: “In a world changing so swiftly, the best-intentioned mother may actually be handicapped rather than helped by misleading memories of her own adolescence in the dark ages of only a few decades ago when life was simpler.”28 Ah, when life was simpler … exactly when was that? The pull of progress is such that every age grapples with new footings, the mastery of which later generations take for granted. I doubt Miss Ingenue really meant Mom’s life was simpler; I think she meant Mom’s rules were stricter. And what better way to undermine discipline—undermine anything—than by declaring it simplistic, naïve, irrelevant, and out-of-date? Not that these “modern” teens clucking over their poor, backward Mas and Pas were what you would call complex or superior. That is, a generation whose collective taste could be summed up as “all James Dean and werewolf stuff” (this, according to teen marketing guru Eugene Gilbert) wasn’t exactly taking that giant evolutionary leap forward.
Whether Gilbert realized it, the two genres—Dean and the werewolves—were much the same. “Werewolf stuff”—the spate of low-budget horror flicks including I Was a Teenage Werewolf (1957), I Was a Teenage Frankenstein (1957), Blood of Dracula (1957)—took the old damsel-in-distress formula and turned it into a teenager-in-torment story. As in Innocent Adolescent suffers at the hands of Evil Adult—a scientist or teacher, or, worse, a science teacher. Or worst of all, a policeman or parent, as in Invasion of the Saucer Men, which also came out in 1957—a boffo year for teen B-moviegoers, just as 1939 was for A-list-loving adults. “James Dean stuff,” meanwhile, may well have been an A-list proposition, featuring such big names as Elia Kazan, Elizabeth Taylor, Raymond Massey, Montgomery Clift, George Stevens, and, of course, Dean himself, but it exploited the very same theme as the teen horror flicks: sincere youth stymied by hypocritical adults. It was the sensitive against the crass, youth against age, child against parent. The movie ads for Rebel posed the question: “What makes him tick … like a bomb?” By 1955, the answer was obvious: his extremely creepy parents. It was the perfect teen movie.
Having brought Dean to stardom in East of Eden, legendary director Elia Kazan found himself uneasy about his unexpected role in creating the Dean legend. “It was a legend I didn’t approve of,” the late Kazan revealed in his 1988 memoir.
Its essence was that all parents were insensitive idiots, who didn’t understand or appreciate their kids and weren’t able to help them. Parents were the enemy. I didn’t like the way [director] Nick Ray showed the parents in “Rebel Without a Cause,” but I’d contributed by the way [actor] Ray Massey was shown in my film. In contrast to these parent figures, all youngsters were supposed to be sensitive and full of “soul.” This didn’t seem true to me. I thought them—Dean, [Dean’s character] “Cal,” and the kid he played in Nick Ray’s film—self-pitying, self-dramatizing, and good-for-nothing.29
Well, maybe so. But ever since Dean, self-pity, self-drama, and even good-for-nothingness have all too often passed for sensitivity and soul. And if Kazan’s description sounds like the teenager next door, upstairs, or his antihero of the week, it’s no wonder the Dean image remains fresh, a cultural touchstone for our time. The rebel without a cause is the Everyman of our age, the pretend maverick against the imaginary machine—the individual against conformity, the free spirit against the bureaucracy. It’s rock culture against the Establishment. It’s indie rock labels against corporate rock labels. It’s the iconoclast against authority, antiglobalists against the WTO, shock jocks against the FCC, freak dancers against “the Gestapo.” And it’s “cool” against “square” every time—the Manichaean split of our time.
Whether American rebels have causes—or just causes for emotion—doesn’t even matter anymore. As the last half century tells us, phony American rebels inspire genuine American rebellions. The rebels even admit as much. After leading the sack of Columbia in the spring of 1968—the rampage through the university, purportedly over its ties to the defense industry and the construction of a gymnasium in Morningside Heights, a poor, black neighborhood—SDS leader Mark Rudd urged Harvard- and Boston-area students to launch their own campus demonstrations, regardless of whether they had “issues.”
Let me tell you. We manufactured the issues. The Institute for Defense Analysis is nothing at Columbia. Just three professors. And the gym issue is bull. It doesn’t mean anything to anybody. I had never been to the gym site before the demonstration began. I didn’t even know how to get there.30
If the rebel without a cause is father to the protestor without an issue, he may also lay claim to every self-anointed maverick whose medium (rebellion) is his message. It is this pose that makes James Dean legendary, not those three movie credits of his. It probably also explains Rebel director Nicholas Ray’s later fascination with 1960s radicals, which drew him back to the United States after ten years abroad. “For Ray, Abbie Hoffman, Jerry Rubin, and Black Panther Bobby Seale, among others, were the ultimate rebels,” Vanity Fair informed its readers as the fiftieth anniversary of Rebel approached, an occasion the magazine deemed worthy of an article equal parts salacious and significant. Added his daughter Nicca Ray: “It was like putting James Dean on trial.”31
Heavy. But she has a point. Dean was long dead, but his screen image had lived on to give adolescent rebellion its now-familiar shape and snarl. Which is a kind of genius, I guess. For even as the predominance of motion pictures was ending in the 1950s, screen-Dean was able to embody—and, in death, to embalm—the giant tantrum that was starting to roil the larger culture, a brave new emotional world where perpetual adolescents would live on in churning opposition not just to adults, but to the idea of adulthood itself.
And, as Seventeen might have put it, “’Twasn’t ever thus.” With the apotheosis of James Dean, almost everything necessary to make the transformation from traditionally adult-oriented society to an adolescent one was in place. And that was long before the Beatles arrived on the American scene in 1964. In fact, the 1960s themselves, while understood as the era of cultural revolution and social change, were in a crucial sense only an epilogue to revolutions and changes that had already taken place in the 1950s. Just as there was “Victorianism before Victoria,” as historian Asa Briggs observed, referring to the moral reformation that began in the eighteenth century under the influence of John Wesley but is associated with the British monarch of the nineteenth century, there was 1960-ism before the 1960s.32 It took place in the 1950s, the decade regarded as rock-stable Republican, cookie-cutter conformist, and stultifyingly bland. The decade may have been all of those things, but it was much more.
A trove of literature survives from the 1950s and early 1960s about the adolescent world that had already taken shape before the authors’ eyes. These include both academic studies, such as James S. Coleman’s The Adolescent Society (1961), and more popular accounts, such as Willard and Marguerite Beecher’s Parents on the Run: The Need for Discipline in Parent-Child and Teacher-Child Relationships (1955), Peter Wyden’s Suburbia’s Coddled Kids (1960), and, of course, the Hechingers’ Teen-Age Tyranny (1963), all of which, in examining what was going on in the schools, the suburbs, and the culture, pinpointed the revolutionary changes.
“The homes of yesteryear were adult-centered. Today we have the child-centered home,” wrote the Beechers in 1955.33
“What worries us is not the greater freedom of youth but rather the abdication of rights and privileges of adults for the convenience of the immature,” wrote the Hechingers in 1963.34
“The mothers and fathers then explored … some of the causes that might be responsible for the obvious lack of brakes on the merry-go-round of their children’s lives,” wrote Peter Wyden in 1960. “Yes, they decided, the neighbors had something to do with it. Yes, and so did the teachers. And the various success-minded and promotion-conscious organizations that ensnare both youngsters and adults in Suburbia. But finally, Mrs. Roediger looked about somewhat defiantly and announced, ‘It’s the parents who don’t say “no”!’”35
Such testimonies might sound like a reason to chalk one up for the ever-thus side of the argument—unless, of course, what the authors above were describing was something new in their own time. As indeed it was. In his landmark study of changing American character, The Lonely Crowd (1950), sociologist David Riesman took careful note of what he perceived as evolving child-centric trends.
Children are more heavily cultivated in their own terms than ever before. But while the educator in earlier eras might use the child’s language to put across an adult message, today the child’s language may be used to put across the advertiser’s and storyteller’s idea of what children are like. No longer is it thought to be the child’s job to understand the adult world as the adult sees it.… Instead, the mass media ask the child to see the world as [the mass media imagines] “the” child … sees it.36
What Riesman observed represented a colossal switch. Not only did cultivating children “in their own terms” cut the child off from the adult world, it accustomed the child to being cut off from the adult world; indeed, it made it unnecessary even to think about the child ever going there. It also drew the adult so deeply into a child’s world that it became hard to leave—a condition that describes many grown-ups today. Consider, for example, contemporary attitudes toward play. “Play has historically been about recreation or preparing children to move into adult roles,” Bryan Page, the chairman of the anthropology department at the University of Miami, told The New York Times. “That whole dynamic has now been reversed. Play has become the primary purpose and value in many adult lives. It now borders on the sacred. From a historical standpoint, that’s entirely backward.”37 From a historical standpoint, that’s also entirely understandable. Once the child-centric approach became the norm, youth culture was where it was at. Where else could adults go—and who would be there anyway?
The most significant expression of youth culture was, and is, rock ’n’ roll. And while the history of rock ’n’ roll is folklorically familiar, there are a few facts worth retrieving from myth’s memory hole. After all, rock endures not simply as a broad musical category, but as the inspiration of a way of life—or, rather, a way to look at life. It is the worldview of the perpetual adolescent who sees constraint and definition as padlocks on self-fulfillment and self-expression, and not as keys to identity—and certainly not as a means to “making a life,” as Lionel Trilling saw them.
Fifty years ago, rock ’n’ roll was still just one pop form among many, a novelty that showbiz believed would probably come and go. Or so the Establishment hoped. Pre–rock ’n’ roll rhythm and blues had—with such tunes as “Sixty Minute Man” (1951) and “Work with Me Annie” (1954)—introduced an unprecedented crudity into popular music, stripping sexual intercourse down to trite rhymes and a backbeat for the AM audience. This made most adults, even staffers at such music industry publications as Cash Box and Billboard, a little squeamish, at least at first.38
They were also unsure of what to make of the new phenomenon. Once the entertainment trade paper Variety had begun reporting on the R&B craze in earnest in the mid-1950s, it reviewed a concert in New York produced by disc jockey cum rock impresario Alan Freed—the “rhythm and blues evangelist,” as Variety dubbed him—who was then just a few years away from being defrocked in a Manhattan grand jury probe into payola. Whether or not Freed brought R&B religion to Variety, he nevertheless brought in major grosses that, as Variety reported, were “bigger than any jazz concert ever staged anywhere in New York.” Which inspired a certain amount of reverence right off the bat.
Indeed, it was that high-yield audience that drew as much of Variety’s attention as the featured acts, which included, among others, The Clovers, Fats Domino, The Drifters, and Red Prysock. “The kids were jumping like crazy in a pandemonium of honking and stomping that continued without intermission from 8 P.M. to 2 A.M.,” the trade publication reported. This, Variety decided initially, made the concert very much an ever-thus event. The “shattering repertoire of whistles, hoots, and mitt-pounding,” it said, was
reminiscent of the days when the kids were lindy-hopping in the aisles of the Paramount Theatre on Broadway when Benny Goodman and his orchestra were swinging there.
Like the swing bands, all the performers introed by Freed were characterized by an insistent, unmistakable beat. Whether instrumental or vocal, the combos based their arrangements on a bedrock repetitive rhythm that seemed to hypnotize kids into one swaying, screaming mass.39
“Reminiscent,” maybe, but there were differences. Old pop was melody-driven, not beat-driven. (Decca Records’ old, almost taunting operating slogan, “Where’s the Melody?” as Variety noted in a separate 1955 story, had effectively become “Where’s the Beat?”) Then there was the makeup of the audience. Rock ’n’ roll concerts drew teenagers, mainly girls. Youngsters had been a huge slice of the big band audience, but adults were there, too, adding up to what Grace Palladino has called “a generational mix that had insured a certain civility on stage.” While there just might have been something else besides the presence of adults in the audience that kept, say, Benny Goodman from smashing his clarinet to bits as an encore, their presence explains why, during a 1945 performance with the Tommy Dorsey band, Frank Sinatra would tell the noisy youngsters in the audience “to keep quiet, there are other folks in the house.” If prerock pop music played for adults and youngsters alike, it played for them all according to adult rules.40
After the war, both the adults and their rules began to disappear. By the middle 1950s, to borrow Sinatra’s line, there were not “other folks” in the house—and, equally significant, it didn’t matter. “The bulk of the audience seem [sic] to be girls under sixteen years of age,” Variety reported in a review typical of the new rock scene on August 24, 1955. “They shrieked at virtually anything as though everything that transpires has hidden meanings that they alone understand, and from the squeals that go on, it’s pretty evident in what direction they lie.”41
Variety was hinting, of course, at sex. And more than hinting at it. While the write-up of a revue that included Charlie & Ray, Bo Diddley, and Captain Lightfoot still harkened back to old King-of-Swing crowds to do descriptive justice to ebullient rock audiences, Variety, like the other music trade publications Cash Box and Billboard, was beginning to realize that the grounds for comparison stopped there.
There is little doubt that this kind of entertainment isn’t the healthiest for youngsters.… Swing … never had the moral threat of rock ’n’ roll which is founded on an unabashed pitch for sex. Every note and vocal nuance is aimed in that direction and, according to the makeup of the present bill, should normal approaches fail to entice box offices in the future, there’s the AC-DC set to fall back on.42
Institutionally, Variety was uneasy about this musical turn of events—this “moral threat.” In an extraordinary front-page editorial published on February 23, 1955, entitled “A Warning to the Music Business,” Variety spelled out the reasons why:
Music leer-ics are touching new lows and if the fast-buck songsmiths and musicmakers are incapable of social responsibility and self-restraint then regulation—policing, if you will—will have to come from more responsible sources.
This opening line—opening salvo, really—represented the considered opinion of the bible of the entertainment business. It was deliberate; it was confident. It did not mince words.
What are we talking about here? We’re talking about “rock and roll,” about “hug” and “squeeze,” and kindred euphemisms which are attempting a total breakdown of all reticences about sex.
The attempted breakdown itself wasn’t new; what was new was its prominent place in the culture.
In the past such material was common enough but restricted to special places and out-and-out barrelhouses.
In other words, out of earshot of Teena and Junior, not to mention Mom and Dad.
Today “leer-ics” are offered as standard popular music for general consumption, including consumption by teenagers.… The most casual look at the current crop of “lyrics” must tell even the most naive that dirty postcards have been translated into songs. Compared to some of the language that loosely passes for song “lyrics” today, the “pool-table papa” and “jellyroll” terminology of yesteryear is polite palaver.
And here Variety repeats its salient point of outrage:
Only difference is that this sort of lyric then was off in a corner by itself. It was the music underworld—not the main stream.43
This distinction is significant. According to the showbiz chronicle, Old Man Mainstream didn’t just keep rollin’ along, it was abruptly changing course, flooded by currents from the “music underworld.” Variety doesn’t seem to have been balking at a new entertainment form so much as it was balking at the movement of an old entertainment form (crude sex ditties) from a place in the shadows to a place in the sun—from “out-and-out barrelhouses,” whatever they were, to the Top 40. It was this pollution of the mainstream, more than the source of the pollution itself, that was of institutional concern. And as such it was something new.
Or was it? Long ago, the ever-thus argument goes, the emergence of jazz as a popular form, which dates back to the 1911 success of Irving Berlin’s “Alexander’s Ragtime Band,” aroused similar antipathies and fears. In 1926, Paul Whiteman, an early jazz celebrity and impresario, published a memoir-slash-rumination called Jazz that cataloged some of these concerns. Whiteman is best remembered—when he is remembered at all—for staging the 1924 concert at Aeolian Hall that introduced the public to George Gershwin’s “Rhapsody in Blue”; he also presided over a popular symphonic dance band that showcased the tender young likes of Bix Beiderbecke, Bing Crosby, Johnny Mercer, Frankie Trumbauer, Eddie Lang, Joe Venuti, Jack Teagarden, and Hoagy Carmichael. In his book, Whiteman explains he has been keeping a clip file on jazz alarmists for the previous five years. “Whenever I feel blue, I take it out,” he said.
It is more enlivening than a vaudeville show. Ministers, club women, teachers and parents have been seeing in jazz a menace to the youth of the nation ever since the word came into general use.
… “Jazz music causes drunkenness,” one despatch [sic] quotes Dr. E. Elliott Rawlings of New York as saying.
… The jazz spirit of the times was blamed by Dr. Harry M. Warren, president of the Save-a-Life League, in his 1924 report, for many of the fifteen thousand suicides in the United States.
… “The jazz band view of life is wrecking the American home,” declared Professor Herman Derry, speaking in Detroit, Michigan.
Dr. Florence H. Richards, medical director of the William Penn High School for Girls, Philadelphia, based her opposition to jazz on a long and careful study of the reactions of 3,800 girls to that kind of music.
“The objection of the physician,” she explains, “is the effect that jazz has on certain human emotions.… If we permit our boys and girls to be exposed indefinitely to the pernicious influence, the harm that will result may tear to pieces our whole social fabric.”44
Drunkenness, home-wrecking, suicide, social chaos: The Babbittry attributed some pretty awful stuff to brass riffs, syncopation, and the piano pounders of Tin Pan Alley—who, just as Whiteman was writing, were poised to usher in what is even now remembered as the golden age of the American popular song, the form perfected by Jerome Kern, Irving Berlin, Cole Porter, Rodgers and Hart, Dietz and Schwartz, Harold Arlen, and others.
When rock ’n’ roll emerged a few decades later, a similar cast of professionals, politicians, church leaders, and parents would chorus their disapproval. The New York Times interviewed “noted psychiatrist” Dr. Francis J. Braceland, who, after a brawl at a rock concert in 1956, called the music “cannibalistic and tribalistic,” and a “communicable disease.” Also in 1956, Time magazine interviewed psychologists who saw in rock-generated hysteria “a passing resemblance to Hitler’s mass meetings.” Said an Oakland, California, policeman, after watching Elvis Presley perform: “If he did that in the street, we’d arrest him.” But more outspoken than anyone was Frank Sinatra, who, at the height of Presley’s reign as “the King,” said rock ’n’ roll is “the most brutal, ugly, degenerate, vicious form of expression it has been my displeasure to hear.”45,46
Rock inspired a more heated invective than jazz, a fearful emotional intensity that surpassed even the gravely bombastic concerns of the previous pop era. Could it be that in the intervening decades a more passionate mode of expression had evolved due to that ol’ devil loosening effect the Whiteman clip file warned against? Maybe the cranks weren’t really that far off the mark to begin with. Over the top, sure, and ripe for parody. But it is an observable fact that jazz and rock ’n’ roll both have been keys to emotional release—the loosening of strictures, an increasing deference to passion and quest for ecstasy—that has liberated an American personality distinctly different from that which came before. Like all innovations, this is both good and bad. Along with creative vibrancy comes destructive abandon; with emotional exploration comes self-absorption; with musical evolution comes musical devolution. Music soothes the savage beast—some music, anyway—but it also stirs the settled mind, arousing appetites and passions that past civilizations more often than not hoped to restrain, not unleash. Or at least direct through social channels—namely, the precision of a marching band, the decorum of a box seat, the intricacies of choreography—that prevented the communal musical experience from becoming the hedonistic bacchanalia that the modern-day rock concert, à la Woodstock, would come to epitomize. An almost atavistic concern for social order lay behind the fears of jazz and rock in the more straitlaced among us; while derided and dismissed, such fears were never entirely irrational.
This isn’t to suggest that the advent of American pop, circa 1920, and the advent of American pop, circa 1950, were twin events with identical effects. And not just because of the obvious musical differences in melodic, harmonic, and lyric competence and complexity. What matters more in this case are the striking distinctions between a pop culture oriented toward adults and a rock culture oriented toward youth. This may seem like a tricky argument to make, implying, as it does, that one form of loosening, or devolution, is okay, and one form is not; that Dr. Florence Richards was wrong in 1926 but Dr. Francis J. Braceland was right in 1956. But while the comparison may be subjective, it’s still revealing.
With almost a century since the beginnings of jazz, and a good fifty years since the start of rock ’n’ roll, the hard, nonsubjective evidence is in, and it comes down to this: People who listen to Jerome Kern (Ethel Merman, Duke Ellington, or The Hi-Lo’s) don’t want to freak dance; people who listen to Snoop Dogg (Linkin Park, U2, or 50 Cent) don’t want to dance like Fred Astaire.
This is no small thing, no mere preference akin to a taste for, say, wheat bread over rye. It’s an expression of culture clash separating one way of life from another. One mainstream expresses an ideal that draws on a longing-to-loving spectrum of human emotion related to romantic love; the brutish other wants to mash the male pelvis into the female buttocks—or thinks it’s okay, or tries not to think about it. One public sees “a new sun up in a new sky” (Dietz and Schwartz), and the other wants to “do it in the road” (Lennon and McCartney)—or thinks it’s okay, or tries not to think about it. This is no way to keep the toxins out of the mainstream that supports society’s cultural health.
Here’s a nifty culture-health-check story: One thing my dad found himself thinking about in his last years was a kid he served with in the army during World War II, a guy from New Jersey, eighteen or nineteen years old. One day, kidding around, this young GI started to dance my dad, a guy from Brooklyn, also eighteen or nineteen years old, around the barracks, singing “Cheek to Cheek”—a perfect, if quite complex and unconventional, standard by Irving Berlin that had been introduced eight years earlier by Fred Astaire in Top Hat. Now consigned to the rarefied strata of “cabaret,” this was the music of the enlisted man in 1943.
Again, such a fact is no small matter. It’s not for nothing that Plato tied the character of a society to its music, or that Shakespeare told us to “mark the music” to understand a man. After all, people who hum Berlin or Arlen or Gershwin think they want to fall in love; people who hum (hum?) Mötley Crüe or the Ying Yang Twins think they want to have sex. People who listen to Mel Tormé (Nat Cole, Bing Crosby, or Ella Fitzgerald) don’t want to pierce their tongues; people who listen to Eminem (Alanis Morissette, Kurt Cobain, or Public Enemy) don’t want to pin on an orchid corsage. If the American popular song could idealize romantic love to a fault, rock ’n’ roll degrades physical couplings to new lows—destroying not just the language of love and romance, but also the meaning of love and romance. And, I would sadly add, our capacity to experience both. The fact is, between a world in which romantic love is the ideal and a world where nonmarital sex is the goal lies a vast cultural chasm. And not simply in terms of aesthetics. There are salient differences between a civilization that sings of romantic love and marriage (“Have You Met Miss Jones?”), and a civilization that sings of lust and one-night stands (“[I Can’t Get No] Satisfaction”). More than just the year has changed between 1937, when George and Ira Gershwin’s “They Can’t Take That Away from Me” was a hit,
We may never, never meet again
On the bumpy road to love …
and 1987, when George Michael’s “I Want Your Sex” was a hit,
Don’t you think it’s time you had sex with me
Sex with me
Sex with me.…
In examining the impact of Judeo-Christian law on sexuality, columnist Dennis Prager inadvertently adds a significant religious and historical dimension to the comparison of pop and rock worlds: What we know as romantic love, which aspires to monogamous marriage, builds civilization up; what we know as free love, which aspires to a polymorphous sex life, keeps it down.
It is not overstated to say that the Torah’s prohibition of nonmarital sex made the creation of Western civilization possible. Societies that did not place boundaries around sexuality were stymied in their development. The subsequent dominance of the Western world can largely be attributed to the sexual revolution initiated by Judaism, and later carried forward by Christianity.
The revolution consisted of forcing the sexual genie back into the marital bottle. It ensured that sex no longer dominated society, heightened male-female love and sexuality (and thereby almost alone created the possibility of love and eroticism within marriage), and began the arduous task of elevating the status of women. [Emphasis added.]47
Sounds as if the emergence of monogamy five thousand years ago, and not the invention of the Pill in the 1960s, was the real sexual revolution. Having deliberately uncorked Prager’s “marital bottle,” society is once again dominated by sexuality, drenched in sexual imagery, and gagged by innuendo. From James Bond to Carl’s Jr., from beer to jeans, from cars to computer servers, sex will sell it. And we will buy it—even, unbelievably, parents among us who purchase “pimp” and “ho” Halloween costumes for their trick-or-treaters. Picking up on the incessant sexual-messaging from the media that has turned our talk into one-track babble, freelance writer Sheryl Van der Leun cataloged the results: Martha Stewart Living is “homemaker porn” (CNN People); good fishing places are “bare-naked fishing porn” (Men’s Journal); a British gadget magazine is “pure technoporn” (Digital Living Today); and gourmet recipes may be found at www.foodporn.com.48 From cineplex sex flicks to checkout stand sex tips, we are now media-bathed in a red-light-district glow of sexual suggestiveness, as though there is no other light to show the way.
Worth marking is the prescience of Variety’s ink-stained wretches, who, without benefit of a crystal ball and writing in their showbiz-ese, instinctively recoiled at the mainstreaming of sexed-up pop. At the same time, though, there was little indication in 1955 that rock ’n’ roll had staying power, that it would even surpass, say, the decade-long run of the big bands, whose heyday had ended by 1946. There was certainly no indication that it would become, in its varied permutations, an all-enveloping form that would still dominate the popular arts fifty years later. Maybe without quite knowing why, Variety drew a line on the culture map. The adult voice of industry experience and tradition was telling the music biz to clean up its act, or else.
For the music men—publishers and diskeries—to say “that’s what the kids want” and “that’s the only thing that sells nowadays,” is akin to condoning publication of back-fence language. Earthy dialog may belong in “art novels” but mass media have tremendous obligation. If they forget, they’ll hear from authority.49
Fifty years later, it’s that “they’ll hear from authority” threat that’s so interesting. Variety may not have held out much hope for the collective conscience of the music biz to honor its “tremendous obligation,” but it professed an unflappable, and even serene, confidence in what it described as “the Governmental and religious lightning that is sure to strike.” Don’t say we didn’t warn you, the paper said, tucking its head and bracing for the heavy barrage of Establishment artillery; the attack is coming. This ultimatum is now almost touching to read, based as it is on a guileless belief in the presto-restorative powers of men of the stump and cloth. The grown-ups were coming to the rescue. It’s a bit like watching a little kid brag about the big brother you know will never show for the fight. The fact is, no such institutional bolts from the mainstream blue ever hurtled to Hollywood to wipe the leer off the face of rock ’n’ roll.
That’s not to say there wasn’t Sunday sermonizing against “leer-ics” and their deleterious effects on everything from American womanhood to the space race. A censorship movement of sorts even got off the ground, briefly, pushed by such groups as the National Piano Tuners Association, the National Ballroom Operators Association, and the Catholic Church. Meanwhile, Congress drafted interstate commerce legislation aimed at prohibiting the transfer of “obscene, lewd, lascivious, or filthy publication, picture, disc, transcription, or other article capable of producing sound” across state lines.50
The legislation didn’t pass. It wasn’t long before the tuners, the operators, and the Catholics went back to their pianos, ballrooms, and churches, leaving the music industry to go about maximizing profits. This retreat, writes Glenn C. Altschuler, was “due, to a great extent, to the sanitizing of songs, but it was a response as well to the emergence of rock ’n’ roll as a mass culture phenomenon.”51 Not to mention a mass culture moneymaker.
Despite pockets of resistance—voices that grew increasingly shrill as they grew increasingly irrelevant—the public that warmed to the front-burner sexuality of rock ’n’ roll in the 1950s was very different from the public that had once actually turned its back on a comparably torrid phenomenon: sex-scandal-ridden Hollywood in the silent era. “Many fans boycotted anything that came out of that slough,” writes A. Scott Berg of the period in Goldwyn. After state legislators introduced nearly one hundred censorship bills in thirty-seven states in 1921 alone, moviemakers feared their nascent industry would die unless they could somehow regain public confidence. “Hollywood decided to clean house,” writes Berg. This meant recruiting someone from outside the industry to regulate its activities. Having settled on Will H. Hays, an Indiana lawyer, former Republican national chairman and then-current postmaster general, the heads of the largest movie companies—Samuel Goldwyn and Louis B. Mayer among them—petitioned Hays to ask President Harding to relieve him of his government duties so he might head up a “national association of motion picture producers and distributors.”52
Decades later, it’s hard to imagine anybody at Decca, Columbia, RCA—or any of the other “major diskers”—petitioning President Eisenhower to relieve an administration stalwart of his duties in order to save the music industry from itself. Unlike in 1921, there was no censorship legislation on the table in dozens of states. On the contrary, there were increasing numbers of record buyers. By the 1950s, the music industry could afford to be indifferent.
Still, there were other concerns. In another round of hearings in 1958, Congress investigated the use of the public airwaves by government-licensed broadcasters to promote their own privately manufactured product—in this case, music. This time around, Congress kept its hands off the sticky, mucky cultural questions of public obscenity or cultural decline, confining itself to a more or less sterile legal analysis. Which isn’t to say the sticky, mucky cultural questions didn’t arise. The hearings drew the participation, in person and in printed testimony, of many prominent composers, producers, and performers of the day, and they weren’t getting involved for the sake of a legal point. Samuel Barber, George Jessel, Leo Robin, Dean Martin, Groucho Marx, Ira Gershwin, Oscar Hammerstein, W. C. Handy, Clarence Derwent, Harry Ruby, Johnny Green, Burton Lane, Mrs. Sigmund Romberg, Mrs. Fiorello LaGuardia, Morton Gould, Yip Harburg, Jimmy McHugh, Lillian Gish, Howard Lindsay, Sammy Fain, Richard Rodgers, Leonard Bernstein, Harpo Marx, Alan Jay Lerner, Tony Martin, Aaron Copland, and other entertainment industry notables weighed in with Congress to express their conviction, in effect, that the culture they had created wasn’t just slipping away, it was being yanked.53
The legal point of contention was this: Should Congress pass a law to separate radio and television broadcasters from the music publishing and manufacturing businesses that they owned? Since the broadcasters were licensed by the government, they were trustees of public property; the argument against them charged that they shouldn’t be allowed to use public airwaves to promote and sell the music they also published and manufactured. Should CBS and NBC, for example, be allowed to use the public airwaves they leased to promote the songs and records produced by their own subsidiary companies, Columbia Records and RCA Victor Records? Should the radio broadcasters be allowed to play the music catalog of Broadcast Music Incorporated (BMI), the song-licensing group the radio broadcasters themselves owned?
Not incidentally—and here we come to the culture question—the BMI catalog was mainly rock ’n’ roll and country music. This pitted the broadcasters and their music against the American Society of Composers, Authors and Publishers (ASCAP), home of the American pop standard. ASCAP charged that an unfair concentration of power—a financially interlocking network of broadcasters, disc jockeys, publishers, and networks—was purposefully keeping ASCAP music from the airwaves. But there was something else. ASCAP’s underlying complaint—echoed by the wider arts establishment—wasn’t only that the broadcasters were abusing the public airwaves by shutting out ASCAP songs in favor of BMI songs: It also believed the broadcasters were unfairly boosting the popularity of rock ’n’ roll at the expense of “good music,” thereby undermining musical taste generally. And not just musical taste: According to a slew of medical, theological, and educational witnesses who came before Congress in 1958, the six other lively arts were in jeopardy as well, not to mention the Constitution, Mom, and apple pie.
In the end, Congress was unmoved by the ASCAP lament, legal or cultural. But, before that, what a moment: Establishment America—the heart of the arts, an elite chunk of academia, and a major slice of our representative political body—was seriously debating musical devolution and cultural decline as if they were really happening and as if they really mattered. Their voices didn’t prevail, but what they said wasn’t distorted by a late-night culture of irony and ridicule, either. That’s because so much of “them” was still so much of “us.” In other words, Establishment culture had not yet gone countercultural. This is a big difference between then and now. Within a decade, as rock historian Philip H. Ennis has pointed out, the culture war between ASCAP and BMI had ended, and “almost all those prestigious art leaders [and ASCAP boosters] would be replaced by a new set of faces and voices, who were only too happy to meet the rock stars of the day.”54 It’s worth remembering 1958 as a year in which the arts and academia still thought there was something that could, and should, be done to stop the changing of the cultural guard.
This changing of the cultural guard is exactly how the ASCAP-BMI clash should be seen. One kind of music—ASCAP’s pop standard born of the European melodic tradition, African rhythms, and New York wit and energy—was being supplanted by another kind of music: the BMI rock sounds emerging from the heartland strains of folk, rhythm and blues, and country music. And Congress wasn’t the only place to prove the point. In 1953, a core group of ASCAP members led by songwriter par excellence Arthur Schwartz (“Dancing in the Dark,” “That’s Entertainment”) filed an antitrust suit against BMI. Asking for damages to the extremely loud tune of $150 million, Schwartz, himself a Columbia-trained lawyer, and thirty-two other musical giants—including composer Samuel Barber (Adagio for Strings), lyricist Ira Gershwin (everything with brother George), lyricist Dorothy Fields (“The Way You Look Tonight,” “[This Is] A Fine Romance”), Broadway and Hollywood composer Victor Young (“Stella by Starlight,” score for The Quiet Man), lyricist Alan Jay Lerner (Brigadoon, My Fair Lady), composer and impresario Gian Carlo Menotti (The Consul, founder of Italy’s Spoleto Festival), and composer and critic Virgil Thomson—charged, in effect, that rock ’n’ roll tastes were being cultivated in the radio-listening public at the expense of ASCAP composers by BMI-affiliated radio stations that unfairly “created” BMI hits through frequent airplay. Schwartz v. BMI would wend its way through the legal system for the better part of the next two decades—in true Dickensian fashion even outliving some of its principals—before being dismissed with prejudice in 1971.55
Whether there was merit to the ASCAP composers’ allegations seems less important in retrospect than the question of whether more airplay of ASCAP standards—even a great deal more airplay—could have changed the cultural history of the 1950s. Could the magnetic forces in play have been neutralized simply by more Berlin, Jerome Kern, Cole Porter, and others (including Arthur Schwartz)? Even the plain fact, to begin with, that the songwriters had sought recourse in the courts and Congress over a matter of public taste and manners suggests how far the culture had already moved, from an ASCAP world—a place ordered by adult taste and behavior—to BMI land, a new landscape shaped by adolescent taste and behavior. Legal and hypothetical questions aside, this forgotten clash of the entertainment titans indicates the extent to which the culture war over rock ’n’ roll was effectively over before the 1960s—the time we usually think it began.
Rosemary Clooney, someone who found stardom in the old ASCAP world, knew things were different in 1959 when she came across a headline in a local newspaper about an upcoming performance at the state fair in her native Kentucky.
FABIAN, NEW TO MUSIC, HELPS STIR INTEREST IN FAIR’S CLOONEY SHOW
I’d never expected to need a teen sensation to attract people to my show. I sang the hits people expected to hear: “Come On-a My House,” “Botcha Me,” “This Old House,” and, for balance, “Tenderly” and “Hey There.” The audience applauded warmly. But when sixteen-year-old Fabian took the mike, fans ran screaming down the aisles to the stage.
Fabian was cast, almost literally, in the teen idol mold: a critic called him “the star who was made, not born,” a “musical Frankenstein” created by showbiz hustlers to pander to the tastes of teenage girls. Noted for his youth and looks, he was a lightweight piece of photogenic flotsam. But he was carried along on a powerful turning tide, and I was caught in the undertow.
There had always been various currents flowing through American pop music: “race” music and “hillbilly” music—politely recast as “rhythm & blues” and “country and western”—as well as the conventional pop that I’d come of age with. But beneath the glassy, placid surface of the 50s, it all began to come together in a turbulent stream of beat, sound and national mood. The timing was right as never before: A new generation of teenagers with a whole new kind of influence was coming along. When I was a kid, we listened to grown-up music and bought grown-up records, the only records there were [emphasis added]. But unlike my generation or those before me, these kids had their own money to spend. That meant that they had their own market, for the first time in popular music.
… I knew only one way to sing a song: The words had to mean something, and you had to be sure you knew what they meant before you started to sing. Then you had to hit the note and hit it true. As a singer, I couldn’t have been less like Fabian if I tried. The review of the Kentucky State Fair called my performance “everything that rock ’n’ roll is not.”
That was meant as a compliment, but it also spelled a certain kind of doom—because the rock wave was cresting, about to break, and when it did, it would wash my kind of music right out of the mainstream.56
Clooney was thirty-one in 1959, and would be named Female Vocalist of the Year. Columbia, however, did not renew her contract—for reasons unrelated to Fabian frenzy, she wrote—but she didn’t sign with another label, either. Freelancing some, she was soon making commercials for Remington Roll-a-Matic shavers (“Buy one and get a free record, Music to Shave By”). “Within a decade,” she wrote, “nobody would be able to get a contract to record the kind of music I understood and loved: not Frank, who’d take a six-year hiatus; not Bing, who’d sign with a British label because he couldn’t find one stateside.”57 Frank was in his mid-forties; Bing was fifty-five. In his mid-thirties, Mel Tormé considered leaving music altogether and becoming an airline pilot. At forty-three in 1960, Nat Cole could get a standing ovation at the Sands for singing “Mr. Cole Won’t Rock and Roll,” but that was a last hurrah. Meanwhile, Benny Goodman hadn’t worked full time since he was in his early forties.
The baton had passed, all right. But the extent to which it was ripped from the old guard’s hands may be illustrated by playing the children’s game Mad Libs with the preceding paragraph and inserting modern names into Clooney’s litany of cultural displacement. Like this: In 2006, neither Snoop Dogg, thirty-five, nor Eminem, thirty-four, could get themselves recording contracts, and soon were making commercials for shavers (“Buy one and get Rap to Shave By”); Bono, forty-six, would take a six-year hiatus. Bruce Springsteen, fifty-seven, would sign with a British label, because he couldn’t find one stateside. In later years, he would gratefully enjoy a comeback making TV Christmas specials.
Just as Clooney predicted, a cultural tsunami had washed out the mainstream, producing a shift in taste and behavior that was remarkable for more than purely personal reasons. The culture changed beyond recognition, and adults could no longer find their way—unless, that is, they were the kind of adults who knew where the Peppermint Lounge was in New York City. There, in 1961, a very interesting thing happened on the way to the death of the grown-up.
It had to do with the Twist (as introduced by Chubby Checker and Joey Dee), which became a high-society dance craze in New York. Chronicled and photographed by Vogue, the Twist was suddenly everywhere, from New York Mayor Robert Wagner’s victory ball at the Astor Hotel, to a Metropolitan Museum of Art benefit for its Costume Institute—whose director, The New York Times reported, “shook with dismay” when he discovered what was going on. According to Gay Talese, then a New York Times reporter:
Members of Cafe Society approached Joey Dee with reverence, and one imperial gentleman, straight-spined in a tuxedo, hesitated before asking, “Joey, may I please have your autograph?” Many others begged Joey Dee to continue his music, even if it meant holding up the rest of the show, which included dinner and a parade of historic costumes from the museum’s collection.58
The poor Met director only knew the half of it. The amazing thing about the success of the Twist was that the dance ditty had already taken its turn on the “sub-teen” singles charts in the late 1950s, shooting briefly to number one on Billboard’s “Hot 100” before sinking down and out without ado. “By January, 1961,” the Hechingers reported, citing Billboard magazine, “the same record had made it to the top spot a second time, its return performance the result of the adult craze. The adults had taken over where the sub-teens left off, according to Billboard’s research director, a ‘first’ in the record market.”59
It was a “first” in more ways than one. The subject of various analyses at the time, the success of the Twist inspired Chubby Checker himself to write up an explanation of the craze called “How Adults Stole the Twist.” (He said he was “really dumbfounded” about the whole thing.) Leave it to the Hechingers to home in on the essential grown-up/teenager connection: “Whatever the deep psychological reasons,” they wrote, “the Twist and its history points up the new trend of society: instead of youth growing up, adults are sliding down.”60
This was a twist, indeed.