Chapter 8

The Disappearing Child

To this point, my efforts have been directed at describing how the symbolic arena in which a society conducts itself will make childhood either necessary or irrelevant. I have, in particular, tried to explain how our new and revolutionary media are causing the expulsion of childhood after its long sojourn in Western civilization. It remains for me to put forward some of the direct evidence that this expulsion is indeed well under way.

The evidence for the disappearance of childhood comes in several varieties and from different sources. There is, for example, the evidence displayed by the media themselves, for they not only promote the unseating of childhood through their form and context but reflect its decline in their content. There is evidence to be seen in the merging of the taste and style of children and adults, as well as in the changing perspectives of relevant social institutions such as the law, the schools, and sports. And there is evidence of the “hard” variety—figures about alcoholism, drug use, sexual activity, crime, etc., that imply a fading distinction between childhood and adulthood. However, before presenting or pointing to any of it, I am obliged to acknowledge that the conjecture advanced in this book as to why this is happening cannot be proved, no matter how much evidence is marshaled in its favor. This is so not only because conjectures or theories can never be proved, even in the physical sciences, but also because in any effort at social science the very idea of proof or refutation is so encrusted with ambiguities and complexities that one can never be sure if the evidence has left a conjecture standing or has laid it low or is just plain irrelevant.

To illustrate: It has been claimed that the onset of puberty in females has been falling by about four months per decade for the past one hundred and thirty years, so that, for example, in 1900 the average age at which menstruation first occurred was approximately fourteen years, whereas in 1979 the average age was twelve years.1 I rather fancy this statistic because, if true, it suggests that the contraction of childhood began to occur even in physiological terms shortly after the invention of the telegraph; that is, there is an almost perfect coincidence of the falling age of puberty and the communications revolution. I should therefore love to offer this as evidence in favor of my argument, but I rather think there are better explanations available, particularly those having to do with changes in diet.

To take another example: It is a certainty that the American household is shrinking. Today, there are only 2.8 persons per household, as compared to 4.1 in 1930. Or to look at it from another direction, in 1950, 10.9 percent of American households had only one person in them. Today, the figure is 22 percent.2 Americans are not only having fewer children but apparently are spending less time nurturing them at home. Is this an effect of our changing communication environment? I believe it is, but one would be foolish to deny the contribution of other factors such as the increased affluence of Americans, their incredible mobility, the women’s liberation movement, etc. In other words, as in this example, not only may there be multiple causation but, as in the first example, there well may be other theories to explain the facts. After all, in trying to account for changes in social organization or, indeed, for any cultural tendencies, there are many points from which one may embark. Marxists and Freudians, for example, would have ready explanations as to why childhood is disappearing, assuming that they agreed the evidence shows it is. Sociobiologists, anthropologists and—who knows?—perhaps even Scientific Creationists will not find themselves dry on the issue either. I have chosen the explanation offered in this book because insofar as any single perspective can be said to be tenable, this one best explains the facts. Indeed, nothing seems more obvious to me than that childhood is a function of what a culture needs to communicate and the means it has to do so. Although economics, politics, ideology, religion, and other factors affect the course of childhood—make it more or less important—they cannot create it or expunge it. Only literacy by its presence or absence has that power. I shall not, however, reargue this idea here. I wish only to say that I believe the idea is plausible, that it has at least a modest recommendation from the facts of history, and that it is supportable by present trends. The purpose of this chapter is to show that childhood is disappearing. After considering the evidence, the reader, inevitably, will decide if my theory is useful.

I should like to start, then, by calling attention to the fact that children have virtually disappeared from the media, especially from television. (There is absolutely no sign of them on radio or records, but their disappearance from television is more revealing.) I do not mean, of course, that people who are young in years cannot be seen. I mean that when they are shown, they are depicted as miniature adults in the manner of thirteenth- and fourteenth-century paintings. We might call this condition the Gary Coleman Phenomenon, by which I mean that an attentive viewer of situation comedies, soap operas, or any other popular TV format will notice that the children on such shows do not differ significantly in their interests, language, dress, or sexuality from the adults on the same shows.

Having said this, I must concede that the popular arts have rarely depicted children in an authentic manner. We have only to think of some of the great child stars of films, such as Shirley Temple, Jackie Coogan, Jackie Cooper, Margaret O’Brien, and the harmless ruffians of the Our Gang comedies, to realize that cinema representations of the character and sensibility of the young have been far from realistic. But one could find in them, nonetheless, an ideal, a conception of childhood. These children dressed differently from adults, talked differently, saw problems from a different perspective, had a different status, were more vulnerable. Even in the early days of television, on such programs as Leave It to Beaver and Father Knows Best, one could find children who were, if not realistically portrayed, at least different from adults. But most of this is now gone, or at least rapidly going.

Perhaps the best way to grasp what has happened here is to imagine what The Shirley Temple Show would be like were it a television series today, assuming of course that Miss Temple were the same age now as she was when she made her memorable films. (She began her career at age four but made most of her successful films between the ages of six and ten.) Is it imaginable except as parody that Shirley Temple would sing—let us say, as a theme song—“On the Good Ship Lollipop”? If she would sing at all, her milieu would be rock music, that is, music as much associated with adult sensibility as with that of youth. (See Studio 54 and other adult discos.) On today’s network television there simply is no such thing as a child’s song. It is a dead species, which tells as much about what I am discussing here as anything I can think of. In any case, a ten-year-old Shirley Temple would probably require a boyfriend with whom she would be more than occasionally entangled in a simulated lover’s quarrel. She would certainly have to abandon “little girl’s” dresses and hairstyles for something approximating adult fashion. Her language would consist of a string of knowing wisecracks, including a liberal display of sexual innuendo. In short, The Shirley Temple Show would not—could not—be about a child, adorable or otherwise. Too many in the audience would find such a conception either fanciful or unrecognizable, especially the youthful audience.

Of course, the disappearance from television of our traditional model of childhood is to be observed most vividly in commercials. I have already spoken of the wide use of eleven- and twelve-year-old girls as erotic objects (the Brooke Shields Phenomenon), but it is necessary to mention one extraordinary commercial for Jordache jeans in which both schoolgirls and schoolboys—most of them prepubescent—are represented as being driven silly by their undisciplined libidos, which are further inflamed by the wearing of designer jeans. The commercial concludes by showing that their teacher wears the same jeans. What can this mean other than that no distinction need be made between children and adults in either their sexuality or the means by which it is stimulated?

But beyond this, and just as significant, is the fact that children, with or without hyperactive libidos, are commonly and unashamedly used as actors in commercial dramas. In one evening’s viewing I counted nine different products for which a child served as a pitchman. These included sausages, real estate, toothpaste, insurance, a detergent, and a restaurant chain. American television viewers apparently do not think it either unusual or disagreeable that children should instruct them in the glories of corporate America, perhaps because as children are admitted to more and more aspects of adult life, it would seem arbitrary to exclude them from one of the most important: selling. In any case, we have here a new meaning to the prophecy that a child shall lead them.

The “adultification” of children on television is closely paralleled in films. Movies as different as Carrie, The Exorcist, Pretty Baby, Paper Moon, The Omen, The Blue Lagoon, Little Darlings, Endless Love, and A Little Romance have in common a conception of the child who in social orientation, language, and interests is no different from an adult. A particularly illuminating way to see the shift in child film imagery that has taken place in recent years is to compare the Little Rascals movies of the 1930s with the 1976 film Bugsy Malone, a satire in which children play the roles of adult characters from gangster movies. Most of the humor in the Little Rascals films derived its point from the sheer incongruity of children emulating adult behavior. Although Bugsy Malone uses children as a metaphor for adults, there is very little sense of incongruity in their role playing. After all, what is absurd about a twelve-year-old using “adult” language, dressing in adult clothes, showing an adult interest in sex, singing adult songs? The point is that the Little Rascals films were clearly comedy. Bugsy Malone comes close to documentary.

Most of the widely discussed changes in children’s literature have been in the same direction as those of the modern media. The work of Judy Blume has been emulated by many other writers who, like Ms. Blume, have grasped the idea that “adolescent literature” is best received when it simulates in theme and language adult literature, and, in particular, when its characters are presented as miniature adults. Of course, I do not wish to give the impression that there are currently no examples in children’s literature (or, for that matter, in television or movies) of children who are emphatically different from adults. But I do mean to suggest that we are now undergoing a very rapid reorientation in our popular arts in regard to the image of children. One might put the matter, somewhat crudely, in this way: Our culture is not big enough for both Judy Blume and Walt Disney. One of them will have to go, and as the Disney empire’s falling receipts show, it is the Disney conception of what a child is and needs that is disappearing.3 We are in the process of exorcising a two-hundred-year-old image of the young as child and replacing it with the imagery of the young as adult.

Although this is exactly what Ms. Blume, our modern filmmakers, and TV writers are doing, no moral or social demerit may be charged against them. Whatever else one may say in criticism of our popular arts, they cannot be accused of indifference to social reality. The shuffling black, the acquisitive Jew, even (to some extent) the obedient and passive wife, have disappeared from view, not because they are insufficiently interesting as material but because they are unacceptable to audiences. In a similar way, Shirley Temple is replaced by Brooke Shields because the audience requires a certain correspondence between the imagery of its popular arts and social reality as it is experienced. The question of the extent to which, say, television reflects social reality is a complex one, for there are times when it lags slightly behind, times when it anticipates changes, times when it is precisely on target. But it can never afford to be off the mark by too great a margin or it ceases to be a popular art. This is the sense in which we might say that television is our most democratic institution. Programs display what people understand and want or they are canceled. Most people no longer understand and want the traditional, idealized model of the child because that model cannot be supported by their experience or imagination.

The same is true of the traditional model of an adult. If one looks closely at the content of TV, one can find a fairly precise documentation not only of the rise of the “adultified” child but also of the rise of the “childified” adult. Television is as clear about this as almost anything else (although, without question, the best representation of the childlike adult is in the film Being There, which is, in fact, about the process I am describing). Laverne, Shirley, Archie, the crew of the Love Boat, the company of Three, Fonzie, Barney Miller’s detectives, Rockford, Kojak, and the entire population of Fantasy Island can hardly be said to be adult characters, even after one has made allowances for the traditions of the formats in which they appear. With a few exceptions, adults on television do not take their work seriously (if they work at all), they do not nurture children, they have no politics, practice no religion, represent no tradition, have no foresight or serious plans, have no extended conversations, and in no circumstances allude to anything that is not familiar to an eight-year-old person.

Although students of mine who are dedicated TV watchers have urged me to modify the following statement, I can find only one fictional character regularly seen on commercial television, Felix Unger of The Odd Couple, who is depicted as having an adult’s appetite for serious music and whose language suggests that he has, at one time in his life, actually read a book. Indeed, it is quite noticeable that the majority of adults on TV shows are depicted as functionally illiterate, not only in the sense that the content of book learning is absent from what they appear to know but also because of the absence of even the faintest signs of a contemplative habit of mind. (The Odd Couple, now seen only in reruns, ironically offers in Felix Unger not only an example of a literate person but a striking anomaly in his partner, Oscar Madison—a professional writer who is illiterate.)

A great deal has been written about the inanity of popular TV programs. But I am not here discussing that judgment. My point is that the model of an adult that is most often used on TV is that of the child, and that this pattern can be seen on almost every type of program. On game shows, for example, contestants are selected with great care to ensure that their tolerance for humiliation (by a simulated adult, the “emcee”) is inexhaustible, their emotions instantly arousable, their interest in things a consuming passion. Indeed, a game show is a parody of sorts of a classroom in which childlike contestants are duly rewarded for obedience and precociousness but are otherwise subjected to all the indignities that are traditionally the schoolchild’s burden. The absence of adult characters on soap operas, to take another example, is so marked that as of this writing a syndicated “teen-age” version of a soap opera, called Young Lives, has been embarked upon as if to document the idea that the world of the young is no different from the world of the adult. Here television is going one step further than the movies: Young Lives is Bugsy Malone without satire.

All of this is happening not only for reasons suggested in the last three chapters but also because TV tries to reflect prevailing values and styles. And in our current situation the values and styles of the child and those of the adult have tended to merge. One does not have to be a sociologist of the familiar to have noticed all of the following:

The children’s clothing industry has undergone vast changes in the past decade, so that what was once unambiguously recognized as “children’s” clothing has virtually disappeared. Twelve-year-old boys now wear three-piece suits to birthday parties, and sixty-year-old men wear jeans to birthday parties. Eleven-year-old girls wear high heels, and what was once a clear marker of youthful informality and energy, sneakers, now allegedly signifies the same for adults. The miniskirt, which was the most embarrassing example of adults mimicking a children’s style of dress, is for the moment moribund, but in its place one can see on the streets of New York and San Francisco grown women wearing little white socks and imitation Mary Janes. The point is that we are now undergoing a reversal of a trend, begun in the sixteenth century, of identifying children through their manner of dress. As the concept of childhood diminishes, the symbolic markers of childhood diminish with it.

This process can be seen to occur not only in clothing but in eating habits as well. Junk food, once suited only to the undiscriminating palates and iron stomachs of the young, is now common fare for adults. This can be inferred from the commercials for McDonald’s and Burger King, which make no age distinctions in their appeals. It can also be directly observed by simply attending to the distribution of children and adults who patronize such places. It would appear that adults consume at least as much junk food as do children.4 This is no trivial point: it seems that many have forgotten that adults were once supposed to have higher standards than children in their conception of what is and is not edible. Indeed, it was a mark of movement toward adulthood when a youngster showed an inclination to reject the kind of fare that gives the junk-food industry its name. I believe we can say rather firmly that this marker of the transition to adulthood is now completely obliterated.

There is no more obvious symptom of the merging of children’s and adults’ values and styles than what is happening with children’s games, which is to say, they are disappearing. While I have found no studies that document the decline of unsupervised street games, their absence is noticeable enough and, in any case, can be inferred from the astonishing rise of such institutions as Little League baseball and Pee Wee football. Except for the inner city, where games are still under the control of the youths who play them, the games of American youth have become increasingly official, mock-professional, and extremely serious. According to the Little League Baseball Association, whose headquarters are in Williamsport, Pennsylvania, Little League baseball is the largest youth sports program in the world. More than fourteen hundred charters have been issued, and over two and a half million youngsters, from ages six to eighteen, participate. The structure of the organization is modeled on that of major league baseball, the character of the games themselves on the emotional style of big league sports: there is no fooling around, no peculiar rules invented to suit the moment, no protection from the judgments of spectators.

The idea that children’s games are not the business of adults has clearly been rejected by Americans, who are insisting that even at age six, children play their games without spontaneity, under careful supervision, and at an intense competitive level. That many adults do not grasp the significance of this redefinition of children’s play is revealed by a story that appeared in The New York Times, July 17, 1981. The occasion was a soccer tournament in Ontario, Canada, involving four thousand children from ten nations. In one game between ten-year-old boys from East Brunswick, New Jersey, and Burlington, Ontario, a brawl took place “after fathers had argued on the sidelines, players had traded charges of rough play and foul language, and one man from Burlington made a vulgar gesture.” The brawl was highlighted by a confrontation between the mothers of two players, one of whom kicked the other. Of course, much of this is standard stuff and has been duplicated many times by adults at “official” baseball and football games. (I have myself witnessed several forty-year-old men unmercifully “riding” an eleven-year-old shortstop because he had made two errors in one inning.) But what is of most significance is the remark made by one of the mothers after the brawl. In trying to put the matter in perspective, she was quoted as saying, “It [the brawl] was just 30 seconds out of a beautiful tournament. The next night our boys lost, but it was a beautiful game. Parents were applauding kids from both teams. Over all, it was a beautiful experience.” But the point is, What are the parents doing there in the first place? Why are four thousand children involved in a tournament? Why is East Brunswick, New Jersey, playing Burlington, Ontario? What are these children being trained for? The answer to all these questions is that children’s play has become an adult preoccupation, it has become professionalized, it is no longer a world separate from the world of adults.

The entry of children into professional and world-class amateur sports is, of course, related to all of this. The 1979 Wimbledon tennis tournament, for example, was marked by the extraordinary performance of Tracy Austin, then not yet sixteen, the youngest player in the history of the tournament. In 1980, a fifteen-year-old player made her appearance. In 1981, a fourteen-year-old. An astonished John Newcombe, an old-time Wimbledon champion, expressed the view that in the near future twelve-year-old players may take the center court. But in this respect tennis lags behind other sports. Twelve-year-old swimmers, skaters, and gymnasts of world-class ability are commonplace. Why is this happening? The most obvious answer is that better coaching and training techniques have made it possible for children to attain adult-level competence. But the questions remain: Why should adults encourage this possibility? Why would anyone wish to deny children the freedom, informality, and joy of spontaneous play? Why submit children to the rigors of professional-style training, concentration, tension, media hype? The answer is the same as before: The traditional assumptions about the uniqueness of children are fast fading. What we have here is the emergence of the idea that play is not to be done for the sake of doing it but for some external purpose, such as renown, money, physical conditioning, upward mobility, national pride. For adults, play is serious business. As childhood disappears, so does the child’s view of play.

This same tendency toward the merging of child and adult perspectives can be observed in their tastes in entertainment. To take an obvious example: The 1980 Nielsen Report on Television reveals that adults (defined as people over the age of eighteen) rated the following as among their fifteen most favored syndicated programs: Family Feud, The Muppet Show, Hee Haw, M*A*S*H, Dance Fever, Happy Days Again, and Sha Na Na. These programs were also listed among the top fifteen most favored by those between the ages of twelve and seventeen. And they also made the favored list of those between the ages of two and eleven! As for (the then) current shows, the male adult group indicated that Taxi, Mork & Mindy, M*A*S*H, Three’s Company, ABC Sunday Night Movie, and The Dukes of Hazzard were among their favorites. The twelve-to-seventeen age group included the same shows.5 In the 1981 Nielsen Report, adult males favored six syndicated programs (out of ten) that were the same as those favored by the twelve-to-seventeen age group, and four (out of ten) that were the same as the two-to-eleven age group.6

Such figures are painful to contemplate but are entirely consistent with the observation that what now amuses the child also amuses the adult. As I write, Superman II, For Your Eyes Only, Raiders of the Lost Ark, and Tarzan, the Ape Man are attracting customers of all ages in almost unprecedented numbers. Twenty-five years ago, such films, which are essentially animated comic strips, would have been regarded as children’s entertainment. Not as charming, innocent, or creative as, say, Snow White and the Seven Dwarfs but nonetheless clearly for a youthful audience. Today, no such distinctions need to be made. Neither is it necessary to distinguish between adult and youthful taste in music, as anyone who has visited an adult discotheque can attest. It is still probably true that the ten-to-seventeen-year-old group is more knowledgeable about the names and styles of rock groups than are those over the age of twenty-five, but as the declining market for both classical and popular “adult” music suggests, adults can no longer claim that their taste in music represents a higher level of sensitivity than that of teen-agers.7

As clothing, food, games, and entertainment move toward a homogeneity of style, so does language. It is extremely difficult to document this change except by repairing to anecdotes or by asking readers to refer to their own experience. We do know, of course, that the capacity of the young to achieve “grade level” competence in reading and writing is declining.8 And we also know that their ability to reason and to make valid inferences is declining as well.9 Such evidence is usually offered to document the general decline of literacy in the young. But it may also be brought forward to imply a decline of interest in language among adults; that is to say, after one has discussed the role of the media in producing a lowered state of language competence in the young, there is still room to discuss the indifference of parents, teachers, and other influential adults to the importance of language. We may even be permitted the assumption that adult control over language does not in most cases significantly surpass children’s control over language. On television, on radio, in films, in commercial transactions, on the streets, even in the classroom, one does not notice that adults use language with more variety, depth, or precision than do children. In fact, it is a sort of documentation of this that there has emerged a small industry of books and newspaper columns that advise adults on how to talk as adults.

One may even go so far as to speculate that the language of the young is exerting more influence on adults than the other way around. Although the tendency to insert the word like after every four words still remains a distinctive adolescent pattern, in many other respects adults have found teen-age language attractive enough to incorporate in their own speech. I have recorded many instances of people over the age of thirty-five, and from every social class, uttering, without irony, such phrases as “I am into jogging,” “Where are you coming from?” (to mean “What is your point of view?”), “Get off my case,” and other teen-age locutions. I must leave it to readers to decide if this tendency is confirmed by their own experience. However, of one thing, I believe, we may be sure: Those adult language secrets to which we give the name “dirty words” are now not only fully known to the young (which may always have been the case) but are used by them as freely as they are by adults. Not only on the soccer field in Ontario but in all public places—ball parks, movie theaters, school yards, classrooms, department stores, restaurants—one can hear such words used comfortably and profusely even by children as young as six years old. This fact is significant because it is an example of the erosion of a traditional distinction between children and adults. It is also significant because it represents a loss in the concept of manners. Indeed, as language, clothing, taste, eating habits, etc., become increasingly homogenized, there is a corresponding decline in both the practice and meaning of civilité, which is rooted in the idea of social hierarchy.10 In our present situation, adulthood has lost much of its authority and aura, and the idea of deference to one who is older has become ridiculous. That such a decline is in process can be inferred from the general disregard for rules and rituals of public assembly: the increase in what are called “discipline problems” in school, the necessity of expanded security at public events, the intrusion of the loudest possible radio music on public space, the rarity of conventional expressions of courtesy such as “thank you” and “please.”

All of the foregoing observations and inferences are, I believe, indicators of both the decline of childhood and a corresponding diminution in the character of adulthood. But there is also available a set of hard facts pointing to the same conclusion. For example, in the year 1950, in all of America, only 170 persons under the age of fifteen were arrested for what the FBI calls serious crimes, i.e., murder, forcible rape, robbery, and aggravated assault. This number represented .0004 percent of the under-fifteen population of America. In that same year, 94,784 persons fifteen years and older were arrested for serious crimes, representing .0860 percent of the population fifteen years and older. This means that in 1950, adults (defined here as those over and including fifteen years of age) committed serious crimes at a rate 215 times that of the rate of child crime. By 1960, adults committed serious crimes at a rate 8 times that of child crime; by 1979, the rate was 5.5 times. Does this mean that adult crime is declining? Not quite. In fact, adult crime is increasing, so that in 1979 more than 400,000 adults were arrested for serious crimes, representing .2430 percent of the adult population. This means that between 1950 and 1979, the rate of adult crime increased threefold. The fast-closing difference between the rates of adult and child crime is almost wholly accounted for by a staggering rise in child crime. Between 1950 and 1979, the rate of serious crimes committed by children increased 11,000 percent! The rate of nonserious child crimes (i.e., burglary, larceny, and auto theft) increased 8,300 percent.11

If America can be said to be drowning in a tidal wave of crime, then the wave has mostly been generated by our children. Crime, like most everything else, is no longer an exclusively adult activity, and readers do not need statistics to confirm this. Almost daily the press tells of arrests being made of children who, like those playing tennis at Wimbledon, are getting younger and younger. In New York City a nine-year-old boy tried to hold up a bank. In July 1981, police in Westchester County, New York, charged four boys with sexual assault of a seven-year-old girl. The alleged rapists were a thirteen-year-old, two eleven-year-olds, and a nine-year-old, the latter being the youngest person ever to be accused of first-degree rape in Westchester County.12

Ten- to thirteen-year-olds are involved in adult crime as never before. Indeed, the frequency of serious child crime has pushed youth crime codes to their limits. The first American juvenile court was established in 1899 in Illinois. The idea could come to its end before the century is out as legislators throughout the country hurriedly try to revise criminal laws so that youthful offenders can be treated as adults. In California a study group formed by the attorney general has recommended sending juveniles convicted of first-degree murder to prison rather than to the California Youth Authority. It has also recommended that violent offenders sixteen years old and younger be tried as adults, within the court’s discretion.13 In Vermont the arrest of two teen-agers in connection with the rape, torture, and killing of a twelve-year-old girl has driven the state legislature to propose hardening the juvenile codes.14 In New York, children between the ages of thirteen and fifteen who are charged with serious crimes can now be tried in adult courts and, if convicted, can receive long prison terms. In Florida, Louisiana, New Jersey, South Carolina, and Tennessee, laws have been changed to make it easier to transfer children between the ages of thirteen and fifteen to adult criminal courts if the crime is serious enough. In Illinois, New Mexico, Oregon, and Utah, the privacy that usually surrounds the trials of juveniles has been eliminated: newspaper reporters may now regularly attend the proceedings.15

This unprecedented change in both the frequency and brutality of child crime, as well as the legislative response to it, is no doubt attributable to multiple causes but none more cogent, I think, than that our concept of childhood is rapidly slipping from our grasp. Our children live in a society whose psychological and social contexts do not stress the differences between adults and children. As the adult world opens itself in every conceivable way to children, they will inevitably emulate adult criminal activity.

They will also participate in such activity as victims. Paralleling the assault on social order by children is the assault by adults on children. According to the National Center on Child Abuse and Neglect, there were 711,142 reported cases of child abuse in 1979. Assuming that a fair amount of child battering goes unreported, we may guess that well over two million instances of child abuse occurred that year. What can this mean other than that the special status, image, and aura of the child have been drastically diminished? It is only half an explanation to say that children are beaten up because they are small. The other half is that they are beaten up because they are not perceived as children. To the extent that children are viewed as unrealized, vulnerable, not in possession of a full measure of intellectual and emotional control, normal adults do not beat them as a response to conflict. Unless we assume that in all cases the adult attackers are psychopaths, we may conclude that at least part of the answer here is that many adults now have a different conception of what sort of a person a child is, a conception not unlike that which prevailed in the fourteenth century: that they are miniature adults.

This perception of children as miniature adults is reinforced by several trends besides criminal activity. For example, the increased level of sexual activity among children has been fairly well documented. Data presented by Catherine Chilman indicate that for young white females the rise has been especially sharp since the late 1960s.16 Studies by Melvin Zelnick and John Kantner of The Johns Hopkins University conclude that the prevalence of sexual activity among never-married teen-age women, among all races, increased by 30 percent between 1971 and 1976, so that by age nineteen, 55 percent have had sexual intercourse.17 We may safely assume that media have played an important role in the drive to erase differences between child and adult sexuality. Television, in particular, not only keeps the entire population in a condition of high sexual excitement but stresses a kind of egalitarianism of sexual fulfillment; sex is transformed from a dark and profound adult mystery to a product that is available to everyone—let us say, like mouthwash or underarm deodorant.

One of the consequences of this has been a rise in teen-age pregnancy. Births to teen-agers constituted 19 percent of all the births in America in 1975, an increase of two percentage points over the figure for 1966. But if one focuses on the childbearing rate among those aged fifteen to seventeen, one finds that this is the only age group whose rate of childbearing increased in those years, and it increased 21.7 percent.18

Another, and grimmer, consequence of adult-like sexual activity among children has been a steady increase in the extent to which youth are afflicted with venereal disease. Between 1956 and 1979, the rate of gonorrhea among ten-to-fourteen-year-olds increased almost threefold, from 17.7 cases per 100,000 population to 50.4. Roughly the same increase is found in the fifteen-to-nineteen-year-old group (from 415.7 per 100,000 to 1,211.4). The traditional restraints against youthful sexual activity cannot have great force in a society that does not, in fact, make a binding distinction between childhood and adulthood. And the same principle applies in the case of the consumption of drugs. For example, the National Institute on Alcohol Abuse and Alcoholism concludes that a substantial number of fifteen-year-olds drink “considerable amounts.” In one study of the drinking habits of tenth-to-twelfth-graders, almost three times as many males indicated they were “heavier” drinkers (meaning they drink at least once a week and consume large amounts when they drink) as indicated they were “infrequent” drinkers (meaning they drink once a month at most and then in small amounts). Alcoholism, once considered an exclusively adult affliction, now looms as a reality for our new population of miniature adults. As for other drugs, such as marijuana, cocaine, and heroin, the evidence is conclusive: American youth consume as much of them as do adults.19

Such figures as these are unmistakable signs of the rise of the “adultified” child, but there are similar trends suggestive of the rise of the “childified” adult. For example, the emergence of the “old persons’ home” as a major social institution in America bespeaks a reluctance on the part of young adults to assume a full measure of responsibility for their parents. Caring for the elderly and integrating them into family life are apparently perceived as an intolerable burden and have rapidly diminished as adult imperatives. Perhaps more significant is the fact that the present generation of young adults is marrying at a dramatically lower rate and having fewer children than their parents’ generation. Moreover, their marriages are not as durable. According to the National Center for Health Statistics, parents are getting divorced at twice the rate they did twenty years ago, and more children than ever before are involved in marital dissolution: 1.18 million in 1979 as compared to 562,000 in 1963. Although we must assume multiple causality for such a trend, including what Christopher Lasch calls the rise of the narcissistic personality, we may fairly claim that it indicates a precipitous falling off in the commitment of adults to the nurturing of children. The strongest argument against divorce has always been its psychological effect on children. It is now clear that more adults than ever do not find this argument as compelling as their own need for psychological well-being. Perhaps we might even say that, increasingly, American adults want to be parents of children less than they want to be children themselves. In any case, children have responded to this new mood by, among other things, running away in droves. According to the FBI, 165,000 children were taken into custody by police in 1979. It is assumed that at least three times that number went undetected.

In the face of all this one would expect the rise of a “philosophy” of sorts to justify the loss of childhood. Perhaps there is a principle governing social life that requires people to search for a way to affirm that which is inevitable. In any case, such a philosophy has, indeed, emerged, and we may take it as evidence of the reality it addresses. I refer here to what is sometimes called the Children’s Rights Movement. This is a confusing designation, because under its banner are huddled two conceptions of childhood that are, in fact, opposed to each other. One of them, which I do not have in mind in these remarks, believes that childhood is desirable although fragile, and wishes to protect children from neglect and abuse. This view argues, for example, for the intervention of public authority when parental responsibility fails. This conception of childhood dates back to the nineteenth century and is simply a widening of the perspective that led to child labor laws, juvenile crime codes, and other humane protections. The New York Times has referred to those who stand up for this idea as “child savers.”

The other conception of “child’s rights” rejects adult supervision and control of children and provides a “philosophy” to justify the dissolution of childhood. It argues that the social category “children” is in itself an oppressive idea and that everything must be done to free the young from its restrictions. This view is, in fact, a much older one than the first, for its origins may be found in the Dark and Middle Ages when there were no “children” in the modern sense of the word.

As is frequently the case in such matters, we have here a “reactionary” position being advanced by those who think of themselves as “radicals.” In any case, these are people who might be called “child liberators.” Among the earliest of them was Ivan Illich, the brilliant social critic, whose influential book Deschooling Society (1971) argued against compulsory schooling not only on the grounds that schools were unimprovable but, even more, that compulsory schooling effectively bars the young from fully participating in the life of the community; that is, prevents them from being adults. Illich redefined the relationship of children to school by insisting that what most people see as a benevolent and nurturing institution is instead an unwarranted intrusion in the life and learning of a certain segment of the population. The force of Illich’s argument derives from the fact that information is now so widely distributed, available from so many sources, and codified in ways that do not require sophisticated literacy that the school has lost much of its meaning as the fountainhead of learning. Moreover, as the distinction between childhood and adulthood becomes less marked, as children less and less have to earn adulthood, as less and less is there anything for them to become, the compulsory nature of schooling begins to appear arbitrary.

This impression is intensified by the fact that educators have become confused about what they ought to be doing with children in school. The idea that one ought to be educated for the greater glory of God or Country, or even for the purpose of beating the Russians, now lacks both serious arguments and advocates, and many educators are willing to settle for what Marx himself would have emphatically rejected: education for entry into the marketplace. This being the case, a knowledge of history, literature, and art, which once was the mark of an educated adult, recedes in importance. Moreover, it is not as well established as many think that schooling makes an important difference in one’s future earning power. Thus, the entire edifice of our educational structure is laced with dangerous cracks, and those who would demolish the structure altogether are by no means misinformed. Indeed, there is a sense in which their proposals are redundant. As childhood disappears, so must schools. Illich does not have to write a book about it so much as merely wait.

All of this is the theme of John Holt’s Escape from Childhood. In this and other books he argues for the liberation of the child from the constraints of a three-hundred-year-old tradition of bondage. His arguments are broadened—that is, taken to their logical conclusion—in Richard Farson’s extraordinary book, Birthrights (1974). Farson argues that the child’s right to information, to his or her own choice of education, to sexual freedom, to economic and political power, even to the choice of his or her own home environment, must be restored at once. “We are not likely to err,” he says, “in the direction of too much freedom.”20 Farson, who is not unaware of the history of childhood, evidently finds the fourteenth and fifteenth centuries a suitable model for the ways in which the young ought to be integrated into society. He believes, among other things, that the principal objection to incest is that people are made to feel unreasonably guilty about practicing it; that all sexual behavior should be decriminalized, including sex between adults and children; that arrangements need to be made to permit children to live wherever and with whom they wish, including “homes” governed by themselves; and that children must be given the right to vote “because adults do not have their interests at heart and do not vote in their behalf.”21

Such a child’s rights movement as this may be said to be a case of claiming that the disease is the cure. Expressed more neutrally, what this sort of advocacy represents, as noted, is an attempt to provide a rationalization for what appears to be an irreversible cultural tendency. Farson, in other words, is not the enemy of childhood. American culture is. But it is not a forthright enemy, in the sense that one might say, for example, that America is against communism. American culture does not intend to be against childhood. In fact, the language we use to talk about children still carries within it many of the assumptions about childhood that were established in the eighteenth and nineteenth centuries. Just as our language about war preserves the idea of a nineteenth-century war, when, in fact, such an idea today is preposterous, our language about children does not match our present social reality. For in a hundred years of redesigning how we communicate, what we communicate, and what we need to be in order to share in it all, we have reached the point of not needing children, just as we have reached the point (although we dare not admit it) of not needing the elderly. What makes Farson’s proposals so horrifying is that without irony or regret he reveals the future.