Just Kidding
The Fragmented Generation
Many years ago, my children spent an idyllic week on a friend’s homestead in upstate New York that abutted a neighbor’s deep and mucky pond. They were five, seven, and ten years old at the time and were living the neutered lives of privileged urban children everywhere (lives full of museum visits and pressure-treated sandboxes, and an occasional dip in a fountain), and so the prospect of roaming free on an expanse of land that contained an actual body of natural water—with its tantalizingly lethal possibilities—was too much to miss. Each day, the children would run down a little grassy hill from the house and sneak through a rickety fence, armed with sticks and pails, to while away happy hours at the pond’s murky edge. I’m sure it didn’t occur to them even for a second, as they were harvesting their amphibious bounty, that they might have looked like trespassers.
But soon enough, a cantankerous old gentleman straight out of central casting had the children firmly in his sights. He was apparently dressed in tattered overalls and was seen angrily shaking a fist at them, or possibly even a pitchfork. (The details remain a little hazy on this point.) In any case, the menacing Mr. McGregor was heard shouting loudly from his porch to get those pesky (et cetera) interlopers off his goddamned (et cetera) land. The children froze like rabbits. For an instant, nobody could move. My two cowardly older sons and their friends grabbed their gear and prepared to hightail it back home. But, according to family legend, my whisky-voiced five-year-old, Eleni, stood her ground. “We’re just kids!” she hollered back dismissively, hand on hip.
For me, this story illustrates a key observation about the way childhood has changed over the years. I think Eleni was staking her claim to be taken seriously as a whole child, above all, with a child’s special nature and entitlements, and not to be seen narrowly, as merely a troublemaker or a problem. My daughter’s notion of being “just a kid,” with its implicit appeal for understanding and forbearance, seems awfully picturesque these days.
This appeal to the world of childish things highlights a trade-off that must be made all the time by those of us who care for young children. A child has an angry outburst and bites a classmate, or deliberately breaks a glass, and we give the child a pass. Maybe she’s struggling in school or has a new baby sibling. We look at the big picture: “She’s just a child,” or “That’s so unlike her,” we reassure ourselves. But other times, the same behavior looms large, and looks like a pathology. This narrow part of the child, an isolated aspect of his experience, defines the child, acquires a label, demands a response.
The dilemma is made more difficult by an increasing reality of contemporary life: the nineteenth- through mid-twentieth-century perspective on childhood as a coherent, demarcated life stage that children share among themselves has given way, in the late twentieth and early twenty-first century, to seeing childhood as a collection of discrete and not-very-childlike loose parts: behaviors, personality traits, identities, histories, and learning styles; but also distinct symptoms, labels, disorders, glitches, quirks, problems, and needs. The experience of childhood—and, distinctly, our perception of childhood—has become highly fragmented.
I confess to some ambivalence about this shift. On the one hand, the splintering of childhood can result in our giving too much weight to only one part of a child’s experience, and result in his being labeled or pigeonholed, often in a negative way that draws out a flaw or characteristic. My daughter instinctively resisted this pigeonholing and demanded to be viewed as a coherent whole. She understood that she was more than a simple miscreant who deserved a telling off. Yes, she was a rule breaker in that moment, but she was also a child who loved nature, a child who might not have realized she was on a neighbor’s property, a child capable of bravery, a child who deserved to be treated gently precisely because of her general childishness. And thus, her “just kids” disclaimer. Like the unknowable bat we encountered in Professor Nagel’s essay, she was more than the sum of those good and bad parts.
Yet pigeonholing isn’t always a bad thing. It can also highlight something important about a child that was formerly unseen, a particular condition or attribute that needs attention. Children who were once written off as “dumb,” “lazy,” “rotten apples”—or even dismissed as “just kids”—now have their own unique emotional signatures that invite consideration and support. It’s a trade-off between seeing either the forest or the trees. The human brain seems unable to attend to both the background and the foreground at the same time. So, at every level of observation, we are missing something—the big picture or the small parts—and there is always a cost to observing only one. Ideally, all the people concerned with these matters would work together to form a complete picture. But in practice, this is not easy. Both perspectives—let’s call them the parts and the whole—have their advantages and disadvantages. The challenge is to figure out when to examine children’s fragments and when it makes more sense to see “just kids.”
I often hear adults mocking the increasing preciousness of childhood and, especially, the way children today are continually reminded of their specialness. The backlash to this perceived indulgence has been merciless even from those who contribute to the problem.1 Here, have a trophy! Hang on, you’re not so fabulous after all. But the problem with the disapproving attitude is that it’s easy to forget how unspecial, and unprotected, children have been throughout much of human history (and indeed still are in many places). Life has become bearable for lots of twenty-first-century children because of our increased awareness of certain features of childhood that were once shrouded in secrecy or misunderstanding: child abuse, psychiatric disorders, physical and intellectual disabilities, even the everyday reality of normally developing children who weren’t white and male. Would girls or children of color have the opportunities available to them today if the United States hadn’t embraced the specialness movement?2
Nowadays, we embrace children’s uniqueness in ways that would have been unimaginable even a generation ago. Who knew, in 1970, that a brilliant child might also have dyslexia? Or the other way around—that one who couldn’t read was actually the brightest in the class? Or that communities such as Montgomery County, Maryland, would create a program for gifted and talented children with learning disabilities that would become a model for understanding children’s full academic potential?3
The United States has become vastly more welcoming, in just a generation, to children with atypical cognitive and physical development, to immigrants and English-language learners, even to transgender and intersex children. This new way of seeing (and valuing) children is rooted in earlier historical changes that made life better for children: first, at the turn of the last century, little children stopped dying of infectious diseases, and social service agencies and new laws were created to protect vulnerable children. Then came postwar prosperity, smaller families, and the rise of the major human rights movements, all of which paved the way for the idea that special children could be worthy of legal protections beyond those extended to children in general.
It’s hard to overstate the radicalism of some of these efforts to put newly visible children on the map. The acceptance was not just a feel-good gesture, but came with real policy teeth. Prior to the passage of the Education for All Handicapped Children Act (EHA) in 1975, thousands of children with disabilities were not able to attend school; unbelievably by today’s standards, the wealthiest nation in the world did not guarantee a public education for all.4 Subsequent legislation, such as the 1990 Individuals with Disabilities Education Act (IDEA), further refined the rights and opportunities for children with disabilities, and expanded them to include conditions such as autism and traumatic brain injury. Programs such as Early Intervention,5 a coordinated safety net of services for at-risk infants and young children, grew out of this movement, and they succeed precisely because we were willing to agree that certain children were indeed special, or at a minimum recognizable as disabled or sick, and in need of specific services for specific kinds of problems.
The recognition of children’s focused needs, and their complex, and sometimes problematic makeup, was freeing in some ways. Along with it came a loosening of moral judgment about disability and a decline in ignorance and hatred. There was a new notion that all children were deserving of care, irrespective of the genesis of their troubles. You don’t hear too much about “bad seeds” anymore, except in extreme instances.6 Even in the case of child murderers, we recognize the sway not only of genes and questionable parenting practices, but also of culture, community, and larger societal forces, including economics and geography.
Consider the evolution of our attitude towards deafness. It’s a good time to be a deaf child in America. This may seem hard to credit, if you are a hearing person, but there are many deaf parents today who rejoice when they learn that their new baby is also deaf. In fact, deaf culture has so many distinct, and distinctly appealing, features that hearing people have been known to feel a sense of envy when they encounter it. But as Andrew Solomon explains in his magisterial exploration of childhood difference, Far from the Tree, it wasn’t always so.7 For decades, the prevailing wisdom held that the only solution to a calamity such as deafness was to downplay a deaf child’s hearing impairment. Deaf children were usually not diagnosed as deaf for a long time, and when they were, they were still just thrust into the hearing world, usually with disastrous results.
Such was our ignorance of deafness, and of the key principles of child development, that deaf children were even denied the opportunity to acquire meaningful language as babies, through early exposure to signing, and thus missed critical periods for language acquisition as they struggled, and almost uniformly failed, to properly master lipreading and speech. Deaf children were routinely mainstreamed in classrooms under the misguided belief that their deafness could be wrestled into submission through conformity to the one-size-fits-all world of child rearing so common in earlier generations. In other words, there was nothing special about being a deaf child, other than its awfulness, and there were therefore almost no special accommodations for it.
As a consequence, few deaf children ever reached anything close to their innate cognitive potential (hence the word “dumb” to describe them when they failed to learn to speak), and most deaf children lived trapped in a world of social, intellectual, and emotional isolation.
But, as our society has become more conscious of the unique needs and requirements of deaf people, the experience of being a deaf child has become recognizably better. Babies are now routinely screened at birth for deafness, so that families can begin early language exposure immediately.8 There are new assistive technologies, including cochlear implants, whose effectiveness is increased with early adoption. More important, deaf children today have access to a positive identity and a sense of pride in their unique language and culture that would have been unimaginable a few decades ago. It’s important to understand that these changes occurred precisely because we shifted our gaze from the big picture of childhood as an omnibus state and zoomed in close to understand the state of deafness in its specificity. Nowadays, Solomon argues, deaf children have reached the puzzling position of feeling truly liberated from their disability by dint of being viewed as a special, and protected, class of the disabled.
This identification of objective phenomena such as deafness—or child abuse or autism—has been hugely effective, even (or perhaps especially) when there seems to be little to celebrate in the disabling event.
Part of the process of becoming more sensitive and responsive to children has thus involved highlighting their vulnerabilities. Now we can take the loose part out of a child and highlight it, give it a label that can offer relief, protection, and compassion, as well as a recognized vocabulary to describe the experience shared with many similar children.
On the other hand, social historian Michel Foucault argued in The Birth of the Clinic that a side effect of nineteenth-century advances in medical science was that we erased the individual’s actual suffering by focusing only on the category of their disease. Through the development of nosology (the identification and classification of different diseases), patients finally acquired an objective diagnosis that pointed a path to treatment and prognosis (hooray!), but they lost something of themselves in the process, becoming merely what Foucault called an “endlessly reproducible pathological fact.”9
The same might be said about early childhood. In finally seeing children’s concerns—commendably and scientifically—we began, unfortunately, to lose sight of the small person hosting those problems. Over time, the fragmentation became a bit more sophisticated: a child is now said to have a condition, rather than be the condition. But, semantics aside, the result is shades of the same: the child and the rest of his childliness become harder to see.
The extraordinary irony of the young child’s deletion really can’t be overstated when we consider that the aim of identifying all the fragments and loose parts of childhood in the first place was to make such a child more visible, not less. The purpose of the labels and categories is to allow a kind of sympathetic and systematic understanding of the child’s problems, which clearly benefits the doctors, teachers, and other adults caring for them and, one sincerely hopes, the children themselves. But do we now see only the labels and syndromes? Has the child become invisible again?
One unfortunate consequence of breaking childhood down into a series of features (and usually problematic ones) is that it’s easy to catastrophize this life stage, shifting our view of the child from one of strength and competence to one of fragility. The child’s view of herself changes, too, in ways that could arguably either enhance or topple what has come to be such a hated phrase: her self-esteem. The moment we acquire names and labels for things, it’s hard to resist using them, and it’s a fair question to ask if our greater sensitivity to children’s problems might be imposing hidden burdens on them.
Parents are rightly wary of the hazards of a medical diagnosis for their young son or daughter, especially when they have a genuine desire to support a child in clear distress.10 For example, Dr. Ned Hallowell, one of the world’s experts in the diagnosis and treatment of ADHD (who was also my boss for a year after college when I worked at the Massachusetts Mental Health Center in Boston), notes that, media mythology notwithstanding, parents almost never casually label their children, or pump them with medications, without first carefully considering alternatives; it’s too painful a leap to make recklessly.11
I don’t doubt any of this. Nonetheless, we still have to wrestle with the tricky question of why we fail to recognize in some children certain types of serious problems (such as depression and anxiety disorders), and thus fail to offer them relief, while other children become so easily labeled with certain conditions, such as autism spectrum disorders, nut allergies, and ADHD, all of which have seen inexplicably dramatic increases (at least in diagnosis) in recent years.12
When the CDC released figures suggesting that more than 10 percent of elementary-school-aged children and 20 percent of high school boys have been diagnosed with ADHD, Dr. Hallowell cautioned against making the diagnosis in a “slipshod fashion”13 and recommended distinguishing between bona fide ADHD cases (for which 80 percent of patients will find at least some relief from methylphenidate) and what he calls “pseudo-ADHD,” a constellation of childhood behaviors that mimic ADHD but stem primarily from an “environment-induced syndrome caused by too much time spent on electronic connections and not enough time spent on human connections.”14
The autism spectrum is perhaps the most culturally fraught illustration of the complexities of diagnosing childhood pathology, and one of the most frightening. One in 68 children was diagnosed with an autism spectrum disorder in 2014, or almost 15 per 1,000 eight-year-olds, representing a 30 percent increase in prevalence from just two years earlier.15 Much of this increase can be explained by changes in diagnostic criteria; when autism was first identified, in the 1940s, the traditional diagnosis restricted autism only to children with mental retardation, but studies have shown that as autism diagnoses increased, diagnoses of mental retardation without autism decreased, suggesting a possible substitution of diagnostic categories.16
Fred Volkmar, an internationally renowned autism specialist at Yale’s Child Study Center, explains that the research base of autism spectrum disorders has become sufficiently sophisticated that babies and toddlers with ostensible prodromal features, such as tracking inanimate objects but not faces, can be identified as potentially at risk.17 While not all of them will go on to receive an autism diagnosis at age five or six, clinicians’ improved ability to pick up these subtle red flags is likely allowing children who do manifest significant symptoms to function at a much higher level than they might have otherwise, absent the early intervention. But this clinical sophistication is also driving up the numbers of potential and actual cases, and we don’t yet understand the full implications of that increase for the children themselves and for our society.
In a careful analysis of the autism epidemic in the United States over the past twenty years, sociologist Peter Bearman and colleagues found that children who lived within 250 meters of a child with autism had a 42 percent greater chance of being diagnosed with autism than children who didn’t live near such a child. For children living between 250 and 500 meters away, the increased risk was 22 percent. Interestingly, the elevated risk held only if the children were also in the same school district as the child with autism, suggesting that what mattered was a social connection between the families, not geographic proximity alone.
This social connection was the single biggest factor in the recent rise of autism diagnoses, accounting for 16 percent of the increase, and Bearman argued that such a figure was most consistent with a kind of social contagion, or proximity effect. While the authors were careful to avoid any suggestion of causality, the work suggests that two things might be going on here: the spread of social information about autism can help parents identify resources for actual autism cases, but there is also the worrying possibility that this proximity effect might be facilitating misdiagnosis.18
Experts are watching carefully to see if the recent elimination of the diagnostic label Asperger’s syndrome from the most recent version of the Diagnostic and Statistical Manual (the psychiatrists’ diagnostic bible) will lead to a compensatory increase in diagnoses of autism spectrum disorder. In the absence of the old category (which contained many thousands of children), the question is whether children formerly thought to have Asperger’s will now be recategorized somewhere along the autism spectrum or, as some predict, whether some of those children will be returned to the “normal” category. The stakes are incredibly high in either direction, as parents poignantly described to the New York Times.19 Some children will be denied services they need while others may feel liberated to rejoin the mainstream, with all the social consequences of either turn.
It’s important not to become so preoccupied with the possibility of overdiagnosis in one segment of the population that we overlook the woeful problem of underdiagnosis elsewhere, among children who desperately need the services that come only with a diagnostic label. The mental health problems of poor children and children of color are especially likely to go untreated.20 Still, what are we to make of a statistic like this one: the state of Kentucky saw a 270 percent increase in prescriptions of antipsychotic medications for poor and disabled children on Medicaid between 2000 and 2010, with minority children more than three times as likely to be prescribed antipsychotic medicines as white children in the state.21 Is Kentucky just playing catch-up after decades of inattention to the mental health needs of minority children? Or are certain kinds of children being put on drugs with serious side effects, children who might be managed differently if they were white or wealthy? We don’t really know the answer.
But both overdiagnosis and underdiagnosis are real problems fueled by our loose parts perspective, because we are increasingly programmed to miss the portrait of the whole child. Chopping up childhood causes children to fall victim to two different kinds of error.
To explain this, I need to digress a little to describe the old days of appendectomy surgery. There were two kinds of mistakes a surgeon could make when evaluating a patient with abdominal pain and possible appendicitis. On the one hand, the surgeon could fail to operate and the patient might really have appendicitis and might even die from it. This is formally called a type-two error (a false negative). On the other hand, cutting open the body of a healthy person before the modern era of hospital care was a big deal, too: the surgeon might operate on a patient and find a perfectly healthy appendix, and the patient could die from needless surgery. This is known as a type-one error (a false positive).
But here is the devilish dilemma: you cannot reduce one type of error without increasing the other type of error. There is always a balance. And, in fact, one of the ways that surgeons used to know that they weren’t missing any real cases of appendicitis was to find that, in some specified fraction of the time (say, 5 percent), when they opened up a patient’s belly, there was a slim, pink appendix in the best of health. The presence of healthy appendixes in some cases proved that they weren’t being overly conservative in choosing to operate (and thus missing diseased ones).
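For readers who want to see the trade-off in numbers, here is a toy simulation in Python, using entirely invented figures rather than clinical data, of how raising or lowering a surgeon's threshold for operating shifts cases between the two kinds of error.

```python
import random

random.seed(0)

# Simulate 1,000 patients with abdominal pain; assume (hypothetically) that
# 30 percent truly have appendicitis. Each patient gets a "suspicion score."
# Sick patients tend to score higher, but the two distributions overlap,
# which is what forces the trade-off between the two error types.
patients = []
for _ in range(1000):
    has_appendicitis = random.random() < 0.30
    score = random.gauss(0.7, 0.15) if has_appendicitis else random.gauss(0.4, 0.15)
    patients.append((has_appendicitis, score))

# Try three different thresholds for deciding to operate.
for threshold in (0.45, 0.55, 0.65):
    missed_cases = sum(1 for sick, s in patients if sick and s < threshold)            # type-two errors (false negatives)
    needless_surgeries = sum(1 for sick, s in patients if not sick and s >= threshold)  # type-one errors (false positives)
    print(f"operate at score >= {threshold}: missed cases = {missed_cases}, "
          f"needless surgeries = {needless_surgeries}")
```

As the threshold for operating rises, missed cases of real appendicitis climb while needless surgeries fall, and vice versa; no setting drives both errors to zero at once.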
The same can be said of our parts-versus-whole problem when it comes to diverse features of children. Sometimes we have to decide whether we are going to overcorrect and brand kids or, at the other extreme, possibly miss their problems entirely.
Educators are sometimes remarkably undiscerning in their identification of “problem children,” and the consequences of this diagnostic slipperiness can be devastating for those caught up in the dragnet of false positives. Here’s how it happened in the case of a seven-year-old boy I knew.
Tom was in second grade when I met him, but he had been flagged in preschool for challenging behaviors, poor fine motor skills, and weak upper-body strength. His learning profile, as it’s called, was duly passed along to Tom’s incoming kindergarten teacher during one of the meetings held each spring between a representative from the public schools and the local preschool staff to get an early read on children who might need special services.
In many domains of life, prior history is a decent guide to the future, but in early childhood, the predictive power of past experience is limited. So we have a stubborn conflict on our hands between flagging potential problems in their nascent form before they become unfixable (the necessity of which only a fool would dispute) and allowing development to unfold on a schedule that nature intended (another eminently reasonable goal to which those caring for young children should aspire).
Once a child is trapped in the diagnostic undertow, however, it’s awfully hard to get free. In Tom’s case, by the time he’d arrived in second grade, his reputation was so entrenched in the eyes of the adults around him that the renewal of his Individualized Education Plan (IEP)—a complete review of which is mandated by law at least every year—was treated by school officials and teachers as a largely pro forma task. (I wasn’t sure what role his parents played in the process, but I can only imagine their confusion.)
So, what exactly was Tom’s problem? He was described as being “always in motion” and “unable to sit still” as well as “sometimes rude to teachers” and “a kid who can’t keep his hands off other children.” He also had difficulty holding a pencil and sitting straight and was sensitive to noises and sensory distractions. With such wobbly criteria, it’s hard to imagine which of us wouldn’t end up on an IEP, but Tom was prescribed twice-weekly occupational therapy (OT) sessions.
There was just one problem: there didn’t seem to be a shred of evidence for the IEP’s claims. Tom’s classroom teacher, whom I was assisting and who came to him with fresh eyes, saw an exceptionally curious and verbal boy deeply engaged in his schoolwork and rarely distracting to others. Neither of us saw any signs that he couldn’t control his upper body either, even though that seemed to be the most clinically verifiable criterion for his IEP. Tom was performing far above grade level in every area of academic achievement except penmanship. So why were all the other teachers in the school so frustrated by him? I could see the exasperated looks every time I would escort Tom to specials like art and music and P.E.
The discrepancy in Tom’s status was such a puzzle to me that I came up with a little project to test the effectiveness of one of the occupational therapy interventions recommended on his IEP, the placement of a large rubber band “kick plate” fixed to the back of a chair to prevent Tom from kicking his peers. This recommendation confused me because I had never seen Tom kick a peer. But I assumed that the occupational therapist knew something I didn’t know about Tom’s behavior. I wrote up some outcome measures related to Tom’s concentration and physical behavior, and I observed him carefully in twenty-minute intervals throughout the day until I’d accumulated several weeks of data, pre- and postintervention.
In addition to OT, Tom had been prescribed a variety of classroom aids, such as squeeze balls, to help him stay focused. Physical aids are very popular in the behavior management arsenal of early childhood classrooms. Placing tennis balls on metal chair feet to reduce dangerous decibel levels always seemed like a very good idea (and designing acoustically soothing classrooms an even better one), but some of these therapy props, such as weighted backpacks to keep a child grounded, have become part of the theater of early education: they are often inadequately tested and may be as much talisman as documented therapy.22
So what happened with Tom and the rubber band kick plate? To cut to the chase: it was a flop.
During the pre-kick plate observation period, he was engaged, quiet, and well controlled, but, as soon as we introduced the rubber band, his composure unraveled. Suddenly, my field notes were littered with tally marks: “Touches or bumps band.” Check. Check. “Pulls band with fingers.” Check. Check. Check. “Snaps rubber band with hand.” More checks! “Reaches down to snap band.” A lot more checks. I counted hundreds of them. Tom took to sitting sideways or straddling the chair, as if he were trying hard to avoid contact with the distracting kick plate. More than half the time, he brushed against the rubber band inadvertently, then he would follow with what looked like a more intentional kick, as if contact with the rubber band had primed his brain to fiddle with it. In fact, Tom’s pretest behavior was a whole lot better than after the introduction of the kick plate. The inescapable conclusion was that the rubber band intervention not only failed to arrest problematic behaviors in any reasonable sense of the phrase, but, in fact, it introduced its own kind of problem behavior, creating issues that hadn’t previously existed.
When I watched Tom working on his own without the band, he was always able to bring himself under control more quickly than almost all of the other children. Usually he would mutter to himself, “I’m just figuring out what I want to say” (while writing a poem) or something similarly self-directing. A couple of teachers expressed concern to me that Tom “muttered” too much to himself, but there are many studies affirming the benefit of this kind of self-talk as an internal regulating strategy, and evidence suggests that smarter and more playful children do more self-talk than average.23
I began to wonder if the specialists listed on his IEP had ever carefully observed Tom at work. I asked Tom what he thought about his OT sessions:
Me: Why do you think Ms. R. suggested we put the rubber band on your chair?
Tom: I don’t really know. I guess she thinks it is helping me.
Me: Do you think it’s helping you?
Tom: I don’t really know.
Me: Do you think it might be helping you with something?
Tom: I don’t know. I guess Ms. R. thinks it will do something. I don’t really know. Um . . . I guess you could ask her.
I shifted gears slightly.
Me: What kinds of things do you do with Ms. R?
Tom: I have no idea.
I paused and gave him a chance to think.
Tom: I don’t know. Just stuff.
Me: Just stuff?
Tom smiled. “Yeah, just stuff.”
Me: Well, I was wondering, do you do stuff like reading and math, or do you do different kinds of activities?
Tom: Well, I guess different kinds of activities. Like, I hold things. I hold a lot of things. And you know, things like that. I can’t really explain it.
Tom looked away.
Me: Tom, why do you think you go to see Ms. R?
Tom: Um . . . Do you mind if I go get snack now?
What are we to make of this interview? In a prior interaction, Tom had explained to his class that “leaves are like the solar panels of a plant; they collect energy from the sun and help create food, just like solar panels create energy that heats a house.” Now he had nothing to say. Why did a child with superb reasoning skills and self-awareness not understand the purpose of his twice-weekly OT sessions? Did Tom even know he was on an IEP, and if so, did he know for what reason? Did Tom’s lack of engagement in his therapy suggest that it might be failing to target his identified needs? And in any case, what were his actual needs?
I wondered, too, why there was so little communication among the school staff, whose opinions about Tom’s behavior were in disagreement, and why there was no mechanism to meaningfully update the IEP when its purpose was no longer valid. Once Tom was caught up in the system, there seemed to be no way to spring him from it, and no meaningful opportunity for the teacher to change the plan. On philosophical or practical grounds, exactly how much are we supposed to care that a child is unable to keep still if he is above grade level in his work and appears not to be bothering others too badly? What is the real-world significance of a seven-year-old holding a pencil awkwardly or occasionally grabbing peers in a silly way? And if it is indeed significant, how is that significance to be weighed against the failure to appreciate the incandescent intellect of a young child who can compare leaves to solar panels?
It’s easy to feel cynical about who, exactly, benefits from Tom’s putative disabilities. Certainly, some people’s jobs depend on a steady stream of Toms to justify their employment. But it’s too easy to demonize specific individuals or professions, when it’s the system that lends itself to such egregious miscalibration.
Given the pronounced pathology bias in training programs for educators, clinicians, and researchers, typical development is often given short shrift in the literature in favor of atypical development. In some ways, the scientific method has always worked like this, exploring the special, not the ordinary. But this isn’t always sensible, or even scientific. Thirty years after my time as a mental health worker, I still remember the shocked expressions of a group of psychology interns when an eminent emeritus psychologist at Harvard listened carefully to a lengthy case presentation at clinical grand rounds, paused to reflect, and queried, “I’m just wondering: Do we ever find that a patient is ‘within normal limits’? Other kinds of doctors send patients for tests that come back normal. Why not us?”
Teacher certification programs also tend to gin up the pathology, with “differentiated instruction” for students with alleged limitations in learning style. Support for unique “learning style” instruction is grounded in increasingly discredited beliefs that a “visual learner” shouldn’t be pushed to try homework assignments using his ears.24 I want to be clear that this learning style approach, which has virtually no scientific support despite its zealous adoption in teacher education programs and schools, differs from the theory of multiple intelligences, or cognitive orientations, advanced by one of the twentieth century’s educational giants, Howard Gardner. His theory of multiple intelligences doesn’t perforce dictate, or even suggest, specific methods of instruction. But educators have latched on to his ideas to imply that certain kinds of learners can only do certain types of assignments.
This line of thinking has become such reflexive dogma that new teachers are encouraged to see this learning style–based instruction as a strengths-based approach that brings out the best in children’s innate capacities. But who can fail to see that it’s actually a deficit-based framework accentuating what children can’t do, and what we’ve assumed they will never be able to do? As Gardner himself has noted, “If people want to talk about ‘an impulsive style’ or ‘a visual learner,’ that’s their prerogative. But they should recognize that these labels may be unhelpful, at best, and ill-conceived at worst.”25
One teacher I admire believes that children who appear to have a weakness for processing auditory information need opportunities to strengthen their listening skills, not rationales to work around them. Believe it or not, this is increasingly revolutionary thinking in a lot of classrooms! This teacher routinely delivers homework assignments orally, even to second graders, and despite the initial pushback from parents and colleagues who are skeptical that children can be forced to learn “inauthentically,” she reports that, with practice, all of the children can remember the assignments. Stories like hers make me wonder if a focus on all this specialness has mutated too easily into what President George W. Bush infamously called the “soft bigotry of low expectations.”
Focusing on children’s loose parts also encourages us to see children as miniature adults because we instinctively look for the shared attribute between adult and child, rather than the overall distinctive condition of childhood itself. We now talk about a little girl who kisses a boy at school, for example, in the language of sexual harassment. We describe children who make pretend guns with their hand in almost the same language as adults who carry actual guns. In fact, we are more punitive with “armed” children! Adults are merely respectfully requested to leave their firearms behind when they visit a Target store,26 but you can get suspended from kindergarten for pretending to fire a fake gun.
Surely it is possible for a teacher to gently correct an overly enthusiastic child without resorting to suspension, or to stop a first grader’s tantrum without calling in police to handcuff her, as happened in a Georgia school a few years ago.27 It’s a sad commentary on American childhood that affectionate children can’t express themselves naturally without fear of being considered sexual harassers or perverts.28
Well-meaning adults can also turn childhood’s dynamic phases into static traits that follow a child through adulthood. Years ago, working in a children’s psychiatric hospital, I was taught that children were incapable of experiencing real clinical depression the way adults did, and as a result, few doctors took their unhappy states seriously enough. But embedded in that neglect was also a kind of faith in the young child, a faith we may have lost, that the suffering child might overcome or even one day outgrow some of the injury, before having its label etched permanently on his brow like a regretted tattoo.
And sometimes those injuries really can be overcome. Recent studies suggest that approximately 10 percent of children with autism outgrow their symptoms in adulthood.29 One landmark study, published in 2007, found that young children diagnosed with ADHD had normal brains that simply developed at a slower rate, which helped to explain why many children outgrow an ADHD diagnosis by middle school.30 Another major study in 2006 found that children’s behavior problems in kindergarten did not predict their academic attainment by the end of elementary school. These findings suggested that it might be adult expectations of young children that are off kilter, not the children themselves.31
Sometimes attention to risk factors, while commendable, can shift our gaze from the children who lack identifiable risk factors but nonetheless still face risk. Let’s consider the much-remarked-upon problem of little boys struggling in school. Most of us have an image of a problem child—the kind of kid who misbehaves in school—and it’s usually an image of a male child that we conjure in our minds. We can be forgiven for making this assumption because it’s boys, on average, who have more trouble sitting still in classrooms and are more likely to show what the child experts call “externalizing” behaviors (pulling somebody’s pigtails, for example, or acting out enough to get expelled). Boys have twice the incidence of learning disabilities and are three times more likely to be diagnosed with ADHD, for example.32 Whether these statistics reflect bias or plain vanilla reality, these are the known facts.
But what should be done about the phenomenon? Some educators think the disproportionate behavior problems of little boys argue for more single-sex schools, but this raises some very thorny questions: Do girls have any trouble at all sitting still in classrooms, or is it just the boys? Well, all right, do the girls have 50 percent as much trouble as boys? Twenty-five percent? Seeing their negative behavior through the gender lens obscures the reality that it’s hard for all young children to sit through unimaginative and taxing classes. The Slice-O-Matic approach to childhood invites adults to draw arbitrary demarcations between a minority of children (misbehaving boys, let’s say) who have a problem deemed worthy of fixing, and the rest of the population, which is expected to take its lumps and put up with lousy lesson plans and uninspiring teaching. I’m caricaturing this perspective, of course, to make a point: seeing only the parts makes it far too easy to consign children to one of two categories, “problem” and “problem free.” In doing so, we give ourselves permission to avoid serious intervention, such as making schools work for all children regardless of their ability to cope with the status quo.
When I mention sex differences as one of the archetypal examples of this sort of fragmentation, I don’t want to suggest that I am oblivious to gender variation in play, or in other areas of learning. Even casual observers can detect these differences within seconds of walking into a classroom. But the truth is that gender variation is far less pronounced in the early years than people realize. Unfortunately, where gender is concerned, a balanced message gets awfully muddled in discussions about meeting children’s needs.
It’s worth remembering that the variation within each sex is far greater than the average difference between the sexes. Average height is a good illustration of this point. No one would dispute that men are taller on average than women. If you were designing bathroom sink heights for monasteries and nunneries, this average height difference (five foot nine versus five foot four) might be meaningful. But even if we exclude the very tallest and very shortest people at the extreme ends of the continuum (and only look at the 90 percent of people in the center of the distribution), men’s height would still vary a lot: from five foot four to six foot three (an eleven-inch range), and women’s from four foot eleven to five foot nine (a ten-inch range).33 Each of those ranges is roughly double the five-inch difference between our hypothetical five-foot-nine man and five-foot-four woman. So averages can obscure a lot when it comes to understanding children.
Similarly, we convince ourselves that Asian children have on average higher IQs than white children, or that children who have learning disabilities will on average do worse academically than children who do not, but we need to be a lot more cautious about polarizing distinctions. For one thing, children change and move on from any label, and they are also far too frequently mislabeled.
This approach to children can have a strangely denaturing effect on childhood. It’s as if we’ve thrown children into a giant sieve marked “Childhood,” given it a good shake, and then filtered out all the loose parts: dyslexia, shyness, “really great at soccer,” and “afraid of math because she’s a girl.” We’ve sifted out big attributes like poverty and race and ethnicity and gender, too. What do we have left? The essence of childhood is looking very puny and incoherent. Where did our powerful, inventive little humans vanish to?
Among all the changing child-rearing norms of the last fifty years, the rise in what I would call epidemiological parenting takes the fragmentation of childhood to new heights. As public health information has become more widely available, parents are under increasing pressure to use statistics about risks to inform their parenting choices. Of course, none of us are very good at risk assessment, so we fear child-snatchers more than our neighbor running a stoplight. But, here again, there are good and bad sides to this development. In general, the world of health data, consumer protection, and safety practices has dramatically improved young lives. Accidents, with their whiff of act-of-God inevitability, are out; preventable injuries are in. Fewer children are dying from choking hazards or unfenced swimming pools as a result.
In all the spilled ink about the looming menace of today’s helicopter parents, the media reports have neglected to mention that between 1960 and 1990, the United States saw a 48 percent reduction in deaths from unintended injuries and accidents among five- to fourteen-year-olds. The drop was even more significant in the toddler and preschool years: a full 57 percent reduction in accidental deaths among children between ages one and four.34 Memo to anxious parents out there: Nice work! Your hypervigilance is paying off! (Improved safety regulations made a difference, too.)
Whenever I hear contemporary parents criticized for their phobic vigilance, as if they were expressing an irritating personality tic or goofy parenting fad, I want to forgive them, and even laud them, for their (as it turns out) entirely rational expectation that their child should survive to adulthood! Child death is about as socially unacceptable as a human phenomenon could be. The victory over childhood mortality is possibly the most important piece of the story of how children have become so precious to us, and we must keep it in mind as we consider the many ways that modernity has not only changed childhood but even, fundamentally, enabled it.
In fact, child survival is one of humanity’s surprisingly recent success stories. Historically, many people didn’t experience something called childhood because . . . they were already dead. Today, in the industrialized world, mortality of children under age five hovers around 5 per 1,000. By contrast, in nineteenth-century Sweden, one third of young children died before age five; in Germany, the child mortality rate was 500 per 1,000 children.35 And early childhood mortality among modern hunter-gatherers is one hundred times more than in the United States today.36
Adult lifespan, however, has actually remained remarkably consistent across human populations, including hunter-gatherers.37 If you made it to middle age a century ago, you lived almost as long (within about ten years) as adults do today. So the main increase in life expectancy for Americans over the last century has come from the steep decline in infant mortality and in child deaths from things like infectious diseases. We need to try to wrap our heads around this: the crushing of child death in the developed world over the last one hundred years is something truly radical and unique in the history of our species, because our species only gets to conquer childhood mortality once.
Most of these improvements occurred because of rising living standards and the implementation of public health measures early in the twentieth century. But before we get self-congratulatory, we also have to consider those wretched trade-offs. While safety-based parenting may reduce childhood mortality, it could conceivably shorten our lifespans insofar as overly protected children are often less physically active ones, too, and they could be set up for a lifetime of health problems related to a sedentary lifestyle. And even if those oversupervised kids do end up becoming iron men in their dotage, there is still the danger of driving them out of their minds before they’ve even reached kindergarten. The reduction of children to collections of risk factors creates levels of anxiety that appear, and indeed sometimes are, clinically paranoid.
In one recent case, nervous parents in Orange County, California, called the police to investigate a possible “serious stalking incident,” when they found porcelain dolls resembling their children on their doorsteps one morning. The police issued an alarming bulletin and media outlets pounced on the “creepy” dolls. Time magazine helpfully clarified that there were “few things spookier” than receipt of such a doll—which would suggest a rather limited familiarity with the horror genre. In any case, it quickly became clear that the alleged stalker was just a kindhearted elderly woman and fellow parishioner at the girls’ church, who was giving away her doll collection to show the girls a “delightful surprise.”38
A few hardy souls are pushing back on the deficit model of childhood, with all its attendant anxieties and distortions. They’re the Atlantic salmon of the child-rearing world, swimming upstream against fearmongering and the notion of children’s fragility. You can spot their kids fooling around on the roof of the neighbor’s garage, and they often come to grief when they unleash their unfettered parenting style on a neurotic, house-trained populace.
In a reflective essay published in 2014, a mother described her conscious decision to leave her screaming four-year-old in the car for a few minutes (on a cool day) while she ran into a store to do an errand rather than drag the out-of-control child with her.39 It was a rational, if pressured, choice, she explained, made intentionally within the band of not-great choices available to her at the time (and distinct from the nightmare of children left unintentionally in overheated cars). Parents of earlier generations made those kinds of imperfect calculations on a routine basis, a fact I can confirm as someone who clocked many hours alone in hospital parking lots, waiting for my father to finish making rounds.
Nonetheless, the mother was arrested and sentenced to a year of community service, and the volley of retribution was severe, even for an online parent confessional. Defenders rose up, including the infamous “Worst Mom in the World,” who earned that title for letting her nine-year-old son ride the New York City subway alone.40 Commenters noted the hundreds of more serious risks to which parents subject children all the time without any legal or public sanction. Others intoned, absurdly, that no risk was ever worth taking where a child is concerned, and, further, that if the mother/criminal were a better parent, the child wouldn’t have had the tantrum that precipitated the act in the first place. I certainly never thought of my father as criminally negligent and, indeed, I liked going on errands in his company. At least the mother in question thought to turn off the car engine, something my harried father would probably not have bothered with.
One of my favorite zero-risk-tolerance incidents in this regard, if I can use the word “favorite” to describe something so irritating and irrational, was the nut hysteria at my children’s school (as at many others) in the mid-2000s.41 The rise in nut allergies is an especially good example of our deficit-riddled vision of childhood because parent-reported peanut allergies in children doubled over a period of just five years,42 an increase so dramatic that it hints at the possibility that, in addition to a real, documented rise of peanut allergies in children (more about that in a moment), some of the parents’ reporting might be embedded in cultural and psychosocial beliefs about children’s vulnerability. I don’t mean to single out one state from a national trend, but Massachusetts was ahead of its time, educationally speaking, as it always is (the state would rank second in the world for eighth-grade science scores if it were its own country),43 and there was a nut policy on the books in the early 2000s.44
What happened was this: We’d been asked to purchase wrapping paper and small gift items as part of a school fund-raiser, and because I’d apparently ordered pecan turtles or somesuch, I was told that, in order to comply with the no-nuts rule, I would be allowed to collect my offending items only after regular school hours, and only from a loading dock at the back of the gym. A few other chastened parents fell into this category, too, and we felt shamed like heroin addicts, shambling up to the back door of the clinic for our methadone dose, apparently too morally incontinent to resist buying nuts from an elementary school, for heaven’s sake (think of the children!).
I tried to point out to the school administration that this subterfuge probably wasn’t necessary: my peanut clusters would be vacuum-sealed (in a metal tin, as it turned out) and not just flying around loose in a box, and, furthermore, the sealed container would itself be confined in a specially addressed packing box, stuffed with Styrofoam peanuts, in a lovely bit of irony, and marked clearly with my name, as typically happens when people receive packages, and not the name of a nut allergy–suffering kindergartner unrelated to me. My arguments were unpersuasive.
This was only one of many nutty capers I was party to. I once watched a school bus evacuation on a field trip after a chaperone spotted a rogue peanut rolling around in the aisle. I would have quietly picked it up and shoved it into my purse to avoid drama; instead, the children were subjected to a whole lockdown procedure before getting back on the bus, looking as if they’d been exposed to mustard gas.
The irony of all of this is that, in my teaching experience, the rare preschooler with a documented food allergy was actually very competent at self-management, as were the parents, and quite willing to adapt to a real world full of daily hazards. It was usually the children without allergies, or those who fell in some hazy area of food sensitivities or undocumented but perceived allergies, who experienced the most fear of food.
I even began to notice nonallergic children telling me they were allergic to a particular food they didn’t like at snack time. I would race in a panic to check the allergy list, wondering how a child’s destiny could have slipped through my hands, only to discover that four-year-olds are very clever when it comes to avoiding carrots. But joking aside, what does it mean to the young child who has acquired the vocabulary of vulnerability and life-and-death stakes even as he himself is perfectly safe?
The prevalence of medically confirmed nut allergies has in fact increased in the last decade, although half as rapidly as that of parent-reported prevalence, to around 2 percent of the population, and researchers have hypothesized that this increase could be explained by widespread implementation of pediatric guidelines cautioning parents to restrict nut consumption during infancy. Scientists grew curious when they observed that Jewish infants in the United Kingdom showed ten times the rate of peanut allergies of Israeli infants, the difference likely explained by the timing of when these babies were exposed to foods containing peanuts (the Israeli babies were fed nuts much earlier). This observation led to a successful randomized controlled clinical trial in which groups of babies were divided into nut-consuming and no-nuts groups. The results were striking: whereas 14 percent of the no-peanuts babies went on to develop allergies at sixty months, only 1 percent of the peanut-consuming babies developed a nut allergy at the same stage. In this study at least, the cure was the poison. What an irony that our attempts to protect children may, literally, have been making them sick.45
The types of diagnostic errors we discussed earlier are actually even more complicated, because testing errors are an especially big problem in populations with a low rate of the attribute being measured. Imagine giving a pregnancy test with an impressively low false-positive rate of 5 percent to a group of ten-year-old boys. Since zero percent of ten-year-old boys can in fact be pregnant, five out of one hundred of them will be falsely deemed pregnant, and every single positive result will be wrong. Now take the same test, with its 5 percent false-positive rate, and administer it to a group of women, all of whom are indeed pregnant. Here the false-positive rate is irrelevant, because there is no one in the group who isn’t pregnant; every positive result is a true one, which gives us an inflated sense of confidence in the test.
In other words, even if a test reliably flags people who actually have the attribute of interest, it also carries some unavoidable rate of false positives (people without the condition who are wrongly labeled as having it), and the test’s overall performance depends heavily on the baseline prevalence of the condition being tested for. For uncommon conditions, such as peanut allergies, it is entirely possible that, in a population of kids most of whom do not have the condition and only a small fraction of whom do, the subpopulation of children labeled as having the condition would actually consist mostly of false positives.
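To make the arithmetic concrete, here is a minimal sketch in Python, using hypothetical numbers rather than real allergy data: a population of 10,000 children, a 2 percent true prevalence, and a test that catches 95 percent of real cases while mislabeling 5 percent of healthy children.

```python
# A minimal sketch, with hypothetical numbers, of why low prevalence inflates
# the share of false positives among children who test "positive."
def positive_breakdown(population, prevalence, sensitivity, false_positive_rate):
    """Return (true positives, false positives) produced by a screening test."""
    affected = population * prevalence
    unaffected = population - affected
    true_positives = affected * sensitivity
    false_positives = unaffected * false_positive_rate
    return true_positives, false_positives

# Assumed figures: 10,000 children, 2 percent true prevalence, a test that
# catches 95 percent of real cases and mislabels 5 percent of healthy children.
tp, fp = positive_breakdown(10_000, 0.02, 0.95, 0.05)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}")
# -> true positives: 190, false positives: 490
```

With those assumed figures, the test would flag 680 children in all, but 490 of them, roughly seven in ten, would be false positives.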
This wouldn’t be the first time the medical profession has freaked people out with crummy advice. Parents are not the only drivers of these anxiety-based norms, in other words: their enablers and coconspirators are the legal, medical, and educational establishments that stoke our fears.
For my part, I struggle mightily as I consider these alternating big-picture and tight-focus viewpoints. How do we stitch a child’s loose parts together when we need to see the whole fabric of childhood yet be able to pull them apart when we only want to see a piece of it? We can acknowledge the gains that have come from zooming in on all those little parts: we’re more sensitive to childhood’s variations; we see problems up close that we couldn’t visualize before. But there’s a cost to a fragmented childhood and it’s the loss of the experience of being “just kids.”
Modernity’s gift to early childhood is the gift of space and time—before dying or being sent down a coal mine—simply to grow up. But nowadays we often seem to snatch defeat from the jaws of victory. The tremendous irony of our increasingly fragmented childhood landscape is the danger of giving up on early childhood just as we can now count on it so reliably.