I had received a distressed email from “Cathy,” a well-known entertainer in Los Angeles. Cathy was desperate to get some help for her screen-addicted 17-year-old son, “Mark.” Cathy, a woman of means, had been to 13 psychiatrists and psychologists in Los Angeles, and not one was even remotely familiar with screen addiction. Rather than helping, Cathy told me, “They actually did more harm than good by not understanding this issue.”
Mark, who had started by fiddling with a computer when he was five because his well-intentioned mother thought it could be educational, fell into a horrible screen addiction that was destroying his life. From the time he was very young, his mother told me, his whole demeanor would shift when he got in front of a screen; even the GPS in her car would hypnotize him.
Once he discovered video games at the age of ten, it was all over. He would steal from his mother to buy video games and game consoles; he would get violent and aggressive when he wasn’t allowed to play; he lost all interest in school and hobbies that he had loved: “He used to love to drum, now [the drums] just sit there . . . he just wasn’t the same kid anymore.”
After the failed attempts to get help from the 13 uninformed therapists (“Let the boy play his games; if you take them away from him, he’ll be hopeless”; “Video games are very important for boys socially”), Cathy read all that she could about screen addiction and eventually pulled the plug on all of Mark’s electronics. His school was entirely uncooperative with her efforts to keep him screen-free, which then prompted her to enroll him in a residential therapeutic school that worked with her to keep Mark away from the electronics.
“Most of these schools are so tech whipped!” Cathy said in exasperation. “Even in some of the best therapeutic schools that I interviewed—that supposedly knew about tech addiction, they would say, yes, but for school he could use the computer . . . it’s like they give the machines a free pass.”
But in Mark’s case, he had proven that he couldn’t handle computers—even just for school use. He would lie to his mother that he had to use the computer (which she had to lock in her bedroom) to research various school assignments; after he had spent hours hypnotically surfing the Net on various topics, Cathy would find out that the alleged school assignments were nonexistent—like a true addict, Mark just needed to experience the glow.
Mark has been screen-free in his residential school for almost a year now; he is doing quite well and is even thinking about going to college—something that would have been unimaginable a year ago. We are now working with his school on a plan to slowly reintegrate computers into his schoolwork, and I am helping Cathy craft a tech plan for when Mark moves back home in May.
“This was just like any other addiction—and, in some ways, worse because it’s so new that we don’t have a lot of precedent on how to deal with this problem,” Cathy said. “We needed to treat this very seriously and carefully. Unfortunately, there just wasn’t a lot of help or awareness of the problem out there.”
Electronic Soma
Although it may come as a shock to some—even to trained therapists—the idea that electronic screens could have an addicting, drug-like effect is not new.
In 1985, 25 years before Steve Jobs, in his trademark black turtleneck, introduced the world to the game-changing iPad, a soft-spoken, visionary intellectual NYU professor named Neil Postman wrote a prophetic little book called Amusing Ourselves to Death.1 In it, he suggested that we were living in the equivalent of Aldous Huxley’s Brave New World, only instead of Huxley’s imaginary drug, soma, our addictive elixir was the “new” electronic medium—television.
It was a provocative idea: TV as a cocaine-like drug.
Postman believed that, like soma and cocaine, this visual medium was so highly addicting that it was creating an entire society of uninformed pleasure-seekers. Keep in mind that Postman’s prophetic bit of wisdom was written well before the world had even glimpsed an Xbox, smartphone, iPad, tablet or laptop.
In fact, the insidious tech that Postman was referring to was the rather quaint—by today’s standards—boob tube, with the hot-selling Sony Trinitron being the iPad of its day. The most popular content on this electronic scourge during 1985? The hardly nefarious-sounding Cheers, The Cosby Show, Dynasty and Miami Vice.
But that’s the thing with visionaries like Postman—they can see further over the horizon than most of us. While the majority of people in 1985 probably didn’t think that watching Ted Danson on Cheers was the harbinger of a dystopian future society zombified by soma-like tech, are we not a bit more concerned about tech effects and tech addiction in the year 2016, otherwise known as the year 6 A.i. (After iPad)? In fact, the research is showing us that if TV was soma-like cocaine, then the more powerful, hyperstimulating and interactive iPads can be considered the more addicting crack of the electronic media landscape.
Moreover, Postman didn’t believe that electronic media were merely an addictive drug; like philosopher and communication theorist Marshall McLuhan before him, he believed that television also marked a major shift in human development, fundamentally changing not only the way that we communicate but also the way that we think.
He argued that since television images had replaced the written word as the dominant communication medium, our ability to engage in in-depth rational discourse and the dialectical engagement of serious and complex issues—an ability that had evolved over hundreds of years as a consequence of a reading culture—had now been compromised. We had effectively been dumbed down as the depth of written language was replaced by the superficial visual images of television’s information-as-entertainment.
An intelligent and thoughtful man not prone to emotional histrionics, Postman became deeply troubled by what he saw on the educational horizon as well. As a professor at NYU’s prestigious Steinhardt School of Education and the chair of its Department of Culture and Communication, Postman knew a thing or two about education.
Aside from its addicting effect, he wrote that, from a pedagogical perspective, electronic media were simply not effective or appropriate for the classroom. He believed that the then-recently introduced personal computer (PC), like television, offered a passive and shallow top-down form of information transfer rather than the engaged and dynamic cognitive interaction required when reading complex written discourse. The “personal” aspect of the PC bothered him as well, because it eliminated the dialectical engagement of the teacher-class dynamic, which had always been a group—not an individual—process.
In 1995, ten years after the initial stir he caused with Amusing Ourselves to Death, Postman was interviewed on The MacNeil/Lehrer NewsHour (broadcast, ironically enough, on the dreaded electronic medium of TV—an irony that he noted). During the interview, he elaborated on his opposition to the increasing use of PCs in the classroom—and to individualized learning in general—arguing that they lacked the group dynamic that he considered a critically important ingredient in both education and the socialization process.
Today, in a world of online education and virtual classrooms with 10,000 students—who are all “alone together” and isolated in front of their respective computer screens—one wonders what an educator like Dr. Postman would think if he had lived to see the exponential expansion of technology within education as a form of mass communication.
Postman had been stirring the cultural pot and creating controversy about his concerns regarding the media and technology his entire life. In 1982, three years before writing Amusing Ourselves to Death, he had penned an equally dystopian vision of the near-future entitled The Disappearance of Childhood, in which he suggested that the new electronic medium of television would cause childhood to go the way of the Edsel:
“I am going to argue that our new media environment, with television at its center, is leading to the rapid disappearance of childhood in North America and that childhood will probably not survive until the end of this century . . . that such a state of affairs represents a social disaster of the first order.”2
A prescient Postman suggested that the divisions that had developed between children and adults were eroding under the electronic barrage of television, which exposed kids to the previously taboo adult concepts of sex and violence.
Fast-forward three decades: the electronic “liberation” of previously adult themes that had started with television has only been amplified by the information free-for-all of the Internet. While the Web has democratized knowledge, it has also, unquestionably, exposed and sexualized kids and accelerated their development into adulthood. In our YouTube era, when any kid with a tablet can literally see anything ever recorded—from snuff films to porn—is there any doubt that the quaint notion of “childhood” is slipping away?
Recently, the ninth-graders in a therapy group I had been working with—basically good kids with some emotional issues—were all talking about a beheading and dismemberment video that they had just seen online.
“Don’t your parents block that from your computers at home?” I naively asked. Jake, the group’s leader, just smirked: “At home? We just watched that at school—we’ve disabled the security filter—it’s easy.”
Gone are the days when an adolescent boy who glimpsed a copy of Playboy had imagination fuel for the year; now, daily graphic images stream unfiltered into the eyes and minds of our kids, forever seared into their psyches. One of my 14-year-old clients, troubled by things that he couldn’t un-see, tried to thoughtfully warn me: “Dr. K—don’t ever look at that Web site—you’ll never be able to get those images out of your head . . . I know that I can’t.”
Yet as technology and the information free-for-all rob kids of their innocence and blur the notion of childhood, they also paradoxically perpetuate and extend adolescence. Historian Gary Cross describes this phenomenon as “delayed social adulthood,” in which adolescence in the tech era is being redefined to extend well into people’s twenties and even thirties.3
Postman foresaw this phenomenon decades ago, as he understood that hyperarousing glowing screens are addicting—and that if a kid gets hooked, he or she may be hooked for life in a state of perpetual pleasure-seeking adolescence.
Beyond television, who or what is the main culprit in this “failure to launch,” where apathetic and emotionally stunted boys do not become men? Cross blames video games: “In 2011, almost a fifth of men between 25 and 34 still lived with their parents,” a scenario in which many play video games and “the average player is 30 years old.”
Dr. Leonard Sax (author of Why Gender Matters) writes extensively about this adolescent malaise in Boys Adrift (2009); he, too, cites our video game culture as one of the main culprits in the “failure to launch” dynamic.
In addition to being addicting, according to Sax, video games do not engender the sense of resilience or the patience and drive that the real world requires. In real life, when people lose at sports, they have to lick their wounds and process those experiences as they learn to eventually get back on the horse to compete another day. All of that fosters resilience and emotional growth. When you lose in a video game, you hit the reset button. Game on.
Author and psychiatrist Dr. Mark Banschick adds: “From the psychiatric couch, I have come to see avoidance as being part of a generational style; at least in a sizable group. Boys in particular love their video games and have developed an expectation of instant gratification that makes schoolwork and other chores seem too much. The brain is a developing organ, and we’ve been feeding our boys (and to some degree girls as well) with brain junk food.”4
Thirty-year-old adolescents . . . ten-year-olds who have “seen it all” on YouTube . . . how did this happen? How have we become a society of sexualized kids-as-adults but also, paradoxically, 30-year-old quasi-teenagers?
Perhaps a bit conspiratorially, Postman believed that there was a political subtext to addicting electronic media: just as soma in Brave New World was a mechanism of societal control, so too was our electronic addiction sedating the masses to make us more vulnerable to oppression. Having personally worked with hundreds of poor families of color, I can say that I’ve also been struck by the politically sedating effect that Xbox has had on some of these young disempowered boys.
New Technology: The Good, the Bad and the Ugly
Of course, Neil Postman wasn’t the first person to cast a wary—perhaps even apocalyptic—eye toward a new form of communication technology; history is replete with technophobic Chicken Littles who warned against the evils of everything from the typewriter to the telegraph to radio to moving pictures . . . all of those inventions had their detractors, who were convinced that the fill-in-the-blank latest technological advancement would bring about the end of civilization.
We can even go as far back as ancient Greece and Socrates for “this-new-medium-will-be-the-end-of-us-all” reactions. Unlike Postman, Socrates did not think much of books and the written word; as a proponent of oral storytelling, he thought the written word would kill our memory skills and make us all bloody idiots: “[It] destroys memory [and] weakens the mind, relieving it of . . . work that makes it strong. [It] is an inhuman thing.”5
In addition to memory atrophy, Socrates was also concerned that books would allow information to be communicated without the author or teacher being personally present. Like Postman, he believed that true learning necessarily entailed a living, dynamic give-and-take between teacher and student (the dialectic); but, unlike Postman, Socrates believed that a book was a static form of information transfer. According to the wisest man in all of Athens, only fools thought that they were wise because they had learned something from teacherless books: “[Books], by telling them of many things without teaching them,” will make students “seem to know much, while for the most part they know nothing.”
But, luckily for us, unlike Socrates, his student Plato did write. Prolifically. That’s why, ironically, I know about Socrates’ aversion to books by reading about it—in a book . . . just as Neil Postman vented about the dreaded medium of television—on television (during that MacNeil/Lehrer NewsHour interview in 1995).
Perhaps it is true, as a long line of philosophers from Socrates to Marshall McLuhan and Neil Postman have told us, that technology inevitably changes us and that such changes always entail some level of loss.
But can new technology, as some have suggested, perhaps also change us for the better?
Maybe—although at a very high price.
In 2015 the journal Addiction Biology published a study, a collaboration between the University of Utah School of Medicine and Chung-Ang University in South Korea, that examined the brain scans of 200 adolescent boys described as video game addicts.6
According to Doug Hyun Han, M.D., Ph.D., professor at Chung-Ang University School of Medicine and adjunct associate professor at the University of Utah School of Medicine, this was the largest and most comprehensive investigation to date of the way that the brains of compulsive video game players differ from the brains of non-gamers.
The researchers found indisputable evidence that the brains of video game–addicted boys are wired differently; chronic video game playing was associated with increased hyperconnectivity between several pairs of brain networks. What was more difficult to determine was whether those changes are good or bad.
“Most of the differences we see could be considered beneficial. However, the good changes could be inseparable from problems that come with them,” says senior author Jeffrey Anderson, M.D., Ph.D., associate professor of neuroradiology at the University of Utah School of Medicine.
Let’s take a look at some of these brain changes.
The good: some of the changes can help game players respond to new information. The study found that in these gaming-addicted boys, certain brain networks that process vision or hearing are more likely to show enhanced coordination with the “salience network.” The salience network helps focus attention on important events, sharpening a person’s reaction time so that he or she can take action if necessary—jumping out of the way of a moving car, for example. In a video game, this enhanced coordination could help a gamer react more quickly to an oncoming attack from an opponent.
According to Dr. Anderson: “Hyperconnectivity between these brain networks could lead to a more robust ability to direct attention toward targets, and to recognize novel information in the environment. The changes could essentially help someone to think more efficiently.”
The not-so-good: some of these brain changes are also associated with distractibility and poor impulse control. “Having these networks be too connected may increase distractibility,” says Anderson; as we know, people who have quick reflexes and reaction times can be a bit jumpy and unfocused.
The bad: what the researchers found more troublesome was an increased coordination between two brain regions, the dorsolateral prefrontal cortex and the temporoparietal junction; this is a brain change also seen in patients with psychiatric and developmental conditions such as schizophrenia, Down’s syndrome, and autism.
Not a good thing.
Distractibility and poor impulse control are also hallmarks of addiction. The researchers point out that, benefits aside, those with Internet Gaming Disorder are so obsessed with and addicted to video games that they often give up eating and sleeping in order to play them.
So let’s recap: kids can get addicted to games and not eat or sleep; they can also run the risk of developing ADHD and schizophrenia-like symptoms—but they can react and shoot targets like nobody’s business!
The question simply comes down to one of cost/benefit: is having a rewired brain that can see patterns and targets better and react more quickly worth the potential for developing impulse-control disorders—like addiction and ADHD—not to mention more serious psychiatric and developmental disorders, such as schizophrenia and autism?
Leaving autism aside for the moment, we saw in the last chapter, in the research of Drs. Griffiths and de Gortari, that gaming can indeed induce Game Transfer Phenomena and hallucinatory experiences. Is that really a risk that we want our kids taking so that they have better reaction times in a video game simulation?
The researchers concluded that the chicken-or-egg question remains unresolved: did persistent video gaming cause the rewiring of the brain, or are people who are wired differently simply drawn to video games?
But as I’ll point out in the next chapter, there is indeed a “pre and post” brain-imaging study from the Indiana University School of Medicine in which researchers scanned the brains of nongamers, had them play video games for several weeks, and then scanned their brains again. That study clearly showed neurobiological changes that were a direct result of the video game playing—brain changes that, very interestingly, mirrored those of drug addiction.
Let’s take a look at some other possible technology benefits.
In his pro-tech book Smarter Than You Think (2013), Clive Thompson extols quite a few technological virtues. Besides obvious tech-as-tool benefits, such as the use of keypads for nonverbal autistic children and many other instances in which tech can be invaluable for the disabled, Thompson discusses tech-human hybrids that work more efficiently than humans can on their own.
As an example, he discusses chess “centaurs” as one form of superior tech-human hybrid. Named after the mythical creature that was half human and half horse, a chess centaur is a chess-playing dyad in which a human player collaborates with a computer to form a single player/team. This man-and-machine dynamic is in stark contrast to the earlier man-vs.-machine model, in which, for example, chess world champion Garry Kasparov played against the supercomputer Deep Blue in 1997 and suffered a humiliating loss—one so infamous that it led to the Newsweek cover story “The Brain’s Last Stand.”
Yet it was Kasparov himself who in 1998 came up with the if-you-can’t-beat-’em-join-’em idea of collaborating with the computer, his former high-tech arch-nemesis. Kasparov’s innovative idea of creating a team composed of a human and a computer pitted against other half-human, half-computer teams came to be known as “advanced chess,” with the first such tournament, held in 1998, featuring Kasparov as one half of a centaur team.
After that initial tournament, he compared the experience to learning how to drive a race car: “Just as a good Formula One driver really knows his own car, so did we have to learn the way the computer worked.”
Yet Kasparov’s centaur team lost to a centaur with an inferior human player—one whom Kasparov had easily beaten four times earlier but who was apparently a better “driver” of his technology. In fact, lower-ranked players were often better collaborators with technology than higher-ranked ones because, according to Thompson, they knew more intuitively when to rely on the computer’s advice and when to trust their own skill in making the next move.
But does knowing when to defer to a computer enhance a human being’s skills? While the man-machine hybrid might do better, is the human half of that equation diminished by an overreliance on a computer?
In order to answer that question, let’s also look at another human-computer hybrid: pilots and their computer navigation systems, which often fly the plane for the pilot. Question: do pilots who are more reliant on their flight computers—pilots who tend to fly-by-wire more—have better piloting skills than pilots who fly manually more often?
That question was answered in a fascinating 2009 piloting study conducted by Matthew Ebbatson at Britain’s Cranfield University School of Engineering.7 In the study, pilots were asked to conduct a flight-simulator landing of a Boeing jet with a crippled engine in rough weather. Ebbatson then measured indicators of the pilots’ skill—things such as maintaining correct airspeed—during this difficult maneuver.
When he then looked at their actual flight records, he found a correlation between pilots’ reliance on their autopilots and their skill level: the more pilots relied on technology, the more their human piloting skills had eroded. Clearly, in this case of man-and-machine integration, technology was not enhancing the skills of the human.
If you don’t believe that study’s findings, just ask yourself this question: if the plane that you’re on gets hit by lightning and has its electrical system debilitated, who would you rather have as your pilot—one who has flown manually before or a fly-by-wire high-tech hotshot?
Thompson also writes about other technology tools that allegedly enhance human ability; he mentions the “extended mind” theory of cognition, which essentially says that humans are so intellectually dominant because “we’ve always outsourced bits of cognition, using tools to scaffold our thinking into ever-more rarefied realms. Printed books amplified our memory. Inexpensive paper and reliable pens made it possible to externalize our thoughts quickly.”
So do printed books amplify our memory? As I noted earlier, Socrates would certainly take issue with that statement, as he believed just the opposite: the written word weakened our memory.
Thompson also mentions today’s digital tools, such as smartphones and hard drives, and talks about their “huge impact on our cognition” and the “prodigious external memory” that is the byproduct of today’s tech.
While we can agree that technology offers incredible data storage and external memory capabilities, do we really believe that external memory tools like hard drives and smartphones actually amplify our human memory or in any way add to our human abilities or skills?
Here’s a quick experiment that you can do if you have a smartphone: can you write down the ten most frequently called numbers on your phone without looking at your phone’s memory? If you’re like most of us, you can’t. We tend to forget our frequently called numbers because we don’t need to remember them anymore—our external memory device has them covered. Less than a decade ago, that wasn’t the case; most people knew most of their frequently called numbers from memory.
So what, you might say. Big deal, I can’t remember as many phone numbers as I used to, and I use my memory less because of my handy-dandy gadgets—what harm can that do?
Well, memory, like language, is a skill that requires practice and use; otherwise, in true use-it-or-lose-it fashion, one’s memory abilities begin to atrophy. Conversely, as Socrates understood, memory is also a skill that one can strengthen through practice. And now, thanks to modern science, we have brain-imaging research clearly showing that memory practice can actually strengthen our brains and increase our gray matter.
In a study published in 2011 in the journal Current Biology, Professor Eleanor Maguire and Dr. Katherine Woollett from the Neuroimaging Center at University College London studied the brains of a group of professionals who inspire awe with their prodigious memories: London cabdrivers.8 Those individuals are required to memorize what is called “the Knowledge”—the complex layout of more than 25,000 labyrinthine streets and, additionally, several thousand landmarks including theaters and well-known pubs.
The Knowledge was developed before the prominence of GPS devices, and acquiring it is a rigorous, brutal learning process that often takes three or four years before the aspiring cabbie feels ready to take the licensing test—the Knowledge of London Examination System. Forget bar exams and medical boards—this test is so difficult that prospective applicants often make up to 12 attempts to pass, and even then only half of the cabbie trainees are ultimately successful.
So Maguire and Woollett thought that it would be illuminating to study the brains of these memory quasi-savants—“quasi” because these taxi drivers were not born with prodigious memories but, rather, developed them.
The researchers selected a group of 79 cabbie trainees and an additional control group of 31 non–taxi drivers. All 110 participants had their brains scanned at the outset of the study and were administered certain memory tests; initially, the researchers found no discernible differences, as all groups had performed equally well on the memory tasks.
Over the next several years, only 39 of the 79 trainees passed the test, which allowed Maguire and Woollett to break the participants down into three groups: those who had trained and passed the test; those who had trained and failed; and those who had neither trained nor taken the test. Then, three to four years after the initial MRI (magnetic resonance imaging) scans and memory testing—when the trainees had either passed or failed to acquire “the Knowledge”—the researchers again conducted MRIs and tested everyone on memory tasks.
This time around, Maguire and Woollett found significant changes in the brains of those who had passed the test: they had a greater volume of gray matter in their hippocampus than they had before they started training. The hippocampus, we should note, is essential in memory acquisition; in Alzheimer’s patients, for example, the hippocampus is one of the first brain regions to suffer damage.
Interestingly, this increase in gray matter of the hippocampus wasn’t present in the group who had studied but failed the test. The control group who had not studied also showed no brain increase. Perhaps, we can speculate, the group who studied yet failed—and subsequently did not show an increase of volume in gray matter—simply did not study enough to either pass the test or gain the desired brain change.
Regardless, the research clearly showed that the group who studied rigorously enough to pass the test did indeed change their neurophysiology in a beneficial way. This research also shows us that it’s never too late to change your brain. According to Professor Maguire: “The human brain remains ‘plastic,’ even in adult life, allowing it to adapt when we learn new tasks.”
Yet now we have GPS and don’t have to remember streets or directions. We also have smartphones that remember, well, everything for us. And devices that can do all sorts of things for us—they can cook, clean, make dinner reservations, fly our planes, drive our cars . . . perhaps one day soon they will even be able to think for us.
But as technology advances, does humanity recede?
Does tech that externally remembers things for us—thus lessening our need to use our memory muscles—help our human memory or weaken it? Do chess computer-hybrid centaurs make their human halves better—or does the computer do most of the heavy lifting? And what about tech devices like calculators, which help us do math, or the computers that help pilots fly on autopilot—convenient as these things are, do they work as skill-strengthening tools or as skill-dampening crutches?
Perhaps it’s as Thoreau once said: “Men have become the tools of their tools.”
As we continue to explore the effects of technology in our lives, we now also have, as we have just seen, the benefit of yet another technological tool—state-of-the-art brain imaging that wondrously and definitively shows us the negative impacts of technology on the brain.
The irony is rich: technology that shows us that technology is bad for our brains.
In Smarter Than You Think, Thompson, interestingly, decides not to look at any of the brain imaging research: “If you were hoping to read about the neuroscience of our brains and how technology is ‘rewiring’ them, this volume will disappoint you.” He goes on to dismiss the latest brain imaging research as “premature” and of dubious benefit because our understanding of the brain itself is still a work in progress, concluding: “The field is so new that it is rash to draw conclusions, either apocalyptic or utopian, about how the internet is changing our brains.”
While our understanding of the brain is admittedly incomplete, neuroscience, thanks in large part to brain imaging, is fairly far along. Several recent and compelling peer-reviewed brain imaging studies, which we’ll examine in the next chapter—some that have come out after Thompson’s book—do indeed show the damaging effects of tech on the brain, damage that, as mentioned, closely mirrors the effects of drug addiction.
So was Neil Postman prophetically correct in Amusing Ourselves to Death—are the new electronic media our soma? Thirty years later, in the age of iPads, smartphones, tablets, laptops, Google Glass, Twitter, Facebook, Oculus Rift and who-knows-what-else on the tech horizon, does it seem so ludicrous to view tech as a drug—the “electronic cocaine” that Dr. Whybrow considers it to be? Benefits aside, has digital tech become what Chinese researchers are calling “electronic heroin”?
Indeed, as we’ll read in the next chapter, interactive glowing screens have become such a powerful drug that the U.S. military is literally using them as a form of digital morphine.