The American journalist H. L. Mencken once quipped, “For every complex problem there is an answer that is clear, simple, and wrong.” Agonizing over whether digital technology is “good” or “bad” for the human mind is about as meaningless as arguing over whether a car is “good” or “bad.” Nonetheless, debates on the complex issue of Mind Change are inevitable, because they bear on the way we live our lives and the kind of people we might end up being. Rather than adopt simplistic and entrenched stances of “good” or “bad,” “right” or “wrong,” we need first to see where the various battle lines are actually being drawn, and then how we might resolve any resultant conflict in understanding and expectation.
Inevitably the biggest controversy revolves around the basic question of evidence: how strong it is and what it’s actually demonstrating. Two reports in particular, surveying the evidence over the last few years, have suggested a “glass half-full” state of affairs. One was authored by psychologist Professor Tanya Byron in 2008 on the risks that children face from the Internet and videogames.1 Her report came to the unsurprising conclusion that “the Internet and videogames are very popular with children and young people and offer a range of opportunities for fun, learning and development.” However, Byron had concerns over potentially inappropriate material, ranging from violent content to the behavior of children in the digital world. She also stressed that we shouldn’t think about a child with a digital device in isolation: the wider lifestyle picture is highly relevant, not least the child’s relationship with his or her parents.
The generational digital divide means that parents do not necessarily feel equipped to help their children in this unfamiliar space, which can lead to fear and a sense of helplessness. This sad state of affairs can be compounded by a wider risk-averse culture that is increasingly disposed to keep children indoors despite their developmental needs to socialize and take risks. While a risk-averse culture is by no means the result of screen living alone, the screen obviously provides an attractive alternative that makes it all the easier to persuade a child not to venture outside. Another uncontroversial point made by Byron’s report was that while children are confident with the technology, they are still developing critical evaluation skills and need adult help to make wise decisions. In relation to the Internet we need “a shared culture of responsibility.”
Byron’s real emphasis was on protection, but her report also touched on the wider issue of the empowerment of children: “Children will be children pushing boundaries and taking risks. At a public swimming pool we have gates, put up signs, have lifeguards and shallow ends, but we also teach children how to swim.” All that said, anyone reading Byron’s report would feel that there was, for the time being, no immediate need for any revolutionary, or even merely interceptive, action.
It was a similar story a little later in 2011, when neuroscientist Dr. Paul Howard-Jones of Bristol University was commissioned to produce a review on the impact of digital technologies on human well-being. Howard-Jones accordingly set about discussing what the field of neuroscience has established regarding the effects of interactive technologies on behavior, the brain, and attitudes, with a special focus on children and adolescents. After all, “the vanguard of our advance into this new world is our children, and especially our teenagers. We know that the developing brain of a child is more plastic, and responds more malleably to experience than an adult’s brain.”2
Commendably, Howard-Jones highlighted the need to understand the uses of technologies in a specific context rather than to label particular technologies, or technology more generally, with a blanket description of “good” or “bad.” He also noted findings that some technology-based training can improve working memory or provide mental stimulation that slows cognitive decline, while some types of gaming can improve visual processing and motor response skills. However, his review also identified three potential risks for children: violent videogames, the use of games and other technology leading to sleep problems, and excessive use of technology having a negative physical or mental impact or interfering with daily life. He went on to point out that any changes in the mindset of the upcoming generations will, most crucially, foreshadow changes in society as a whole—so the issues are relevant to all of us, whatever our age.
These snapshots from Byron and Howard-Jones depict an image of the Digital Native that is still blurred and uncertain, yet cautiously sanguine. Both reports leave at best an overall feeling of reserved optimism and at worst the usual academic-type conclusion that the jury is still out because “more research is needed.” Both Byron and Howard-Jones paint an equivocal but generally positive picture of work in progress, so long as we are constantly alert to ever-present dangers such as bullying, sexual grooming, and violent gaming. Such concerns as either author does have relate mostly to regulation. On the whole, the conclusions in both cases err on the side of the mildly positive with regard to learning, socializing, and improving mental function. The glass is half full, so long as everyone acts sensibly.
But such comforting assessments seem significantly outnumbered by voices from various professionals around the world who were not commissioned to provide a generalized snapshot of the current moment but instead deal with what happens when the use of digital technologies is not sensible. The glass then appears half empty.
First, there’s the perspective articulated in books such as iDisorder by clinician Larry Rosen3 and Alone Together by MIT psychologist Sherry Turkle,4 both of whom suggest that the more people are connected online, the more isolated they feel. In both cases, the concern is with what happens when Internet use becomes obsessive. Perhaps surprisingly, captains of the digital industries themselves are also worried. Biz Stone, a cofounder of Twitter, made headline news by stating at a conference: “I like the kind of engagement where you go to the website and you leave because you’ve found what you are looking for or you found something very interesting and you learned something.”5 The idea would be that you use Twitter to enhance the quality of your real life. But even he believes that using Twitter for hours at a time “sounds unhealthy,” presumably because it means his invention has become a lifestyle in itself. Then there’s Eric Schmidt, erstwhile CEO and now chair of Google: “I worry that the level of interrupt, the sort of overwhelming rapidity of information … is in fact affecting cognition. It is affecting deeper thinking. I still believe that sitting down and reading a book is the best way to really learn something. And I worry that we’re losing that.”6
This worry is prescient in the light of what many neuroscientific and medical experts are voicing.7 For example, neuroscientist Michael Merzenich, one of the pioneers in demonstrating the incredible adaptability of the nervous system, has concluded, in the typically restrained language required of his profession: “There is thus a massive and unprecedented difference in how their [Digital Natives’] brains are plastically engaged in life compared with those of average individuals from earlier generations, and there is little question that the operational characteristics of the average modern brain substantially differ.”8
Educators are also voicing worries. In a 2012 survey of four hundred British teachers, three-quarters reported a significant decline in their young students’ attention spans.9 In the same year, a survey of more than two thousand U.S. secondary school teachers found that 87 percent believed digital technologies are creating an “easily distracted generation with short attention spans,” and 64 percent agreed that these technologies do more to distract students academically than to help them.10 The range of professions voicing concern about the drawbacks of digital devices was well illustrated in an open letter written in September 2011 to the respected British newspaper the Daily Telegraph and signed by two hundred teachers, psychiatrists, neuroscientists, and other experts expressing alarm over the “erosion of childhood.”11
However, perhaps the most telling survey of all is one that targeted aficionados of cyberspace themselves. The Pew Research Center in the United States, along with Elon University, asked more than one thousand technology experts how the brains of “millennials” (a term pretty much interchangeable with “Digital Natives”) would change by 2020 as a result of being so connected to online digital technologies.12 These professionals were asked which of two contrasting predictions was the more likely for the immediate future. One was extremely positive:
Millennials in 2020 do not suffer notable cognitive shortcomings as they multitask and cycle quickly through personal- and work-related tasks. They learn more and are adept at finding answers to deep questions, in part because they can search effectively and access collective intelligence via the Internet. Changes in learning behavior and cognition generally produce positive outcomes.
The other was more negative:
Millennials in 2020 do not retain information; they spend most of their energy sharing short social messages, being entertained, and being distracted away from deep engagement with people and knowledge. They lack deep-thinking capabilities; they lack face-to-face social skills; they depend in unhealthy ways on the Internet and mobile devices to function.
The group of digital experts was split rather evenly on what they predicted for the future. But perhaps most tellingly, many of those who went along with the positive prediction noted that it was more their hope than their best guess. So even the 50 percent or so of professionals who regard the screen culture in a favorable light overall do so, in many cases, from a stance of wishful thinking rather than of certainty or rational argument.
Further evidence that something might be going awry is perhaps every bit as compelling as expert opinion or epidemiological and experimental research: the very apps and websites now on offer point to clear trends in the tastes and proclivities of current society. One app, paradoxically called Freedom, will block your Internet access for a user-specified amount of time, while Self-Control will enable you to bar yourself from websites that you feel you are following too slavishly but are helpless to resist. Zadie Smith, author of the acclaimed bestseller White Teeth, for instance, credits these two Internet applications in the acknowledgments section of her latest work.13 Apparently she was struggling to maintain her concentration while writing her new book because of the diversions available just a click away on the Internet. So she was grateful to the apps for “creating the time” in which she could write.
And Zadie Smith is not alone. The success of these enterprises obviously raises the question of why they are flourishing. Why should increasing numbers of people require some external service to stop them from using the Internet, rather than just switching it off for themselves? As with junk food or cigarettes, we become addicted to the distraction of an external input that determines and shapes our actions, choices, and thoughts. The existence of these apps does not in itself mean that there’s an epidemic of screen addiction, but it does imply that enough customers experience these problems to make the apps profitable. We cannot ignore the fact that even the platforms and users themselves implicitly acknowledge that screen technologies can be something we use compulsively.
Another unprecedented feature of our current society is the lightning-speed dissemination of information. The hyperconnected blogosphere reaches more people more quickly than satellite radio and television: the Pakistani citizen who unwittingly tweeted live updates of the raid on Osama bin Laden’s house reached a large audience faster than any other form of media could have. Yet, for precisely that reason, the blogosphere is the perfect medium for spreading misinformation relating to complex issues, or even for just oversimplifying them. Such is the concern of the World Economic Forum’s Risk Response Network, which provides leaders from the private and public sectors with an independent platform to map, monitor, and mitigate global risks. Its 2013 annual Global Risks Report analyzed the perceived impact and likelihood of fifty prevalent global risks over a ten-year time horizon; among those listed was “digital wildfires in a hyperconnected world.”14
I first joined the fray over the impact of digital technologies back in February 2009 with my speech in the House of Lords (described in the preface to this book) on the possible unexpected effects of social networking on the human mind.15 All I did was make the neuroscientific case for the well-accepted plasticity of the brain and point out that new types of screen experience would likely have a new type of impact on mental processes. The reaction, worldwide, was disproportionate to the tentative syllogism I was putting forward. While some seemed to agree with me, others were emphatic in insisting that there was “no evidence” for what I was saying.
While one might think this issue of evidence would be an easy matter to resolve, the problem with a simple negative claim such as “there is no evidence” is that, even if there really were no scientific findings at all to back up my concerns, absence of evidence is not evidence of absence. In science, experiments can conclusively establish only that a finding is positively the case, never the reverse. After all, it might simply be that the test you are using isn’t the most appropriate, or that the measuring instruments are not sensitive enough, or that the effects will be delayed or too immediate to fit your particular observation period. The point is that you cannot be conclusive, and you must therefore leave open the possibility that there is indeed an effect, albeit one that you haven’t been able to detect. Thus it is impossible to demonstrate definitively that screen-based activities have no effect at all on the brain or behavior, any more than I or anyone else could prove definitively, to use an age-old example, that there is not a teapot orbiting the Sun somewhere between Earth and Mars.
This constraint poses a problem for both sides, since it is impossible to demonstrate just as conclusively that screen-based activities are having an unequivocal effect on the brain and consequent behavior.16 Let’s assume a finding is reported of some definite effect, good or bad. Even then, in the evaluation of scientific findings, few single peer-reviewed papers, that gold standard of professional probity, are viewed unanimously by all scientists as conclusive. It is normal practice for research to continue, and for interpretations to be revised as results accumulate. Interpretations of the evidence are inevitably subjective, with different scientists placing different emphases on different aspects or priorities within the experimental protocol. There is very rarely a Rubicon that, once crossed, means that a finding is universally accepted as the “truth.” Truth is always provisional in science, waiting for the next discovery to come along that could displace the current view (or, as it would by then be disparagingly called, “the current dogma”). When enough doubt accumulates to challenge this dogma, when accepted patterns of thought are straining to account for just too many anomalies, the reappraisal of what is true amounts to a “paradigm shift”—a concept Thomas Kuhn first introduced in 1962 in his now classic work The Structure of Scientific Revolutions.17
A wonderful example of how scientists can stick rigidly to dogma and close their minds to highly novel ideas is the revolution in the treatment of ulcers that developed in the 1990s. The hero of the story is an Australian physician, Barry Marshall. As part of his training, Marshall was working in a lab with another scientist, Robin Warren, studying bacteria. Contrary to accepted dogma, they found that a certain bacterium, Helicobacter pylori, could survive in a highly acidic environment, such as the stomach. Marshall and Warren started to doubt the well-accepted and established body of knowledge that ulcers were caused by excessive acid and thus were primarily the result of stress. What if ulcers were the result of bacterial infection instead? What would happen to the blockbuster drugs currently on the market for ulcers but perhaps designed for the wrong biological target? The implications for the pharmaceutical industry, as well as for the medical establishment, were huge. “Everyone was against me,” Marshall recalls.18 For many years, good old unscientific prejudice significantly delayed the final acceptance of Marshall and Warren’s theory. Starved of funding but convinced of the merits of their theory, Marshall actually drank a glass of the medium containing the bacteria and duly gave himself an ulcer, which was cured by antibiotics. Vindicated at last, he and Warren won a Nobel Prize.
Even without the need to wait for a seismic paradigm shift, disagreement is fundamental to science: what one individual researcher will see as an exciting discovery, another may view as an epiphenomenon, while a cynic might regard it as unproven. It is not in the act of empirical observation but in the consequent subjective evaluation that there is most room for controversy and doubt. In all branches of science, the explanation that is formulated as scientists pore over the latest data is never conclusive. Any scientist writing the discussion section to wrap up a paper for a peer-reviewed journal will invariably be tentative and provisional, always remembering that not all potentially salient facts and factors are known. Scientists inhabit a hesitant world that is far from absolute, where doubt is as natural as breathing. So while disagreement in science is normal and unavoidable (if not necessarily understandable at first), the flat refusal even to debate and to think about possibilities, as can happen with the question of screen technologies, is not.19 The only realistic way forward is to plow through as many individual papers as possible that each tackle a specific issue and that collectively form a general overall picture.
In the case of cyber-induced long-term changes in the brain and resultant behavior, we are faced with a complex situation, one not amenable to a definitive litmus test or a single smoking-gun experiment. What kind of evidence might one hope for, in a realistic period of time, that could demonstrate to everyone’s satisfaction that screen culture is inducing long-term transformations in phenomena as diverse as empathy, insight, understanding, identity, and risk taking? What single, one-off finding would it take to persuade those who resist the possibility that there just might be something amiss after all, or at least that we are missing opportunities?
Concepts such as Mind Change are, in Kuhn’s terminology, paradigms, not specific single hypotheses that can be empirically tested in highly constrained and specific experiments. An umbrella concept such as Mind Change, as we’re about to see, draws together threads from apparent societal trends and expert professional views, as well as a wide range of direct and indirect scientific findings from different disciplines. The majority of the scientific studies reported in the chapters to come have been peer reviewed and report “statistically significant” findings: results that meet a standardized and well-established statistical threshold rather than resting on subjective judgment.20
Irrespective of the different types of evidence that support it, the notion of Mind Change as a new paradigm has inevitably stirred up allegations of scaremongering and of inciting moral panic. But bear in mind that a charge of scaremongering is predicated on the notion that there is really nothing to be scared of in the first place. Do we in fact know that this is the case? If and when the validity of the scare is irrefutably demonstrated, the scare turns into an established danger, and the original prediction turns out to have been something very different: a wake-up call. Dismissal on the grounds of scaremongering should therefore be, if anything, a final conclusion and not an opening gambit.
As for moral panic, perhaps any criticism of the digital world could be interpreted by aficionados of cyberspace as an attack on their personal lifestyle and therefore ultimately on them as individuals. But there is no need to panic at the moment. Indeed, if we allow ourselves the opportunity to take stock of where we are and where we wish to go in the twenty-first century, we can work out what our lifestyle and society need to look like in order to get us there. But to do that we first need to unpack the very different issues that Mind Change embraces.