Chapter 2. Information, Power, and Survival

Most of us don’t remember the moment we began to discover language, to understand words, and to speak—the closest we get to it is watching our children discover it for themselves.

Helen Keller was an exception. The renowned deaf-blind activist didn’t learn about language until she was nearly seven years old. Of the discovery, she wrote:

“We walked down the path to the well-house, attracted by the fragrance of the honeysuckle with which it was covered. Some one was drawing water and my teacher placed my hand under the spout. As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away. I left the well-house eager to learn. Everything had a name, and each name gave birth to a new thought. As we returned to the house every object which I touched seemed to quiver with life. That was because I saw everything with the strange, new sight that had come to me.”[13]

Keller discovered not just language, but a passion to know, and to tell—a sudden connection to the entire world. She describes the moment as gaining consciousness itself. The moment she began to understand that things had names, she said, was the moment she really began to think.

The origins of language aren’t yet settled. But many scientists agree that sometime between 50,000 and 200,000 years ago, our brains began to change. We don’t know whether this physiological change was viral (co-evolving with language, spreading from Homo sapiens to Homo sapiens at a rapid rate), or evolutionary (developing over thousands of years), or a combination of slow evolution turning suddenly into rapid adoption. Yet one thing is certain: once we had the capacity to communicate complex forms of information, we had a huge advantage over other species.

The fossil record tells us that when our species discovered language—and thus discovered complex forms of information—human society transformed. Around the same time language was developed, humans began leaving the continent of Africa in droves. This development of language, some scientists suggest,[14] is what enabled humans to organize, move together, and explore beyond the African continent.

Most importantly, at this moment, our species grew more aware of itself. While it’s evident that the mind co-evolved with language, it was at this point that we became able not only to communicate increasingly complex concepts to one another, but also to store them in our brains effectively. Our cognition advanced.

With language came the ability to coordinate with each other more effectively. Nomadic tribes began to develop symbols to keep themselves better organized. Calendars appear to have been developed about 10,000 years ago, improving our ability to plant seasonal crops. Armed with the seeds of agriculture, we didn’t need to be as transient anymore; gradually, supported by the surpluses of expanding agriculture, nomadic tribes turned into civilizations, and more sophisticated governments emerged.

The Sumerians and then the Egyptians started using glyphs to express the value of currency around 6,000 years ago. Once the Egyptians settled on a standardized script some years later, they reaped another information technology boom and reached heights no previous civilization had, taking on massive engineering projects, building new modes of transportation, and acquiring vast power across an empire.

Yet carving symbols into stone tablets was painstaking work, and errors were costly. Stone tablets didn’t travel that well either, and if they broke, weeks, months, or even years of work could be lost in an instant. As such, production of this kind of information was largely relegated to a special class: scribes.

Being a good scribe meant holding significant power in Egypt.[15] Not only were scribes exempt from the manual labor of the lower classes, but many also supervised developments or large-scale government projects. They were considered part of the royal court, didn’t have to fight in the military, and had guaranteed employment not only for themselves, but for their sons as well.[16] In the case of the Third Dynasty scribe Imhotep, chief official to the pharaoh Djoser, it even meant post-mortem deification.[17]

Later, another era in communication began with the creation of the first form of mass media: Gutenberg’s moveable type, fitted to a printing press, which enabled writing to be produced as type-set books, each copy identical to the last, without scribes.

Again, society was transformed. Literacy spread along with printing. As books became plentiful and inexpensive, they could be acquired by any prosperous, educated person, not just by the ruling or religious classes. This set the stage for the Renaissance, the flowering of artistic, scientific, cultural, religious, and social growth that swept across Europe. Next came the revival and spread of democracy. By the time of the American Revolution, printing had made Thomas Paine’s pamphlets bestsellers that rallied the troops to victory. The modern metropolitan newspaper, radio, television—all were based on the same basic idea: that communication could be mass-produced from a central source.

The latest transformational change came in earnest just three decades ago, when the personal computer and then the Internet converged to throw us firmly into the digital age. Today, five billion people have cell phones. A constantly flowing electron cloud encircles and unites a networked planet. Anyone with a broadband connection to the Internet has access to much, if not all, of the knowledge that came before, and the ability to communicate not just as a single individual but as a broadcaster. Smartphones are pocket-sized libraries, printing presses, cameras, radios, televisions—all that came before, in the palm of your hand.

Technical progress always comes with its critics. The greater the speed and power of this progress, the greater the criticism. Intel researcher Genevieve Bell notes that every time we have a shift in technology, we also have a new moral panic. Panic? Here’s just one example: there were some who believed, during the development of the railway, that a woman’s uterus could go flying out of her body if she accelerated to 50 miles per hour.[18]

Electricity came with a set of critics, too: the electric light could inform miscreants that women and children were home. The lightbulb was a recipe for total social chaos.

These Luddite folk tales are funny, looking back. But other criticisms have gained traction over the centuries.

Our connection to the teachings of Socrates, for instance, is through the written word of Plato, because Socrates was vehemently against the written word. Socrates thought that the book would do terrible things to our memories. We’d keep knowledge in books and not in our heads. And he was right: people don’t carry around stories like The Iliad in their heads anymore, though it was passed down in an oral tradition for hundreds of years before the written word. We traded memorization for the ability to learn less about more—for choice.

Critics of the printing press believed that books would cause the spread of sin and eventually destroy the relationship between people and the church. As author and New York University professor Clay Shirky rightfully points out, the printing press did indeed fuel the Protestant Reformation, and yes, growth in erotic fiction.[19]

Though some critiques of the written word have fared better than others, all have faded over time. There just aren’t that many people today who think the printing press was a bad idea—not if five billion of them are voting with their purchases to carry one around with them all day.

Despite this, critiques of technology on moral grounds look very similar today. The strongest critiques (as Bell notes) tend to be about women and children. Like a modern-day version of the critique of electricity, the television show “To Catch a Predator” features sexual predators using the Internet to seduce children—the subtext is that this powerful new tool can be used to steal your babies.

Still, there is a serious trend emerging in digital age critiques. Distinguished journalists, acclaimed scholars, and prominent activists are worried about what the information explosion is doing to our attention spans or even to our general intelligence. Bill Keller, former executive editor of The New York Times, equated allowing his daughter to join Facebook to passing her a pipe of crystal meth.[20]

Nicholas Carr’s book The Shallows (W.W. Norton) is full of concerns that the Internet is making his brain demand “to be fed the way the Net fed it—and the more it was fed, the hungrier it became.”[21] In The Atlantic, Carr’s “Is Google Making Us Stupid?” expresses similar fears: “Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory.”[22]

In his book The Filter Bubble (Penguin), my friend and left-of-center activist Eli Pariser warns us of the dangers of personalization. Your Google search results, your Facebook newsfeed, and even your daily news are becoming tailored specifically to you through the magic of advanced technology. The result: an increasingly homogenized view of the world is delivered to you in such a fashion that you’re only able to consume what you already agree with.

These kinds of critiques of the Web are nothing new. They’re as old as the Web itself—older, actually. In 1995, in the very early days of the World Wide Web, Clifford Stoll wrote in Silicon Snake Oil (Anchor), “Computers force us into creating with our minds and prevent us from making things with our hands. They dull the skills we use in everyday life.”

Keller, Stoll, and Carr all point to something interesting: new technologies do create anthropological changes in society. Yet some of these critics seem to miss the mark; the Internet is not some kind of meta-bogeyman sneaking into Mr. Carr’s room while he sleeps and rewiring his brain, nor did Mr. Stoll’s 1995 computer sneak up behind him and handcuff him to a keyboard.

Moreover, the subtext of these theories—and ones like them—is that there may be some sort of corporate conspiracy to try to, as Jobs put it, “dumb us down.” Somehow I doubt that Larry Page and Sergey Brin, the founders of Google, woke up one morning with a plan to rewire our brains. Twitter co-founder Jack Dorsey is probably not a super-villain looking to destroy the world’s attention span with the medium’s 140-character limit. Mark Zuckerberg is likely not trying to destroy the world through excessive friendship-building.

Blaming a medium or its creators for changing our minds and habits is like blaming food for making us fat. While it’s certainly true that all new developments create the need for new warnings—until there was fire, there wasn’t a rule to not put your hand in it—conspiracy theories wrongly take free will and choice out of the equation. The boardroom of Kentucky Fried Chicken does not have public health as its top priority, true, but if everyone suddenly stopped buying the chicken, they’d be out of business in a month. Fried chicken left in its bucket will not raise your cholesterol. It does not hop from its bucket and deep-dive into your arteries. Fried chicken (thankfully) isn’t autonomous, of course, and isn’t capable of such hostility.

As long as good, honest information is out there about what’s what, and people have the means to consume it, the most dangerous conspiracy is the unspoken pact between producer and consumer.

Out of the four critiques—those of Keller, Carr, Pariser, and Stoll—Pariser’s is the one that makes the most sense to me. Personalization today is mostly a technical capability, one that the technologists at our major Internet companies are developing in order to keep us clicking, and it comes with real consequences. That said, personalization isn’t an evil algorithm telling us what our corporate overlords want us to hear; rather, it’s a reflection of our own behavior.

If right-of-center links are not showing up in your Facebook feed, it’s likely because you haven’t clicked on them when you’ve had the opportunity. Should corporations building personalization algorithms include mutations that break a reader’s filter bubble? Should people be able to “opt out” of tracking systems? Absolutely. But readers should also accept responsibility for their actions and make an effort to consume a responsible, nonhomogeneous diet, too. The problem isn’t the filter bubble; the problem is that people don’t realize that their actions have these opaque consequences.
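To see how behavior-driven that reflection is, consider a minimal, purely hypothetical sketch of click-based personalization, written here in Python. It is not Facebook’s or Google’s actual ranking code; the function name, topic labels, and sample stories are all invented for illustration. The sketch simply weights each story by how often the reader has clicked on its topic before, which is enough to reproduce a filter bubble from nothing but the reader’s own habits.

# A toy, hypothetical personalizer -- not any real company's ranking code.
# It orders stories by how often the reader has clicked each topic before,
# so the feed ends up mirroring the reader's own past behavior.
from collections import Counter

def rank_feed(stories, click_history):
    """Rank stories by the reader's historical click rate for each topic."""
    clicks_per_topic = Counter(click_history)
    total_clicks = sum(clicks_per_topic.values()) or 1
    return sorted(
        stories,
        key=lambda story: clicks_per_topic[story["topic"]] / total_clicks,
        reverse=True,
    )

stories = [
    {"headline": "Left-leaning op-ed", "topic": "left"},
    {"headline": "Right-leaning op-ed", "topic": "right"},
    {"headline": "Local sports recap", "topic": "sports"},
]

# A reader who never clicks right-of-center links pushes them to the bottom.
click_history = ["left", "left", "left", "sports"]
for story in rank_feed(stories, click_history):
    print(story["headline"])

Click a few of those right-of-center links and the same code surfaces them again: the filter reflects behavior, it doesn’t dictate it.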

As with Socrates’ reluctance to embrace the written word, critics like Carr and Stoll are onto something, but they’re attacking the wrong thing. It isn’t the written word that has stopped most of us from memorizing epics like the Odyssey; rather, it’s our choice not to memorize them anymore, and to read books instead.

Anthropomorphized computers and information technology cannot take responsibility for anything. The responsibility for healthy consumption lies with human technology, in the software of the mind. It must be shared between the content provider and the consumer, the people involved.

Once we begin to accept that information technology is neutral and cannot possibly rewire our brains without our consent or cooperation, something else becomes really clear: there’s no such thing as information overload.

It’s the best “first world problem” there is. “Oh, my inbox is so full,” or, “I just can’t keep up with all the tweets and status updates and emails” are common utterances of the digital elite. Though we constantly complain of it—of all the news, and emails, and status updates, and tweets, and the television shows that we feel compelled to watch—the truth is that information doesn’t require you to consume it. It can’t: information is no more autonomous than fried chicken, and it has no ability to force you to do anything as long as you are aware of how it affects you. There has always been more human knowledge and experience than any one human could absorb. It’s not the total amount of information, but your information habits, that push you to whatever extreme you find uncomfortable.

Even so, we not only blame the information for our problems, we’re arrogant about it. More disturbing than our personification of information is the presumption that the concept of information overload is a new one, specific to our time.

In 1755, French philosopher Denis Diderot noted:

“As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes.”[23]

Diderot was on target about the continuous growth of books, but he also made a common mistake in predicting the future. He presumed that technology would stand still. In this short passage, he didn’t anticipate that, as the number of books grew, new ways to classify and organize them would arise.

A century after Diderot wrote, we had the Dewey Decimal system to help us search for those bits of truth “hidden away in an immense multitude of bound volumes.” Two and a half centuries later, the pages are bound not into physical volumes but into electronic formats. With Amazon, it has never been faster or easier to find and buy a book, in either print or electronic form. And Google would be delighted if every word of every book were searchable—on Google.

To say, therefore, that the Internet is the cause of our misinformation ignores history. In the modern arms race between fact and fiction, it’s always been a close fight: we’re no more prone to being stupid or misinformed than our grandparents were. Claiming otherwise is the ultimate ironic form of generational narcissism. History is filled with entire cultures ending up misinformed and misled by ill-willed politicians and deluded masses.

Just like Stoll and Carr, Diderot was onto something, but he was lured into the trap of blaming the information technology itself.

The field of health rarely has this problem: one never says that a lung cancer victim dies of “cigarette overload” unless a cigarette truck falls on him. Why, then, do we blame the information for our ills? Our early nutritionist, Banting, provides some prescient advice. He writes in his Letter on Corpulence:

“I am thoroughly convinced, that it is quality alone which requires notice, and not quantity. This has been emphatically denied by some writers in the public papers, but I can confidently assert, upon the indisputable evidence of many of my correspondents, as well as my own, that they are mistaken.”[24]

Banting’s letter gives us an idea of what the real problem is. It’s not information overload; it’s information overconsumption. Framing the problem as overload implies that we just need to manage the intake of vast quantities of information in new, more efficient ways. Framing it as overconsumption means we need to find new ways to be selective about our intake. It is very difficult, for example, to overconsume vegetables.

In addition, the information overload community tends to rely on technical filters—the equivalent of trying to lose weight by rearranging the shelves in your refrigerator. Tools tend to amplify existing behavior. The mistaken concept of information overload distracts us from paying attention to behavioral changes.

The Information Overload Research Group, a consortium of “researchers, practitioners and technologists,” was set up to help “reduce information overload.” Its website offers a research section with 26 papers on the topic, primarily focused on dealing with electronic mail and on technology used to manage distractions and interruptions. When the papers mention user behavior at all, they focus on a person’s relationship with a computer and the tools within it.

Now, don’t get me wrong. I appreciate a good spam filter as much as the next person, but what we need are new ways of thinking and of coping.

Just as Banting triggered a wave of concern about diet as we shifted from a land of food scarcity to abundance, we have to start taking responsibility ourselves for the information that we consume. That means taking a hard look at how our information is being supplied, how it affects us, and what we can do to reduce its negative effects and enhance its positive ones.



[15] It’s important to note, in the context of power’s relationship to information, that reading and writing quickly became trade secrets belonging to this set of professionals. Women were quickly excluded. Lower-class citizens needn’t apply.

[16] Kemp, Barry J. Ancient Egypt: Anatomy of a Civilization. Routledge, 2006.

[17] Lichtheim, Miriam. Ancient Egyptian Literature, vol. 3 (p. 104). University of California Press, 1980.

[22] Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains (p. 16). W.W. Norton & Company, 2010. Kindle edition.