Dispelling Misguided Beliefs About Technology
In 1981 when I turned twelve, my parents gave me a Sony Walkman as a birthday present. The casing, made of brushed aluminum and a deep maroon hard plastic, glimmered as I took it out of the packaging. It was a second-generation model – light, sleek, and not much bigger than the cassette tapes it played. The headphone earbuds fit snugly in my ears, and the grooved teeth on the volume control massaged my fingertips.
That day, like hundreds of thousands of other Walkman owners, I discovered that I couldn’t be without music. My biggest life concern became rationing a stash of batteries. I wanted nothing more than to spend every minute of every waking hour listening to Journey and Olivia Newton-John – I still blame that Walkman for my unrehabilitated love of 1980s top-40 hits.
The Walkman poses a potential challenge to the Law of Amplification. It seems at first to be a technology that gave birth to a new human desire. Few people imagined before 1979 that they would want to live in their very own cocoons of music. Today, personal music seems to be a permanent feature of civilization. Cassette tapes have become obsolete, but headphones – and the devices they plug into – have proliferated. Didn’t the Walkman change global culture? Didn’t it create something fundamentally new that wasn’t there before? Didn’t the technology transform us in a way that we didn’t previously imagine?
There’s no denying that people act differently when new technologies appear. We certainly didn’t walk around with tiny speakers in our ears prior to the 1980s. But that doesn’t mean these new behaviors were out-of-the-blue creations of the technology, per se.
For reasons that are still not fully understood, human beings are fascinated by music. Weddings have wedding marches. Funerals have dirges. Virtuosity has been celebrated as far back as Orpheus and his lyre, and ethnomusicologists have found music in every culture, including those that ostensibly forbid it. Some traditions of Islam ban recreational music, but the Muslim call to prayer is undeniably musical. So, give people an easy way to listen to tunes – especially those of their own choosing – and it’s no wonder that thirty years after the Walkman, iPods and MP3 players are still going strong. In other words, the Walkman and its descendants have allowed people to do more of something they’ve always wanted to do, even if that desire was never before expressed. You could call it a latent desire.
Alternative explanations hold that the Walkman caused new human behaviors. The business world uses the Walkman as a case study in shrewd business; they say it created a new market.1 Some sociologists argue that the Walkman changed our environment, reorganizing space and time, what is private and what is public.2 And as the new owner of a Walkman, I certainly felt the device beckon to me, compelling me to listen.
But statements such as “the Walkman increased sales of cassette tapes,” and “the Walkman caused a portable music revolution,” are shorthand for a more complex process: People have always enjoyed music, and they have personal preferences for when to listen and what to listen to. Sony leaders recognized this desire and built a low-cost, portable device to meet it. Consumers bought hundreds of thousands of units and adapted their listening habits. Other companies entered the market, expanding usage further. Throughout, it’s people taking action. The device is inanimate.
It’s important to keep the real explanation straight even as we use the shorthand. If we don’t, we could mistakenly believe that arbitrary behaviors can be created with the right technology. We’d be tempted by the promise of some new gadget to, say, solve the problem of substandard education in America.
But technologies don’t cause arbitrary behaviors. It would be easy, for example, to design high-tech clothes that make us itchy. Imagine the “Itchman” shirt made of abrasive nano-synthetic textiles and embedded with electronics that heighten static cling. If clever businesses could really create demand at will, or if technologies could cause any desired change in behavior, we could expect a smart entrepreneur to open up a worldwide market for the Itchman.3 With today’s comfort-focused materialism, though, the Itchman won’t catch on anytime soon. Perhaps if we returned to a culture of penance like that of medieval times, when hairshirts were worn for repentance and mourning, we might see the rise of sackcloth fashion.
When technologies go mainstream, it’s because they help scratch itches that people already have, not because they create new itches that people don’t want.
FOMO and Other Four-Letter Words
Latent desires also play a role in how we use technology to connect with other people. In the age of the smartphone, many of us go out with friends and ignore each other while we tap on our gadgets. Sherry Turkle, an MIT sociologist who has studied the relationship between people and their devices for three decades, calls this being “alone together.”4 But, again, if we all seek companionship, and technology amplifies our desires, how could we be growing further apart with technology?
Some people lay the blame on imperfect technologies. Today’s gizmos, they argue, provide only an impoverished form of communication.5 You can’t say much in 140 characters, and FaceTime is not as good as real face-to-face time. But technology doesn’t necessarily block meaningful connection, either. Plenty of grandparents spend precious moments on a weekly, even daily, basis with their families over webcams. Since 2009, as many as one in five romantic relationships has started online.6 And Facebook has done much to reconnect long-lost friends.
So it’s not that technology prevents true connection. The problem is that technology also makes it easy to have thin, empty interactions. In the choice between a challenging intimacy and casual fun, some of us choose the latter. One reason why some people can’t stop fiddling with their phones is something called FOMO – “the fear of missing out.”7 The fear of missing out on a better party, a better evening, a better life.
But again, the technology doesn’t cause this behavior; it just amplifies the underlying personality, turning us into caricatures of ourselves. I have friends whose handsets I’ve never seen. When we sit down for a meal, their phones stay in their purses or pockets. If there’s a ring, they ignore it. At the other extreme, I have acquaintances with whom conversation devolves to a few words between interruptions. Even when they’re not texting, their fidgety glances land on their phones like mosquitos seeking a soft patch of skin. Over time, I’ve come to see FOMO as just one of many causes of smartphone obsession. There’s also ATUS, addiction to useless stimulation; PORM, pleasure of receiving messages; SWAP, seeing work as priority; UTSI, the urge to seem important; and any number of other latent emotional tics that are exacerbated by the technology. The fact that owners of the same kind of device display a diverse range of behaviors is another sign that the technology is amplifying what’s already there, not causing the same response in everyone.
Commentators have thought hard about what made the Walkman and the iPhone such successes, but no one asks why there’s no mass market for the Itchman. This lopsided focus on technologies that “succeed” blocks us from seeing the full picture. It’s like the pundits who forgot Bahrain and Saudi Arabia. Claims of the Internet’s democratizing power fail to take into account the many things that the Internet hasn’t democratized, such as wealth, power, and genius.
When evaluating theories of technology, we should look at a wide range of contexts. Our conclusions should come not just from isolated instances or personal experience, but from all kinds of uses in all kinds of circumstances. In this chapter, I examine a smattering of examples: from electronic medical records to corporate knowledge management, from politics in America to Chinese media censorship. It won’t be comprehensive. But out of these scattershot case studies, a pattern will become clear.
Along the way, I’ll also demolish a few persistent myths. It’s often said that technology is a cost-saver; or that “big data” makes business problems transparent; or that social media brings people together; or that digital systems level playing fields. These kinds of statements are repeated so often that few people question them. Yet none of them is a die-cast truth.
If Hippocrates Were an Economist
One of information technology’s great benefits, supposedly, is its ability to lower costs. Walmart, for example, is famous for its digital stock-keeping. Its databases know exactly what’s on the shelves, and they automatically inform suppliers which stores are low on stock. The system keeps inventories razor-thin and costs low. And it all seems to be about technology – databases, barcode readers, RFID-tagged pallets, and so on.
You might think, then, that some of our greatest cost-control challenges could be solved with IT. A conspicuous target in America is our health-care system. In fact, electronic medical records have firm bipartisan support even in an era of political deadlock. President Barack Obama has called for electronic medical records since before his days in the White House, citing efficiency and cost savings.8 And the GOP Doctors Caucus, formed by Representatives Phil Gingrey and Tim Murphy, states, “Health information technology has the potential to save more than $81 billion annually in health care costs. From drastically reducing medical errors to streamlining administration, health IT is the key to transforming our healthcare system.”9
Unfortunately, cost containment also follows the Law of Amplification. In the American health-care system, very few people are really focused on reducing costs. As a result, every new technology is a white elephant – a “gift” you have to keep paying for. Many of us, sadly, are familiar with this state of affairs. A few years ago, I went to see a specialist in neuro-ophthalmology because I had lost partial vision in my right eye. After asking me some questions and peering into my pupil, the doctor said, “Well, there’s no clear problem, so it could be nerve damage. If it is, there’s not much we can do. But,” he said, smiling conspiratorially, “since you have good insurance, let’s do an MRI.” I agreed because I had no reason not to. I was lucky to have insurance with no co-pay. When I saw the invoice for the visit, the line item just for the MRI showed $1,800. I was shocked, though grateful that my insurance covered the expense. The doctor’s office never called me for a follow-up, the MRI scan was never consulted, and my right-eye problems persist.
Unlike at Walmart, where digital tools amplify the company’s zealous pursuit of lower costs, in US health care, technology intensifies all the ways in which spending is encouraged. Our hypochondria as patients, our foibles as doctors, our greed as suppliers, and our myopia as policymakers – all are social forces that the technology regrettably amplifies. Even the employers and governments that foot the bill cast their payments as benefits to employees and citizens. They don’t penny-pinch, for fear of appearing cavalier about people’s lives. On top of everything else, our metrics are off. As Princeton University economist Uwe Reinhardt noted, “Every dollar of health care spending is someone’s health care income.”10 That income flows right into our national gross domestic product (GDP), and we want the GDP to rise, don’t we?
Of course, technology also amplifies good health-care trends, and that’s terrific for those who can afford it. But if lowering costs is the goal, more technology isn’t a surefire solution. In the four decades since 1970 – a period during which digital technologies poured into hospitals and clinics – American health-care costs rose in real terms by a factor of five. The increase has been far greater than in other developed countries.11 Information technology was probably not the main cause, but it certainly didn’t turn the tide. (Nor did we get what we paid for: American life expectancy during that period increased by only eight years. That’s fewer than the nine years gained in the United Kingdom and the eleven gained in Japan, even though they spent a lot less.12)
So lower costs aren’t a function of the technology itself. If anything, digital technologies require additional upkeep. For example, in 2010 when I left Microsoft, the firm employed over 4,000 full-time people to keep its own IT systems running. That’s nearly 5 percent of the company’s workforce. (Similar proportions hold for any large technology company.13) If technology companies – which work hard to automate everything – have to spend 5 percent of their human resources managing IT, imagine how much more difficult it is for other organizations.
Especially in the context of US health care, digital tools just amplify what is already an outrageous system of accounting. Recent exposés show that patients are routinely billed excessive prices: $24 for a niacin tablet that comes to 5 cents at drug stores; $333 for a chest X-ray costing less than $30; $49,237 for a neurostimulator that wholesales at $19,000 and might cost only $4,500 to manufacture.14 In this climate, hospital administrators will be happy to install electronic medical records and pass on the costs to patients and taxpayers at a markup.
Managing “Knowledge Management”
So, contrary to popular belief, digitization by itself doesn’t necessarily reduce costs. What about improving organizational behavior? Won’t computers solve our knowledge-management problems? And won’t “big data” make traditional decision-making obsolete?
As with cost cutting, information technologies can improve knowledge exchange and transparency, but they don’t do so automatically. Curiously, a group of people who I thought would resist that message turned out to be sympathetic. They understood exactly what I was saying.
One of them was Jorge Perez-Luna. He’s held titles like chief information officer and VP of IT at telecommunications companies including AT&T, Motorola, and Nextel. At one company, Perez-Luna was asked by his CEO to implement a computerized order-tracking system for their sales office in Brazil. The office was consistently underperforming, and the boss wanted a fix. He thought that by installing a database to track sales, he could find the problem.
Perez-Luna sent a small team on an exploratory mission. They found that “one employee had a drawer full of signed contracts, all of which were uncollectibles” – outstanding payments due from customers. “And he wasn’t an exception.” It turned out that salespeople were rewarded for signing deals, but they didn’t follow up with customers. Managers set quotas for new contracts, but there were no processes in place to handle uncollectibles. The sales staff didn’t know how much income they were bringing in, and they didn’t have any reason to care. As Perez-Luna put it, “payments to the company just weren’t being prioritized.”
Perez-Luna reported back. Without plugging that managerial hole first, he told his boss, technology wouldn’t do much good. He recommended more oversight and a shift in priorities. Not only had he saved the company money by avoiding an expensive digital solution, but he had identified the true problem. “I’m an IT guy,” he said, “but some of my best friends have training in anthropology. They are good at seeing the human issues behind technology.”
New laptops don’t necessarily make employees more productive. State-of-the-art data centers don’t cause better strategic thinking. And knowledge-management systems don’t cause rival departments to share information with one another. Yet CIOs everywhere are asked to perform exactly that sort of wizardry. The more experienced ones are careful not to promise too much. Technology can improve systems that are already working – a kind of amplification – but it doesn’t fix systems that are broken. There is no knowledge management without management.
In large organizations such as universities, governments, and corporations, one hand frequently doesn’t know what the other hand is doing. To break down silos, it’s tempting to set up Web portals and internal social media sites, but the real issues are almost always those of management, internal politics, and even limited human attention. Unless those social problems are dealt with, technology doesn’t have a base to amplify. Especially in a world where everything is already digitized, knowledge-management systems and online clearinghouses are rarely the bottleneck. To clear organizational obstacles, the counterintuitive solution in an age of bountiful technology is to focus on building effective human relationships.
Reach Out and Touch Your Tribe
Speaking of relationships, technology is often believed to enhance them. Nokia’s tagline is “Connecting People,” and AT&T once used the slogan “Reach out and touch someone.” There’s no doubt that communication technologies help people connect, but there are at least two ways in which this could happen. Option A says that better tools help us communicate with people we are already inclined to communicate with. Option B says that better tools cause communication to occur where none previously existed or was desired.
Amplification votes for Option A: We use new tools to communicate more with people we want to connect with anyway. A host of evidence supports this conclusion. For example, a Pew Research Center study shows that, on average, about 92 percent of our Facebook friends are real-world acquaintances, not random people we’ve connected with because of the Internet.15 Other studies show that people collaborate more with those they are physically close to already.16 Despite email and Twitter, a single flight of stairs between offices can inhibit working together. All of this is to say that we use electronic communication to strengthen – amplify – preferred relationships.
Option B leads to the misguided belief that more connectivity brings everyone closer together. As one utopian put it, “People will communicate more freely and . . . the effect will be to increase understanding, foster tolerance, and ultimately promote worldwide peace.”17 This may sound horribly naïve, but the author, Frances Cairncross, is hardly an intellectual lightweight. She has been a journalist for The Guardian and The Economist and has held top posts at Britain’s Economic and Social Research Council as well as the British Science Association.
It’s easy to see that more communication tools don’t lead to better relationships or mutual understanding where neither previously existed. Consider that the United States has never before had as many communication options as it has now. In the 1970s, having a television meant access to ABC, CBS, NBC, and maybe a staticky PBS. Today it means cable service, Internet streaming, and access to hundreds of channels. In the 1970s, most households had landlines, but only the Yellow Pages for one town or city. Today you can look up just about anyone online and call them on the move. In the 1970s, only the geek elite used email. Today everyone texts, tweets, and posts to Instagram. Yet none of this extra connectivity seems to be bridging the chasm between the political left and right. If anything, the gulf is widening.
What is actually happening was predicted by MIT professors Marshall Van Alstyne and Erik Brynjolfsson as early as 1996 – two years before Google and eight years before Facebook. “Internet users,” they wrote, “can seek out interactions with like-minded individuals who have similar values” while minimizing interactions with those whose values differ.18 Van Alstyne and Brynjolfsson called this phenomenon “cyberbalkanization”; psychologists call it “selective exposure.”19 Online, you can find self-reinforcing groups of white supremacists on the one hand, and free-loving hippies on the other. And the effect goes well beyond the Internet. Thus liberals watch Jon Stewart, while conservatives watch Glenn Beck. Gone are the days when Americans all tuned into Walter Cronkite and heard the same news with the same commentary. The danger of cyberbalkanization is that people become radicalized, intolerant, and “less likely to trust important decisions to people whose values differ from their own.”20
It’s true, of course, that communication tools can bring people closer. Olympic broadcasts help unify countries with pride. In the week after I got on Facebook, I happily connected with friends from the third grade. But these are examples of people using technology to do more of what they already want to do, not making friends with old enemies.
How Not to Bridge the Digital Divide
The target of many social causes is some kind of inequality – of wealth, education, political voice, social status. Another is the “digital divide,” a phrase coined in the 1990s to describe unequal technology access between rich and poor Americans. The term was quickly extended to global disparities, and soon bridging the digital divide became a rallying cry. One response was to develop low-cost technologies – to take things only rich people could afford and make them available to everyone. This was the idea behind One Laptop Per Child. Its early media buzz was based on a projected $100 price tag.21 The Indian government rejected OLPC and proposed instead its own low-cost tablet, the Aakash, for $35.22 And as early as 1999, there was Free-PC in the United States. The company offered PCs for $0; they were paid for by on-screen advertising.
Free-PC was discontinued, and the other products never hit their target prices. But bad business models aren’t the real problem with these efforts. The problem lies in the concept itself. Some people speak of low-cost access to goods as a kind of “democratization,” but in a real democracy, it’s one person, one vote. In a free market, it’s one dollar, one vote, which is a totally different beast. Richer people can always afford more technology. It’s not as if new technologies stop appearing while existing ones are made cheaper. By the time there are low-cost PCs, there are high-cost smartphones. By the time there are low-cost smartphones, there are high-cost phablets.23 And by the time there are low-cost phablets, there will be high-cost digital glasses. There is no technological keeping up with the Joneses.
But suppose that an even distribution of technology were actually possible. What then? To answer this question, consider the following situation. Imagine the poorest person you can think of who is involuntarily poor. (The involuntary part is important – I’m not asking you to imagine a contented monk.) It might be a homeless person in your city, or a poor migrant worker in a remote area. Now imagine that you and that person were asked to raise as much money as you could for a charity of your choice, using nothing other than unlimited access to email for one week. Who would be able to raise more money? For most readers, it would be you. Because you have richer friends. You probably have more education and can write more persuasive emails. You likely have better organizational skills and could rally more people to the cause. And depending on the poor person you imagined, you might also be far ahead in basic skills such as literacy.
In this thought experiment, the technology is identical, but the outcome is different because of what you each started with. The differences are all about people – who you are, whom you know, and what you’re capable of. These are the same factors, incidentally, that allow you to be richer in the first place. Imagine repeating the same experiment, but not with someone who’s poor. Do the experiment with Bill Clinton or Bill Gates. Who would be able to raise more money, you or one of them? One of the Bills would, for the same reasons.
You could repeat these experiments with different information technologies (e.g., mobile phone calls, Twitter) and with different tasks (e.g., finding a job for your friend, seeking investment advice), and, for the most part, the results would be the same. In each case the technology is fixed, but the outcomes differ in proportion to the underlying advantages. Low-cost technology is just not an effective way to fight inequality, because the digital divide is much more a symptom than a cause of other divides.24 Under the Law of Amplification, technology – even when it’s equally distributed – isn’t a bridge, but a jack. It widens existing disparities.25
The Chinese Elephant
Harvard political scientist Gary King, who studies, among other things, the Chinese Internet, says it is the site of the “most extensive effort to selectively censor human expression ever implemented.” King has uncovered exactly what the Chinese government censors on its country’s social media platforms, and what he has found has unexpected lessons far beyond the digital realm.26
According to King, “the Chinese Internet police force employs an estimated 50,000 censors who collaborate with about 300,000 Communist Party members. In addition, private firms are required by law to review the content on their own sites,” and for this they hire staff. King has reported that the overall censorship effort is so large that “it’s like an elephant walking through a room.” To track and measure its footprints, he conducted two subversive studies with colleagues Jennifer Pan and Margaret Roberts that offer new insights into the Chinese Leviathan.
In the first study, the team built a network of computers that watched 1,382 Chinese websites, monitoring new posts to see if and when they were censored. Eleven million posts covering eighty-five topics were chosen for investigation. The subjects ranged in political sensitivity from popular video games to the dissident artist Ai Weiwei. The researchers included online chatter resulting from real-world events.27 In the second study, King and his team went undercover. They created fake accounts on over one hundred sites. They submitted posts to see which ones were censored. They even set up their own social media company in China.28
Two of their findings stand out. First, China’s online censorship mechanisms are panoptic and efficient. Objectionable items are removed with a near-perfect elimination rate, typically within twenty-four hours of their posting. The researchers wrote, “This is a remarkable organizational accomplishment, requiring large scale military-like precision.”
Second, King and his team found what Chinese censors don’t like. They’re quick to act on anything that refers to, instigates, or otherwise links to grassroots collective action. Posts about protests, demonstrations, and even apolitical mass activities vanish quickly.29 But the regime is comparatively comfortable with criticism of the government. For example, this passage was not censored:
The Chinese Communist Party made a promise of democratic, constitutional government at the beginning of the war of resistance against Japan. But after 60 years that promise has yet to be honored. China today lacks integrity, and accountability should be traced to Mao. . . . [I]ntra-party democracy espoused today is just an excuse to perpetuate one-party rule.
Meanwhile the following post, which refers to a man who responded to the demolition of his home by carrying out a suicide bombing, was nixed:
Even if we can verify what Qian Mingqi said on Weibo that the building demolition caused a great deal of personal damage, we should still condemn his extreme act of retribution. . . . The government has continually put forth measures and laws to protect the interests of citizens in building demolition.
This comment was supportive of the government, but it was censored because it referred to a known source of public agitation. The distinction contradicts conventional ideas about totalitarian states. In George Orwell’s 1984, Big Brother dealt quickly with any expressed disloyalty. But King’s findings reinforce more subtle theories of autocratic power, like that of his colleague Martin Dimitrov, who has argued that “regimes collapse when its [sic] people stop bringing grievances to the state.”30 The real danger to a state comes when its citizens no longer complain in the open.
In fact, as King noted, a certain amount of public criticism may serve the Communist Party’s interests. It mollifies citizens who want to blow off steam, and it alerts the central government to issues requiring attention. It’s when the criticism spills over into calls for action that the censorship machine – and sometimes also the police – kicks in. The government is continually calibrating its tactics. In October 2013, a man in Shaanxi Province was detained for having a critical comment re-tweeted 500 times on Sina Weibo, China’s version of Twitter.31 You can almost hear bureaucrats debating where the line should be: How many shares pose a collective-action threat – 250, 500, 1,000?
King’s study of Chinese social media censorship, then, reveals a lot more than just a strategy for online speech suppression. It provides clues to the Communist Party’s deepest fears and its sophisticated program of control. As we’ve seen with its heavy-handed response to uprisings in Xinjiang and Tibet, China is serious about suppressing physical protest. That intention carries over online, where censors are sensitive even to seemingly innocuous posts if they contain a seed for mass action. In a phone conversation, King told me that “political actors in any country use whatever means of communication they have to advance their goals. If technology allows them to do it faster, they’ll use technology.”
“In some ways, it’s the same in America,” King continued. Indeed, large technology companies in the United States are legally required to monitor and censor illegal content such as child pornography. And we know from recent revelations about the National Security Agency that our government is willing to strong-arm firms for the purposes of digital surveillance. “Functionally, that’s the same as what happens in China, though I won’t say it’s morally the same,” King said. In both countries, technology acts like a lens, magnifying and amplifying how governments act on their gravest concerns. By examining large-scale technology, you can ferret out hidden motivations.
Predicting Is Believing
The Law of Amplification enables us to make certain types of predictions. Under some conditions, it’s possible to gauge the future of a technology that doesn’t even exist yet. For example, imagine that scientists come up with the following inventions. In each pair, which one do you think would be more popular?
a) A robot that cleans up after you, washes your dishes, and does all of your laundry.
b) A robot that follows you around and verbally points out each of your personal flaws.
a) A holographic device that projects the realistic illusion that your house is bigger than it is, outfitted with expensive furniture, and decorated by a professional interior decorator.
b) A holographic device that projects the realistic illusion that your house is smaller than it is, outfitted with used furniture, and decorated by a college student.
a) A novel device you wear on your belt buckle that guarantees a slim, fit figure, regardless of what you eat or how much you exercise.
b) A novel device you wear on your belt buckle that guarantees an overweight figure, regardless of what you eat or how much you exercise.
None of these devices exists today, but you will have no trouble picking which of each pair would sell better. That’s because you already have a good sense of what most people want. Your ability to predict a technology’s success is based on an intuitive grasp of the human condition. Consistent with amplification, human preferences, more than technological design, decide which products succeed. Or, to put it another way, good design is the art of catering to our psyches.
You might quibble about which way these options would go. You might say that the outcomes depend on culture or the moment in history in which they occur. And you’d be right. What many Americans now consider an undesirable weight has been in other times and places a sign of wealth and status – for example, in the time of Peter Paul Rubens, who painted what we now call Rubenesque women.32 Back then, device (b) would have done better than device (a). But that again proves that the technology doesn’t decide its outcome.
Similarly, we can predict that in future revolutions, all sides will use or abuse the communication technologies at their disposal. In the nineteenth century, rebels distributed pamphlets, autocrats closed printing presses, and the world heard about it months later by word of mouth. Here in the twenty-first century, rebels organize on Facebook; autocrats shut down the Internet; and the world watches events unfold on YouTube. Perhaps in the twenty-third century, rebels will rally on brain-to-brain transmitters; autocrats will scramble neuro-signals; and the world will watch it all through their synaptically projected awareness modules (known in the future as “SPAM”). The digital world is undoubtedly different from the analog and the postdigital, yet for so much of the social order . . . plus la technologie change, plus c’est la même chose.
Most importantly, amplification provides a guide as to whether social-change dollars should be spent on undeveloped technologies or on something else. We’ve seen how struggling schools aren’t turned around by digital technologies, but tech utopians will insist that the right technology just hasn’t been invented yet. So let’s entertain their reverie for a moment and imagine a world with a powerful teaching machine like that from The Matrix.
I plug myself in, and, within seconds, “I know kung fu,” just like Keanu Reeves’s character in the movie. It’s an amazing technology that could teach just about anything, but will it eliminate inequities in education? In any world politically like ours, wealthy, influential parents will secure the best hardware for their own children, while the children of poor, marginalized households will have access to older models in need of repair. Rich kids will effortlessly learn quantum physics. Poor kids might come out quacking like a duck. Yet again, the technology will amplify the intentions (both explicit and implicit) of the larger society. And the same will be true of gamified e-textbooks, humanoid teaching robots, or any other novel technology. So, one prediction is this: If you’re interested in contributing to a fair, universal educational system, novel technology isn’t what will do the trick.
The broader lesson applies well beyond education and summarizes what we’ve seen so far. A government without genuine motivation to eradicate corruption will not become more accountable through new technologies of transparency. A health-care system with a shortage of well-trained doctors and nurses won’t find its medical needs met with electronic medical records. A country unwilling to address the social underpinnings of inequality won’t see an end to inequities regardless of how much new low-cost technology it produces. In general, technology results in positive outcomes only where positive, capable human forces are already in place.
In Chapter 6, I’ll show how all of this offers guidance for the best use of technology, but for now, let me mention that the Law of Amplification’s predictive power is one of its strengths as a theory. Want to know where free speech is most likely to thrive online? It will be where it thrives offline. Want to know when new technology will actually cut costs? It will be when management is focused on cost control. Want to know how to ensure that your children will learn productively on an iPad? They will if they have good learning habits independent of the tools at their disposal and adult guardians monitoring proper use.
Whither the Unintended Consequences?
Some people might protest that technology outcomes are fundamentally unpredictable because of unintended consequences. They’d be right up to a point, but only up to a point. Nothing I’ve said so far implies that human history becomes more predictable just because it involves technology. Who knew in 2010 that the Middle East would be transformed by popular uprisings, with or without Facebook? People are complicated and hard to predict; adding technology doesn’t change that.
But where social situations are well understood, some technology outcomes can be predicted, and even partial or imperfect knowledge is valuable. Most of us consult weather reports even though we know they’re sometimes wrong. Any information that is better than a random guess is still useful. Similarly, superintendents can act on predictions that educational technologies will help good schools but not struggling ones, even if borderline cases are harder to assess.
In addition, whether a consequence is unintended often is in the eye of the beholder. Officials at the US Department of Defense or the National Science Foundation who sponsored precursors of the Internet probably didn’t mean to pave the way for either mass electronic commerce or the global proliferation of cat videos. So it could be said that those outcomes were unintended. But websites don’t just build themselves. Everything online is someone’s intent acted out, even if those contributions weren’t foretold by Internet founders. More often than not, the unintended consequences of technology spring from someone else’s unpredictability. One man’s unintended consequence is another man’s mission.
But what about cases in which a technological result was absolutely impossible to foresee? Pure examples are hard to find, because technological skeptics have vivid imaginations about what can go wrong. But for the sake of argument, consider teenage texting. It seems unlikely that either the engineers behind the SMS text-messaging standard or the parents who welcomed mobile phones into their households imagined that their children would one day send thousands of text messages a month. Yet, on average, American teens send and receive 60 texts a day, or 4 per waking hour. One 13-year-old girl in California exchanged 14,528 texts in a month, which comes to about 1 every 3 minutes, 24 hours a day.33 So obsessive texting could be considered an unintended consequence of mobile-phone proliferation.34 But now that we know about it, it’s no longer unintended. It’s up to adults – as parents and consumers, as voters and citizens, as nuclear families or as collective communities – to decide whether this consequence is desirable, and, if not, to curtail it. To do nothing is to be complicit in – to passively intend – the undesired outcome. In the long run, there are no unintended consequences.35
Deus in Machina
The Law of Amplification explains how technology can be both good and bad, and how its effect is ultimately up to individuals and societies. The law’s corollaries dispel myths about technology’s inherent powers, whether to lower costs, improve organizations, or decrease inequality.
Amplification also pegs the responsibility for technology’s impact squarely on us. Techno-utopians see a world where technology saves us from ourselves. Cyber-skeptics imagine our creations running rampant. And contextualists often sound like apologists for luck. All of these views, however, smack of humanity’s naïve youth, when we thought our lot was up to the Fates, to nature, or to God. Both excessive faith in and frantic fear of technology are regressions to childhood, denials of human responsibility. In our post-existential adulthood, shouldn’t we own our destinies?
To adapt Jean-Paul Sartre, technology is nothing else but what we make of it.36 And as Sartre noted, that responsibility is both a blessing and a curse – on the one hand, we can decide what to do with technology; on the other hand, we must decide what to do.