A Simple but Powerful Theory of Technology’s Social Impact
Nakkalbande is a small slum community in the southern part of Bangalore. Hidden within the upper-middle-class neighborhood of Jayanagar, it’s formed around a single, straight alley covered by a canopy of grand old trees that have survived the city’s aggressive road construction. The unpaved alley is strewn with plastic debris and the occasional dead rat. As slums go, though, it’s doing all right. Instead of the improvised tarp-and-tree-branch shelters you might see elsewhere, most of the houses in Nakkalbande are one- or two-room cinder-block structures. Residents have lived there for decades.
Nakkalbande is where I spent my Saturdays soon after moving to India in late 2004. I volunteered for a nonprofit called Stree Jagruti Samiti, the Society for Women’s Empowerment. Its leader was a middle-aged matriarch named Geeta Menon, who had a mischievous chuckle and a gleam in her eye that wouldn’t be brought down by the tired droop in her shoulders. For over fifteen years, she had worked as an activist, organizing the women and girls of several slum communities. She was known to storm into police stations with groups of women. They would demand that the officers take action against, say, a corrupt rations dealer. (Rations shops in India are licensed to sell subsidized food and kerosene to households below the poverty line, but they often profit by selling their inventory to other retailers.)
At Menon’s suggestion, I taught a computer literacy class for girls. I didn’t speak any of the languages they spoke – Hindi, Kannada, or Tamil – so I recruited a college student to translate and assist me. On the first day, eight or nine teenage girls dressed in pastel salwar kameez gathered in a small, windowless building reserved for community activities. I brought a laptop and set it up under a framed picture of a blue-skinned Krishna playing his flute.
When my assistant and I told the girls they were going to learn how to use a computer, their eyes widened, and a collective shriek filled the room. Over several weeks, we showed them the basics of word processing, PowerPoint, spreadsheets, and other software. At first they gawked at simple things such as moving the cursor, using the touchpad, and clicking to cause action on the screen. Novelty, though, quickly gave way to familiarity. Soon they were fighting over who would get to draw next using a painting application. Like computer novices everywhere, they took delight in converting words into every conceivable color and font. Their enthusiasm was infectious, and I looked forward to the classes.
By the third or fourth session, though, we hit a wall in what they could learn. Everyone was able to type her name in English as well as in Kannada, but the girls weren’t interested in writing anything beyond that. PowerPoint became known as the software that allowed them to create fancy 3D text. And spreadsheets thoroughly bored everyone except for two girls who sensed something extraordinary in self-computing arrays of numbers. We contrived activities to both entertain and educate, but, in practice, it was hard to go beyond entertainment.
I began to understand why this was the case as I learned more about the girls’ personal lives. Their days were crammed with school and chores. They worked part-time as servants in middle-class homes. With so many adult responsibilities, they saw the computer class as a break from a life of constraints. Some would linger afterward to teach me folk games as a way to extend their freedom. No one mentioned any serious hobbies, though, and their one thought about the future was who would be arranged for them as husbands. Despite Menon’s best efforts, fourteen or fifteen wasn’t an unusual age for marriage. The girls expected to become housewives in short order, and few would continue school beyond eighth or ninth grade.
Originally, Menon and I had vague hopes that the computer classes would help the girls gain access to work other than as household servants. But even for entry-level positions, employers wanted a solid education first, white-collar soft skills second, and then only on top of that, computer literacy. With just one class per week – their parents didn’t allow more – we couldn’t have taught them more employable skills such as programming or data entry.
At the end of the course, we took the girls to visit a local Internet café, but little of lasting value came of the trip. Like many such spots in urban India, this was a dingy place with two or three old desktop computers running outdated versions of Windows. (Even as of 2013, Windows 98 was a common sight in Indian Internet cafés.) For about 10 rupees (roughly 20 cents), you can use an Internet-connected PC for an hour, but you get what you pay for. It can take half a minute, for example, to load the bare-bones Google home page. In formal studies later on, Nimmi Rangaswamy, a member of my research team, found that Internet café clientele is dominated by young men chatting, playing video games, and consuming pornography; many owners install private booths for the purpose.1 As a result, Indian families think of cybercafés as sleazy places. Women and girls aren’t encouraged to visit them.
Still, the exposure to computers did have some unexpected effects. The two girls who found spreadsheets fascinating vowed to stay in school for as long as they could, in spite of parental pressures to take on more chores. They recognized that they needed to know more in order to take advantage of the technology. But then, another girl dropped out of the class within a few weeks. She told me her parents didn’t want her to learn too much because that would raise her dowry. Families with sons expect dowries as something like a down payment for the costs of keeping a wife. The fear is that a more educated bride will have higher expectations and require more upkeep. (Apart from its patriarchal conception, this traditional calculus doesn’t account for the possibility that an educated wife could bring in her own income, as happens more and more across India.)
I didn’t think of the computer course as a formal research project, so I didn’t keep detailed track of the outcomes. When I look back, though, I realize that the class foreshadowed what I’d soon find in my own research: the initial optimism that surrounds technology, the doubt as reality hits, the complexity of outcomes, and the unavoidable role of social forces.
The Ferocious Field of Technology and Society
Technology is powerful, but in India it became clear to me that throwing gadgets at social problems isn’t effective. When I came back to the United States, I sought to understand why.
As a computer scientist, my education included a lot of math and technology but little of the history or philosophy of my own field. This is a great flaw of most science and engineering curricula. We’re obsessed with what works today, and what might be tomorrow, but we learn little about what came before.
So at the University of California, Berkeley, I met with dozens of professors who had studied different aspects of technology and society. I spent hours tracking down dusty, bound volumes in the stacks of libraries across campus. And here is what I learned.
Theorists, despite many fine shades of distinction, fall roughly into four camps: technological utopians, technological skeptics, contextualists, and social determinists. These terms will be defined in a moment, but one thing that jumped out was that the scholars fought like Furies. For example, the economic historian Robert Heilbroner wrote, “That machines make history in some sense . . . is of course obvious.”2 This view is called technological determinism, because it implies that technology determines social outcomes. But if some find it obvious, it is nevertheless ridiculed by critics. Philosopher Andrew Feenberg responded with sarcastic sympathy, writing that “the implications of determinism appear so obvious that it’s surprising to discover that [its premises do not] withstand close scrutiny.”3
Yet for all the debate, there is plenty of agreement, too. Utopians accept that there can be negative consequences of technology, and skeptics concede its benefits. What separates the four camps most is not facts but temperamental differences.
How to Spot a Utopian
In the Star Trek future, technological advances have liberated Earth from war, famine, illness, and conflict, at least among human beings. Thanks to matter replicators and dilithium crystals, food and energy are free. With nothing to fight over, peace and egalitarianism reign. (That’s why the series needs an ample supply of aliens as plot devices.) As Captain Jean-Luc Picard explains in the movie First Contact, “the acquisition of wealth is no longer the driving force in our lives.”4 That is to say, in a few more centuries, advanced technology makes economics itself obsolete. Instead, people are free to focus on greater ends: “We work to better ourselves and the rest of humanity.”
Star Trek is fiction, but its technological utopianism is very real. MIT Media Lab founder Nicholas Negroponte clearly shares it. So does Google chairman Eric Schmidt. In The New Digital Age, he and coauthor Jared Cohen wrote, “The best thing anyone can do to improve the quality of life around the world is to drive connectivity and technological opportunity.”5 And then there are technology cheerleaders like Clay Shirky, who shakes pom-poms for Team Digital in a book subtitled How Technology Makes Consumers into Collaborators.6 Many engineers and computer scientists also hold this view. A generation ago, when young people said they wanted to “change the world” or “make an impact,” they joined the Peace Corps. Now they move to Silicon Valley. They envision laying a foundation for Captain Picard’s greedless future.
Utopians believe that technology is inherently a positive force, that technology shapes civilization, and that more of it is a good thing. And they have what seems like irrefutable evidence. Thanks to advances such as modern medicine, air conditioning, cheap transport, and real-time communication, middle-class people today enjoy a quality of life that kings and queens didn’t have a century ago. There’s a reason, utopians argue, why historical epochs are named after technologies – the Bronze Age, the Iron Age, the Industrial Age, the Information Age – and why human culture flourished after the invention of the printing press.
But whatever they say and write, what most unites utopians is how they feel about technology. They love it, and they want more. Many believe that every kind of problem can be solved by some invention, often one that is right around the corner. Whether the issue is poverty, bad governance, or climate change, they say things like, “[There] is no limit to human ingenuity,” and “When seen through the lens of technology, few resources are truly scarce.”7 Besotted with gadgets, technological utopians scoff at social institutions like governments, civil society, and traditional firms, which they pity as slow, costly, behind the times, or all of the above.
I sympathize with the utopians because I was one myself. When I started the computer class in Nakkalbande, it was in the hopes that exposure to the technology would improve lives. And my research looked for ways to use technology to alleviate poverty.
A Curmudgeonly Skepticism
But time after time, I realized that technology alone never did the trick. Whether it was MultiPoint in India or laptops in America, inventing and spreading new devices didn’t necessarily cause social progress.
Technology skeptics would harrumph and point out that aspects of the Star Trek future are already with us. Thanks to agricultural technologies, America produces more than enough food to feed everyone in the country, and the food is cheap. Yet, almost 5 million children in the United States suffer from food insecurity in any given year.8 Indeed, there is enough food to feed the whole world, but hunger persists. About one in eight people is malnourished; that’s 840 million people eating less than they need.9 Evidently, technological plenty doesn’t mean plenty for everyone.
Skeptics believe that technology is overhyped and often destructive. Nicholas Carr, author of The Shallows, suggests that the fast-twitch, hyperlinked Internet not only erodes our ability to think deeply, but also traps us like a Siren: “We may be wary of what our devices are doing to us, but we’re using them more than ever.” His book is ominously subtitled What the Internet Is Doing to Our Brains. In The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov catalogs the myriad ways in which the Internet boosts, rather than contains, the power of repressive regimes: in China, social media is a tool for disseminating Communist Party propaganda; in Azerbaijan, webcams installed at election stations frightened citizens into voting for state-sponsored incumbents;10 in Iran, the chief of national police acknowledged a chilling fact of their anti-protest efforts: “The new technologies allow us to identify conspirators.”11
Technology skeptics like to point out unintended consequences. Jacques Ellul, for example, warned of the dangers of information overload back in 1965. “It is a fact that excessive data do not enlighten the reader or the listener,” he wrote. “They drown him.”12 Neil Postman suggested that broadcast media have created a culture that is “amusing itself to death,”13 like mythological lotus-eaters, or the soma-sedated characters of Aldous Huxley’s Brave New World. And Harvard professor Sheila Jasanoff has voiced the concerns of many in calling out climate change as a by-product of fossil-fuel-driven technologies.14 Incidentally, digital technologies play a shockingly large part in carbon emissions. One study estimated that in 2007, electronics accounted for 3 percent of carbon emissions globally and 7.2 percent of all electricity usage.15 In the United States in 2013, the data centers that store and distribute online content accounted, on their own, for about 2 percent of total electricity use.16 All of these figures are projected to grow.17
If skeptics are pessimistic, though, many of them share the utopians’ belief that technologies embody moral and political values. But where utopians see the promise of greater freedom and prosperity, skeptics see weakness, folly, and corruption. The economic efficiency of factories and assembly lines leads to a dehumanized society. High-tech entertainment prompts us to judge everything by its marketability. Social media turns us into zombies of “continuous partial attention.”18
As for practical action, skeptics are less united than their utopian counterparts. They span a spectrum from neo-Luddites who would destroy technology to those who can’t quite give up their smartphones. At one extreme is author and activist Derrick Jensen, who wrote, “Every morning when I wake up I ask myself whether I should write or blow up a dam.”19 Carr invoked a poet’s call for resistance, hoping that “we won’t go gently into the future our computer engineers and software programmers are scripting for us.”20 And some just throw up their hands. Ellul could see no easy solution: “It is not a matter of getting rid of it, but, by an act of freedom, of transcending it. How is this to be done? I do not yet know.”21
Not Good, Not Bad, Not Neutral
Utopians and skeptics have catchy rhetoric, but most reasonable people can see that the truth is neither Star Trek nor Brave New World. It’s probably a mixture of both. Melvin Kranzberg, a historian of technology, embraced technology’s apparent contradictions. “Technology,” he wrote in 1986, “is neither good nor bad; nor is it neutral.”22 This enigmatic statement captures what is probably the most common view among scholars of technology today: Its outcomes are context-dependent. Technology has both positive and negative impacts because technology and people interact in complex ways.
But contextualist explanations are also unedifying. To stop at context dependency is to say very little at all. The lessons tend to run along the lines of “more research is needed”; “it’s case by case”; or “it’s nuanced” – ivory-tower code for “it’s so complicated, there couldn’t possibly be any worthwhile generalizations.” As a proponent of one contextualist theory claimed, “explanation does not follow from description.”23
The Human Factor
Utopians, skeptics, and contextualists are each right in limited ways. The fifty-odd technology projects I oversaw in India produced a range of outcomes. A few improved people’s lives. The utopians would have cheered. A few wasted time and resources. The skeptics would have said, “I told you so.” The majority fell into a middle ground where they succeeded as research projects, but benefits beyond that were limited. The contextualists would have nodded in sympathy.
But was there some other way to interpret these outcomes? As I looked for some structure to our findings, three factors emerged as necessary for real impact.
The first is the dedication of the researcher, not to research outcomes but to concrete social impact. Of all the projects I oversaw, the one that continues to affect the most lives is called Digital Green. It uses how-to videos featuring local farmers as a teaching aid to instruct other farmers in better agricultural practices. Today the Indian Ministry of Rural Development is taking Digital Green to 10,000 villages, and the Ethiopian government has begun experimenting with it as well. None of this would be happening without Rikin Gandhi, who led the project. Gandhi has many talents, but what stands out is his single-minded focus on supporting smallholder farmers. Instead of designing the electronic version of a Rube Goldberg machine – which is what feature-happy technologists tend to do – he stuck with simple, off-the-shelf devices. Then, after we established Digital Green’s effectiveness, he left his research job to start a nonprofit organization. Without Gandhi’s devotion to social impact, Digital Green wouldn’t be much more than a research paper.
The second factor is the commitment and capacity of the partner organization. In my research group, we looked for capable, well-intentioned partners who had rapport with the communities we wanted to work with. Sometimes, though, we’d misjudge an organization and find ourselves stymied by its dysfunctions. In one project, we partnered with a sugarcane cooperative in a rural district three hours away from Bombay. We upgraded its communication infrastructure by replacing a creaky network of old personal computers with low-cost mobile phones. The new system worked, and farmers loved it. Had the cooperative rolled it out to all of its villages, it would have saved them tens of thousands of dollars every year.24 Yet an internal rivalry kept us from expanding beyond the pilot. (And as researchers, we lacked the patience and charm to iron out the discord.) The technology worked perfectly, but institutional politics hampered deployment. Good partners were important, even with good technology.
The third factor lies with intended beneficiaries. They must have the desire and the ability to take advantage of the technology provided. Sometimes they don’t. In India, we worked with poor people who lacked basic health care and hygiene, so we thought it would be useful to offer the right information at the right time. But would-be beneficiaries hesitated to follow even the simplest advice. Women wouldn’t take iron pills because of the bitter taste. Households wouldn’t boil water because of the extra effort. Fathers would lose infants to minor illnesses because they balked at hospital charges of as little as 50 rupees (about $1, potentially a day’s wages). In other words, they were like any of us who fail to exercise and eat well despite knowing that we should. It didn’t matter whether we delivered the information via text messages, automated voice calls, entertaining videos, or interactive apps. Technology by itself didn’t budge social and psychological inertia.
These factors suggest that the contextualists are right. Context definitely matters. All three factors, though, point to human context as what matters most. Or, to put it another way, the technology isn’t the deciding factor even in a technology project. Of course, good design trumps poor design, but beyond some level of functionality, technical design matters much less than the human elements.25 The right people can work around a bad technology, but the wrong people will mess up even a good one.
This is consistent with a fourth camp of technology-and-society scholarship sometimes called social determinism.26 Versions of it are known as “the social construction of technology” and the “instrumental view” of technology. These and related theories emphasize that technology is molded and wielded by people. People decide the form of technologies, the purposes of their use, and the outcomes they generate. Social determinism rests on the plain fact that it is people who act and make decisions – technologies do not.27
But if social determinism is common sense, it’s not quite enough. It says little about how much change follows in the wake of invention. So while I felt close kinship with social determinists, something was still missing.
It’s All Geek to Me
If you’ve ever landed on a webpage in a language you can’t read, you have an idea of what it means to be illiterate in a digital world. You can see that there’s a whole universe bursting with possibility, but none of it makes sense. You might recognize a few photos here and there, but your curiosity is piqued only to bang into a wall of indecipherable gibberish.
That was the experience of those we worked with who couldn’t read. It was true of the mothers of the students I taught in Nakkalbande, some of whom would pop into an occasional class to see what their children were up to. So one agenda for our research was digital interfaces for nonliterate users. In 2005 I hired Indrani Medhi, a designer who threw herself into the research and emerged within a few years as the world’s expert on what we called “text-free user interfaces.”
Medhi conducted much of her research in Nakkalbande. She got along well with Menon and shared her combination of toughness and empathy. Medhi was quick to befriend her research subjects – mostly women from poor families who earned $20 to $40 a month doing informal household work. Through them, Medhi found that illiteracy didn’t always mean innumeracy, at least in those communities. Many of the women could read numbers, even if they sometimes confused “2” and “5.” With a colleague, Archana Prasad, she also found that respondents understood cartoon drawings best, finding them less confusing than either simplified icons or photographs.28 These and other discoveries fed directly into Medhi’s designs.
Medhi and I had frequent discussions about her work, and some themes came up repeatedly. One was that illiteracy wasn’t black and white – it was a spectrum. Some people couldn’t read at all, others knew the alphabet, and still others could sound out words but couldn’t read a newspaper. Another point was that users differed considerably in their responses to the same interface. Some people zipped through Medhi’s text-free interfaces and even seemed to enjoy the process. Others were hesitant and slow and required encouragement to continue.
These traits seemed correlated: More literate people were more adept with computer interfaces, even when the interfaces contained no text. To investigate further, we ran a study in which participants were first given tests of literacy and abstract reasoning and then asked to perform a simple task on a computer.29 The task was to navigate a menu interface, which we knew would be a challenge. The respondents were asked to find specific household items among cartoon graphics organized in one of two ways. In the first, the objects were laid out so as to be visible all at once, but in a random order. In the second, they were organized as a series of nested items, similar to files put into folders on a computer. In the nested interface, bangles, for example, could be found first by clicking a graphic indicating things you wear (versus things you use), and then jewelry (versus clothing), and then hands (versus face or feet).
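For readers who want a concrete picture of the two layouts, here is a minimal sketch; the item names and category labels are illustrative stand-ins rather than the study’s actual materials, though the bangles example follows the path just described.

```python
# A minimal sketch (not the study's actual code or stimuli) of the two menu
# layouts: a flat list with every item visible at once versus a nested
# hierarchy that must be navigated one category at a time.

import random

ITEMS = ["bangles", "earrings", "sandals", "sari", "comb", "cooking pot"]

# Layout 1: all items shown together, in random order.
flat_layout = random.sample(ITEMS, k=len(ITEMS))

# Layout 2: items reachable only by clicking through categories,
# e.g., "things you wear" -> "jewelry" -> "hands" -> "bangles".
nested_layout = {
    "things you wear": {
        "jewelry": {"hands": ["bangles"], "ears": ["earrings"]},
        "clothing": {"body": ["sari"], "feet": ["sandals"]},
    },
    "things you use": {
        "kitchen": ["cooking pot"],
        "grooming": ["comb"],
    },
}

def find_nested(menu, target, path=()):
    """Return the sequence of clicks needed to reach `target` in the nested layout."""
    if isinstance(menu, list):
        return path + (target,) if target in menu else None
    for category, submenu in menu.items():
        found = find_nested(submenu, target, path + (category,))
        if found:
            return found
    return None

print(flat_layout)                            # one click from a shuffled list
print(find_nested(nested_layout, "bangles"))  # ('things you wear', 'jewelry', 'hands', 'bangles')
```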
The research validated our hunches. First, the degree of literacy correlated with the measure of abstract reasoning capacity. Second, all of the participants were quicker to find items in the single unorganized list than in the nested hierarchies. And third, on both navigation tasks – flat list and nested hierarchies – those who scored higher on the tests of literacy and reasoning outperformed their lower-scoring peers.
So whatever level of intelligence and education a person already had correlated with their facility with simple computer tasks. People with greater education and cognitive capacity were better able to use the technology. It would be careless to generalize too much from this one finding, but over the years, I saw many similar results. In a related study, supplying textual hints along with audio and graphics helped the literate more than the semiliterate and the semiliterate more than the nonliterate. Another group examined mobile phones and Indian women micro-entrepreneurs. The researchers found that the most ambitious and self-confident women benefited most from mobile phones. And a study of Tanzanian health-care workers showed that their visits to patients increased with text-message reminders, but only if they were also overseen by human supervisors.30
In other words, what people get out of technology depends on what they can do and want to do even without technology. In retrospect this seems self-evident, but it wasn’t a major theme in technology and society literature.31
The Eureka Moment
So theories of social determinism say that technology is put to use according to underlying human intentions. At the same time, the degree to which technology makes an impact depends on existing human capacities. Put these ideas together and technology’s primary effect is to amplify human forces.32 Like a lever, technology amplifies people’s capacities in the direction of their intentions. A computer allows its user to perform desired knowledge tasks in a way that is faster, easier, or more powerful than the user could without technology. But how much faster, more easily, and more powerfully is in some proportion to the user’s capacity. A mobile phone allows people to perform desired communication tasks across greater distances, with more people, and at greater frequency than would be possible without one. But whom one can communicate with and what one can expect of them depends on one’s existing social capacity.
The idea is so simple and so widely applicable that I have come to think of it as technology’s Law of Amplification. It was at work among the girls in Nakkalbande. Most had little conscious intention to learn or improve knowledge skills, and social forces such as the expectation to marry early impeded their interest. As a result, there was little productive force for the technology to amplify. But the two older girls who recognized the value of education had some inner flame fanned by the laptop. I could imagine that, with luck and persistence, they might have a chance at a different life.
Amplification also resolved some of the apparent paradoxes of my research. Why did MultiPoint, for example, work in our pilots, but not when we took it to other schools? It was because our positive results relied on special conditions that we had imposed. For our trials, we had deliberately chosen partner schools with capable teachers and principals. As a result, the students were focused on learning. They followed instructions without too much distraction. Another critical factor was our own presence as researchers. We set up the technology ourselves. And where we found teaching capacity wanting, we filled in. In other words, we had lined up all of the social conditions favorably so that the technology had a chance to work. And on that firm base, MultiPoint increased the number of students learning from computers.
But, for expansion, we targeted subpar schools. They needed the most help, after all. We also reduced our personal involvement, since the schools would eventually have to operate without us. In the absence of good teaching and IT support, however, the technology didn’t do much.
In the worst cases, technology was detrimental. More times than I’d like to admit, I’d visit a class involved in one of our projects, and something would go wrong with the technology. With no IT staff, the teacher would fumble to figure things out. The children would grow distracted. Sometimes I’d jump in to help. By the time the power was back on, the PCs rebooted, and the children once more settled into their seats, half of a fifty-minute class period was lost. It would have been better if they had stuck to pencil and paper.
In these cases, we see vividly that technologies don’t have fixed additive effects. They magnify existing social forces, which themselves can be good, bad, or neutral. Thus, technological utopians and skeptics are both partially right and partially wrong. Of course, this means it’s the contextualists and the social determinists who are closest to the truth. But the Law of Amplification says something more specific and therefore more useful.
For example, amplification offers clues as to why large-scale studies of educational technology rarely show positive results. In any representative set of schools, some are doing well and others poorly. Introducing computers may result in benefit for some, but it distracts the weaker schools from their core mission. On average, the outcome is a wash. An even bigger problem is that administrators rarely allocate enough resources to adapt curricula or train teachers.33 Where teachers don’t know how to incorporate digital tools appropriately, there is little capacity for the technology to amplify.
If a private company is failing to make a profit, no one expects that state-of-the-art data centers, better productivity software, and new laptops for all of the employees will turn things around. Yet, that is exactly the logic of so many attempts to fix schools with technology.
And what about computers outside of school? What happens when children are left to learn on their own with digital gadgets, as so many tech evangelists insist we should do? Here technology amplifies the children’s propensities. To be sure, children have a natural desire to learn and play and grow. But they also have a natural desire to distract themselves in less productive ways. Digital technology amplifies both of these appetites. The balance between them differs from child to child, but on the whole, distraction seems to win out when there’s no adult guidance. This is exactly what Robert Fairlie and Jonathan Robinson’s 2013 study of laptops in the home shows: If you provide an all-purpose technology that can be used for learning and entertainment, children choose entertainment.34 Technology by itself doesn’t undo that inclination – it amplifies it.
Amplifying Power
Back in Bangalore, I once hosted a political science professor, whom I’ll call Padma. She was interested in technology and governance. Padma was in the city to study a program that made the municipal government’s finances transparent to the public. A nonprofit group had convinced the government to set up a tool that let anyone with Internet access see how city money was spent. Citizens were able to see, for example, that 5,000 rupees (about $100) was spent repairing a pothole (expensive, but not unreasonable) or that 500,000 rupees (about $10,000) was spent cutting down a tree (not likely; a sign of kickbacks). The nonprofit would complain to the government about egregious spending it discovered. Sometimes it organized citizen protests. Padma had a hypothesis that technology promoted transparency and accountability, and here was a system that seemed to prove it.
When I asked her how the project turned out, though, she said the government had shut it down within a few months. Officials didn’t want their graft schemes open to public inspection.
If a computer system for government transparency was taken down by the very bureaucrats it was meant to monitor, then what accountability did the technology really bring? The project showed exactly the opposite of Padma’s claim. Instead of technology trumping politics, politics trumped technology. At first the technology amplified the nonprofit’s activism, but the organization’s power to affect government was overcome by the crooked bureaucracy’s greater power to turn off the technology.
Looking back at experiences like this, I saw that the Law of Amplification explained much more than just the fate of technology in education. It applies to a host of other situations. In 2011, earthshaking world events provided a unique testing ground.
Facebook Devolution
In what is now a well-worn story, Wael Ghonim, a thirty-year-old Google executive, used Facebook to help organize the protests that toppled Hosni Mubarak in Egypt. Today, it’s hard to speak of the Arab Spring without calling to mind the phrase “Facebook revolution.”
In early 2011, Facebook had roughly 600 million users. Almost 10 percent of the world population was on it.35 Speculations about its initial public offering swirled, and The Social Network, a movie telling one version of its origins, was in theaters. With this buzz ringing in their ears, journalists and bloggers were agog over Facebook’s role in Egypt. A day before the January 25 protests, Time asked, “Is Egypt about to have a Facebook Revolution?” It cited the 85,000 people who had pledged on Facebook that they would march.36 Days after the first protest in Tahrir Square, Roger Cohen wrote in the New York Times, “The Facebook-armed youth of Tunisia and Egypt rise to demonstrate the liberating power of social media.”37 One Egyptian newspaper reported that a man named his firstborn daughter Facebook.38
On February 11, 2011 – the day the regime folded – Ghonim told a CNN interviewer, “I want to meet Mark Zuckerberg one day and thank him. . . . This revolution started on Facebook . . . in June 2010 when hundreds of thousands of Egyptians started collaborating content. We would post a video on Facebook that would be shared by 60,000 people on their walls within a few hours. I’ve always said that if you want to liberate a society, just give them the Internet.”39
If you want to liberate a society, just give them the Internet. This is a classic statement of technological utopianism. As in Star Trek, where technology eradicates hunger, Ghonim is saying that the Internet eradicates autocracy. Coming from someone directly involved in the revolution, it seems impolite to refute. But just as with the hype around technology for education, the case for social media as an important cause of democratic change vanishes under critical inspection.
First, let’s accept that in Egypt, and earlier in Tunisia, social media contributed to the overthrow of dictators. We’ll come back to exactly what that contribution was, but there’s no doubt that YouTube videos and Facebook posts played a part.
But in other Middle Eastern countries, events unfolded differently. Consider Libya, for example. On February 18, 2011, just days after the rebellion started, Muammar Gaddafi dimmed communication networks in his country.40 Maybe he had heard about the Facebook revolution next door and didn’t want one of his own. He disabled most of the Internet in Libya and did the same for phone services, mobile and landline.41 The rebels managed to coordinate nevertheless. Far from ceasing their activities, they kept fighting. Soon after, they overwhelmed Gaddafi’s forces, tracked him down, and executed him in the streets.
In Syria, President Bashar al-Assad took a cue from Gaddafi. When protests began, he shut down the Internet nationwide and selectively disabled phone networks to hinder rebel communications.42 Protests, though, continued, leading to an all-out civil war, with the rebels showing no signs of quitting even four years later. Media portrayal of Syria has long since stopped mentioning Facebook, Twitter, or YouTube.
Meanwhile, in Bahrain and Saudi Arabia, something very different happened. A few public protests were quashed in Bahrain, and the Western press hardly noticed the feeble activism in Saudi Arabia. Importantly, it wasn’t for a lack of social media organization. Encouraged by the Tunisian and Egyptian revolutions, activists in Saudi Arabia circulated a spate of petitions and videos via Facebook and Twitter. They called for an end to absolute monarchy. But these online actions were squelched by offline forces, as reported by Madawi Al-Rasheed, a specialist in Islamist movements and Middle Eastern civil society.43 One young activist, Muhammad al-Wadani, uploaded a YouTube video urging democracy. He was promptly arrested. Two online petitions demanding constitutional monarchy received thousands of signatures. They were ignored. A group calling itself the National Coalition and Free Youth Movement attempted to organize online. It ended up in a game of virtual Whac-A-Mole as regime security took down its websites one after the other. Protests planned on social media led nowhere.
The goal of these and other Web-based appeals was a physical protest, a “Day of Rage” on March 11, 2011. But, as Al-Rasheed wrote, “things were quiet” on the day itself: “Security forces spread through every corner and street. An unannounced curfew loomed over Riyadh and Jeddah.” No protests worthy of international attention materialized.
Al-Rasheed argued that the Saudi monarchy has starved civil society in the kingdom for decades. There are no trade unions, no political parties, no youth associations, and no women’s organizations. Demonstrations themselves are forbidden outright. As a result, grassroots organizational capacity is stunted. This is in direct contrast to Egypt, for example, where trade unions, nongovernmental organizations, and the Muslim Brotherhood all simmered as potent political forces despite Mubarak’s oppression.44
The absence of protest is a non-event, so it goes unreported by mainstream news organizations. But an accurate understanding of social media’s role in revolution must account for the stillborn protests of Bahrain and Saudi Arabia as much as for the successful uprisings in Tunisia and Egypt.
Did America Have a Lantern Revolution?
Combining the lessons of Tunisia, Egypt, Libya, Syria, Bahrain, and Saudi Arabia, we come to an undeniable conclusion: Social media is neither necessary nor sufficient for revolution. Claims of social media revolutions commit the classic conflation of correlation and cause. To say that the Arab Spring was a Facebook revolution is like calling the events of 1775 in America a lantern revolution thanks to Paul Revere: “One, if by land, and two, if by sea.”
Actually, the tale of Revere’s lanterns is itself the stuff of myth. In reality, the lantern signal was just a backup plan in case Revere was arrested and unable to sound the alarm.45 What this story actually illustrates is that revolutionaries are contingency planners who exploit every tool at their disposal. In a telling interview about his involvement in Egypt, Ghonim noted, “They shut down Facebook. But, I had a backup plan. I used Google Groups to send a mass-mail campaign.”46 Presumably, had email also been blocked, he’d have resorted to phone calls, paper notes, and word of mouth – using the same communication tools as the 80 percent of Egyptians who have never been online. Of course, for Ghonim, “technology played a great role” – it would have been much harder for him to organize door to door. But it seems unlikely that the absence of Facebook would have prevented his activism altogether, or could have kept the rest of the country mute. Taking the broader view, Facebook was a tool of convenience for angry activists spreading the word by every available channel.
A few people tried to debunk social media’s revolutionary powers. For example, just as Mubarak’s regime was crumbling, Morozov’s book The Net Delusion was released.47 Though Morozov couldn’t have known of Egypt’s fate while he was writing the book, he provided some of the most insightful commentary on technology’s role in Middle Eastern uprisings. His first chapter mocks the breathless hype around a supposed Twitter revolution in 2009 Iran – hype that led Hillary Clinton’s State Department to ask Twitter to postpone routine maintenance during the height of protests. (Twitter complied, and Clay Shirky wrote, “This is it. The big one. This is the first revolution that has been catapulted onto a global stage and transformed by social media.”48) Morozov, however, cites low numbers of actual Twitter users in Iran at the time (perhaps all of sixty) and Iranian denials that Twitter had much of a role in organizing protests. He argued that Twitter was less an effective tool of protest and more a way for the outside world to eavesdrop on the events. For him, the social media narrative recalled Cold War ideas that capitalist technology would triumph over communist inefficiency, as if people in the Middle East couldn’t have rebelled on their own without the gifts of American entrepreneurs. In the end, whatever was tweeted, there was no Twitter revolution in Iran.
Also among the skeptics was Malcolm Gladwell, who had previously picked a fight with Shirky over the latter’s rhapsodies about social media. Gladwell pointed out that in the 1980s, East Germans barely had access to phones, much less the Internet, and they still organized, protested, and brought down the Berlin Wall. Of the Arab Spring revolutions, Gladwell wrote, “Surely the least interesting fact about them is that some of the protesters may (or may not) have at one point or another employed some of the tools of the new media to communicate with one another.”49
As the critics gained momentum, social media proponents fought back. Most were chastened but insisted that social media still mattered in some important way. One reporter writing for CNN hedged, “Yes, of course, technology alone doesn’t make revolutions. . . . But that doesn’t mean social media cannot provide wavering revolutionaries with vital aid and comfort.”50
Amplification’s Eternal Recurrence
What none of the commentators were providing, though, was a good framework for understanding technology’s role, and that’s where the Law of Amplification comes in. It explains how social media contributed to successful revolutions in some countries but not others, and how it can simultaneously be a supporting factor without being a primary cause.
In Tunisia and Egypt, citizen frustration and organized groups existed long before social media. Mubarak was in power for nearly thirty years, overseeing a stagnant economy under a “democracy” that had no one fooled. That frustration coalesced within existing civil society organizations and found amplified expression on Facebook. Leaders of the rebellion saw their organizing power extended by social media. Overall, technology probably accelerated the pace of revolution.
In Bahrain and Saudi Arabia, civil society was crippled, so no amount of Facebook organizing made a difference. Technology doesn’t amplify human forces that aren’t there.
Even Ghonim later acknowledged, “I am no hero. . . . The heroes were the ones who were in the streets, those who got beaten up, those who got arrested and put their lives in danger.”51 There is no protest without citizen frustration. There is no rebellion without a sacrifice of personal safety.
Amplification as an idea is hardly novel. My colleague Jonathan Donner, an expert on mobile phones in the developing world, pointed me to a 1970 paper on the “knowledge gap hypothesis,” in which the authors reported that public-service messaging delivered through mass media was better absorbed by wealthier, more educated households.52 Lewis Mumford, a prominent twentieth-century technology critic who was part skeptic and part contextualist, wrote a two-volume work called The Myth of the Machine, in which he mentions in passing that technology “supported and enlarged the capacities for human expression.”53 And Philip Agre, another computer-scientist-turned-technology-analyst, wrote prescient articles about the Internet in politics. “The Internet changes nothing on its own,” he told us, “but it can amplify existing forces.”54
But if amplification isn’t new, it is completely underappreciated.