We are as gods and might as well get good at it.
—STEWART BRAND, “We Are as Gods”
LONG BEFORE THE MILITARY CONVOY ARRIVED in the muggy town of Dara Lam, news of the meeting between the U.S. Army colonel and the unpopular governor of Kirsham province had seeped into social media. Angry with the American presence and the governor’s corruption, local citizens organized for a demonstration. Their trending hashtag—#justice4all—soon drew the attention of international media. It also drew the eyes of some less interested in justice: the notorious Fariq terror network. Using sockpuppet accounts and false reports, the terrorists fanned the flames, calling for the protesters to confront the American occupiers.
But this wasn’t the full extent of Fariq’s plan. Knowing where a massive crowd of civilians would gather, the terrorists also set an ambush. They’d fire on the U.S. soldiers as they exited the building, and if the soldiers fired back, the demonstrators would be caught in the crossfire. Pre-positioned cameramen stood ready to record the bloody outcome: either dead Americans or dead civilians. A network of online proxies was prepared to drive the event to virality and use it for future propaganda and recruiting. Whatever the physical outcome, the terrorists would win this battle.
Luckily, other eyes were tracking the flurry of activity online: those of a U.S. Army brigade’s tactical operations center. The center’s task was to monitor the environment in which its soldiers operated, whether dense cities, isolated mountain ranges, or clusters of local blogs and social media influencers. The fast-moving developments were detected and then immediately passed up the chain of command. The officers might once have discounted internet chatter but now understood its importance. Receiving word of the protest’s growing strength and fury, the colonel cut his meeting short and left discreetly through a back entrance. Fariq’s plan was thwarted.
Try as you might, you won’t find any record of this event in the news—and it is not because the battle never took place. It is because Dara Lam is a fake settlement in a fake province of a fake country, one that endures a fake war, waged on a fake internet, that breaks out every few months in the very real state of Louisiana.
The Joint Readiness Training Center at Fort Polk holds a special place in military history. It was created as part of the Louisiana Maneuvers, a series of massive training exercises held just before the United States entered World War II. When Hitler and his blitzkrieg rolled over Europe, the U.S. Army realized warfare was operating by a new set of rules. It had to figure out how to transition from a world of horses and telegraphs to one of mechanized tanks and trucks, guided by wireless communications. It was at Fort Polk that American soldiers, including such legendary figures as Dwight D. Eisenhower and George S. Patton, learned how to fight in a way that would preserve the free world.
Since then, Fort Polk has served as a continuous field laboratory where the U.S. Army trains for tomorrow’s battles. During the Cold War, it was used to prepare for feared clashes with the Soviet Red Army and then to acclimatize troops to the jungles of Vietnam. After 9/11, the 72,000-acre site was transformed into the fake province of Kirsham, replete with twelve plywood villages, an opposing force of simulated insurgents, and scores of full-time actors playing civilians caught in the middle: in short, everything the Army thought it needed to simulate how war was changing. Today, Fort Polk boasts a brand-new innovation for this task: the SMEIR.
Short for Social Media Environment and Internet Replication, SMEIR simulates the blogs, news outlets, and social media accounts that intertwine to form a virtual battlefield atop the physical one. A team of defense contractors and military officers simulates the internet activity of a small city—rambling posts, innocuous tweets, and the occasional bit of viral propaganda—challenging the troops fighting in the Kirsham war games to navigate the digital terrain. For the stressed, exhausted soldiers dodging enemy bombs and bullets, it’s not enough to safeguard the local population and fight the evil insurgents; they must now be mindful of the ebb and flow of online conversation.
From a military perspective, SMEIR is a surreal development. A generation ago, the internet was a niche plaything, one that the U.S. military itself had just walked away from. Only the most far-sighted futurists were suggesting that it might one day become a crucial battlefield. None imagined that the military would have to pay millions of dollars to simulate a second, fake internet to train for war on the real one.
But in the unbridled chaos of the modern internet, even an innovation like SMEIR is still playing catch-up. Thwarted by an eagle-eyed tactical operations officer, actual terrorists wouldn’t just fade back into the crowd. They’d shoot the civilians anyway and simply manufacture evidence of U.S. involvement. Or they’d fabricate the video wholesale and use botnets and armies of distant fanboys to overwhelm the best efforts of fact-checkers, manipulating the algorithms of the web itself.
Nor can such a simulation capture the most crucial parts of the battlefield. The digital skirmishes that would have determined who actually won this fight wouldn’t have been limited to Louisiana or SMEIR. Rather, they would have been decided by the clicks of millions of people who’ve never met a person from Dara Lam, and by whatever policies social media executives had chosen for handling Fariq propaganda. The reality of what took place in the (fake) battle would have been secondary to whatever aspects of it went viral.
Just as soldiers in Louisiana are struggling to adjust to this new information conflict, so are engineers in Silicon Valley. All the social media powers were founded on the optimistic premise that a more close-knit and communal world would be a better one. “[Facebook] was built to accomplish a social mission, to make the world more open and connected,” wrote Mark Zuckerberg in a 2012 letter to investors, just as his company went public. Yet as we’ve seen, these companies must now address the fact that this very same openness and connection has also made their creations an arena for continual, global conflict.
This duality of the social media revolution touches the rest of us, too. The evolutionary advantages that make us such dynamic, social creatures—our curiosity, affinity for others, and desire to belong—also render us susceptible to dangerous currents of disinformation. It doesn’t even help to be born into the world of the internet, as is the case for millennials and Generation Z. Study after study finds that youth is no defense against the dangers we’ve explored in this book. Regardless of how old they are, humans as a species are uniquely ill-equipped to handle both the instantaneity and the immensity of information that defines the social media age.
However, humans are unique in their ability to learn and evolve, to change the fabric of their surroundings. Although the maturation of the internet has produced dramatic new forces acting upon war and politics—and, by extension, upon all of society—these changes are far from unknown or unknowable. Even LikeWar has rules.
First, for all the sense of flux, the modern information environment is becoming stable. The internet is now the preeminent communications medium in the world; it will remain so for the foreseeable future. Through social media, the web will grow bigger in size, scope, and membership, but its essential form and centrality to the information ecosystem will not change. It has also reached a point of maturity whereby most of its key players will remain the same. Like them or hate them, the majority of today’s most prominent social media companies and voices will continue to play a crucial role in public life for years to come.
Second, the internet is a battlefield. Like every other technology before it, the internet is not a harbinger of peace and understanding. Instead, it’s a platform for achieving the goals of whichever actor manipulates it most effectively. Its weaponization, and the conflicts that then erupt on it, define both what happens on the internet and what we take away from it. Battle on the internet is continuous, the battlefield is contiguous, and the information it produces is contagious. The best and worst aspects of human nature duel over what truly matters most online: our attention and engagement.
Third, this battlefield changes how we must think about information itself. If something happens, we must assume that there’s likely a digital record of it—an image, video, or errant tweet—that will surface seconds or years from now. However, an event only carries power if people also believe that it happened. The nature of this process means that a manufactured event can have real power, while a demonstrably true event can be rendered irrelevant. What determines the outcome isn’t mastery of the “facts,” but rather a back-and-forth battle of psychological, political, and (increasingly) algorithmic manipulation. Everything is now transparent, yet the truth can be easily obscured.
Fourth, war and politics have never been so intertwined. In cyberspace, the means by which the political or military aspects of this competition are “won” are essentially identical. As a result, politics has taken on elements of information warfare, while violent conflict is increasingly influenced by the tug-of-war for online opinion. This also means that the engineers of Silicon Valley, quite unintentionally, have turned into global power brokers. Their most minute decisions shape the battlefield on which both war and politics are increasingly decided.
Fifth, we’re all part of the battle. We are surrounded by countless information struggles—some apparent, some invisible—all of which seek to alter our perceptions of the world. Whatever we notice, whatever we “like,” whatever we share, becomes the next salvo. In this new war of wars, taking place on the network of networks, there is no neutral ground.
LikeWar isn’t likeable. This state of affairs is certainly not what we were promised. And no matter how hard today’s technologists try, their best efforts will never yield the perfect, glittering future once envisioned by the internet’s early inventors.
Yet recognizing the new truths of the modern information environment and the eternal aspects of politics and war doesn’t mean admitting defeat. Rather, it allows us to hone our focus and channel our energies into measures that can accomplish the most tangible good. Some of these initiatives can be undertaken by governments, others by social media companies, and still others by each of us on our own.
For governments, the first and most important step is to take this new battleground seriously. Social media now forms the foundation of commercial, political, and civic life. It is also a conflict space of immense consequence to both national security and the security of individual citizens. Just as the threat of cyberwar was recognized and then organized and prepared for over the past two decades, so, too, must this new front be addressed.
This advice is most urgent for democratic governments. As this book has shown, authoritarian leaders have long since attuned themselves to the potential of social media, both as a threat to their rule and as a new vector for attacking their foes. Although many democracies have formed national efforts to confront the resulting dangers, the United States—the very birthplace of the internet—has remained supremely ill-equipped. Indeed, in the wake of the episodes you have read about in this book, other nations now look to the United States as a showcase for all the developments they wish to avoid. So far, America has emerged as one of the clearest “losers” in this new kind of warfare.
A model to respond comes from a number of countries that have moved beyond the previously discussed military reorganization to the creation of “whole-of-nation” efforts intended to inoculate their societies against information threats. It is not coincidental that among the first states to do so were Finland, Estonia, Latvia, Lithuania, and Sweden, all of which face a steady barrage of Russian information attacks, backed by the close proximity of Russian soldiers and tanks. Their inoculation efforts include citizen education programs, public tracking and notices of foreign disinformation campaigns, election protections and forced transparency of political campaign activities, and legal action to limit the effect of poisonous super-spreaders.
In many ways, such holistic responses to information threats have an American pedigree. One of the most useful efforts to foil Soviet operations during the Cold War was a comprehensive U.S. government effort called the Active Measures Working Group. It brought together people working in various government agencies—from spies to diplomats to broadcasters to educators—to collaborate on identifying and pushing back against KGB-planted false stories designed to fracture societies and undermine support for democracy. No such equivalent exists today. Nor is there an agency that does for information what the Centers for Disease Control and Prevention does for health—a clearinghouse through which government connects with business and researchers to battle dangerous viral outbreaks.
It would be easy to say that such efforts should merely be resurrected and reconstituted for the internet age—and doing so would be a welcome development. But we must also acknowledge a larger problem: Today, a significant part of the American political culture is willfully denying the new threats to its cohesion. In some cases, it’s colluding with them.
Too often, efforts to battle back against online dangers emanating from actors at home and abroad have been stymied by elements within the U.S. government. Indeed, at the time we write this in 2018, the Trump White House has not held a single cabinet-level meeting on how to address the challenges outlined in this book, while its State Department refused to increase efforts to counter online terrorist propaganda and Russian disinformation, even as Congress allocated nearly $80 million for the purpose.
Similarly, the American election system remains remarkably vulnerable, not merely to hacking of the voting booth, but also to the foreign manipulation of U.S. voters’ political dialogue and beliefs. Ironically, although the United States has contributed millions of dollars to help nations like Ukraine safeguard their citizens against these new threats, political paralysis has prevented the U.S. government from taking meaningful steps to inoculate its own population. Until this is reframed as a nonpartisan issue—akin to something as basic as health education—the United States will remain at grave risk.
Accordingly, information literacy is no longer merely an education issue but a national security imperative. Indeed, given how early children’s thought patterns develop and their use of online platforms begins, the process cannot start early enough. Just as in basic health education, there are parallel roles for both families and schools in teaching children how to protect themselves online, as well as gain the skills needed to be responsible citizens. At younger ages, these include programs that focus on critical thinking skills, expose kids to false headlines, and encourage them to play with (and hence learn from) image-warping software. Nor should the education stop as students get older. As of 2017, at least a dozen universities offered courses targeting more advanced critical thinking in media consumption, including an aptly named one at the University of Washington: “Calling Bullshit: Data Reasoning in a Digital World.” These pilot programs point the way, but their small number also illustrates how far we have to go in making such training widely available.
As in public health, such efforts will have to be supported outside the classroom, targeting the broader populace. Just as with viral outbreaks of disease, there is a need for everything from public awareness campaigns that explain the risks of disinformation efforts to mass media notifications that announce when such efforts are detected.
Given the dangers, anger, and lies that pervade social media, there’s a temptation to tell people to step away from it altogether. Sean Parker created one of the first file-sharing social networks, Napster, and then became Facebook’s first president. However, he has since become a social media “conscientious objector,” leaving the world that he helped make. Parker laments not just what social media has already done to us, but what it bodes for the next generation. “God only knows what it’s doing to our children’s brains,” he said in 2017.
The problem is that not all of us want to, or even can, make that choice. Like it or not, social media now plays a foundational role in public and private life alike; it can’t be un-invented or simply set aside. Nor is the technology itself bad. As we’ve repeatedly seen in this book, its new powers have been harnessed toward both good and evil ends, empowering both wonderful and terrible people, often simultaneously. Finally, it is damned addictive. Any program advising people to “just say no” to social media will be as infamously ineffective as the original 1980s antidrug campaign.
Instead, part of the governance solution to our social media problem may actually be more social media, just of a different kind. While the technology has been used to foment a wide array of problems around the world, a number of leaders and nations have simultaneously embraced its participatory nature to do the opposite: to identify and enact shared policy solutions. Such “technocracy” views the new mass engagement that social networks allow as a mechanism to improve our civic lives. For instance, a growing number of elected governments don’t use social media just to frighten or anger their followers; they also use it to expand public awareness and access to programs, track citizen wants and needs, even gather proposals for public spending. Some are also seeking to inject it more directly into the political process. Switzerland, for instance, may be the world’s oldest continuous democracy, but it has been quick to use social networks to allow the digitization of citizen petitions and the insertion of online initiatives into its policy deliberations. In Australia and Brazil, the Flux movement seeks to use technology to return to true political representation, whereby elected leaders commit to a system that lets citizens submit key issues to parliament and vote on them digitally, moving power from the politician back to the people.
What is common across these examples of governance via network is the use of social media to learn and involve. It is the opposite of governance via trolling—the all-too-frequent use of social media to attack, provoke, and preen.
This points to perhaps the biggest challenge of all: it will be hard to overcome any system that incentivizes an opposite outcome. Not just our networks but our politics and culture have been swarmed by the worst aspects of social media, from lies and conspiracy theories to homophily and trolling. This has happened for the very reason that it works: it is rewarded with attention, and, as we have seen, this attention becomes power.
Super-spreaders play a magnified role in our world; there is no changing that fact now. But it is how they are rewarded that determines whether their influence is a malign or benevolent one. When someone engages in the spread of lies, hate, and other societal poisons, they should be stigmatized accordingly. It is not just shameful but dangerous that the purveyors of the worst behaviors on social media have enjoyed increased fame and fortune, all the way up to invitations to the White House. Stopping these bad actors requires setting an example and ensuring that repeat offenders never escape the gravity of their past actions and are excluded from the institutions and platforms of power that now matter most in our society. In a democracy, you have a right to your opinion, but no right to be celebrated for an ugly, hateful opinion, especially if you’ve spread lie after lie.
Indeed, social media actions need to be taken all the more seriously when their poisonous side infects realms like national security, where large numbers of people’s lives are at stake. Those who deliberately facilitate enemy efforts, whether it be providing a megaphone for terrorist groups or consciously spreading disinformation, especially that from foreign government offensives, have to be seen for what they are. They are no longer just fighting for their personal brand or their political party; they are aiding and abetting enemies that seek to harm all of society.
We must also come to grips with the new challenge of free speech in the age of social media—what is known as “dangerous speech.” This term has emerged from studies of what prompts communal violence. It describes public statements intended to inspire hate and incite violent actions, usually against minorities. Dangerous speech isn’t merely partisan language or a bigoted remark. These are, alas, all too common. Rather, dangerous speech falls into one or more of five categories: dehumanizing language (comparing people to animals or describing them as “disgusting” or somehow subhuman); coded language (using coy historical references, loaded memes, or terms popular among hate groups); suggestions of impurity (that a target is unworthy of equal rights, somehow “poisoning” society as a whole); opportunistic claims of attacks on women, made by people with no concern for women’s rights (which allows the group to claim a valorous reason for its hate); and accusation in a mirror (a reversal of reality, in which a group is falsely told it is under attack, as a means to justify preemptive violence against the target). This sort of speech poses a mortal threat to a peaceable society, especially if crossed with the power of a super-spreader to give it both validation and reach.
Cloaking itself in ambiguity and spreading via half-truths, dangerous speech is uniquely suited to social media. Its human toll can be seen in episodes like the web-empowered anti-Muslim riots in India and the genocide of the Rohingya people in Myanmar. But the researchers who focus on the problem have grown most disturbed by how dangerous speech is increasingly at work in the United States. Instances of dangerous speech are at an all-time high, spreading via deliberate information offensives from afar, as well as via once-scorned domestic extremists whose voices have become amplified and even welcomed into the mainstream. The coming years will determine whether these dangerous voices will continue to thrive in our social networks, and thus our politics, or be defeated.
This challenge takes us beyond governments and their voters to the accountability we should demand from the companies that now shape social media and the world beyond. It is a strange fact that the entities best positioned to police the viral spread of hate and violence are not legislatures, but social media companies. They have access to the data and patterns that reveal it, and they can respond more rapidly than governments can. As rulers of voluntary networks, they determine the terms of service that reflect their communities’ and stockholders’ best interests. Dangerous speech is not good for either.
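To see how such a taxonomy might be put to work, consider a sketch of how a research or moderation tool could encode the five categories to flag posts for human review. Everything in it (the category labels, the placeholder indicator phrases, the naive matching logic) is an invented illustration, not any platform's actual system; real detection demands context and judgment, not keyword lists.

```python
# Hypothetical sketch only: encoding the five dangerous-speech categories as
# data, so that a research tool could tag posts for later human review. The
# indicator phrases are invented placeholders; real-world detection is far
# harder than substring matching.
from enum import Enum

class DangerousSpeech(Enum):
    DEHUMANIZING = "compares a group to animals, or casts it as disgusting or subhuman"
    CODED = "coy historical references, loaded memes, terms popular among hate groups"
    IMPURITY = "paints a target as unworthy of equal rights, 'poisoning' society"
    OPPORTUNISTIC = "claims of attacks on women by actors indifferent to women's rights"
    MIRROR = "falsely tells a group it is under attack, justifying 'preemptive' violence"

# Invented example indicators, for illustration only.
INDICATORS = {
    DangerousSpeech.DEHUMANIZING: ["vermin", "infestation"],
    DangerousSpeech.MIRROR: ["they are coming for us", "strike first"],
}

def tag_for_review(post: str) -> list[DangerousSpeech]:
    """Naively flag categories a post may fall into; NOT a real classifier."""
    text = post.lower()
    return [category for category, phrases in INDICATORS.items()
            if any(phrase in text for phrase in phrases)]

# Flags both DEHUMANIZING and MIRROR for this invented example post.
print(tag_for_review("They are an infestation. Strike first."))
```

Even so simple a sketch makes plain that the hard part is not enumerating the categories but judging context, which is exactly the work that the platforms, with their data and their speed, are best positioned to do.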
This is just one dimension of the challenges these companies must confront. Put simply, Silicon Valley must accept more of the political and social responsibility that the success of its technology has thrust upon it. “The more we connect, the better it gets” is an old Facebook slogan that remains generally representative of how social media companies see the world. As we’ve seen, that slogan is neither true nor an acceptable way for these firms to approach the new role they play in society.
Although figures like Mark Zuckerberg have protested at various times that they should not be considered “arbiters of the truth,” this is exactly what they are. The information that spreads via their services—governed by their legal and software codes—shapes our shared reality. If they aren’t the arbiters of truth, who is?
Accordingly, these companies must abandon the pretense that they are merely “neutral” platform providers. It is a weak defense that they outgrew many years ago. Bigots, racists, violent extremists, and professional trolls do not have to be accorded the same respect as marginalized peoples and democratic states. In turn, the authoritarian governments that exploit their networks and target their users must be treated as adversaries—not potential new markets.
In the process, Silicon Valley must also break the code of silence that pervades its own culture. Our past research has brought us into contact with soldiers, spies, mercenaries, insurgents, and hackers. In every case, they proved oddly more willing to speak about their work—and how they wrestle with the thorny dilemmas within it—than those employed at big social media companies. As technology reporter Lorenzo Franceschi-Bicchierai has similarly written of his experience reporting on Facebook, “In many cases, answers to simple questions—are the nude images blurred or not, for instance—are filtered to the point where the information Facebook gives journalists is not true.”
These companies should walk their talk, embracing proactive information disclosures instead of just using the word “transparency” ad infinitum in vague press releases. This applies not just to the policies that govern our shared online spaces but also to the information these companies collect from those spaces. It’s unacceptable that social media firms took nearly a year after the 2016 U.S. election to release data showing definitive proof of a Russian disinformation campaign—and even then, only after repeated demands by Congress.
It is perhaps especially troubling that, despite all the political and public pressure, most are still dragging their feet in disclosing the full extent of what played out across their networks. Of the major social media companies, Reddit is the only one that preserved the known fake Russian accounts for public examination. By wiping clean this crucial evidence, the firms are doing the digital equivalent of bringing a vacuum cleaner to the scene of a crime. They are not just preventing investigators and researchers from exploring the full extent of what occurred and how to prevent a repeat. They are destroying what should be a memorial to a moment of mass manipulation and misinformation that very much altered world history.
Just as all companies have a role in public health, so does Silicon Valley have a responsibility to help build public information literacy. Owning the platforms by which misinformation spreads, social media companies are in a powerful position to help inoculate the public. The most effective of these initiatives don’t simply warn people about general misinformation (e.g., “Don’t believe everything you read on the internet”) or pound counterarguments into their heads (e.g., “Here are ten reasons why climate change is real”). Rather, effective information literacy education works by presenting the people being targeted with specific, proven instances of misinformation, encouraging them to understand how and why it worked against them. Here again, the firms have mostly buried this information instead of sharing it with victims. Social media firms should swallow the fear that people will abandon their services en masse if they engage in these sorts of initiatives (we’re too addicted to quit anyway). In their quest to avoid liability and maintain the fiction that they’re blameless, social media firms have left their customers unarmed for battle.
This battle will only become more intense. Therefore, these companies should steal a lesson from the fictional battlefields of Dara Lam. It’s not enough to experiment and train for today’s battles of LikeWar; they must look ahead to tomorrow’s.
The companies must proactively consider the political, social, and moral ramifications of their services. It is telling that, across all the episodes described in this book, not a single social media firm tried to remedy the ills playing out on its network until they became larger problems, even though executives could see these abuses unfolding in real time. Even when these firms’ own employees sounded alarms about issues ranging from hate groups to harassment, they were consistently ignored. Similarly, when outside researchers raised concerns about emerging problems like neo-Nazi trolling and Russian disinformation campaigns during the 2016 U.S. election, the firms essentially dismissed them.
Changing to a proactive strategy will require the firms to alter their approach to product development. In the same way that social media companies learned to vet new features for technical bugs, any algorithmic tweak or added capability will require them to take a long, hard look at how it might be co-opted by bad actors or spark unintended consequences—before the feature is released to the masses in a chaotic “beta test.” Much like the U.S. Army playing war games at Fort Polk, these companies should aggressively “game out” the potential legal, social, and moral effects of their products, especially in regard to how the various types of bad actors discussed in this book might use them. The next time a malicious group or state weaponizes a social media platform, these companies won’t be able to beg ignorance. Nor should we allow them to use that excuse any longer.
Amid all this talk of taking responsibility, it’s important to recognize that this is the appropriate moment in both the internet’s and these companies’ own maturation to do so. As internet sociologist Zeynep Tufekci has reminded us, “Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones.” The critics of social media should remember that the companies aren’t implacable enemies set on ruining the social fabric. They’re just growing into their roles and responsibilities. By acting less like angry customers and more like concerned constituents, we stand the best chance of guiding these digital empires in the right direction.
And this points to our own individual role in a realm of escalating online war—that is, recognizing our burgeoning responsibilities as citizens and combatants alike.
Like any viral infection, information offensives work by targeting the most vulnerable members of a population—in this case, the least informed. The cascading nature of “likes” and “shares” across social networks, however, means that the gullible and ignorant are only the entry point. Ignorance isn’t bliss; it just makes you a mark. It also makes you more likely to spread lies, which your friends and family will be more inclined to believe and spread in turn.
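To make that cascade dynamic concrete, consider a toy spread model, a minimal sketch under invented assumptions. The network, the credulity scores, and every parameter below are made up for illustration; no real platform data is involved.

```python
# Toy "independent cascade" model of a falsehood spreading through a social
# network. All numbers are invented for illustration. The point it sketches:
# seed only the most credulous users, and the shares of each new believer
# still carry the lie far beyond that gullible entry point.
import random

random.seed(42)

N = 10_000                    # people in the toy network
FRIENDS = 12                  # acquaintances per person
SEEDS = 50                    # highly credulous users targeted first

# Each person's "credulity": the chance they believe and re-share a
# falsehood when a friend exposes them to it. Most people are skeptical.
credulity = [random.betavariate(2, 8) for _ in range(N)]

# Crude random friendship lists standing in for a real social graph.
friends = [random.sample(range(N), FRIENDS) for _ in range(N)]

# The disinformation campaign targets the most gullible users first.
seeds = sorted(range(N), key=lambda i: credulity[i], reverse=True)[:SEEDS]

believed = set(seeds)
frontier = list(seeds)
while frontier:               # each round, new believers expose their friends
    newly_convinced = []
    for person in frontier:
        for friend in friends[person]:
            if friend not in believed and random.random() < credulity[friend]:
                believed.add(friend)      # the friend believes and re-shares
                newly_convinced.append(friend)
    frontier = newly_convinced

print(f"Seeded {SEEDS} credulous users; the falsehood reached {len(believed)}.")
```

With these made-up numbers, fifty seeds routinely reach thousands of people, because each new believer exposes a dozen friends and even skeptics occasionally pass a lie along. That is the mechanism behind the warning above: ignorance makes you the entry point, not the whole story.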
Yet the way to avoid this isn’t some rote recommendation that we all simply “get smart.” That would be great, of course, but it is unlikely to happen and still wouldn’t solve most problems. Instead, if we want to stop being manipulated, we must change how we navigate the new media environment. In our daily lives, all of us must recognize that the intent of most online content is to subtly influence and manipulate. In response, we should practice a technique called “lateral thinking.” In a study of information consumption patterns, Stanford University researchers gauged three groups—college undergraduates, history PhDs, and professional fact-checkers—on how they evaluated the accuracy of online information. Surprisingly, both the undergraduates and the PhDs scored low. While certainly intelligent, they approached the information “vertically.” They stayed within a single worldview, parsing the content of only one source. As a result, they were “easily manipulated.”
By contrast, the fact-checkers didn’t just recognize online manipulation more often; they also detected it far more rapidly. The reason was that they approached the task “laterally,” leaping across multiple other websites as they made a determination of accuracy. As the Stanford team wrote, the fact-checkers “understood the Web as a maze filled with trap doors and blind alleys, where things are not always what they seem.” So they constantly linked to other locales and sources, “seeking context and perspective.” In short, they networked out to find the truth. The best way to navigate the internet is one that reflects the very structure of the internet itself.
There is nothing inherently technological about this approach. Indeed, it’s taught by one of the oldest and most widely shared narratives in human history: the parable of the blind men and the elephant. This story dates back to early Buddhist, Hindu, and Jain texts, some 2,500 years ago. It describes how a group of blind men, grasping at different parts of an elephant, imagine it to be many different things: a snake, a tree, a wall. In some versions of the story, the men fall to mortal combat as their disagreement widens. As a line in the Hindu Rigveda puts it, “Reality is one, though wise men speak of it variously.”
When in doubt, seek a second opinion—then a third, then a fourth. If you’re not in doubt, then you’re likely part of the problem!
What makes social media so different, and so powerful, is that it is a tool of mass communication whose connections run both ways. Every act on it is simultaneously personal and global. So in protecting ourselves online, we all, too, have broader responsibilities to protect others. Think of this obligation as akin to why you cover your mouth when you cough. You don’t do it because it directly protects you, but because it protects all those you come in contact with, and everyone whom they meet in turn. This ethic of responsibility makes us all safer in the end. It works the same way in social media.
That leads us to a final point as to how to handle a world of “likes” and lies gone viral online. Here again, to succeed in the digital future is to draw upon the lessons of the past, including some of the oldest recorded. Plato’s Republic, written around 380 BCE, is one of the foundational works of Western philosophy and politics. One of its most important insights is conveyed through “The Allegory of the Cave.” It tells the story of prisoners in a cave, who watch shadows dance across the wall. Knowing only that world, they think the shadows are reality, when actually they are just the reflections of a light they cannot see. (Note this ancient parallel to Zuckerberg’s fundamental notion that Facebook was “a mirror of what existed in real life.”)
The true lesson comes, though, when one prisoner escapes the cave. He sees real light for the first time, finally understanding the nature of his reality. Yet the prisoners inside the cave refuse to believe him. They are thus prisoners not just of their chains but also of their beliefs. They hold fast to the manufactured reality instead of opening up to the truth.
Indeed, it is notable that the ancient lessons of Plato’s cave are a core theme of one of the foundational movies of the internet age, The Matrix. In this modern reworking, it is computers that hide the true state of the world from humanity, with the internet allowing mass-scale manipulation and oppression. The Matrix came out in 1999, however, before social media had changed the web into its new form. Perhaps, then, the new matrix that binds and fools us today isn’t some machine-generated simulation plugged into our brains. It is just the way we view the world, filtered through the cracked mirror of social media.
But there may be something more. One of the underlying themes of Plato’s cave is that power turns on perception and choice. It shows that if people are unwilling to contemplate the world around them in its actuality, they can be easily manipulated. Yet they have only themselves to blame. They, rather than the “ruler,” possess the real power—the power to decide what to believe and what to tell others. So, too, in The Matrix, every person has a choice. You can pick a red pill (itself now an internet meme) that offers the truth. Or you can pick a blue pill, which allows you to “believe whatever you want to believe.”
Social media is extraordinarily powerful, but also easily accessible and pliable. Across it play out battles for not just every issue you care about, but for the future itself. Yet within this network, and in each of the conflicts on it, we all still have the power of choice. Not only do we determine what role we play, but we also influence what others know and do, shaping the outcomes of all these battles. In this new world, the same basic law applies to us all:
You are now what you share.
And through what you choose, you share who you truly are.