FOR CENTURIES, governments have been charged with the all-important task of verifying that we are who we say we are. They issue the primary documents—birth certificates, driver’s licenses, ID cards, and passports—that have long been the benchmark for authenticating identity. Until recently, the world’s five biggest managers of identity were the governments of the five biggest countries: China, India, the United States, Indonesia, and Brazil. Well, these days, China’s and India’s governments are still in the top five, but they are behind a newcomer in the identity management business—Facebook, which manages 1.5 billion identities—and are followed by two more newcomers: Google (500 million) and Twitter (320 million). Sick of managing multiple passwords, people are embracing the convenience of single-sign-on (SSO) access that these services provide to third-party web sites. But I find this trend toward concentrated identity authentication alarming. In a capitalist system that requires us to prove our identity to open bank accounts, travel, sign contracts, even enter buildings, this handful of private companies now wields profound power over all of us.
Facebook’s vast network reach comes from its having been the first online social media platform that encouraged users to present personas as close as possible to their “real” selves. That has given it an all-encompassing, worrisome power. Consider how Facebook controls and manipulates the newsfeeds that, for more than a billion people worldwide, are a daily source of important information. What once was a chronological flow of postings is now deliberately tailored to emphasize those that have better odds of going viral and serving advertisers’ needs, all coordinated by a predetermined software algorithm. The same algorithm will serve up unsolicited “memories” to us, repackaging albums of previously posted photos around whichever friend it believes we want to commemorate a “Facebook anniversary” with—sometimes oblivious to the fact that that friend or family member has died. It also prevents users from clicking through to the original source of an embedded YouTube or other video so as to boost native uploads to Facebook. That limits the interchange among social media platforms, constrains the flow dynamics of the Organism, and prevents creators from monetizing their work.
Worst of all, Facebook takes a Gestapo-like approach to what it deems to be offensive material. I discovered this when it abruptly turned off my account because of a joke I had sent to a friend in a private message. The Facebook Thought Police, watching my private conversations, determined that the image I sent to an Icelandic friend of a forty-seven-year-old man with a “micro penis”—an image I’d lifted from a Google-sourced medical textbook—was unacceptable. To be sure, the joke was ribald, but there was no harm done to anyone and it was no worse than millions of exchanges that take place between friends every day. And that’s the key point, of course: This was a private conversation. The upshot was that I was accused of “international child pornography” and my account was instantly shut down with a note that said little more than that there was nothing I could do and no one to contact. Period. Because Facebook controlled my login through its powerful role as an SSO identifier, this move meant that my Spotify stopped, I couldn’t use Uber, and my SoundCloud was shut off, forcing me to re-establish direct password credentials with each. In effect, my identity was on hold. For a time, I was the invisible man.
With a centrally controlled program that can’t be audited, Facebook creates its own, subjective version of the truth, an idealized picture of life that it deems to be the one we should experience. For a long time, the only button a user could click to express an opinion about a post was the iconic thumbs-up “Like.” The platform has since added “Love,” “Haha,” “Wow,” “Sad,” and “Angry” emojis, but there is still no “dislike.” Facebook is not to be a community of discord, it seems; we can be moved to tears or anger by the information that someone shares, but we’re not supposed to disagree with that person. (Arguably, Facebook leaves that job to its own censors.) And, boy, do we play ball with it. Most people use Facebook as a place solely for happy news and images. They create idealized personas: perfect lives, perfect children, happy marriages, and professional successes. Facebook-land is a Disneyland, one that we’ve obediently created ourselves.
Facebook’s goal is to ensure that as many eyeballs as possible are looking over its platform to justify its advertising charge-out rates. It’s in the business of selling “you” to advertisers. Because the algorithm knows your real identity, your expressions, your affinities, your private conversations, and your desires, you constitute a valuable packaged item. The same algorithm also figures out which content “sticks,” so that it can pitch that content to companies as something to attach their brands to. Sure, it will argue that its “community”-managed censorship policies—no nudity on Instagram, no “dick pic” jokes in Facebook messages—are imposed for our welfare, but its greatest interest is in selling positivity.
Debates around restricting content can be thorny. There’s an understandable public outcry when hate speech, direct threats, and graphic, violent images circulate on social media. So, companies like Facebook and Twitter feel a very strong public relations imperative to impose some kind of order. But if newspaper editors found it tough to grapple with the ethics of publishing their own journalists’ content, it’s many times harder for social media platforms to make black-and-white decisions about the incomparably wider array of uncontrolled user content.
We emailed Facebook requesting comment and received no reply—a non-response that means nothing in particular. To be fair, social media providers like Facebook are between a rock and a hard place. They often face a clamor from those who feel wronged by offensive content to remove it but then come under fire from free-speech advocates for taking heavy-handed actions. Platforms have responded by trying to devise consistent policies—in Facebook’s case, under its “community standards”—about when to remove content and when not. There is genuine effort there to resolve a difficult problem through standards, due process, and deliberation. Also, by publishing data on the requests for user data it gets from governments around the world, Facebook is clearly trying to shine some transparency on the decisions it makes to curb content. This is commendable. But the problem is that the real world is far more complicated than any rules and procedures for “fair” censorship could ever accommodate. Special interests have the ability, and incentive, to distort the consensus process to remove or protect content in ways that serve them alone. And as it turns out, there are not only frequent cases of overreach; there have also been glaring situations where content that arguably should have come down stayed up because the standards couldn’t address the specific context in which the subject matter was presented.
The case of troubled Marine Daniel Rey Wolfe, who used Facebook to document his suicide with a series of graphic self-portraits and comments, underscores the dilemma. Facebook initially didn’t comply with the requests of grieving fellow Marines that it take down those photos. Wolfe’s posts didn’t violate a letter-of-the-law reading of its community standards. By those rules, Facebook vows to remove any “promotion or encouragement of self-mutilation, eating disorders, or hard drug abuse,” along with “graphic images shared for sadistic effect or to celebrate or glorify violence.” But Wolfe was not technically “promoting” or “encouraging” self-violence; he was promising to actually carry it out, which meant that while he was still alive the proposal to remove the material contravened another policy: the priority of keeping the lines open so that friends and family can try to intervene. Once Wolfe had died, another policy kicked in: that immediate family members are solely responsible for choosing to close the deceased’s account or to “memorialize” it. When they take no formal action, memorialization becomes the default, which means the person’s feed stays intact and Facebook’s algorithm keeps obliviously sending out auto-generated reminders of their birthdays and other details. Without action from the next of kin, Facebook was policy-bound to keep Wolfe’s account open, replete with that disturbing record of his last hours—at least until the case got media attention and senior management reversed the decision.
It’s pretty hard not to sympathize with Wolfe’s family and friends. Who would want to confront all that? However, all the sanitized perfection that’s created by Facebook’s censoring algorithm and by users’ own self-censored “highlight reels” is the kind of thing that contributes to the angst of people like this Marine. Research by the University of Houston has shown that increased use of Facebook contributes to an uptick in depression via a phenomenon known as “social comparison.” Imagine how PTSD-suffering veterans returning from Afghanistan and Iraq struggle to assimilate back into peacetime American society when all they see on their Facebook feeds is an unrealistic Disneyland diametrically opposed to the hell they’ve come from.
We are by no means saying that society is served by encouraging hate speech or graphic depictions of violence—quite the opposite. We recognize that many people feel that these platforms should limit such material. However, as we discussed in the last chapter, the best way for our Social Organism to beat back pathogens of hate and violence is to confront them. That way we can absorb the antigens they deliver, and our shared social immune system can quash them with an antidote of love and compassion. Remember, it took a mass shooting and the wide dissemination of an image of a redneck with a Confederate flag to spawn the #TakeItDown movement. Would this have been possible if Facebook had Disney-Landed that same image?
This isn’t an idle question. One really worrying aspect of Facebook’s “community standards” approach toward protecting us from ourselves is that it inevitably bleeds into political censorship. In one of many documented cases, Facebook took down a link to OneWorld’s “Freedom for Palestine” video that UK band Coldplay had posted on its Facebook page after pro-Israeli groups reported the song’s URL as “abusive” to Facebook’s standards police. Many more controversial Facebook censorship decisions can be seen on the website onlinecensorship.org, which is managed by two nonprofits, the Electronic Frontier Foundation and Visualizing Impact. In just one snapshot, every item on the site’s weekly report on social media censorship for March 23, 2016, pertained to Facebook. It included reports that the company had done the following: It removed an Indian cartoon criticizing a government minister’s statement that “the concept of marital rape doesn’t exist in the Indian context”; it blocked a photo, posted on a private group’s page, of a woman holding her newborn child, still attached by the umbilical cord, on the grounds that it contained nudity and “sexually explicit imagery”; it suspended the page of a filmmaker reporting on anti-fracking activists; and it barred an indigenous Australian writer because she’d posted images of two topless Australian Aboriginal elder ladies engaged in a traditional ceremony. These behaviors from Facebook are schizophrenic at best.
Most of us accept the sound reasons why the First Amendment prevents the U.S. government from restricting our right to say things, even stupid or harmful things. Why don’t we extend that thinking to social media platforms and make our voices heard to these companies? Yes, they are private companies and legally permitted to control content as they see fit. But, as we’ve hopefully made very clear throughout this book, these entities now have a profound responsibility to the rest of society.
While cases of “dangerous speech” clearly exist—the Dangerous Speech Project is trying to systematically identify them on social media as a precursor to outbreaks of violence—we take a firm position that it is always best to err against censorship. We should demand that these new gods of social media wield their immense gatekeeping power with extreme caution and transparency. The algorithms with which they manipulate the presentation and delivery of our content aren’t accessible to outsiders. Given how much influence it has on our lives, we deserve to know how Facebook weights objects by type, source, and so forth. The company has beaten the old media companies at their own game. It has hijacked the direct relationship with customers and advertisers and imposed a toll-road mentality toward “boosting a post”—even as so many users complain that they’re not seeing their real friends or the things they care about. There must be transparency in these algorithms.
Why not just quit Facebook? you might say. The problem is that we are social animals, driven to where the social networks exist. Equally important, as we’ve noted, much of our digital identity is tied up in these platforms. As a society, it’s impossible to ignore Facebook, a fact that affords it almost unprecedented control over what we say and hear. Worse, it exerts this power in its own interest. Facebook takes the content that you and I produce, pays us nothing for it—a luxury traditional news organizations can only dream of—and then organizes, censors, and repackages it for sale to advertisers, whose fees it keeps for itself. As the saying popularized by security expert Bruce Schneier goes, “you are not Facebook’s customer, you are its product.” Facebook has no right to censor us. It is hurting the Social Organism, restricting our cultural evolution. It needs to stop.
Facebook isn’t the only guilty party here, by any stretch. Such abuse comes with the territory when centralized controllers of information are subject to shareholders’ demands for quarterly profit growth. So we also see Twitter veering toward censorship. Though the platform takes a more laissez-faire approach to sexual content and in letting people use avatar identities, Twitter will respond to requests to block content, most controversially from governments. In February 2016, it established the Twitter Trust and Safety Council made up of forty nonprofit advocacy groups such as the Jewish interest-focused Anti-Defamation League and the formerly named Gay and Lesbian Alliance Against Defamation, which now simply goes by its acronym, GLAAD. Twitter cites the noble goal of having the council help develop a “strategy to ensure that people feel safe expressing themselves on Twitter.” Yet one can imagine this ostensibly constructive concept of free but safe speech opening the door to abuse, especially if members of the council use their seats to control speech that affects their special interests.
The problem of centralized control over content sharing is not limited to social media platforms. It’s also evident among email service providers, which are now concentrated within two dominant entities: Google’s Gmail and Microsoft’s Outlook. People who use independent email servers complain that their messages will end up in the spam folders of their Gmail-using contacts. Google’s algorithm dominates all Web search, Google Chrome is the most popular browser, Google software runs the main navigation and mapping services, Google’s YouTube service monopolizes Internet video, and almost half the world’s smartphones use Google’s Android operating system. Add it all up and you have the makings of a very powerful gatekeeper of information. Google’s original motto might have been “Don’t be evil,” but how can we be sure that this profit-seeking company will always be respectful of its immense power?
One way to conceive of the power of these new media giants is to contemplate the data that passes through their servers, data about us, our habits, our interests, our messages to each other. Back in 2014, Google chairman Eric Schmidt said that at that time, every two days the Internet was generating and storing the same amount of data that human beings had accumulated between the dawn of civilization and 2003. And Google itself is party to a huge chunk of that. For Internet searches alone, Google processes an average of 3.5 million queries every minute. The honeypots of information that this traffic generates are enormous. Google, Facebook, Twitter, Tumblr, and others are sitting on hugely valuable troves of data. Can we trust them with it?
The irony is that in a world that is flattened and decentralized by near-universal Internet access, the prime machinery for using it as a social network is now run by highly concentrated, centralized units of power. Partly, that’s a function of capitalism; as firms become dominant, gaining market share and network effect, they seek to assure profitability in the face of competitive threats by consolidating ownership. This tendency raises concerns about pricing power and the risk that large incumbents will quash innovation by outsiders, which is why antitrust laws exist. But in the digital world, the stakes are different, because the big Internet players will do anything to earn the widest possible network effect, which typically means assuring free access to their platform. Instead of pricing, then, we should worry about the broader issues surrounding Google’s capacity to control a medium that has become the primary means through which society communicates.
We know from Edward Snowden’s revelations that the presence of just a handful of dominant firms in this industry meant the U.S. government could lean on them to secretly feed it information about their users. Facilitating Big Brother–like snooping in this way is naturally a major concern to everyone. But the bigger risk to the healthy growth and evolution of the Social Organism is not from government intervention but from the platforms’ own constraints on free expression.
How do we reconcile this picture of a small group of institutions wielding monopolistic control over our online interactions with the rosier story we’ve told throughout this book of an evolving, leaderless Social Organism that’s reached a higher plane of communication? Is this Facebook/Twitter/Google universe better or worse than the top-down hierarchies of the old media world? How does our biological analogy hold up against these centralized bodies? And will the evolution of the Social Organism combat the censorship problem or make it worse?
First, it’s important to define some terminology. Let’s not confuse “social media platform” with “social media.” Some people wrongly conflate the latter with the services that facilitate it, much as many assume that the “financial system” is composed of the banking intermediaries that manage it. These systems—in this case, the communications system—are made up of populations of interconnected human beings. The platforms and service providers are just the pipes and access points, the infrastructure over which human beings exchange information and value. Our concern should be with how well the pipes function. If that infrastructure—which includes a platform’s driving algorithm and policies—is poorly designed, or if it suffers from blockages, or experiences other forms of friction, then the human system that runs within it will suffer. Social media users are like the leaves of a tree; if you sever a limb, you cut off the flow of water and nutrients to those leaves, which prevents them from carrying out photosynthesis to spur growth.
So, while the centralization and self-interested censorship instincts of social media platforms create a suboptimal situation, that doesn’t negate the fact that the technology has delivered a much more open and decentralized architecture for communication. The unleashing of the core social media technology—initially the Internet protocol, followed by the various applications that led to offerings like Friendster and MySpace—set the Social Organism free from its dependence on old-style media. That established a different, organically managed distribution system. To my mind, social media services such as Facebook that excessively control information are acting like modern-day book-burners. Yet even they aren’t powerful enough to undermine the liberating effect of this newly distributed, holonic system, the foundation of our twenty-first-century Social Organism.
Let’s also be fair to Facebook and acknowledge that it only blocks a tiny sliver of the content contributed by its users. (Reorganizing their feeds is a different matter.) That’s nothing like traditional media organizations, for which deciding what news to exclude is arguably their most important task. Old media companies have no choice but to exclude information; the economics of an expensive production and distribution model demand it. But the combination of global, instantaneous Internet connectivity, low-cost bandwidth connections, virtually limitless cloud-based storage, and, most important, a billion-strong labor force working for free creates a completely different economic dynamic. Those are the features that define social media as an organism and that give rise to the massive outpouring of new content from hundreds of millions of people who’d never contributed to mass media before. That’s a decentralizing phenomenon and it’s happening regardless of Facebook’s interventions.
Second, let’s not forget that the platforms are themselves engaged in a survival-of-the-fittest competition. As we discussed in chapter 1, the industry has already gone through big waves of disruption, with SixDegrees, Friendster, and MySpace all wiped out along the way. There’s no guarantee that the same won’t happen to Facebook or Twitter.
Still, we shouldn’t be complacent. We need a set of consensually developed natural laws that protect the flow dynamics of the Social Organism. We can’t just hope that market pressures will quickly force Facebook and its peers to create more open platforms. These firms’ dominant market positions, giant cash stores, and instinct toward acquiring each new disruptive start-up mean that evolution is not going to happen automatically. I don’t want to wait a lifetime for a truly free social media system. These companies’ monopolistic control over information can have lasting, dangerous effects on society. Let’s not let that harm build up for too long.
What should we be demanding from these social media behemoths? I think it’s useful to look back at how we’ve historically dealt with the core issues at stake. I’ve come to believe we need a Thomas and Teddy solution. Thomas (Jefferson) gave us a constitutional commitment to the right to free expression. Teddy (Roosevelt) aggressively used the Sherman Antitrust Act to break up powerful corporations, establishing the public’s clear interest in restricting monopolies—which in the social media context gets us thinking about the dominant platforms, those of Twitter, Facebook, Tumblr, Snapchat, and their various competitors. My point here is not to call for federal prosecutions—probably the last thing we need—and in any case both the existing antitrust laws and the First Amendment have dubious legal application to a social media marketplace that is privately managed, highly dynamic, and subject to ongoing, rapid innovation. Rather, it is that the principles embodied in those legal foundations can guide us in prioritizing the development of an open, robust, and positively evolving social media environment.
The developers at Google and Facebook present themselves as the smartest guys in the room, the ones who know what’s best for the rest of us. There’s a real irony, and I would say societal risk, to the fact that a group of geeks who are often viewed as socially awkward are imposing their worldviews on us via the technology they build. But, like it or not, we’re dependent on these companies to encourage an open social media framework that’s inclusive, constructive, and positively reinforcing. Of course, they are not necessarily incentivized to build it, not when their shareholders are focused on short-term returns and a business model that’s built around exploiting free content and accumulating user data to sell to advertisers. Our challenge, then, is to promote a different business model, whether through market forces or by legal efforts, that incentivizes social media platforms to pare back their data accumulation and give users more control over how to present and monetize their content. We must build an ecosystem of users, content, and functionality that is open and provides pari passu rewards for each participant’s contributions.
Market forces may take care of some of it. Young people in particular are leaving Facebook in droves, or never joining it in the first place. Millennials and Generation Z kids are going to Snapchat, which by April 2016 was seeing 10 billion video views a day. Because of its privacy settings, Snapchat has less (though not zero) ability to censor. More important, it allows people to regain control over who sees their content and for how long. They’re creating communities around Snapchat that were impossible to create on Facebook after its ubiquitous reach gave visibility to their parents, their teachers, and their ex-boyfriends and girlfriends. Snapchat’s uncensored model of fleeting human exchanges has also spawned greater creativity around the communication of emotion. Its users are living in the moment together. And they do so without fear that actions or moods or words will come back to haunt them or be remembered in perpetuity. No one feels entirely comfortable looking back all the time at their bad hair phases, outbursts, drunkenness, or failures—Snapchat gives people a chance to avoid having to do that. While Facebook and Twitter have struggled with the limits of the “Like” button—how do you show solidarity with a friend whose parent has died with that button?—kids on Snapchat have figured out that their own faces and reactions are the medium. In this way they overcome the lack of subtle information in text. This kind of creativity provides more and richer nutrients to the Organism.
Most young people aren’t closing their Facebook accounts. They can’t, because they need those accounts to assert their digital identities at other sites. But they are using them much less frequently and in different ways. So, it’s not totally clear how much damage this migration will do to Facebook. Still, long-term, demographics are not in the incumbents’ favor. This is entirely consistent with the evolutionary forces we’ve outlined elsewhere in this book. If the Social Organism doesn’t get what it wants from a particular platform, it will move. Eventually, Facebook will itself have to evolve or face extinction.
Watertight censorship is impossible, in any case. Creative people will always find a way around it, whether the limits are imposed by a government or a private business. Just look at how Chinese “netizens” deal with the problem. Many use VPNs to get over the “Great Firewall of China” that blocks sites like Facebook, Twitter, and Instagram. On their own heavily controlled social media sites, such as Weibo, Chinese users have created a unique language of subtle plays on words to bypass the key word searches used by government censors.
Meanwhile, in the United States and Europe, concern over institutional control of our data is growing. Snowden’s revelations, though somewhat polarizing, have left many uncomfortable with breaches and manipulation of the private information that’s managed by gatekeeping entities. While the focus is more on data privacy than censorship concerns, the NSA-spying story is building consciousness of the dangers we face in centralizing control over communication, whether within government or private bodies. That’s prompting people to crowdsource efforts to expose these problems. The onlinecensorship.org website, for example, asks people to recount experiences of social media censorship, reviews them, and then compiles reports with which to lobby for more openness. Countless other individual sites and, ironically, Facebook pages and groups have also sprung up to protest perceived invasions of free speech.
One reason why these initiatives might succeed in pushing social media platforms toward more open strategies, even if it defies their Wall Street funders, is that all these firms have disruption and openness in their DNA—yes, even Facebook. Restrictive, proprietorial policies that curtail frictionless sharing and derivative-work creation inherently impede content evolution, which works against the platforms’ long-term interest in network growth. This is why the social media business community came out forcefully against the Hollywood- and big media–backed Stop Online Piracy Act (SOPA), which would have blacklisted sites facing copyright claims. It was a serious threat to Internet free expression, and Silicon Valley rose up to tell the world as much. While Facebook and Twitter didn’t join in the “blackout” strike—in which Google, Tumblr, Reddit, Wikipedia, and many other sites set their home pages to a black background and included links for people to register their protest at SOPA—the two big social media platforms did support the movement. This unified voice against censorship is the main reason why the hotly contested bill failed.
In fact, many of these firms have corporate mission statements acknowledging the need for an open Internet. In a way, that holds them to account. It makes them sensitive to criticism and lobbying efforts, and means that when an issue such as net neutrality raises its head, the hypocrisy of metering and throttling different types of content is exposed. Still, even if there is a natural check on their monopolizing instincts, the general public needs to think long and hard about how much power these institutions are amassing. These are very important, fundamental issues. We must educate ourselves on them if we are to shape a more secure, stable society for the future.
Trying to push Facebook and its ilk to do the right thing is one strategy. Building an alternative to them might be better. The good news is that powerful new technologies are emerging that could help audacious start-ups to do just that. Ad-blocking tools, which allow readers and viewers to block ads at will, could so severely challenge the revenue model of incumbent media providers as to render it unworkable. Meanwhile, there’s a new decentralizing opportunity that comes with digital currencies such as bitcoin and their underlying technology, known as the blockchain ledger. These make sending money across the Internet less costly and allow for more automated transactions; they also open the possibility of what is essentially an ownerless network infrastructure that frees content providers from dependence on the big social media platforms. This new era of decentralized media presages a world in which all users—be they individuals or corporate entities—have direct control of their content, breaking the advertising dependency that has fed the centralized model of personal data-mining and censorship. It’s potentially the next major evolutionary phase in the social media architecture and it could have as dramatic an impact on the Social Organism and its culture as did the emergence of the Internet protocol and the first social media platforms.
Consider the potentially devastating effect that ad blocking could have on the advertising model for financing content delivery. This third-party software strips out unwanted ads and the tracking beacons that go with them, ensuring that visitors to a website see only the content they want. A January 2016 survey by GlobalWebIndex found that 38 percent of Internet users had used ad blocking in the fourth quarter of 2015. With a host of new smartphone apps coming on the market with Apple’s blessing on the iPhone—seen as a ploy to undermine Google Android’s ad-driven model—the practice, which was previously confined mostly to desktop computing, is poised to grow. As you’d expect, surveys show usage rates by Millennials are considerably higher, well above 50 percent. The already embattled news industry is most at risk from this. Some news sites now block their own content when they detect someone using an ad blocker, betting that readers will disengage the ad blocker rather than miss out on news. That’s a big gamble. Similar strategies for combating revenue loss in the online era have proven very hard to impose. And it’s not just the old news industry that’s worried. Facebook itself captured the mood of the social media platforms in an SEC filing accompanying its third-quarter 2015 earnings: “[T]hese technologies have had an adverse effect on our financial results and, if such technologies continue to proliferate, in particular with respect to mobile platforms, our future financial results may be harmed.”
At stake is a business model that dates back to 1704, when the Boston News-Letter published, for a fee, a small announcement advertising the sale of a local property. From those humble beginnings a symbiotic relationship between the business community and the providers of information emerged, one that spawned much of the modern media world: advertising agencies; marketing as a profession; the very idea of a “brand”; and the explosive growth of mass media. It also fed the flawed presumption that the only way to finance the gathering and production of information was for business interests to subsidize it. There is not yet a working replacement for this model, which means the institutions we’ve relied upon to cover wars, probe the wrongdoings of politicians, or just update us on the state of our local government’s sanitation service may not be able to pay for that work. But the existence of this problem opens the door for innovators to come up with an alternative to break what had always been an unholy relationship between commercial interests and supposedly objective information providers.
Invasive banner ads and unwanted TV commercials have always acted as a kind of tax on our consumption of news and entertainment. And, despite the earnest attempts of quality publishing firms to create ethical boundaries between their work and that of their advertising divisions, the business of selling readers’ eyeballs to commercial interests has always created the perception of sellout journalism, if not the practice. In a way, society has always been “paying” for the content it receives. It’s just that it has done so in a hidden and unequally distributed way. So, what if we broke the paradigm and simply asked readers or viewers to pay directly for whatever content they receive?
Until recently, attempts to shift the financing burden onto the consumer have mostly failed. Only the most established media brands, such as The Wall Street Journal and The New York Times, have gained traction with paywall models, where readers pay a monthly subscription. Workarounds are usually pretty easy—information is cheap to replicate and share on the Internet—and if readers can’t find one, they’ll just abandon the pay site and shift to a free competitor. But if ad blocking forces content providers out of business, the hole in our information supply might prompt different behavior. What’s more, the arrival of another powerful new technology portends an opportunity for real disruption to the business models of both traditional media companies and the dominant social media platforms: bitcoin. Discussing this strange new digital currency—and more important, the powerful technology that underpins it—can lead people down rabbit holes. Nevertheless, it’s vital that we at least explore it in some depth; it is shaping up to become the core system of code with which digital society will govern itself in the future. It may well determine the direction that its evolution will take.
Bitcoin is widely misunderstood by the general public. Too many associations with drug deals, hacking attacks, and other problems have sullied the digital currency’s reputation. But despite that, the smart money is recognizing that its underlying technology, the blockchain, offers a revolutionary way to share value across the Internet.
It’s not easy to explain the blockchain in a few paragraphs. But we have to start somewhere, so we’ll try this one-sentence explanation for starters: The blockchain is a cryptographically protected, incorruptible ledger that’s distributed among a network of independently owned computers, all of which are tasked with verifying and updating its contents according to a set of software-regulated rules that incentivize them to act honestly and to agree on the veracity of the shared information. Got that? Never mind if it’s too hard to get your head around; the bottom line is that it resolves a five-hundred-year-old problem by which human beings, who were always unable to trust each other to fairly and honestly share information, depended on so-called “trusted third parties” to intermediate their exchanges of value. It is a huge evolutionary leap. Now, these exchanges—whether of money or some form of potentially monetizable digital asset, such as a video clip, a song, or a piece of unique art—can be done directly, peer to peer. Total strangers on either side of the world can exchange value without either side having to trust that the other isn’t digitally counterfeiting the money or secretly copying and sharing the song or artwork with someone else.
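For the technically curious, the chaining idea can be captured in a few lines of toy Python. This is a deliberately stripped-down sketch (real blockchains add proof-of-work, digital signatures, and network-wide consensus, all omitted here), but it shows the essential trick: every block’s hash commits to the entire history before it, so any tampering with the record exposes itself.

```python
import hashlib
import json

def block_hash(block_contents):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block_contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the prior block via its hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"prev_hash": prev, "transactions": transactions}
    chain.append({**contents, "hash": block_hash(contents)})

def verify(chain):
    """Recompute every hash; any edit to past data breaks the chain."""
    for i, block in enumerate(chain):
        contents = {"prev_hash": block["prev_hash"],
                    "transactions": block["transactions"]}
        if block["hash"] != block_hash(contents):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                         # True
chain[0]["transactions"][0]["amount"] = 500  # tamper with the history
print(verify(chain))                         # False: the ledger exposes the edit
```

In a real network, thousands of independently owned computers each hold a copy of this chain and run the equivalent of verify() on every update, which is what removes the need for a trusted central record-keeper.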
This groundbreaking concept is drawing thousands of new start-ups to explore a multitude of ways to exploit its disruptive effect. From creating peer-to-peer decentralized stock exchanges to managing the flow of electrons over solar microgrids, blockchain-inspired innovators are rethinking the way that information and value are shared. There’s still a lot of work to be done to build out the base infrastructure and it’s possible that much of the promise of sweeping change is more hype than reality. But the idea of what’s best described as a system of decentralized trust points to such a huge paradigm shift in the way society is governed that Silicon Valley pioneers like Marc Andreessen and LinkedIn’s Reid Hoffman talk about bitcoin and its blockchain infrastructure as Internet 2.0.
What does it mean for social media? Well, in the field of creative content, the blockchain is inspiring technologists and activist-minded artists to reimagine the underlying ownership and rights structure of the content on which the Social Organism depends for its lifeblood. Their proposed models would restructure the power relationships within this new communications architecture, further diminishing the capacity of centralizing institutions to control the interests and activities of the users. In empowering the Social Organism’s autonomous cells, it moves the system toward an even flatter, holonic structure. It would mean that we, the producers of content, would decide how it is used and monetized, not Facebook.
How would this happen? One way is by creating new methods for paying producers of content—in particular, by facilitating micropayments. Before digital money came along, it was prohibitively expensive to pay, say, a few cents for an article, because the inefficient, intermediary-dominated banking and credit card system couldn’t profitably process such small amounts. Now, innovators are looking at preloading special browser extensions with a store of digital currency to quietly work in the background paying small amounts for content under some pre-arranged deal with the publisher. Multiplied over billions of such transactions, digital-currency micropayments could offer a viable, non-advertising-dependent revenue stream for the media industry. It could create a healthier relationship between consumers of content, who are doing all they can to avoid unwanted ads, and the producers of it, who need to be compensated.
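To make the mechanics concrete, here is a hypothetical sketch of such a metering extension. The names (Wallet, read_article) and the two-cent price are invented for illustration; a real implementation would settle these balances over a blockchain payment channel rather than in memory.

```python
class Wallet:
    """Hypothetical pre-loaded store of digital currency for micropayments."""
    def __init__(self, balance_cents):
        self.balance_cents = balance_cents
        self.ledger = []  # (publisher, amount) records to settle later

    def pay(self, publisher, amount_cents):
        if amount_cents > self.balance_cents:
            raise RuntimeError("wallet empty: top up to keep reading")
        self.balance_cents -= amount_cents
        self.ledger.append((publisher, amount_cents))

def read_article(wallet, publisher, price_cents=2):
    """Called each time a page loads: pay the publisher a few cents
    under a pre-arranged deal, instead of being shown ads."""
    wallet.pay(publisher, price_cents)
    return "full article unlocked"

wallet = Wallet(balance_cents=500)  # $5.00 preloaded
read_article(wallet, "dailyherald.example")
read_article(wallet, "techweekly.example")
print(wallet.balance_cents)         # 496: two 2-cent reads, no ads shown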
Another key innovation is the blockchain’s capacity to let creators of digital content prove that they and only they are the owners of an original work forever. With that power, with the programmability of digital currency, and with the use of software-based legal agreements known as smart contracts, they can set controls over their content so that it is used as they dictate. By irrefutably identifying data in this way, the artist can turn their work into a true digital asset, versions of which can be bought, sold, and owned as distinct items—as we used to do with vinyl records and still do with physical books. In theory, it means that creators will no longer face the impossible task of tracking down and suing the countless people who wantonly copy and paste articles, post unattributed images, or share music files. Instead, they can treat content as a discrete digital asset that can be directly controlled through software and attached to automatic digital-currency payment contracts. You buy it, you own it. But you can’t replicate it if the smart contract is designed to stop you from doing so. This could overturn the long-standing licensing model for managing copyright in the digital environment. It’s a potential game-changer for the creative works industry and for how the underlying economy of the Social Organism functions.
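Here is a toy sketch of the registration idea, with an ordinary Python dictionary standing in for the kind of blockchain-based registry that services like Monegraph and Mediachain provide. The work’s cryptographic fingerprint is bound to its owner, and anyone can later check a claim of ownership against that record. All names are illustrative.

```python
import hashlib

registry = {}  # stand-in for an on-chain registry: fingerprint -> owner record

def fingerprint(work_bytes):
    """A work's unique digital fingerprint: a hash of its exact contents."""
    return hashlib.sha256(work_bytes).hexdigest()

def register(work_bytes, owner, date):
    """Record the first-ever claim on a work; later claims are rejected."""
    fp = fingerprint(work_bytes)
    if fp in registry:
        raise ValueError("already registered to " + registry[fp]["owner"])
    registry[fp] = {"owner": owner, "registered": date}
    return fp

def prove_ownership(work_bytes, claimed_owner):
    """Anyone can check a claim against the shared record."""
    record = registry.get(fingerprint(work_bytes))
    return record is not None and record["owner"] == claimed_owner

song = b"...master recording bytes..."
register(song, owner="alice", date="2016-03-01")
print(prove_ownership(song, "alice"))   # True
print(prove_ownership(song, "mallory")) # False: the record contradicts the claim
```

On a real blockchain, that registry entry is replicated and timestamped across the whole network, so no platform, and no later claimant, can quietly rewrite who owned the work first.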
Trailblazers in this field include award-winning singer-songwriter Imogen Heap, who turned her song “Tiny Human” into a digital asset released over the blockchain. In this case, Heap made the music free to use and sought bitcoin donations to cover it, but the point was to demonstrate the many possibilities of how the content could be creatively controlled, as well as to study how it might be used in the future. She has also launched an initiative to explore a wider application of this model that she has called Mycelia. (That’s a fitting name, given what I’ve come to learn about the fungi for which her project is named and which we discuss in the next and last chapter.) Meanwhile, start-ups like Monegraph and Mediachain are providing blockchain-based registration services that help artists track the usage of their work. And with an even larger mission, the Berklee College of Music in Boston is working with Michael’s outfit, the MIT Media Lab’s Digital Currency Initiative, along with a host of industry players such as record companies Universal Music, Sony, Warner, and BMG; online services like Spotify, Pandora, YouTube, and Netflix; and radio stations such as SiriusXM and WBUR of Boston, on a project called the Open Music Initiative. It’s aimed at deploying blockchain technology to redefine how music is used, shared, and paid for.
More explicitly within the realm of social media, companies are starting to see ways to use blockchain and digital currency technology to challenge the centrally managed platforms of Facebook, Vine, and others. Video upload service Reveal, for example, allows users to monetize their content via a digital currency called Reveal Coin that’s distributed according to how much they grow the network—by joining, by getting others to join, and by producing and spreading viral content. Prospective advertisers must buy Reveal Coins to pay for an ad, and those coins can only be purchased from the content providers who’ve accumulated them. Meanwhile, Taringa!, an Argentine social media provider, is now paying bitcoin to many of its 75 million users in return for content. Both systems are aimed at creating a positive feedback loop where content providers are incentivized to produce material that draws in more users and increases the payout for both the original creator and the platform.
We could also go one step further: an entirely decentralized social media platform, one that’s not controlled by any one person or company but by an ownerless, software-driven system known as a decentralized autonomous organization (DAO). The model would cede control over payments and digital assets to a set of “smart contract” software instructions, all regulated by the decentralized network of computers that validate all the transactions that occur on the blockchain. There’s no company per se that’s responsible for any of this, which gets around the need to trust a central management structure staffed by humans. That might sound pretty Jetsons-like and you may need to do some further reading to get your head around it. A few early malfunctions have raised questions about how ownerless DAOs can exist in the real world of laws.* Still, DAOs are already being built to manage everything from decentralized ride-sharing communities (which could force the taxi-disrupting company Uber into its own disruption) to nonprofit charities to a more decentralized version of Wikipedia. And one of the most exciting possibilities is a social media DAO. I see it as a universal agar to grow an unimaginably large array of new applications for the Social Organism.
The data that Twitter and Facebook tightly control and charge researchers like us for would now be controlled by those who produce it—you and me. And with encryption, distributed-computing-based security, and opt-in/opt-out clauses, we could release large amounts of it in metadata form to help further knowledge of how the Organism functions while still protecting our privacy. We could also use these transparent systems to design real-time audits that make it clear whether “likes,” “retweets,” and “shares” are coming from real participants, automated “bots,” or clusters of low-wage “paid likers” in places like Bangladesh (a country that accounts for 40 percent of the world’s paid likes). By taking away the centralized social media platforms’ gatekeeping power over money and rights to creative content, we will neutralize the last major constraining power over the collective output of the Organism.
This decentralized social media economy won’t arrive immediately. To be effective, blockchain services will not only need a more robust base infrastructure; they’ll also require network effects—which big players like Facebook and Twitter have already accumulated as a form of capital to protect their market share from those that don’t have it. But with so much systemic and technological change chipping away at the incumbents’ models, from the exodus of young customers to ad blocking, it’s not inconceivable that a tipping point can be reached that pushes centralized platforms into extinction. The lesson from Silicon Valley history is that communications technology is subject to a highly accelerated evolutionary process. A decentralized media environment could come sooner than you think. The blockchain is the evolutionary advance that makes the old model obsolete.
How will people behave in this environment? Once they have control over it, will individuals be as proprietorial and restrictive with their content as big media companies and brand owners are now? Maybe. The technology will allow them to be so if they choose. But as we’ve documented in our Seven Rules of Life, the Social Organism wants to be fed. And in a far more distributed content production environment, where the MGM studios, the News Corps., and the Viacoms are less powerful, that kind of approach will likely be competed away by those who enjoy the wider reach of an open-access approach. The value will come from having control over your material and your data, but if you want your content to compete for attention you’ll still have to let it go.
In this decentralized world, corporate brand managers might ultimately become no different from everyone else: They will, like us, simply compete for the attention of the Social Organism’s cells, trying to implant memes in the system to get a message across. Advertising will no longer be treated as something walled off and categorically separate from other information, as if it were a somewhat illegitimate content provider sneaking in the back door as an unwelcome guest. It will have as much chance at legitimacy as any other publisher, but it will need to earn it. To gain attention, it will compete for the Organism’s love. Lessons should be taken from those makers of promotional content who’ve already discovered how to go viral on their own terms. In cases like Oreo’s “You can still dunk in the dark” Super Bowl tweet or Dove’s “Beauty Sketches” campaign, the content didn’t embed itself into some other, more “legitimate” piece of news or entertainment material; it succeeded in gaining a widespread audience on its own terms. That’s the world of competitive marketing that I see companies facing in the future. In this world their goal will be to make content that matters, make it discoverable by those who matter, and seek the endorsement of those same people.
At the heart of these new ways of organizing society is the idea that software can govern human behavior without any one person or institution being able to unilaterally alter its code. It is a form of cyberspace-based community governance. The first benefit for the Social Organism is that if we can build platforms for communicating on these principles, we can significantly reduce their owners’ capacity to censor our content.
The concept of smart contracts based on blockchain technology is also important here. It helps us think of software code as a tool with which a community can automatically rule—in place of, say, a court—on when and how a set of agreements is to be executed once preordained conditions are met. A simple social media smart contract might respond to someone opening and embedding a particular artist’s work in a tweet, recognize that act as fulfillment of a condition of the underlying usage agreement, and then irrevocably transfer an amount from the user’s store of digital currency to the artist. Small pieces of software code act like a mathematical key to unlock a response. (Note: This matches how catalytic enzymes and other organic agents unleash biochemical reactions along an organism’s metabolic pathways when they connect via a snug lock-and-key structure with a molecular substrate.) From that basic foundation, highly complex, intricate smart contracts can be built, making software code a powerful tool for designing trustworthy governance systems in cyberspace. To quote Harvard law professor Lawrence Lessig, we’re creating a world in which “code is law.”
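Speculatively, that lock-and-key logic might reduce to something like the following sketch. Real smart contracts run as code replicated across a blockchain network, not on anyone’s laptop, and the names below (UsageContract, embed_in_post) are invented; the point is simply to show how a pre-agreed condition mechanically unlocks a payment.

```python
class UsageContract:
    """Sketch of a smart contract: a software 'lock' that releases payment
    when a pre-agreed usage condition supplies the matching 'key'."""
    def __init__(self, artist, price, allowed_uses):
        self.artist = artist
        self.price = price
        self.allowed_uses = allowed_uses  # e.g. {"embed_in_post"}

    def execute(self, user_wallet, artist_wallet, use):
        # The contract, not a court, rules on whether the condition is met.
        if use not in self.allowed_uses:
            raise PermissionError(f"'{use}' is not licensed by this contract")
        user_wallet["balance"] -= self.price   # irrevocable once triggered
        artist_wallet["balance"] += self.price
        return "use authorized and paid"

contract = UsageContract(artist="artist", price=0.10,
                         allowed_uses={"embed_in_post"})
user, artist = {"balance": 5.00}, {"balance": 0.00}
contract.execute(user, artist, "embed_in_post")  # embedding triggers payment
print(user["balance"], artist["balance"])        # 4.9 0.1
```

The artist never invoices anyone and the user never fills out a payment form; the act of using the work is itself the transaction.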
This approach offers a way for communities to “legislate” for appropriate behavior on peer-to-peer online networks without any government intervention in the process. Importantly, it can also transcend borders, which is critical given the transnational nature of Internet activity. In effect, we can create rules for the Social Organism to govern itself and then allow it to evolve within those rules. Primavera De Filippi, a colleague of Lessig’s at Harvard Law, goes so far as to say that the blockchain and the peer-to-peer models it engenders are tools for humans to mimic nature’s “cooperation” model, the same one that keeps termites working together in the interests of the whole. Even though they have no idea of their peers’ needs and have no instructions from a leader, agents in these natural systems pursue their self-interest yet do so in a way that’s in concert with the needs of the whole. In short, we need governing software like the blockchain to create the ideal, fairest, most vibrant, dynamic yet stable version of a holarchy. We want social media behavior to be bound by codes (legal, moral, and software codes) that create the optimal conditions for the Social Organism to thrive.
Let’s think of how we might design such a cyber-legal system. And as we do so, let’s keep in mind that if the code we design pushes the Organism toward unhealthy censorship we will be hindering its evolution and potentially encouraging the uglier elements of our culture. First, we might figure out how technology could be harnessed to incentivize positive behavior that offsets the more destructive aspects of social media—the trolling, the hate speech, the vigilante justice mobs. In contrast to the counterproductive strategy of censoring antisocial behavior, encouraging pro-social behavior can be conducive to the overall interests of the Social Organism.
Governments, nonprofits, even civic-minded companies and individuals could bake certain incentives into automated responses when someone says or does something positive in a social media setting. This need not be financial. Studies have shown that publicly commending people for doing good deeds or for simply acting as good citizens, whether in a digital environment or in the physical world, can have positive reinforcing effects. We’ve heard innovators in at least one U.S. city government float the idea of tweeting out digital “herograms” to call out good-deed doers over social media. Companies and charities can also leverage social media by encouraging communal participation in charitable work. While it’s hard not to feel a little cynical about self-serving promotional projects like Anheuser-Busch’s #ABGivesBack Thanksgiving campaign, in which the Budweiser maker promised to provide a meal every time that hashtag was used, their viral spread can have a positive, community-building effect.
One innovative approach for improving Social Organism governance could tap into another big area of tech and Internet culture: video and online gaming. The use of game mechanics and design—often referred to as gamification—to motivate people to behave in certain ways and achieve their goals is already trendy in business schools. Now people are recognizing that in a world in which communities are formed around networks of computers, coders can try to promote pro-social behavior by adjusting the governance rules inside the games people play. The potential is borne out by the numbers: 1.2 billion people play video games worldwide, according to a 2013 survey from Spil Games, with 700 million of those online. Since then the numbers are thought to have grown further, and the gender and age ratios of players, once dominated by young males, are now close to parity.
For evidence of video games’ growing impact on our social landscape, look no further than the remarkably successful crossover hit Pokémon Go, which uses a camera-interfacing smartphone app to send players chasing virtual prizes in the real world. The estimated 9.5 million smartphone-toting users actively playing it just one week after its mid-July 2016 debut came from all ages and dozens of countries. Phenomena like this are convincing many people that the gaming world offers a rich opportunity to shape humanity.
The New York–based Games for Change festival, which is now in its thirteenth year, has spawned an entire movement with a comprehensive website that fills up year-round with contributions and discussions. For its annual event, the festival has teamed up with the Tribeca Film Festival, giving it an audience of 275,000 people. In two packed days of keynotes and panels at its 2016 conference, the festival covered three tracks of interest: games for learning (including a summit of the same name backed by the U.S. Department of Education), games for promoting health and neuroprogramming, and games for civics and social impact. Speakers included the developers of Minecraft, which has energized the minds of tens of millions of kids worldwide, and Jenova Chen, cofounder of Thatgamecompany, whose website describes its mission as to “[c]reate timeless interactive entertainment that makes positive change to the human psyche worldwide.”
This pro-social experimentation isn’t confined to educational games. The most popular game of 2015, according to many polls, was Undertale, a quirky story of a young girl trying to escape a monster-infested underworld that’s presented in an unsophisticated 8-bit format. Its success has something to do with the way it directly challenges the moral fiber of the player, offering a choice of either the “genocide route” or the “pacifist route” in determining how to get past all the somewhat lovable monsters they encounter. Under the genocide route, the battles become increasingly unsatisfying as the monsters start avoiding the fights and a message is displayed in an increasingly small font size that says “But nobody came.” At this point, the previously cheery music is distorted into an ominous and spooky-sounding ambient track. What’s more, you can’t undo your past. A restart of the game doesn’t just take you back to the standard beginning as most games do. Rather, the permanent game file is remembered, leaving you with a reminder of your evil deeds. The entire experience is infused with a sense of ethical responsibility.
But how can we encourage positive behavior among real people online? A strategy adopted by League of Legends, the world’s most popular online game, offers some clues. As rapidly increasing numbers of women, people of color, and players of diverse sexual orientations joined the 67 million people who play League of Legends each month, clashes were occurring in an environment previously dominated by young white males. The game’s owner, Riot Games, didn’t want to adopt Facebook’s strategy of demanding users’ real names, recognizing that anonymous avatars help protect privacy and encourage inclusion among people who don’t want their sexual orientation, religious beliefs, or other affiliations exposed. So instead, management founded a “Tribunal,” a forum where players could create case files of chat logs documenting inappropriate behavior; anyone could discuss and vote on what language was unacceptable and what was positive for the community. A whopping 100 million votes were cast, the results demonstrating an overwhelming aversion to hate speech and homophobic slurs.
In the next step, Riot Games took key words and phrases from the Tribunal data and fed them into a machine-learning algorithm that automatically flagged unacceptable behavior and highlighted language that encouraged conflict resolution. According to a July 2015 op-ed by lead game designer Jeffrey Lin, “[a]s a result of these governance systems changing online cultural norms, incidences of homophobia, sexism and racism in League of Legends have fallen to a combined 2 percent of all games. Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty.” It turns out that much of the prior verbal abuse had come not from serial, irredeemable bigots but from people who were having a bad day. Given the right incentives, they checked themselves and bit their tongues. Note that this remarkable experiment in human conditioning did not involve censorship, did not ban people from playing, did not “out” them with their real names, and was built upon a democratic model of what the community wanted. It was a form of codified, positively reinforcing peer-group pressure, one that reminds us that in order to evolve we must learn from our mistakes. Imagine the potential this holds to encourage the positive evolution of the Social Organism.
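For readers who want a feel for the mechanics, here is a minimal sketch of how community-labeled chat logs could train an automated flagger of the kind Lin describes. The example messages, labels, and model choice are our own stand-ins; Riot has not published its production system in this form.

```python
# A minimal sketch: community verdicts on chat lines become training
# data for a simple text classifier that flags likely abuse.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (chat line, verdict) pairs, as if aggregated from
# Tribunal votes: 1 = judged unacceptable, 0 = acceptable.
chat_lines = [
    "gg wp everyone, great game",
    "nice save mid, that turned the fight",
    "uninstall the game you worthless feeder",
    "report this idiot, total trash player",
]
verdicts = [0, 0, 1, 1]

# Bag-of-words features plus a linear classifier: a deliberately
# simple baseline, not a claim about Riot's actual model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chat_lines, verdicts)

# New messages are scored; ones crossing a threshold could trigger a
# warning nudge or be queued for human review.
for line in ["well played, see you next game", "you are garbage, quit now"]:
    prob = model.predict_proba([line])[0][1]
    print(f"abuse probability {prob:.2f}: {line}")
```

The essential feature is that the labels come from the community’s own votes, so the machine enforces norms the players themselves chose rather than rules imposed from above.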
It’s no coincidence that we’ve arrived at gaming as a potential tool for encouraging the positive behavioral development of the Social Organism: game theory has long been part of how biological evolution is studied and understood. One relevant application appears in Richard Dawkins’s The Selfish Gene, a book that has shaped the thinking behind this one. Seeking to resolve the apparent contradiction between his argument that evolution is driven by “selfish” genes, which use organisms as “survival machines” to replicate, and the cooperation we observe in nature, Dawkins turned to the famous Prisoner’s Dilemma game to show how species-wide cooperation and empathy could evolve out of a process of otherwise apathetic self-interest.
The Prisoner’s Dilemma involves two prisoners from the same criminal gang held in separate cells. Unaware of the other’s testimony, each negotiates with a prosecutor who offers a bargain. If one prisoner defects and confesses while the other cooperates and refuses to confess, the cooperating prisoner serves three years while the defecting prisoner goes free. If both confess, they each serve two years. If both stay silent, they each serve one year on a lesser charge. Dawkins noted that in computer simulations of repeated rounds of the game, in which each computer learns from the results of prior plays, what starts out as an inclination toward defection soon shifts to constant, repeated cooperation, because the math makes cooperation the best outcome, not only for everyone but for each individual. This, Dawkins suggested, could be how the ongoing algorithm of evolution leads to altruism. So long as there are no other agents with an interest in getting organisms to defect—he suggested divorce lawyers as an example—the relentless math of evolution will drive populations toward cooperation. It’s not a very romantic interpretation of how love, compassion, and empathy emerged, but it does show that in the great game of life, human communities composed of autonomous, self-centered individuals can collectively evolve toward pursuing a common interest. Combine that with the idea of pro-social gamification strategies and we have a compelling new goal for designers of decentralized social media platforms: to devise rules that foster the evolution of a harmonious human culture.
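The arithmetic behind that claim is easy to check. Below is a minimal simulation of repeated rounds using the sentences just described, where lower totals are better; the two strategies and the round count are illustrative choices of ours, not Dawkins’s specific setup.

```python
# Iterated Prisoner's Dilemma with the sentences described above.
# "C" = cooperate (stay silent), "D" = defect (confess); values are
# years served, so each player wants to minimize their total.
SENTENCE = {
    ("C", "C"): 1,  # both stay silent: one year each on a lesser charge
    ("C", "D"): 3,  # I stay silent, partner confesses: I serve three years
    ("D", "C"): 0,  # I confess, partner stays silent: I go free
    ("D", "D"): 2,  # both confess: two years each
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy whatever the partner did last round.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        years_a += SENTENCE[(move_a, move_b)]
        years_b += SENTENCE[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

# Mutual reciprocators accumulate half the prison time of mutual
# defectors: 200 years each versus 400 over 200 rounds.
print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
print("defector vs defector:      ", play(always_defect, always_defect))
print("tit-for-tat vs defector:   ", play(tit_for_tat, always_defect))
```

Run over enough rounds, strategies that reciprocate cooperation come out ahead, which is precisely the dynamic Dawkins used to explain how altruism can emerge from selfish replicators.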
Of course, we don’t live solely in a digital world. In real life, we still need analog governance systems to manage society. We also need those same real-life systems—that is, national, state, and local governments—to create and maintain the right legal framework for a healthy digital society. Returning to the theme of how cultural evolution can be harmed by censorship and proprietorial control, we must also induce governments to implement policies that resist those instincts.
For one, governments must uphold the long-standing principle of net neutrality. This way we maintain a level playing field and prevent the deepening of the digital divide. For the time being, the battle to prevent Internet Service Providers, or ISPs, from using their gatekeeper positions to give privileged bandwidth access to the highest-paying customers has been won in the United States with President Obama’s support. But the corporate proponents of “traffic prioritization” and tiered network access may get a friendlier administration in the future. This net neutrality debate might one day be moot if blockchain-based micropayments create a paradigm in which everyone pays for data on a per-bit basis, but we’re definitely not there yet.
The prospect of an Internet hierarchy of privilege is not just a U.S. issue, nor one confined to traditional ISPs like cable and telecom companies. Facebook found itself embroiled in a cross-cultural brouhaha when it offered poorer Indian communities “Free Basics,” an Internet service that cost them no money but gave them access to only a limited selection of websites. Facebook argued that it was giving the poor an opportunity they wouldn’t otherwise have had. But Zuckerberg’s company would cherry-pick which websites Free Basics customers could see. To its credit, India’s techie community successfully beat back an avalanche of Facebook PR and got regulators to ban Free Basics on the grounds that it violated Indian rules prohibiting discriminatory tariffs in data services. It’s in everyone’s interest to expand the poor’s access to the Internet, but we need to provide them with the complete Internet, not one that Silicon Valley’s gods of social media have made in their likeness.
The reason I care so much about net neutrality among the Internet’s service providers is that they make up the foundational substrate upon which the living Social Organism depends. This substrate must be viewed as a public good. If we instead treat it as private property to be parceled out to the highest bidder, the meme pool from which innovative ideas spring will be distorted and diminished. We will not evolve as efficiently, and the old, retrograde order dominated by big media companies, cable providers, and telecoms will continue to hold us back.
A healthy Social Organism also depends on governments maintaining a covenant of freedom with their citizens. That means the civil rights embodied in documents such as the U.S. Bill of Rights and the U.N. Universal Declaration of Human Rights need to be reaffirmed, strengthened, in some cases expanded, and certainly updated for digital society. Given the transnational nature of social media, we need worldwide governmental commitment to the free speech principles embodied in the U.S. First Amendment. More specifically, citizens’ rights to privacy must be embraced. Ideally, we’d have international treaties barring government “backdoors” into databases of people’s private online lives. While that’s admittedly an impossible demand right now, given the tit-for-tat world of international espionage and the politics of fear in the age of terrorism, a good starting point would be to build greater public awareness of the vital role that encryption plays in protecting our rights. We also need high-level public debate about how these free speech and privacy principles apply to private entities such as social media platforms when they act as de facto governors of our communication standards.
Most important, we need an open-source approach toward the development, maintenance, and upgrading of the software that governs the major social media platforms. We need algorithmic transparency, so that people can understand how the information they provide is being curated, controlled, and used in the managing companies’ interests. Without that currently unavailable information, it’s impossible to design pro-social solutions to improve the functioning of the Social Organism. Should governments be the ones to set such standards? Maybe. Here might be one application of the “Teddy” approach to the biggest social media platforms, those we can label monopolies. Laws could be framed as a trade-off: if your platform has such sweeping influence on society, you must at least make parts of its governing software open for all to see. If Microsoft could be forced to open its operating system to competing browsers and applications, can we not force monopoly-like social media platforms to share details of their algorithmic information management?
Getting domestic and international policymakers to understand the Social Organism will be hard; getting them to devise policies for it will be harder still. But as its evolution proceeds, they must. A consensus is needed among policymakers, NGOs, academics, and, most important, the owners, managers, and developers of the core communications platforms on how best to promote positive social behavior in service of the common good. Thankfully, we have a great new mobilizing tool, in social media itself, to build that awareness—as we’ve documented with the many influential memes and hashtag movements cited in this book.
In this new world, powerful people like Facebook’s Zuckerberg and Google’s Sergey Brin and Larry Page have a great responsibility to encourage an open framework for social media that’s inclusive, transparent, constructive, and positively reinforcing. We would add that, in the long run, they also have a well-aligned interest in achieving that same goal—they just need to persuade their shareholders to be patient. A Social Organism that can continue to grow in a stable fashion serves everyone’s interests, especially those of the companies that service it.
Many will find it hard to trust that the Organism, a highly complex system with no command center, can organically develop its own checks and balances and create growth opportunities for all. It demands that every one of us, including the titans of social media, try to overcome our most selfish instincts. The purely self-interested node can make itself heard, but it will not ultimately thrive in an Organism whose holonic relationships are forged on mutual respect and inclusion. We must strive for that goal, forging an evolutionary path that allows the genius of human invention and artistry to coalesce into a collaborative global idea machine. This bold vision of the future is the subject of the last chapter.