CHAPTER 6
Reviving the Public Interest
Perhaps the most troubling aspect of the wide range of concerns that continue to emerge around the operation and use of social media platforms is the sense that, given the scale and scope at which these platforms operate, these problems may ultimately be unsolvable. As Siva Vaidhyanathan, author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, grimly put it in an interview with the Atlantic, “There is no reform…. So we are screwed.”1 While he may ultimately be right, this chapter begins from the premise that it is too soon to throw in the towel. Thus, in this chapter I review ongoing efforts and initiatives, and outline some general and some more specific media-governance proposals intended to bring a stronger public-interest orientation to the operation of social media platforms, in the hope that they will have at least some positive effect.
Platform Self-Governance
First, it is important to have a sense of the types of actions that the platforms have taken so far. The criticisms and controversies surrounding Facebook, Twitter, YouTube, and other social media platforms that have continued to mount in the wake of the 2016 election have certainly prompted a stronger public-interest orientation on the part of these companies. These platforms appear to be moving in the direction of “understand[ing] themselves as a new kind of media company, with obligations to protect the global public good.”2 These responses have included a variety of efforts to reduce the dissemination and consumption of disinformation, counteract the tendency toward filter bubbles, and better protect user data.
Some of these efforts have focused on undermining the ad-revenue support that fake news sites can receive from social media platforms. On this front, we have seen platforms such as Google and Facebook ban known fake-news-disseminating publishers from their ad networks, with Google extending its criteria for exclusion to sites that impersonate legitimate news sites.3
However, research has shown that fake news sites have migrated to other ad networks that are less stringent, and that advertisements designed to look like news stories are the most common type of ad format being utilized by these sites.4 Sites have also focused their efforts on blurring the line between satire and misinformation in order to meet the criteria for inclusion outlined by many ad networks.5
Another approach has been to better inform users about the sources and content available to them. For instance, Google, Facebook, and Twitter have all begun displaying “trust indicators”—standardized disclosures developed by a nonpartisan consortium of news organizations—in an effort to better inform platform users about individual outlets and thus make it easier for them to identify reliable, high-quality content.6 News sources that meet the established criteria for trustworthiness essentially get to display a badge saying so; the goal is that these badges will become an indicator of quality for news consumers. YouTube has offered workshops to young people on how to identify fake news and avoid filter bubbles.7 It has also begun offering what YouTube executives call “information cues,” displaying links to fact-based content (primarily Wikipedia entries) alongside conspiracy-theory videos.8 Twitter has employed a policy of user notification: it notified more than half a million users who followed, retweeted, or liked tweets disseminated by more than 50,000 automated accounts linked to the Russian government, informing them that they had been exposed to Russian misinformation during the 2016 election period.9 Initiatives such as these address the growing need for “tools of truth recognition” that operate “independent of the market in order for the market to be optimal.”10
However, such efforts to affect behavior by providing more information do not always work as planned. In the wake of the 2016 election, Facebook partnered with third-party fact-checkers to label individual stories as “disputed” in a user’s news feed if two separate fact-checking organizations independently rated the story as false. Users would not only see the disputed label accompanying a fact-checked story in their news feed, but would also receive a pop-up notification of the story’s disputed status before being able to share the story. They would also receive a notification if any story they shared was subsequently fact-checked as false. It is also important to emphasize that this system relied (at least in part) on users’ reporting individual stories as disputed to trigger the fact-checking process.
After a year, however, Facebook abandoned this labeling approach for a variety of reasons.11 Perhaps most importantly, the labeling by fact-checkers did not appear to be curbing the consumption or sharing of disinformation. In fact, in some instances the labeling was having the opposite effect. As Facebook researchers noted, “disputed flags could sometimes backfire.”12 According to one independent assessment, in some instances conservative news consumers became more proactive in their efforts to share “disputed” stories, in an effort to combat what they perceived as Facebook’s efforts to muffle conservative viewpoints.13
This reaction reflects long-standing problems related to content labeling in media, in which ratings systems sometimes facilitate behaviors that are contradictory to the system’s intent. For instance, the V-chip policy that Congress developed for U.S. television regulation was found to have what researchers labeled a “forbidden fruit effect.” In this case, boys in particular often used the program-rating information to more effectively locate adult content.14 As a result, their consumption of such content actually increased, because most parents were not bothering to utilize the V-chip’s program-blocking feature.15
In response to their evaluation of the problems with the “disputed” tag program, Facebook went in a different direction. The company reconfigured its Related Articles system, which was intended to make users aware of additional stories on a topic that had appeared in their news feed. In its new configuration, the platform placed links to articles on the same topic from other publishers (including third-party fact-checkers) below articles in a user’s news feed identified by fact-checkers as fake. Users who shared a news story subsequently labeled as false by a fact-checker, or who were about to share such a story, continued to receive notifications of the story’s disputed status. Users could continue to report stories as “false news” in order to trigger the fact-checking process.16 The goal is to diversify the sources of information a user consults, thereby breaking down filter bubbles and undermining the consumption and dissemination of fake news.17
A key question that emerged in the wake of the 2016 election was how to make algorithms better curators of news. That is, are there ways in which protections against filter bubbles, disinformation, and generally low-quality content can be incorporated into algorithmic design? Essentially, how can these algorithms better serve the public interest? Toward this end, Twitter has been working to improve its automated tools for identifying and suspending accounts identified with malicious actors and bots.18 YouTube has adjusted its search algorithm to prioritize “authoritative” sources in its search results.19 As previously noted, Facebook has integrated the results of third-party fact-checks into its News Feed algorithm, in an effort to reduce the distribution of stories identified as false,20 and began shrinking the size of links to content that fact-checkers have verified as false.21 News stories identified as false are generally not removed (only downranked in news feeds)22 unless they are judged to meet certain criteria, such as potentially leading to violence or containing false information regarding voting.23 As was discussed in chapter 5, Facebook has also begun adjusting its algorithm to prioritize more “trustworthy” sources. Whereas the mechanisms behind the YouTube algorithm adjustment to prioritize authoritative sources remain unclear,24 Facebook has described an approach based on a survey of a sample of Facebook users.
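To make the mechanics of such public-interest-oriented curation concrete, the sketch below shows one way a feed-ranking function could demote, rather than remove, stories flagged by fact-checkers while boosting sources with higher trust scores. This is a minimal illustration; all names, weights, and data structures are assumptions, and the platforms’ actual ranking systems are proprietary and vastly more complex.

```python
# Illustrative sketch of public-interest-aware feed ranking.
# All field names, weights, and scores are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    engagement_score: float   # predicted engagement (the conventional signal)
    source_trust: float       # 0.0-1.0, e.g., from user surveys or accreditation
    fact_checked_false: bool  # flagged false by independent fact-checkers

FALSE_STORY_PENALTY = 0.2     # demote, rather than delete, flagged stories
TRUST_WEIGHT = 0.5            # how strongly source trust shapes the ranking

def rank_feed(stories: list[Story]) -> list[Story]:
    """Order stories by engagement, adjusted for trust and fact-checks."""
    def score(s: Story) -> float:
        base = s.engagement_score * (1 + TRUST_WEIGHT * s.source_trust)
        # Stories rated false are downranked, not removed; removal is
        # reserved for separate criteria (e.g., incitement to violence).
        return base * FALSE_STORY_PENALTY if s.fact_checked_false else base
    return sorted(stories, key=score, reverse=True)
```

The multiplicative penalty mirrors the downrank-rather-than-remove policy described above: a flagged story loses most of its distribution but is not erased from the feed.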
The lead-up to the 2018 midterm elections saw social media platforms becoming increasingly aggressive in identifying and shutting down “inauthentic” accounts. In the first quarter of 2018, Facebook shut down 583 million fake accounts,25 many of which were focused on disseminating fake news and disinformation related to the 2018 election.26 Twitter similarly deleted tens of millions of fake accounts, including many engaged in coordinated dissemination of election-related disinformation;27 it deleted more than nine million accounts a week, on average, in the weeks approaching the 2018 election.28
The scale at which this policing is being conducted is staggering, and is well beyond what human screeners could accomplish without automated tools. These and other efforts will persist, in a process of trial and error, in what has been described as a never-ending game of “whack-a-mole” between these platforms and those actors trying to use them to undermine an informed citizenry.29
A number of app developers have also entered the fray, creating news apps and browser extensions designed to “burst” filter bubbles and diversify users’ social media news consumption. These apps track users’ news-consumption behaviors, evaluate where the news consumption is oriented in terms of the left-to-right political continuum, and, through story recommendations, seek to “nudge” users’ news consumption in whatever direction takes them outside of their ideological comfort zone.30 Such apps, however, require news consumers to take the affirmative steps of recognizing a partisan bias in their news consumption and deciding to do something to counteract it.
These efforts reflect the perspective put forth by former FCC chairman Tom Wheeler: “Algorithms got us into this situation. Algorithms must get us out.”31 However, if there is one lesson to be learned from the evolution of our algorithmic news ecosystem, it is that complete reliance on algorithmic systems would be a mistake. As Facebook’s chief AI scientist, Yann LeCun, has stated, “AI is part of the answer, but only part.”32 This is why the social media platforms have also dramatically increased their staffs devoted to identifying disinformation.33 For instance, between 2017 and 2018 Google sought to add ten thousand content-screening staff to evaluate content on YouTube.34 In addition, there are third-party providers such as NewsGuard, which offers a browser plug-in that rates more than 4,500 online news sources based on their trustworthiness, as determined by a team of journalists that evaluates each site.35
In the long run, the most important and effective actions that social media platforms take to address their shortcomings as news purveyors may be those actions that diminish their significance as news distributors.36 Any initiatives that, either intentionally or unintentionally, at least partially “put the horse back in the barn” may represent an improvement over the status quo. Any path back to a relationship between news consumers and news producers that is more pull than push, and less algorithmically mediated and hypertargeted, represents a step in the right direction. The key caveat here is that this process must not be one in which the presence of—and reliance on—legitimate news organizations in social media declines while the prevalence and reach of purveyors of disinformation on social media persists or grows.
There have been some high-profile ways in which social media platforms have retreated a bit from the provision of news. Facebook, for instance, has eliminated its Trending feature,37 and has adjusted its News Feed algorithm to slightly diminish the overall quantity of news in its News Feed.38 Facebook’s Trending feature became a political hot potato in the wake of accusations that editors for the feature suppressed conservative-leaning news stories. The announcement of the feature’s closure, however, was accompanied by an announcement that the company was developing a number of “future news experiences,”39 suggesting that Facebook’s long-term goal may not be to extract itself in any meaningful way from its position as a news intermediary.
One trend that may be partially a function of efforts by social media platforms to recalibrate their operation in the news ecosystem is that, for the first time in the past decade, reliance on social media for news appears to be declining. According to the 2018 Digital News Report from Oxford University’s Reuters Institute for the Study of Journalism, in a study of thirty-seven countries, growth in the use of social media to access news had leveled off or reversed.40 In the United States, for instance, reported reliance on Facebook as a news source dropped nearly ten percentage points from 2017 to 2018.41 Data from the online audience-measurement firm Chartbeat indicate that from January 2017 to July 2018, Facebook’s role in driving traffic to news sites declined by roughly 40 percent. These declines were more than offset by increases in direct access and search engine referrals.42
Of course, the horse is never going completely back into the barn. Even as social media usage for news declines, it is being offset, to some extent, by a growing reliance on social messaging apps like WhatsApp (owned by Facebook). WhatsApp already has a fake news problem that has been described as “epidemic.”43 And, as the Reuters Institute study illustrated, social media platforms serve as a point of origin for much of the news sharing that takes place on social messaging apps.44 Thus, an additional layer of complexity and interconnectivity is folded into the always-evolving digital news ecosystem, and yet another category of tech company may need to start thinking like—and being treated like—a media company.
IN PURSUIT OF ALGORITHMIC DIVERSITY
Underlying many of the efforts described here is the principle of diversity, and an embracing of the marketplace-of-ideas perspective that exposure to diverse sources and content facilitates better-informed decision-making. The diversity principle has a long tradition in media governance.45 Thus, as it begins to emerge as a more prominent principle of social media governance, it is worth looking at how it has been defined and put into practice within the traditional media realm.
There are three fundamental diversity components: source, content, and exposure. Source diversity refers to the distinguishing characteristics of the sources of news and information. This can include ownership or structural characteristics of media outlets, or even the demographic diversity of the personnel within outlets. Content diversity can take a variety of forms, including diversity of story topics and types, or of the political viewpoints represented. Greater diversity of sources has long been presumed to lead to a greater diversity of content. So, for instance, the FCC has regulated broadcast-station ownership in ways intended to increase the number of female and minority owners, under the presumption that these owners would bring different perspectives that would be reflected in the type of programming they offered. The empirical support for this presumed relationship, however, has not always been clear.46
Exposure diversity refers to the extent to which an individual is actually exposed to a diversity of sources or content. From a media-governance standpoint, historically, a much greater emphasis has been placed on source and content diversity than on exposure diversity.47 Policy makers have focused their efforts on enhancing the diversity of sources and content available to media consumers, under the logic that the more choices available to media users, the more they will take advantage of them, and thus that greater availability might lead to greater diversity of exposure. This approach also reflects the fact that regulating exposure diversity directly would represent a potentially unacceptable intrusion upon individual freedoms and would be politically fraught.48 Thus, policy makers have pursued source and content diversity as a means of facilitating exposure diversity, through mechanisms such as ownership regulations and content regulations such as the Fairness Doctrine.
The irony here is that an unintended consequence of diversifying sources and content is that, at the individual level, exposure diversity may very well diminish.49 A substantial body of research across both old and new media contexts has shown that media consumers tend to respond to a greater diversity of content offerings by consuming even more of their preferred content types.50 This is the essence of the filter-bubble phenomenon—lots of people constructing their own distinct versions of the Daily Me, with lots of homogeneous content and sources that reinforce their established preferences and worldviews.
The dynamics of social media, however, provide an opportunity for these platforms to be proactive with regard to exposure diversity, in ways that traditional media could not. The combination of push and personalization that characterizes social media as a platform for the delivery of news means that social media platforms know more about what needs to be done to diversify a user’s media exposure and have the capacity, through algorithmic personalization, to “nudge”51 users in more diverse directions. That is, the extensive data that platforms have about their users’ news consumption facilitates the identification of gaps that might need to be filled from a source and/or content diversity standpoint. This kind of information was largely unavailable to previous generations of content providers. This ability to identify diversity needs, so to speak, is now coupled with the ability to deliver that content directly to a user’s news feed.
Of course, algorithmically diversifying a user’s news feed runs completely counter to what we might call the personalization principle, which has been so central in the history of social media. While there has been a fair bit of research into incorporating diversity principles into algorithmic design, applications of this approach in the realms of content-recommendation systems and news-feed curation have been quite scarce.52 Personalization to this point has largely meant providing users with content similar to what they (or their social network) have already consumed, given that more of this type of content generally leads to greater audience engagement. What personalization has not meant is providing users with content that is markedly different from what they have already consumed. This approach still represents a form of personalization, in that the process of algorithmic curation reflects a set of decisions made in response to a user’s exhibited preferences. In a diversity-enhancing approach to personalization, however, algorithms would process and act on this information in a very different way.
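One way to operationalize such diversity-enhancing personalization, drawing on the recommender-systems research noted above, is a greedy re-ranking that trades relevance against redundancy, in the spirit of maximal marginal relevance. The sketch below assumes hypothetical relevance and similarity functions; it is one possible approach, not any platform’s actual method.

```python
# Sketch of diversity-aware re-ranking in the spirit of maximal
# marginal relevance (MMR). The relevance and similarity functions
# are assumed inputs; lam controls the relevance/diversity trade-off.

def rerank_with_diversity(items, relevance, similarity, lam=0.7, k=10):
    """Greedily select k items, balancing relevance against redundancy.

    items: candidate story IDs; relevance: id -> float;
    similarity: (id, id) -> float in [0, 1]; lam: weight on relevance.
    """
    selected = []
    pool = list(items)
    while pool and len(selected) < k:
        def mmr_score(i):
            # Redundancy: how similar this item is to anything already chosen.
            redundancy = max((similarity(i, j) for j in selected), default=0.0)
            return lam * relevance(i) - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```

Setting lam near 1 reproduces conventional preference-driven personalization; lowering it shifts weight toward exposure diversity, which is precisely the lever a diversity-enhancing approach to personalization would turn.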
Would such an approach be a kind of algorithmic paternalism? Absolutely. But then again, social media platforms have been engaging in some degree of algorithmic paternalism ever since they began filtering obscenity and hate speech and abandoned reverse-chronological news feeds in favor of algorithmically curated feeds. To the extent that every algorithmic criterion represents a choice made by coders that is intended to structure and influence our own decision-making, it becomes difficult to disentangle the notion of paternalism from the very idea of algorithmic curation. From this standpoint, algorithmic paternalism grounded in public-interest principles such as diversity, rather than personalization oriented around serving demonstrated preferences, can be seen as a more socially responsible approach to this core functionality.53 Our understanding of the effects of social media on the digital news ecosystem has shown us that we need to act upon opportunities to algorithmically reassert journalism’s role in providing us with what we need rather than what we want.
Needless to say, this is far from a straightforward task. Let’s assume that, from a public-interest standpoint, diversifying a social media user’s exposure is fundamentally more beneficial to democracy than catering to, and facilitating, a narrow media diet. If so, then the next question becomes how to do this well.54 On this front, one thing platforms need to resist is what media scholar Sandra Braman has described as the “fetishization” of diversity—the tendency to value diversity in whatever form it might be measured, and to fail to take into consideration any limits to its value.55
Translating this concern to the context of news and journalism on social media highlights a couple of points. First, the process of facilitating diversity of exposure must not become fixated on the dichotomy of facilitating exposure to both liberal and conservative perspectives. Such an approach can lead to the creation of false equivalencies—situations in which two opposing perspectives are given equivalent status even when objective evaluation would overwhelmingly recognize that one perspective has greater validity than the other (think, for instance, of the cigarettes and cancer examples discussed in chapter 3’s overview of the Fairness Doctrine). Social media must not fall into the Fairness Doctrine trap, or the trap that some critics of American journalism say the mainstream news media fell into in their efforts to provide “balanced” coverage of the candidates in the 2016 presidential campaign.56 Objectivity and balance are not the same thing, though these two concepts often get conflated in discussions of the practice of journalism.
Fixating on the liberal-conservative continuum in any algorithmic mechanisms for enhancing exposure diversity in relation to journalism represents a fundamental mischaracterization of the functions of journalism. A core function of journalism is to provide factual information that facilitates informed decision-making.57 As media policy advocate Mark Cooper has stated, “the core concept of [journalism’s] monitorial role involves the journalist serving as a neutral watchdog, rather than a partisan participant, holding social, economic, and political actors to account by presenting facts rather than advocating positions and offering opinions.”58 Today, however, a growing chorus of critics contend that the romanticized notion of the objective journalist may have been more of an ideal type than a reality, and that the pretense of objective journalism should be abandoned.59 Certainly, the contemporary journalism landscape is one in which traditional, more objective forms of journalism seem to be getting displaced by more overtly partisan approaches. However, unless we are ready to give up on the notion of objective facts,60 we need to maintain a recognition that conveying factual information in order to facilitate informed decision-making is a vital element in the functioning of journalism in a democracy, and that this functionality does not mesh well with partisan journalism. Ultimately, to paraphrase Winston Churchill,61 objectivity may be the worst form of journalism, except for all the others.
For this reason, the curation of a diverse range of sources and content options must incorporate other diversity-relevant criteria beyond ideological orientation. These might include ownership and personnel characteristics, business models (e.g., commercial/noncommercial), and story type. To boil the notion of diversity down to the liberal-conservative continuum runs counter to the very idea of diversity.
Any effort to operationalize diversity effectively must be coupled with curation based on criteria related to trustworthiness and credibility. The curation of hyperpartisan news sites from both ends of the political spectrum should not represent the essence of algorithmically facilitated exposure diversity. Under such an approach, the consumption of a “balanced” diet of hyperpartisan right- and left-wing news sources does not really lead to the ideal of an informed citizen, given that such news sources are likely to be lacking in the relevant factual information (see chapter 3). Thus, efforts to diversify exposure should happen in tandem with efforts to prioritize journalistic authority, credibility, and, yes, objectivity.
RELEGITIMIZING JOURNALISM
This point suggests a related—though certainly controversial—direction that these platforms could pursue in their efforts to rehabilitate their position in the news ecosystem. These platforms are in a unique position to confer journalistic authority on deserving news organizations. The idea of social media platforms engaging in any kind of accreditation process for news organizations has been described as a journalistic “third rail.”62 However, as Campbell Brown, Facebook’s head of news partnerships, has stated, “fake news may be pushing us into a world where we have to verify news organizations through quality signals or other means.”63 As has been discussed, over the past few years, social media platforms have been doing much more in this regard, evaluating the credibility of individual sources and diminishing the prominence of their content if they fail to meet certain standards. Increasingly, noncredible sources are removed outright.64 Continued efforts in this direction could create a much more privileged status for legitimate journalism within the social media ecosystem.
Of course, many would object to social media platforms’ engaging in this kind of more aggressive, evaluative assertion of their gatekeeping power. Indeed, the better approach might be for the journalism industry to move toward a model of self-accreditation, the results of which could dictate the gatekeeping decisions of social media platforms. Nonaccredited news sources would be ineligible for distribution on social media. Legal scholars Anna Gonzalez and David Schulz have put forth such a proposal, which would focus on a few fixed accreditation standards, such as (a) being a “generator of original content [that ascribes to] universal principles that define the goals and ethical practices of good journalism”; (b) “commitment to a generally reliable discipline of verification”; and (c) articulating and publishing “standard practices, which must advance the universal general principles and be considered reasonably rigorous by similarly situated news outlets.”65
The news industry in the United States has never had a broadly encompassing accreditation system, an absence that has proven increasingly damaging as media technology has evolved. There has been perhaps nothing more damaging to the institution of journalism than the way that the Internet, social media, and user-generated content gave rise to the mantra that now “everyone is a journalist.”66 This democratization of the practice of journalism was valorized in a way that fundamentally undermined the value of professional journalists. The devaluation of journalistic authority can be seen as playing a central role in the far-too-undifferentiated way in which news has been presented, shared, and consumed on social media. Even the term news feed as applied to one’s social media feed fundamentally undermines any reasonably stringent notion of what constitutes news.
Over the past decade, it has become clear that the Internet and social media have not made everyone a journalist any more than they have made everyone a lawyer, or everyone a neurosurgeon. The pendulum needs to swing the other way, with the reestablishment and recognition of clear lines of distinction separating legitimate news and journalists from the vast oceans of content that might contain some superficial characteristics of journalism. The various technological changes that have, to this point, largely served to undermine journalistic authority need to be used now to rebuild it—to identify legitimate, professional journalism and elevate it to a privileged status within the operation of social media platforms.
Fortunately, this is already starting to happen. For instance, the journalistic subfield of fact-checking has adopted a rigorous accreditation process, along with accompanying accreditation criteria.67 The International Fact-Checking Network, hosted by the Poynter Institute, issues the accreditation and oversees the accreditation process, which is conducted by an external panel. This panel assesses a fact-checking organization according to criteria laid out in the Network’s Code of Principles.68 These criteria include nonpartisanship and fairness, transparency of funding and organization, and transparency of sources and methodology.69 Facebook utilizes this accreditation when deciding whether to include a fact-checking organization in the network of fact-checkers that it relies upon to identify fake news stories.70 It may be time to broaden the scope of this model.
Such a model would certainly diminish the quantity of news that circulates on social media, and probably the consumption of news through social media as well. If that is a by-product of this proposed initiative, so be it. Once again, the underlying assumption here is that any actions directed at reconfiguring the dynamics of socially mediated news that also (intentionally or unintentionally) discourage reliance on social media for news represent a multipronged solution to the problem. Improving the performance of social media platforms as purveyors of news and diminishing the importance of social media platforms as purveyors of news are both reasonable paths forward.
IMPLICATIONS
Many of the ongoing and proposed efforts described so far move social media platforms further away from pure personalization tools and closer to news media in the traditional sense. The exercise of greater editorial authority raises legitimate concerns, however, about concentration of gatekeeping power in the hands of relatively few platforms.
Any proposal that involves social media platforms more aggressively and subjectively utilizing their gatekeeping position and systems of algorithmic curation to provide the most “authoritative,” “trustworthy,” or “diverse” content raises the specter of further empowering already powerful social media bottlenecks. The irony in this scenario is the extent to which it reflects a transition back toward the limited number of powerful gatekeepers that characterized the prefragmentation mass media era, but in a technological context in which many of the barriers to entry characteristic of that era are no longer present. The mass media era was accompanied by critiques about concentration of ownership and the accompanying systemic homogeneity of viewpoints.71 In the era of three dominant broadcast networks, Walter Cronkite’s authoritative sign-off, “And that’s the way it is,” was seen by many critics at the time as reflecting an unacceptable level of cultural hegemony, in which the “mainstream” media could easily stifle and exclude alternative viewpoints.72 These critiques gave rise to concerns about the production and influence of propaganda that are similar to the concerns that underlie the fake-news and filter-bubble scenarios we face now.73 Given the extent to which different technological contexts seem to lead to surprisingly similar outcomes, one is tempted to conclude that a media ecosystem composed of a fairly limited number of powerful gatekeepers is an inevitability, borne of larger institutional and economic forces, as well as innate audience behavior tendencies.74
Fortunately, from a journalistic standpoint, the mass media era of few, powerful gatekeepers also facilitated a stronger public-service ethos than has existed since technological change brought increased fragmentation and competition, and an associated need for news organizations to prioritize audience and revenue maximization over public service.75 We do not want to fall into the trap of overly romanticizing the past, but it is worth remembering that the news divisions of those three dominant broadcast networks ran deficits of millions of dollars a year, secure that their losses would be subsidized by the tremendous profits of the entertainment divisions.76 Of course, within some media sectors (e.g., broadcasting), this public-service ethos could be attributed, at least in part, to a government-imposed public-interest regulatory framework. But before exploring this direction in relation to social media, it is worth considering self-regulatory approaches.
WHAT SOCIAL MEDIA CAN LEARN FROM AUDIENCE MEASUREMENT
Any proposal involving a more proactive role for platforms in the curation of news faces the inevitable—and legitimate—concern about whether this kind of authority and judgment should be wielded by a select few platforms, whose personnel tend to have inadequate interest, background, or training in making these types of decisions. What we have at this point is perhaps best described as platform unilateralism, in which individual platforms, each with tremendous global reach and impact, individually and independently make governance decisions with wide-ranging implications for journalism, an informed citizenry, and democracy. Here is where the notion of multistakeholder governance can potentially come into play. Algorithmic design in the public interest should involve a wider range of stakeholders.77 We have seen some movement in this direction. In November 2018, Mark Zuckerberg announced plans to create an independent body that would hear appeals about the platform’s content-moderation decisions.78
Self-regulation has a long tradition in the media sector, with the motion picture, music, television, and videogame industries all adopting self-imposed and -designed content ratings systems to restrict children’s access to adult content. These systems all arose in response to threats of direct government regulation.79 These threats typically took the form of congressional hearings or inquiries—classic instances of what is often called “regulation by raised eyebrow.”80
The issues of concern here are far more complex and multifaceted than concerns about children being exposed to adult content (though this is also a problem that confronts social media platforms).81 There is, however, another media-related self-regulatory context that may better reflect the nature of the concerns surrounding social media, and thus may provide some useful guidance: the audience-measurement industry.
Whether for television, radio, online, or print, increasingly interconnected audience-measurement systems provide content providers and advertisers with data on who is consuming what, and on the various demographic (and in some cases behavioral and psychographic) characteristics of these audiences. The dynamics of audience measurement are similar to those of social media in a number of ways.
Audience measurement suffers from the same absence of robust competition that characterizes the social media sector. In audience measurement, the Nielsen Company has established itself in a dominant position, as the leading provider of television, radio, and online audience measurement in the United States and in many other nations around the world.82 Even in countries where Nielsen is not dominant, the audience-measurement industry has exhibited a very strong tendency toward monopoly. Some analysts have contended that audience measurement is a natural monopoly83—an argument that has been made within the context of social media platforms as well (see chapter 4).84
The similarities do not end there. The actions of audience-measurement firms can, like those of social media platforms, have significant social repercussions. If an audience-measurement system underrepresents a certain demographic group in its methodology, the amount of content produced for that group’s needs/interests may diminish, or the outlets serving that group may suffer economically. For instance, there have been cases in which shifts in radio or television audience-measurement methodology have led to sudden and dramatic declines in measured Hispanic or African American audiences. In this way, issues of accuracy, fairness, and media diversity are tied up in the dynamics of audience measurement.85 Therefore, there is a prominent public-interest dimension to audience ratings and the work of audience-measurement firms.
Both audience-measurement systems and social media platforms are susceptible to manipulation by third parties. In audience measurement, ratings systems need to insulate themselves from various forms of ratings distortion. The integrity of television and radio audience ratings can be compromised, for instance, if members of the measurement sample are affiliated with, or known to, any of the media outlets being measured, and are thus subject to influence or manipulation. Radio and television programmers have also been known to try to “hype” ratings during measurement periods with actions such as contests or sweepstakes intended to temporarily boost audience sizes beyond their normal levels.86 In online audience measurement, bots are a common tool for creating inflated audience estimates and thus must be policed vigilantly.87
These scenarios are not unlike those confronting social media platforms, which face constant efforts by third parties seeking to “game” their news-feed algorithms in order to achieve higher placement and wider distribution of their content for economic or political gain.88 An entire industry has arisen around “optimizing” content for social media curation algorithms. At the same time, platforms are constantly adjusting their algorithms in ways designed to diminish the prominence of clickbait and other forms of low-quality content, which often are produced with an eye toward exploiting what is known about the algorithms’ ranking criteria and how these criteria affect the performance of various types of content.89
It is also worth noting the somewhat ambiguous First Amendment status of both the ratings data produced by audience-measurement firms and the algorithms produced by social media firms. In both cases, there remains room for debate as to whether ratings data and algorithms represent forms of speech that are deserving of full First Amendment protection. The uncertainty in both cases derives from the broader (and still contentious) question of whether data represent a form of speech eligible for First Amendment protection.90 Data are the primary output of audience-measurement systems. And in the case of social media algorithms, data are the key input that generates their content-curation outputs. Thus, the two contexts represent somewhat different instances of the intersection of data and speech, and the First Amendment uncertainty that can arise.91 This issue is of particular relevance to discussions of possible government regulation in these spheres, given the extent to which the First Amendment provides substantial protections against government intervention and increases the likelihood of a self-regulatory model taking hold in speech-related contexts.
Finally, measurement firms and social media platforms share an interest in keeping the details of their methodology or algorithms proprietary. This is to prevent manipulation, as well as for competitive reasons. Therefore, it is not surprising that audience-measurement systems, like social media algorithms, have frequently been characterized as “black boxes.”92
So far, at least in the United States, despite the lack of competition, we have not seen direct government regulation of the audience-measurement industry. What we have seen instead is a form of self-regulation, instituted through the establishment of an organization known as the Media Rating Council (MRC). The Media Rating Council was created in the 1960s (when it was initially called the Broadcast Rating Council). In keeping with the other self-regulatory structures in the media sector, the impetus for the Media Rating Council came from a series of congressional hearings investigating the accuracy and reliability of television and radio audience-measurement systems.93 The MRC’s membership represents a cross section of the media industries and associated stakeholders, including media companies in television, radio, print, and online media, as well as advertising agencies, advertisers, and media buyers.94
The MRC has two primary responsibilities: standard setting and accreditation. In the standard-setting realm, the MRC establishes and maintains minimum standards pertaining to the quality and integrity of the process of audience measurement. Under this heading, the MRC outlines minimum methodological standards related to issues such as sample recruitment, training of personnel, and data processing. The MRC also establishes and maintains disclosure standards specifying which methodological details must be made available to the customers of an audience-measurement service. Included in this requirement is that all measurement services must disclose “all omissions, errors, and biases known to the ratings service which may exert a significant effect on the findings shown in the report.”95 Measurement firms must disclose substantial amounts of methodological detail related to sampling procedures and weighting of data. They must also disclose whether any of the services they offer have not been accredited by the MRC.
The accreditation process is the second key aspect of the MRC’s role. The MRC conducts confidential audits of audience-measurement systems to certify that they are meeting minimum standards of methodological rigor and accuracy. While the audit decision is made public, key methodological details of the audience-measurement service remain confidential. Leaks of audit details have been incredibly rare. As with audience-measurement systems, we need to recognize the practical limits of mandated or voluntary transparency in relation to the operation of social media algorithms.96
In establishing and applying standards of accuracy, reliability, and rigor to the increasingly complex process of audience measurement, the MRC offers a potentially useful template for self-regulation for social media. In some ways, given the spate of congressional hearings focusing on issues such as the role of social media platforms in the dissemination of fake news in connection with the 2016 election, their data-gathering and data-sharing practices, and claims of suppression of conservative viewpoints, one could argue that media history tells us that some sort of self-regulatory initiative is inevitable. Then again, one recurring theme of this book has been the extent to which social media companies do not see themselves as media companies, and perhaps they will not respond to congressional raised eyebrows in the same way that traditional media companies have in the past.
However, one could imagine the establishment of a self-regulatory body that addresses issues similar to those addressed by the MRC, with a similar multistakeholder construction. This hypothetical Social Media Council could establish standards regarding the gathering and sharing of user data. It could create disclosure standards for the algorithmic details that need to be made publicly available (similar to the MRC’s disclosure requirements). Likewise, this council could set the standards for a public-interest component of news-curation algorithms. In connection with these standards, it could establish MRC-like auditing teams with the relevant expertise to determine whether news-curation algorithms meet these minimum standards, and be similarly capable of maintaining the necessary confidentiality of the audit details. And, just as significant methodological changes in audience-measurement systems require reaccreditation from the MRC, a process could be established in which significant algorithmic changes would require assessment and accreditation.
In what would appear to be an initial step in this direction, in September 2018, social media companies such as Facebook, Google, and Twitter, along with advertisers, developed and submitted to the European Commission a voluntary Code of Practice on Disinformation. According to the European Commission, this Code of Practice represents the “first time worldwide that industry agrees, on a voluntary basis, to self-regulatory standards to fight disinformation.”97 The code includes pledges by the signatories to significantly improve the scrutiny of ad placements, increase transparency of political advertisements, develop more rigorous policies related to the misuse of bots, develop tools to prioritize authentic and authoritative information, and help users identify disinformation and find diverse perspectives. Though an encouraging step, the code has been criticized for lacking meaningful commitments, measurable objectives, and compliance or enforcement tools.98
This last criticism highlights a key characteristic of the MRC: it has no enforcement authority. Participation by audience-measurement services is voluntary. Measurement firms are free to bring new services to the market without MRC accreditation if they so choose. The market’s embrace of MRC accreditation as an important indicator of data quality is intended to discourage such actions, and for the most part it does.
However, not all stakeholders have been convinced that this voluntary-compliance model provides adequate oversight. Consequently, in 2005, at the urging of some television broadcasters, Senator Conrad Burns of Montana introduced the Fairness, Accuracy, Inclusiveness, and Responsibility in Ratings Act, which sought to confer greater regulatory authority upon the MRC by making MRC accreditation mandatory for any television audience-measurement service on the market. Any methodological or technological changes to existing measurement systems would also be subject to mandatory MRC accreditation.99 Thus, Congress would essentially confer upon the MRC a degree of oversight authority that it had previously lacked. Under this model of “regulated self-regulation,”100 the self-regulatory apparatus remains, but its heightened authority emanates from congressional decree.
Some industry stakeholders supported the proposed legislation.101 Many other industry stakeholders, however, were opposed to conferring this greater authority upon the MRC.102 Even the MRC itself opposed any legislation granting it greater authority, citing antitrust and liability concerns.103
Within the context of social media, could we similarly assume that the presence or absence of some sort of Social Media Council “stamp of approval” would sufficiently affect the behaviors of platform users, content providers, and advertisers to compel participation in the accreditation process by these platforms? Perhaps. If not, this process would need to overlap into the policy-making realm to a greater extent than has been the case so far in audience measurement.
IMPLICATIONS
Would the establishment of this type of self-regulatory apparatus be burdensome to algorithmic media platforms? Absolutely. However, one of the most important lessons of the past few years has been that the ability of platforms to unilaterally construct and adjust the criteria that increasingly determine the flow of news and information represents not only an unhealthy concentration of gatekeeping authority, but a process that tends to operate in a way that is largely divorced from the news values that connect with the information needs of citizens in a democracy.
Would the establishment of this type of self-regulatory apparatus potentially discourage platforms from engaging in the distribution of news? Possibly. However, the disintermediation of the relationship between news organizations and news consumers likely has positive dimensions that outweigh the negatives. While it is unrealistic to assume that social media platforms will cease to operate as news intermediaries, if efforts to improve their performance also have the effect of diminishing their centrality in the news ecosystem, facilitating more direct relationships between news outlets and news consumers, then this is a comparatively desirable outcome.
Policy Evolution
We turn now to the other key dimension of media governance, policy making, and ongoing efforts and proposals in this area. Policy makers and researchers, both within the United States and abroad, have begun to consider—and in some cases implement—a variety of regulatory interventions in the realm of social media.104 In the United States, a 2018 white paper from Virginia Senator Mark Warner laid out a range of possible interventions. These include requiring that platforms label bots and authenticate the origin of accounts or posts, codifying platforms’ “information fiduciary” responsibilities, revising section 230 of the Communications Decency Act to increase platform liability in areas such as defamation and public disclosure of private facts,105 and adopting some form of auditing mechanism for algorithmic decision-making systems.106
At this point, few of these proposals have taken the form of proposed legislation or regulatory agency action. One area of activity has been in the realm of political advertising. For instance, in December 2017, the Federal Election Commission issued a ruling that disclosure requirements regarding political-ad sponsors must extend to Facebook.107 The FEC is mulling a further extension of disclosure requirements to all online political advertising.108 Obviously, a regulatory requirement that applies only to a single social media platform would seem to be of limited effectiveness, essentially inviting bad actors to focus their efforts on other platforms. Congress has similarly focused on political advertising, with bills intended to bring greater transparency to political-ad sponsorship making their way through both the House and the Senate.109
The data-sharing improprieties at the heart of the Cambridge Analytica scandal have produced a more aggressive response than have revelations about fake news and disinformation. The Federal Trade Commission quickly opened an investigation of Facebook, focused on whether the company’s mishandling of user data violated the terms of a 2011 consent order that required the company to obtain user permissions for certain changes in privacy settings.110 In addition, a bipartisan group of thirty-seven state attorneys general sent a letter to Facebook seeking details about the data breach and threatening to take action if it were determined that Facebook failed to adequately protect users’ personal information.111 The Cambridge Analytica scandal also led to the introduction of the bipartisan Social Media Privacy Protection and Consumer Rights Act of 2018,112 as well as the Democrat-sponsored Customer Online Notification for Stopping Edge-provider Network Transgressions (CONSENT) Act.113 These bills are clearly inspired by the European General Data Protection Regulation, which went into effect in May 2018.114 If passed, the Social Media Privacy Protection and Consumer Rights Act would require greater transparency from social media companies regarding the types of user data collected and how such data are shared. It would also grant users much more control over their data, giving them the right to opt out of data collection and to demand deletion of their data.115 The CONSENT Act would require “edge providers” such as social media platforms to obtain “opt-in” consent from users before using, sharing, or selling their personal information, and to provide levels of transparency similar to those described in the Social Media Privacy Protection and Consumer Rights Act.116 The fact that the Cambridge Analytica scandal was followed in late 2018 by further revelations that Facebook continued to share access to user data with other large tech companies such as Apple, Amazon, Microsoft, and Netflix has provided further incentive for regulatory interventions related to how social media platforms gather and monetize user data.
The bulk of the U.S. legal and regulatory response to the various social media–related issues confronting policy makers could thus end up focusing on the protection of user data. U.S. policy makers have a long history of prioritizing the economic dimensions of media policy over the political dimensions,117 and thus the audience as consumer over the audience as citizen.118 The reorientation of social media policy concerns around the protection of user data, couched in the language of consumer protections, reflects this tendency.
Certainly, a focus on consumer data protection represents a clearer path for policy interventions. It is somewhat less fraught with conflicting partisan self-interest (given the somewhat indirect relationship between privacy and the news and information that affect citizen voting behaviors), and less entangled in thorny First Amendment issues. However, a focus on consumer privacy is, at best, an indirect approach to dealing with issues such as fake news. I say indirect because limitations on the gathering and usage of consumer data certainly have the potential to undermine the precision with which fake news or deceptive political advertising can target users. If fake news purveyors or deceptive advertisers could not target their desired audience as effectively, this would likely reduce their reach or impact, and might even discourage, to some extent, the use of social media platforms for these purposes. However, the consumer data protections being proposed are most likely not, in themselves, sufficient for solving these problems. They address one piece of a larger puzzle.
The consumer data protections that are in place in Europe are being accompanied by various degrees of policy intervention directed at fake news and disinformation. The European Union established a High Level Group to advise on policy initiatives to combat the spread of fake news and disinformation online.119 Importantly, this initiative was directed not only at possible legal and regulatory responses to the production and dissemination of fake news, but also at identifying means of enhancing quality journalism, developing self-regulatory responses, and improving digital media literacy. This group’s fairly broad recommendations include enhancing transparency in online news, promoting media and information literacy, developing tools for empowering users and journalists to tackle disinformation, and safeguarding the diversity and sustainability of the European news media ecosystem.120
Britain has announced plans to establish a dedicated “national security communications unit” that will be “tasked with combating disinformation by state actors and others”121 and is considering a range of additional interventions, including using content standards established for television and radio broadcasters relating to accuracy and impartiality as a basis for setting standards for online content, along with government-conducted audits of social media platforms’ algorithms.122 This is a relatively rare instance, so far, of a regulatory framework for traditional media informing social media regulation.
In France, President Emmanuel Macron has been particularly aggressive in seeking to combat fake news on social media platforms. In early 2018, he introduced legislation directed at preventing the dissemination of fake news during election campaign periods. This would be achieved through mandated transparency for social media platforms, requiring them to reveal who is paying for sponsored content. Spending caps on social media advertising are also part of the proposed plan. In addition, judges would have the authority to order takedowns of false content and block access to websites where such content appears.123 While this legislation did not pass the Senate, a bill that allows a candidate or political party to seek a court injunction preventing the publication of “false information” during the three months leading up to a national election was passed in November 2018.124 France has also taken the unprecedented step of “embedding” a team of regulators within Facebook to observe how the company addresses hate speech. Of particular relevance here is that this action is taking place with Facebook’s cooperation.125
Some of the most aggressive actions taken so far are in Germany, where, in January 2018, the Netzwerkdurchsetzungsgesetz (NetzDG) law took effect. This law requires social media platforms with more than two million users to remove fake news, hate speech, and other illegal material within twenty-four hours of notification or face fines of up to fifty million euros.126 The platforms are required to process and evaluate submitted complaints and to make the determination as to whether individual posts merit deletion. Not surprisingly, this law has come under criticism for leading to deletions of legitimate speech.127 However, justice ministers for the German states have asked for the law to be tightened and its loopholes eliminated,128 which suggests that the law is moving in the direction of becoming more expansive rather than straining under intensive backlash. In many ways, this law is a canary in the coal mine for democracies around the world.
RETHINKING LEGAL AND REGULATORY FRAMEWORKS
As the previous chapters have shown, neither the legal nor the regulatory frameworks that have traditionally been applied to the media sector in the United States appear to adequately reflect or encompass the nature of social media platforms, and thus are in need of reinterpretation or revision. This situation helps to explain the degree of inaction that has characterized policymaking in the United States thus far. In the sections that follow, I offer some ideas for how our legal and regulatory frameworks can be adapted to a changing media landscape.
I make these suggestions fully recognizing that the current political climate in the United States makes this perhaps the most dangerous time in generations (at least since the Nixon administration) to suggest any kind of more rigorous regulatory framework for any part of the media sector, given the level of hostility that the current administration has shown toward the press. At the same time, it seems within the bounds of reasoned optimism to consider the current situation something of an historical anomaly. The following discussion operates from this longer-term perspective.
RETHINKING THE FIRST AMENDMENT
Legal doctrines and regulatory frameworks represent fundamental components of media governance. Within the context of U.S. media regulation, the First Amendment has traditionally served as the fundamental legal constraint on regulatory models and interventions. Thus, any consideration of the overall regulatory framework for social media should begin with a consideration of the legal parameters within which such a framework operates.
The key issue here is whether there are any alterations that can—or should—be made to how the First Amendment functions within the context of social media governance. Legal scholar Tim Wu recently explored the provocative question “Is the First Amendment obsolete?”129 As Wu illustrates, the First Amendment originated when opportunities to speak and reach an audience were relatively limited. Now, as Wu notes, technological changes have meant that “speakers are more like moths—their supply is apparently endless. The massive decline in barriers to publishing makes information abundant, especially when speakers congregate on brightly lit matters of public controversy. The low costs of speaking have, paradoxically, made it easier to weaponize speech.”130 Along similar lines, technology scholar Zeynep Tufekci has made the case that “many more of the most noble old ideas about free speech simply don’t compute in the age of social media.”131
The key implication of these arguments is that the technological conditions underlying our speech environment have changed so profoundly that the First Amendment, as a fundamental institution of media governance, may now be undermining the democratic process as much as enhancing it. Within the realm of social media, the First Amendment facilitates a speech environment capable of doing perhaps unprecedented harm to the democratic process, while restricting regulatory interventions that could potentially curb that harm.
Fortunately, the natural end point of this perspective is not that the First Amendment is genuinely obsolete. Rather, the problem may be that certain aspects of First Amendment theory remain underdeveloped, or underutilized. As Tim Wu puts it, “the First Amendment should be adapted to contemporary speech conditions.”132 There are some specific ways that this could happen.
The Diminishment of the Counterspeech Doctrine
As I discussed in chapter 3, the assumption that counterspeech is effective should wield less influence in applications of the First Amendment to cases involving social media. In First Amendment cases related to news on social media, the counterspeech doctrine should receive the same circumspect, limited application that has been advocated in speech contexts such as hate speech and adopted by the courts in contexts such as libel. The Supreme Court’s recognition that “false statements of fact” are particularly resistant to counterspeech133 needs to extend beyond the context of individual reputation that gave rise to that decision. However much the Internet has been seen to epitomize the free flow of news and information from diverse and antagonistic sources, First Amendment jurisprudence needs to recognize that the dissemination and consumption of news in the increasingly social-mediated online environment belongs among those speech contexts in which reliance on counterspeech is ineffectual and potentially damaging to democracy.
As legal scholar Frederick Schauer points out, the troubling irony is that First Amendment theory has seldom grappled with the issue of truth versus falsity—or, in today’s vernacular, facts versus “alternative facts.”134 As Schauer convincingly demonstrates, “nearly all of the components that have made up our free speech tradition…in the cases and in the literature, and in the political events that inspired free speech controversies, have had very little to say about the relationship between freedom of speech and questions of demonstrable fact. Implicit in much of that tradition may have been the belief that the power of the marketplace of ideas to select for truth was as applicable to factual as to religious, ideological, political, and social truth, but rarely is the topic mentioned.”135 Continuing in this vein, Schauer distressingly notes, “although factual truth is important, surprisingly little of the free speech tradition is addressed directly to the question of the relationship between a regime of freedom of speech and the goal of increasing public knowledge of facts or decreasing public belief in false factual propositions.”136 As a result, the First Amendment has essentially facilitated the type of speech that, ironically, undermines the very democratic process that the First Amendment is intended to serve and strengthen.
Historically, different categories of speech have received different levels of First Amendment protection based on their relevance and value to the democratic process.137 For instance, commercial speech receives less First Amendment protection (and more rigorous restrictions against falsity) than political speech, which represents the pinnacle of speech protection because of its centrality to the democratic process.138 The irony here is that fake news is the type of speech most directly and irrefutably damaging to the integrity of the democratic process, yet because it resides within the large and undifferentiated protective bubble of political speech (where journalism generally resides), it receives (so long as it is not libelous) the highest level of First Amendment protection.
Going forward, the distinction between the factual and the subjective dimensions of journalism needs to be better integrated into First Amendment jurisprudence, as part of the larger project of building a more robust First Amendment tradition that carves out a distinct space for falsity (at least in relation to news) within the free speech landscape.
Embracing the Collectivist First Amendment
The fallout from the 2016 election has also helped to remind us that there have long been two somewhat competing interpretations of the First Amendment: the individualist and the collectivist interpretations.139 It is worth briefly reviewing these two approaches before making the case that an interpretive shift is in order.
The more dominant First Amendment approach has been the individualist interpretation,140 which prioritizes preserving and enhancing the free speech rights of the individual citizen (or media outlet). This emphasis on individual autonomy emanates from the fact that constitutional rights are traditionally perceived as protecting the individual from government intrusion.141 Within this interpretive framework, free speech rights are typically conceived of as a “negative liberty”142—that is, in terms of freedom from external interference in doing what one wants.
The collectivist interpretation of the First Amendment focuses on creating a speech environment that supports the community-based objectives associated with the First Amendment, such as stability, collective decision-making, and, most important, the effective functioning of the democratic process.143 The First Amendment thus functions as the means to ends that explicitly prioritize the welfare of the collective citizenry over the welfare of the individual speaker. Reflecting this assignment of value, a central guiding principle of the collectivist approach is that “what is essential is not that everyone shall speak, but that everything worth saying shall be said.”144
From an application standpoint, the key point of departure of the collectivist interpretation from the individualist interpretation is that the collectivists reject the absolutist interpretation of the First Amendment’s command that Congress make no law abridging freedom of speech or of the press. From the collectivist perspective, the phrasing of the First Amendment clearly grants Congress the authority to make laws that enhance the free speech environment. Indeed, many proponents of the collectivist interpretation of the First Amendment advocate the imposition of government regulations in order to correct perceived inadequacies in the current system of communicating information to citizens.145 Under this perspective, while the state “remains a threat to free expression, [it] also needs to serve as a necessary counterweight to developing technologies of private control and surveillance.”146
The point here, and the natural extension of the arguments of scholars such as Wu and Tufekci, is that technological change may finally compel the embrace of the collectivist approach to the First Amendment over the individualist approach. This should be seen not as a diminishment of the First Amendment, but rather as an adjustment in emphasis—an adjustment that many collectivist proponents have compellingly argued is in fact the more democracy-reflective and democracy-enhancing interpretation of the First Amendment.
Arguments for a shift to a more collectivist interpretation of the First Amendment are far from new. They have a long history.147 The key point here, however, is that the news production, distribution, and consumption dynamics that characterize social media may represent the most compelling case yet for this interpretive shift to finally take place.
To apply this collectivist perspective to the context of social media, if there are prominent aspects of the algorithmic marketplace of ideas that are fundamentally (to use Tufekci’s term) “democracy poisoning” rather than democracy enhancing (e.g., fake news that leads to misinformed voting behaviors), then the collectivist approach offers what is in actuality a First Amendment–friendly (rather than hostile) path toward regulatory interventions.
An individualist-oriented First Amendment functions primarily as a constraint on policy makers’ actions. In contrast, a collectivist-oriented First Amendment functions more as a distinct policy objective, rather than as a boundary line to be respected in the pursuit of other policy objectives. The nature and function of social media platforms may represent the necessity that triggers the prioritization of the collectivist interpretation of the First Amendment over the individualist.
RECONCILING REGULATORY MOTIVATIONS AND RATIONALES
One important ramification of a more collectivist approach to the First Amendment is how it can facilitate a reconfiguration of our public-interest regulatory framework for media in a way that allows it to extend to social media platforms. As discussed in the previous chapter, the media regulatory framework in the United States is characterized by the distinction between motivations and rationales. The regulatory rationales are technologically derived justifications for regulations that would otherwise represent infringements on the First Amendment rights of media outlets. The motivations are the actual problems or concerns (fake news, adult content, lack of diversity, etc.) that attract regulatory attention. This framework reflects a fundamentally individualist approach to the First Amendment, in that regulatory action to address speech-related problems represents a de facto intrusion on individual speech rights unless some mitigating technological anomaly is present.
Given this situation, the key questions become (1) is there an alternative approach to regulatory rationales other than the reactive “technological particularism”148 that has characterized the U.S. regulatory apparatus? and (2) what would be some implications of such an approach? The purpose in asking these questions is to consider if and how a more robust public-interest-oriented regulatory framework might be applied to social media. Exploring this possibility seems particularly relevant in light of the fact that perhaps the most applicable general rationale for intervention—that social media represent a monopolistic or oligopolistic situation—is vulnerable to the argument that the barriers to entry for new online competitors remain low.149 If Google has yet to be treated as a regulated monopoly in search, then Facebook’s being treated as a regulated monopoly in social seems unlikely as well.
Considering the first question, the reality is that regulatory motivations typically beget regulatory rationales. For instance, as discussed in chapter 5, concerns about the political influence of broadcasting led to the scarcity rationale. These rationales are typically derived from the contemporary characteristics of a media technology or service, and are thus inherently vulnerable to the inevitable evolution that affects all media technologies.150 If regulatory motivations and rationales were more tightly intertwined, this would not be a problem, because the technological changes undermining an established regulatory rationale would simultaneously be addressing and alleviating the motivations. So, for instance, the argument that the scarcity of the broadcast spectrum is much less a factor today than it was fifty years ago might bear directly on the FCC’s motivations to regulate on behalf of diversity of sources/content (because more content providers are now able to reach audiences, through so many additional technologies and services). However, the relative presence or absence of scarcity has no strong bearing on, say, the FCC’s motivations to protect children from harmful content (though certainly other rationales such as the pervasiveness and public-resource rationales do).
Is there a way, then, that regulatory rationales can emerge from somewhere other than the contemporary technological characteristics of specific media and be better intertwined with core regulatory motivations?151 Can regulatory motivations and rationales essentially converge? One possibility might be for the public-interest concept to serve such a unifying role. To some extent, the public interest already serves as something of a bridge concept. It operates as a broad, general motivation for regulatory oversight and intervention, as the mandate that guides the FCC; and through its history, the public-interest principle has been populated with various, more specific policy objectives (diversity, localism, etc.).152
The public interest can also be seen as a rationale, in that in many industry sectors, the question of regulatory intervention hinges on a determination as to whether an industry, technology, or service is “affected with a public interest.” This notion of an industry being affected with a public interest has a long history that developed primarily within the context of debates over the right of the government to set prices in monopolistic situations.153 This descriptor has typically served as a rationale for various forms of government oversight and intervention. The terminology has most often been applied to “essential services” such as utilities.154 However, this terminology has not been limited exclusively to utilities regulation, nor has it been limited to the regulation of pricing.155 Legal scholar Frank Pasquale has argued for its application to search engines.156
As the affected-with-a-public-interest concept became refined over time, one particular category of business seems particularly relevant to the context at hand: “businesses which though not public at their inception may be said to have risen to be such.”157 This description would seem to effectively characterize social media platforms and the evolutionary trajectory that they have followed.158
IMPLICATIONS
This merging of the public-interest principle as regulatory motivation and rationale is premised on the ongoing convergence of media technologies and services; the multiplatform nature of how content is produced, distributed, and consumed; and the inherent interconnectedness and interdependence of the contemporary media ecosystem. Indeed, if we think about media from an ecosystem perspective,159 the interconnectedness and interdependence of all of the components means that an action (by a regulator, or by a regulated or unregulated industry stakeholder) that affects one component of the ecosystem is likely to have ripple effects on others. Regulatory rationales that isolate individual components of this ecosystem ignore this interconnectedness and interdependence in a way that is fundamentally incompatible with the very concept of an ecosystem. For these reasons, technological characteristics should no longer be the starting place for establishing regulatory rationales.
This approach would require that explicit, general criteria be articulated for whether media technologies or services are “affected with a public interest.” It would thus involve taking the public-interest concept beyond the fairly vague regulatory principle for which it has been criticized and infusing it with specific criteria (reflecting core regulatory motivations) that could then be applied to specific media technologies and services.
For instance, particularly in light of contemporary concerns, a fundamental public-interest criterion is the nature and extent of the role that a technology or service plays in the democratic process. Arguments for protecting and enhancing the democratic process as a regulatory rationale are well established,160 even if they have not been consistently embraced by policy makers and the courts. If a platform meets certain criteria associated with centrality to—or impact on—the democratic process, then this could trigger those aspects of the public-interest regulatory framework associated with preserving and enhancing the democratic process.
Inherent in this approach would be an assessment of the nature and magnitude of the role and impact that a platform has within the broader news ecosystem—something that could be reevaluated at regular intervals or as conditions warrant. Such an approach could also be sensitive to how a technology or service evolves over time. For instance, Facebook, in its original incarnation as a platform for staying in touch with friends and family members, would not merit classification as affected with a public interest. However, the contemporary version of Facebook, in its role as a central distributor of, and means of accessing, journalism, and as an increasingly prominent mechanism for political advertising and political communication, certainly would.
This example raises an important question: Can we imagine a scenario in which a set of public-interest regulatory rationales/motivations are developed that apply to some social media platforms but not others on the basis of criteria such as functionality and usage? Or should such criteria apply uniformly across media categories such that one entity could, in effect, trigger the application of the public-interest regulatory framework to all entities in that category? Germany is utilizing a user-base trigger of two million users for its regulation of hate speech and fake news on social media.161 We might also consider an approach based on the proportion of a platform’s posts or participants that fit basic criteria for journalism and/or political communication. Or self-reported usage behaviors might serve as the basis, not unlike how the Federal Communications Commission utilized survey data on the extent to which individuals relied on different media for news in order to assess the level of media diversity present in individual communities.162
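Purely by way of illustration, the kind of criteria-based trigger contemplated here can be modeled as a simple rule set. The field names and threshold values below are hypothetical stand-ins for whatever criteria policymakers might actually adopt; only the two-million-user figure echoes Germany’s trigger discussed above.

```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    monthly_users: int    # size of the platform's user base
    news_share: float     # fraction of posts meeting basic criteria for
                          # journalism or political communication, 0.0-1.0
    news_reliance: float  # survey-reported reliance on the platform
                          # for news, 0.0-1.0

# Hypothetical thresholds; placeholders for criteria policymakers would set.
USER_TRIGGER = 2_000_000   # echoes Germany's user-base trigger
SHARE_TRIGGER = 0.10       # a meaningful share of journalistic/political content
RELIANCE_TRIGGER = 0.20    # a meaningful share of users relying on it for news

def affected_with_public_interest(profile: PlatformProfile) -> bool:
    """Size operates as a gate; beyond that, meeting any one role-based
    criterion is enough to trigger the public-interest framework."""
    if profile.monthly_users < USER_TRIGGER:
        return False
    return (profile.news_share >= SHARE_TRIGGER
            or profile.news_reliance >= RELIANCE_TRIGGER)
```

On such a model, Facebook in its original friends-and-family incarnation would fall below the role-based criteria, while the contemporary Facebook described above would clear them; and because the inputs can be remeasured, the classification could be revisited at regular intervals, consistent with the periodic reassessment suggested earlier.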
This discussion of a more robust public-interest regulatory framework has focused exclusively on social media. This relatively narrow focus raises another question: Can we imagine a scenario in which a strong public-interest regulatory framework is applied to social media in particular, but not to the Internet as a whole? In considering this question, it is worth revisiting a classic analysis, by legal scholar and Columbia University President Lee Bollinger, of the different regulatory models applied to print media and broadcast media in the United States.163 As Bollinger points out (and as discussed in chapter 5), the technologically derived rationales for broadcast regulation do not hold up particularly well under scrutiny.
However, according to Bollinger, while the specific logic of treating broadcasting differently than print is faulty, the ultimate outcome not only is desirable, but also makes compelling sense when an alternative rationale is brought to bear. Specifically, according to Bollinger, given that (as discussed earlier) there are both speech-enhancing and speech-impeding aspects to media regulation,164 it makes practical sense to apply a more proactive regulatory model to one component of the media system and a more laissez-faire approach to the other. As Bollinger argues, in light of the double-edged character of media regulation, the most logical response is a “partial regulatory scheme,” in which different regulatory approaches are applied to different media sectors, media users are able to reap the benefits of both models, and these different models operate as a type of checks-and-balances system upon each other.165
As the Internet essentially subsumes all other forms of media (including print), the suggestion here is that this rationale for a “partial regulatory scheme” might apply similarly online, with the broader Internet operating in a relatively unregulated manner, but with social media platforms operating under greater regulatory oversight. This partial regulatory scheme highlights the fact that, regardless of what types of regulatory interventions are considered for social media, the broader Internet still operates as a largely unregulated space.166 Content or sources that might be deemed inappropriate for social media platforms are still likely to be accessible online. Thus, this content is not suppressed, but it is denied the tremendous amplification and targeting opportunities afforded by social media—opportunities that perhaps should be more judiciously allocated in the name of democracy. In this way, Bollinger’s case for a partial system of regulation can resurface today and help guide the path forward.
The public-interest regulatory model as it has existed in the United States has persisted despite—rather than because of—the logical foundation upon which it has been justified. If we accept this premise, and agree with the need to not completely abdicate government oversight of the media sector, then we should recognize the logic in developing a more unified, less technologically particularistic approach to rationalizing regulatory interventions. The evolving role, functions, and effects of social media may represent the much-needed incentive for doing so. In the end, the goal here has not been to suggest specific regulatory obligations to be imposed on social media platforms, but rather to suggest a revised regulatory framework that would facilitate such actions if they were deemed necessary.