CHAPTER 5
The Public-Interest Principle in Media Governance
Past and Present
The problems that have arisen from, and are associated with, media platforms are of such a type and potential magnitude that governments are increasingly exploring a variety of regulatory responses. The question of whether or how a regulatory response might proceed, or whether there are alternative solutions, is particularly complicated and rife with uncertainty.
A useful starting point for addressing these questions is to revisit a concept that has been relegated to the margins of the governance of our digital media sector—the public interest. The concept of the public interest has established traditions as a guidepost for media policy makers in their formulation and assessment of policies. It has emerged as a professional norm for certain categories of media organizations and professionals (particularly news outlets and journalists), and as an evaluative and rhetorical tool for civil society organizations in their assessments of media performance and their advocacy efforts on behalf of the broader public.1 As this description indicates, the broad applicability of the public-interest concept connects with the similarly broad scope of the notion of media governance.
As noted in the introduction, media governance is broader and more inclusive than traditional notions of regulation or policy, particularly in the range of stakeholders that are seen as participating in the process. These stakeholders include not only policy makers, but also industry organizations, NGOs and civil society organizations, and even the media audience.2 Importantly, from a governance perspective, “these actors are addressed as equal partners in shaping and implementing public policies and regulations.”3 Put simply, media governance can best be encapsulated as regulatory deliberations, processes, and outcomes that take place both within and beyond the state.
The governance concept has become increasingly prominent in the discourse surrounding digital media.4 It has emerged as a reflection of, and response to, the distinctive characteristics of the Internet as a medium in which (a) national legal and regulatory jurisdictions are more difficult to define and enforce;5 (b) the very origins of the technology and how it operates reflect a somewhat decentralized, interconnected, collectivist undertaking of governmental, commercial, and nonprofit stakeholders; and (c) the traditional rationales for government regulation often lack clear applicability and relevance. This third point is particularly relevant to this chapter.
The public-interest concept has, for a variety of reasons, been diminished and marginalized within the context of the governance of social media platforms. I will make the case, however, that the public-interest principle is just as relevant—if not more so—to today’s digital media environment. As media scholar Tarleton Gillespie has noted, “like the television networks and trade publishers before them,” new media platforms such as Facebook and Twitter are “increasingly facing questions about their responsibilities: to their users, to key constituencies who depend on the public discourse they host, and to broader notions of the public interest.”6
I begin with a brief introduction to the public-interest principle and its traditional role in media governance. I then explain how and why the public-interest principle has been diminished and marginalized within the digital media realm in general, and within the context of social media governance in particular. Finally, I will illustrate some of the specific repercussions that make clear the need for a more robust public-interest framework for social media governance.
Revisiting the Public Interest
The public-interest principle has a long, contested, and sometimes convoluted history in the realm of media governance. One of the defining characteristics of the concept is the extent to which its meaning has often been perceived as unclear, or “vague to the point of meaninglessness.”7 With this in mind, a useful starting point for this (re)introduction is to review how the principle has been defined and put into practice both in the operation of media organizations (the public interest as institutional imperative) and in the realm of media regulation (the public interest as regulatory mandate).
THE PUBLIC INTEREST AS INSTITUTIONAL IMPERATIVE
Let’s first consider the public interest as a guiding principle within the operation of media outlets—particularly news organizations. As has been well documented, the institution of journalism (regardless of the technology through which news is disseminated) is infused with an ethical obligation to serve the public interest.8 Consequently, the various sectors of the news media have traditionally maintained self-designed and self-imposed behavioral codes that embody the public-interest principle to varying degrees.9 For instance, many of the components of the press’s public-interest obligations are reflected in Article I (titled “Responsibility”) of the Statement of Principles of the American Society of Newspaper Editors:
The primary purpose of gathering and distributing news and opinion is to serve the general welfare by informing the people and enabling them to make judgments on the issues of the time…. The American press was made free not just to inform or just to serve as a forum for debate but also to bring an independent scrutiny to bear on the forces of power in society, including the conduct of official power at all levels of government.10
This statement represents a clear articulation of the public service objectives of aiding citizens in their decision-making and protecting them against governmental abuses of power. We find similar values reflected in the preamble to the Code of Ethics of the Society of Professional Journalists, which states “that public enlightenment is the forerunner of justice and the foundation of democracy.”11 Here, the tie between the activities of the press and the effective functioning of the democratic process is made even more explicit.
These statements exemplify the way public-interest values are intended to guide news organizations at the institutional level. How, then, are the values expressed in these behavioral codes applied at the operational level? If we look, for instance, at the Statement of Principles of the American Society of Newspaper Editors, we find a number of specific behavioral guidelines, including independence, truth and accuracy, and impartiality.12 Comparable behavioral obligations are also outlined in the codes of ethics of the Society of Professional Journalists13 and the Radio Television Digital News Association.14 Explicit in all of these codes are not only sets of values, but also the appropriate behaviors for maximizing the extent to which the news media serve the political and cultural needs of media users.
THE PUBLIC INTEREST AS INSTITUTIONAL IMPERATIVE FOR SOCIAL MEDIA
Can the public interest as institutional imperative be effective within the specific context of social media platforms? Presently, the key theme that emerges is a diminishment in the scope of the public-interest principle. Specifically, a fairly narrow, individualist model of the public interest has taken hold. Under this formulation, social media platforms primarily provide an enabling environment in which individual responsibility and autonomy can be realized in relation to the production, dissemination, and consumption of news and information. Missing here are any broader, explicitly articulated institutional norms and values that guide these platforms, along the lines of those articulated by the various professional associations operating in the journalism field.
In many ways, the dynamics of social media design and usage have fundamentally been about this transfer of responsibility to individual media users. Within the context of social media, individual media users—working in conjunction with content recommendation algorithms—serve a more significant gatekeeping function for their social network than is the case in the traditional media realm.15 Through social media platforms, individuals are building, maintaining, and (potentially) helping to inform social networks of much larger scale than typically could be achieved through traditional secondary gatekeeping processes such as word of mouth. In other words, individuals are increasingly performing the filtering and mediating, taking on roles that are integral to the practice of journalism, and are doing so on a more expansive scale.16 As a result, the flow of news and information is much more dependent upon the judgments and subsequent actions (liking, sharing, retweeting, etc.) of the individual users of these platforms (see chapter 2).
This contrast with traditional news media reflects the nature of platforms and how they operate. However, it also reflects something of a public-interest vacuum at the institutional level that characterizes how these platforms have been designed and how many social media companies have operated. As Facebook, for instance, notes in its very brief mission statement, the platform’s mission is to “give people the power to build community and bring the world closer together.”17 This statement is a modification (made in 2017) of the previous, less directed mission statement, to “give people the power to share.” In both, the focus is clearly on individual empowerment, but in the postelection revised version we see a narrowing in terms of the types of activities the platform is seeking to empower. Twitter’s similarly brief mission statement focuses on giving “everyone the power to create and share ideas and information.”18 The overarching goal, clearly, has been to empower individual users; the service of any broader public interest must emerge from them.
This formulation reflects the valorization and empowerment of the individual media user that has been such a prominent theme in the discourse of Silicon Valley, and social media in particular.19 The extent to which social media platforms empower individuals or communities emerged as a powerful frame of reference, particularly in the aftermath of the Arab Spring, in which social media platforms were being credited with facilitating successful revolutions.20 Those who valorized the individual media user in this way also tended to have tremendous faith in these users (whether individually or in the aggregate) and their ability to serve the public interest.21
Social media companies built on these perceptions to project themselves as tools for empowering individual autonomy.22 Greg Marra, the Facebook engineer who led the coding for the platform’s News Feed, told the New York Times in 2014: “We try to explicitly view ourselves as not editors…. We don’t want to have editorial judgment over the content that’s in your feed. You’ve made your friends, you’ve connected to the pages that you want to connect to and you’re the best decider for the things that you care about” (emphasis added).23 This description at best understates the role that algorithms play in the dissemination and consumption of news. At worst it misrepresents the true power dynamics between social media platforms and users in terms of the distribution of editorial authority. Geert Lovink has argued in his critique of social media that we are in a “paradoxical era of hyped up individualism that results precisely in the algorithmic outsourcing of the self” (emphasis added).24
Users, however, have tended to cling to this misrepresentation. As Jeff Hancock, co-author of Facebook’s controversial “emotional contagion” study (discussed in chapter 4) observed, much of the negative reaction to the study reflected the fact that many users had “no sense that the news feed [was] anything other than an objective window into their social world.”25 This observation was supported by research in 2015 that found that more than 60 percent of study participants were unaware that Facebook engaged in any kind of curation/filtering of their news feeds.26 A more recent (2017) study of college students (whom we expect to have above average digital media literacy) found that only 24 percent of respondents were aware that Facebook prioritizes certain posts and hides others.27
Thus, through the discourse of individual autonomy and the inherently obscured nature of algorithmic editorial authority, the individualist notion of the public interest took hold and has, to this point, persisted in the realm of social media governance. This situation raised the question, as one critic of Facebook’s emotional contagion study suggested, “Is there any room for a public interest concern, like for journalism?”28
This privileging of individual choice and empowerment has affected the approach that social media platforms have taken to preventing the spread of fake news. The fact that these platforms have implemented a variety of initiatives to combat the dissemination of fake news certainly represents the adoption of a stronger public-interest orientation than we saw before the 2016 election. However, in many aspects of implementation, the strong individualist orientation persists.
One of the most aggressive aspects of Facebook’s response to the fake news problem has been to adopt a system in which users’ evaluations of the trustworthiness of individual sources play a prominent role in determining whether individual news sources feature in users’ news feeds. According to Mark Zuckerberg, the key challenge was the development of an objective system of assessing trustworthiness.29 Among the various options available to them, the company “decided that having the community determine which sources are broadly trusted would be most objective.”30 The system has been described as follows: As part of its regular user surveys, Facebook asks users whether they are familiar with individual news sources and whether they trust those sources. Individuals who are not familiar with a source are eliminated from the sample of users used to determine the overall trustworthiness of the source. Thus, “the output is a ratio of those who trust the source to those who are familiar with it.”31
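As described, the mechanics of this scoring reduce to a simple ratio. The sketch below illustrates that logic; the function name, the survey-response format, and the handling of sources with no familiar respondents are all assumptions made for illustration, since Facebook has not published its implementation:

```python
def trust_score(responses):
    """Illustrative trust-ratio calculation (assumed format, not Facebook's code).

    Each response is a dict with boolean 'familiar' and 'trusts' fields.
    Respondents unfamiliar with a source are dropped from the sample, so the
    score is the ratio of those who trust the source to those familiar with it.
    """
    familiar = [r for r in responses if r["familiar"]]
    if not familiar:
        return None  # no familiar respondents, so no basis for a score
    return sum(r["trusts"] for r in familiar) / len(familiar)

# Example: 10 respondents; 4 are familiar with the source, 3 of whom trust it.
responses = (
    [{"familiar": True, "trusts": True}] * 3
    + [{"familiar": True, "trusts": False}] * 1
    + [{"familiar": False, "trusts": False}] * 6
)
print(trust_score(responses))  # 0.75
```

Note what this arithmetic implies: a source familiar to only a handful of panelists receives a score on the same footing as a widely known one, and self-reported familiarity is the sole gate for inclusion, points that bear directly on the methodological concerns raised next.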
It seems reasonable to ask whether this methodology can succeed in its intended goal, which is presumably to weed out fake news as much as possible. The reliance on self-reported familiarity and trustworthiness raises some concerns. Note also that, as described, the system applies the aggregate trustworthiness score to everyone’s news feed. That is, your news feed will not more fully reflect the sources that you (or your social network) have identified as trustworthy. Rather, the system will “shift the balance of news you see towards sources that are determined to be trusted by the community.”32
Obviously, this approach puts those individuals selected to take part in the surveys in an incredibly influential position—not unlike the way that those selected by the Nielsen Company to be part of its television audience measurement panel play an incredibly influential role in determining the fate of individual television programs. Just as with audience measurement, the dynamics of how this panel of Facebook users is constructed becomes incredibly important, and an inherently vulnerable point of critique.33
How big is this panel? According to a 2016 report, Facebook’s “feed quality panel” was only about a thousand people.34 What is the response rate to invitations to take part in the panel surveys? Is the sample demographically and geographically representative of the population of Facebook users? Initially, the feed quality panel participants were all in Knoxville, Tennessee.35 Obviously, proper geographic representation is important when a panel is being used to evaluate the trustworthiness of the nation’s news sources. Adam Mosseri, then head of Facebook’s News Feed (now head of Instagram), described the survey that serves as the basis for this evaluation as coming from a “diverse and representative sample of people using Facebook across the U.S.,”36 but further details have not been forthcoming. This sample will also need to be protected from coercion and manipulation efforts by third parties, just as Nielsen has had to police its samples to assure that participants are not acting on behalf of any vested interests they might have in specific programs or programmers.
One also cannot help but wonder what could happen to mainstream news outlets, such as the New York Times, CNN, and the Washington Post, that have become the focus of President Trump’s nonstop assault on the press. This dynamic would seem to assure that a substantial component of any Facebook user sample will consider themselves very familiar with these sources (even if they do not use them), and will label them as untrustworthy. In contrast, intensely partisan sources, with their smaller, ideologically homogenous user bases, are likely to demonstrate a stronger correlation between familiarity and trustworthiness.37 “Familiarity”—especially of the self-reported variety—would seem potentially problematic as a baseline for eligibility for assessing trustworthiness. One could certainly imagine a scenario in which some outlets are affected very differently than others, in ways that might not correspond with any kind of reasoned evaluation of those outlets’ trustworthiness. Readers who consume a steady diet of partisan reporting bashing CNN as fake news, for instance, may feel sufficiently familiar with CNN to offer an assessment of its trustworthiness without ever actually consuming its reporting.
The broader philosophy reflected in Facebook’s approach to evaluating trustworthiness fits squarely within the individualist approach to the public interest. Again, we can see the ways in which Facebook places tremendous faith—and responsibility—in the hands of individual social media users. These users must essentially make the determination as to which news outlets are serving the public interest. In this scenario, the public interest is being conceptualized according to one classic interpretation—as the aggregation of individual interests.38
The bottom line is that a user base that has already become more politically polarized than at any point in recent history, in part because of the continually evolving dynamics of media fragmentation and personalization, seems a questionable authority to rely upon to make determinations as to the trustworthiness of individual news sources. The fact that Facebook began hiring “news credibility specialists” within a few months after launching this program39 is perhaps an indication that relying upon individual community members to effectively identify trustworthy news sources proved problematic.
All of this being said, there have been some initial efforts to establish and potentially institutionalize a stronger, more principles-based public-interest orientation in the digital media realm. Academic and professional organizations such as Fairness, Accountability, and Transparency in Machine Learning (FATML) and the Association for Computing Machinery (ACM) have developed and released statements of principles related to algorithmic accountability and transparency. FATML’s statement includes as core principles responsibility, explainability, accuracy, auditability, and fairness.40 The ACM’s statement of principles also includes auditability, along with awareness (of possible biases), accountability, explanation, and access and redress.41
In late 2016, a collection of companies, including Facebook, Google, and IBM, formed an industry consortium called the Partnership on AI, which has begun to formulate broad tenets around fairness, trustworthiness, accountability, and transparency.42 It remains to be seen whether these principles become institutionalized values that meaningfully guide the behavior of social media platforms and the way that algorithms are created and implemented.43
THE PUBLIC INTEREST AS REGULATORY MANDATE
Now we turn to the public interest as a regulatory mandate that guides the decision-making of policy makers and, consequently, the behavior of media organizations. The public interest as regulatory mandate supplements the public interest as institutional imperative, translating the principles of public service and commitment to the democratic process into regulatory policies and government-imposed requirements.44
During the ninety-plus-year history of the Federal Communications Commission and its predecessor, the Federal Radio Commission, specific sets of guiding principles have been associated with the public-interest principle. These began to take shape as early as 1928, and over the years, five key components of the public interest emerged: (1) balance of opposing viewpoints; (2) heterogeneity of interests; (3) dynamism, in terms of technology, the economy, and the interests of stakeholders; (4) localism; and (5) diversity, in terms of programming, services, and ownership.45
This is not to say that, historically, there has been strong and stable consensus in the regulatory realm as to how to operationalize or apply the public-interest principle. Rather, the evolving, contested meaning of the public-interest standard has been one of the defining characteristics of media regulation and policy in the United States.46 Consider, for instance, the well-known “marketplace” approach to the public interest typically associated with Reagan-era FCC chairman Mark Fowler:
Communications policy should be directed toward maximizing the services the public desires. Instead of defining public demand and specifying categories of programming to serve this demand, the Commission should rely on the broadcasters’ ability to determine the wants of their audiences through the normal mechanisms of the marketplace. The public’s interest, then, defines the public interest.47
This conceptualization of the public interest takes a majoritarian approach—one in which market forces, rather than a coherent scheme of values, dictate what is in the public interest. In contrast, adherents of the “trustee” approach to the public interest48 advocate placing policy makers in the position of identifying and defining specific values and then establishing specific criteria for media organizations to meet in pursuit of those values.
This is where the notion of public-interest obligations comes into play. These are affirmative requirements that regulated media outlets must meet. For instance, broadcast television stations have had to provide minimum levels of educational children’s programming. Cable and broadcast stations have had to make ad time available to political candidates at discounted rates. At one point, broadcast and cable channels had to provide balanced coverage of controversial issues of public importance (the well-known Fairness Doctrine; see chapter 3). The history of U.S. media regulation is filled with many other examples of public-interest obligations that have come and gone.49
As might be expected, differences between the marketplace and trustee approaches become particularly pronounced at the level of application, where the key question involves what specific regulatory requirements to impose in the name of the values associated with the public interest. This applicational component of the public interest as a regulatory mandate is in a nearly constant state of flux, not only because of changes in the hierarchy of values held by different administrations, but also because of changes in the media environment and regulators’ perceptions of how best to pursue those values.50 This is well illustrated by the FCC’s sudden reversal on the issue of network neutrality. In 2015, the FCC voted to impose network neutrality regulations on Internet service providers.51 In 2018, a new FCC administration eliminated those regulations. And yet, perhaps not surprisingly, both the adoption and the elimination of the net neutrality regulations were couched in public-interest rhetoric.52
TECHNOLOGICAL PARTICULARISM AND THE PUBLIC INTEREST
The nature and specifics of the public-interest regulatory framework applied to legacy media has varied as a function of the technological characteristics of each medium. For instance, broadcast radio and television, given their use of publicly owned and allocated broadcast spectrum, have been subjected to more intensive FCC-imposed public-interest regulations and obligations than other electronic media, such as cable television, mobile, or direct broadcast satellite (DBS), which utilize a different (or differently allocated) transmission infrastructure. However, even these media have been subject to a variety of public-interest-motivated FCC regulations premised on different, sometimes intersecting, regulatory rationales. And, of course, all of this stands in stark contrast to the print media, which operate almost completely outside of the U.S. framework for media regulation. We call this technological particularism, meaning that the distinctive characteristics of individual communications technologies serve as the basis for the construction of distinct regulatory models for each technology.
Motivations Versus Rationales in Media Regulation
Given the United States’ strong First Amendment tradition, public-interest-based motivations to apply regulatory oversight to the media must be grounded in compelling technologically derived rationales capable of overcoming assertions that any regulatory oversight represents an infringement on the First Amendment rights of media organizations. What may seem on the surface like a semantic distinction between motivations and rationales is, in many ways, key to understanding the current disconnect between digital media and the public interest.
Within the context of this analysis, the term motivations refers to the underlying needs and objectives that create the impetus for the imposition of a public-interest regulatory framework. In the United States, these motivations have taken the form of core (and often contested) public-interest principles such as diversity, localism, competition, and universal service that are generally seen as facilitating the effective functioning of the democratic process.53 They have also included less politically oriented concerns such as protecting children from adult content.
The term rationales refers to the way that the pursuit of these objectives is justified when confronted with a First Amendment tradition that, when interpreted strictly, stands in opposition to any government intervention into the media sector. These technologically derived rationales are built on the premise that characteristics of certain media technologies or services compel public-interest oversight that, in some instances, necessarily infringes on the speech rights of media organizations. The key point here is that public-interest-minded motivations are, on their own, insufficient to justify regulatory interventions. There must be a distinctive characteristic of the technology that can justify such interventions, well intentioned as they might be.
The well-known, and widely criticized, spectrum-scarcity rationale provides a useful illustration of the distinction between the motivations and rationales for media regulation. According to the scarcity rationale, because there is insufficient broadcast spectrum to accommodate everyone wishing to broadcast, it is necessary for the government to provide regulatory intervention. The scarcity rationale is well articulated in the Supreme Court’s response to an early (1943) challenge to the FCC’s authority to regulate media ownership. In NBC v. United States, in which the Court heard NBC’s challenge to an FCC order that the company divest itself of one of its national broadcast networks, the Court noted “certain basic facts about radio as a means of communication—its facilities are limited; they are not available to all who may wish to use them; the radio spectrum simply is not large enough to accommodate everybody. There is a fixed natural limitation upon the number of stations that can operate without interfering with one another.”54
This spectrum scarcity allowed the FCC to be “more than a kind of traffic officer of the airwaves,” but to also play a role in “determining the composition of…traffic.”55 In this way, the scarcity of the broadcast spectrum became the primary justification upon which the entire system of broadcast regulation—the regulatory model that allows for the greatest degree of public-interest oversight over the structure and behavior of media companies—has been built. The spectrum-scarcity rationale has persisted despite the fact that it is riddled with logical flaws. As economists have noted, for instance, all goods are scarce to some extent.56 Certainly, when a valuable good is being given away (as is the case with broadcast spectrum), the demand is likely going to exceed the supply. Other critiques have noted that, as other communications infrastructures have developed (cable, Internet, etc.), it is difficult to make the argument that the logic of the scarcity rationale still holds up at all.
Why, then, has such a flawed rationale persisted? The answer may lie in the relationship between regulatory rationales and motivations. According to many accounts, spectrum scarcity was not the reason for government regulation of broadcasting, only the rationale that provided a justification capable of withstanding First Amendment scrutiny.57 Media economist Eli Noam, for instance, counters the conventional wisdom that spectrum scarcity provided the federal government with the mechanism to regulate broadcasting. According to Noam:
the opposite was the case. TV spectrum was scarce because governments chose to make it so, by allocating frequencies only grudgingly. One reason was the fear of the power of private broadcasting over politics and culture. State controlled radio had played a major role in the internal and external propaganda efforts before and during the Second World War and in the emerging Cold War. The extension of such power to private parties was seen as quite dangerous.58
According to Noam, this situation illustrates the reality that “each society has its concerns, problems, issues, traditions and priorities. The main purpose of media regulation is to advance such goals…. It seems unlikely that societies will simply give up on their societal priorities just because the…information now takes a different path, or is encoded in a different way.”59
Ultimately, specific and evolving social and political concerns represent the true motivations for media regulation, with the rationales providing the technologically derived justifications for acting on these motivations. These motivations may or may not be affected by technological change, but the technologically grounded rationales likely will be. Thus, Noam concludes, “none of these objectives will vanish”60 when the devices or infrastructure used to send and receive news and information change. Nor should they.
Like the scarcity rationale, a number of other regulatory rationales are determined by technological characteristics. There have been few, if any, efforts at this point to apply these rationales to the social media context. For this reason, it is worth briefly reviewing each of these rationales, and then considering whether any of them may be applicable to social media.
The extent to which a communications technology relies upon a public resource has served as a distinct regulatory rationale.61 Often this rationale is linked with the scarcity rationale (i.e., broadcasters utilize a “scarce public resource”). In this way, the scarcity and public-resource rationales operate hand in hand. However, it is important to consider the public-resource rationale separately, because compelling arguments can be—and have been—made that public ownership of the airwaves justifies the imposition of a public-interest regulatory framework, independent of whether the public resource is uniquely scarce.62 Obviously, broadcasting meets this public-resource criterion, as do cable television systems, which require access to public rights-of-way (telephone poles, public land for laying cable). Such access has served as the grounds for a quid pro quo regulatory model: broadcasters have been required to abide by a variety of public-interest obligations in exchange for their use of the spectrum, and cable systems have been required to provide channels dedicated to public, educational, and government programming in exchange for access to the public rights-of-way necessary for building and maintaining a cable system.63 It should be noted that, unlike mobile telecommunications service providers, broadcasters have not purchased their spectrum via auction (and thus have no claims of property rights); it is provided to them essentially for free through the licensing process.
Pervasiveness is another technologically derived regulatory rationale that has been brought to bear in the media sector. Certain media (broadcasting and cable in particular)64 have been deemed “uniquely pervasive” in terms of the ease with which content can be accessed (even unintentionally) and the reach they can achieve. Pervasiveness has therefore provided the logical foundation for regulatory interventions such as restrictions on indecent programming.65 Congress made a failed effort to extend the pervasiveness rationale to the Internet back in the mid-1990s, when it attempted to restrict the dissemination of adult content online through the Communications Decency Act.66 In this case, however, the Supreme Court concluded that the Internet failed to meet the pervasiveness standard of broadcast or cable television, noting that “the receipt of information on the Internet requires a series of affirmative steps more deliberate and directed than merely turning a dial” and that “the Internet is not as ‘invasive’ as radio or television.”67 If, after reading this passage, you’re still unclear why broadcasting and cable are uniquely pervasive and the web and social media are not, then you are beginning to understand the tenuous, ambiguous nature of regulatory rationales in the media sector.
Regulatory interventions have also been based on the rationale that some technologies are reasonably ancillary to other regulated technologies. Specifically, certain cable television regulations have been enacted because cable has traditionally played an intermediary role in the delivery of broadcast television signals. According to this perspective, cable is sufficiently “ancillary” to broadcasting that the regulatory authority already established for broadcasting can, to some extent, be extended to cable.68 This rationale was first established in a Supreme Court decision that focused on whether the FCC had the authority to regulate cable television,69 a technology that the FCC had previously determined could not be classified as either a common carrier or a broadcaster70—the two technology categories over which FCC jurisdiction had been established. Subsequently, the FCC proposed regulations that, among other things, prohibited cable systems from importing out-of-market broadcast signals.71 These regulations sought to protect local broadcast stations and thus reflected the commission’s long-standing objective of fostering localism.72 When the Supreme Court considered the challenge to the FCC’s authority to impose such regulations, it upheld the FCC’s authority, in part on the basis that such authority was “reasonably ancillary to the effective performance of the Commission’s various responsibilities for the regulation of television broadcasting.”73
THE PUBLIC INTEREST AS REGULATORY MANDATE FOR SOCIAL MEDIA
The key point in reviewing these regulatory rationales has been to illustrate how particular technological characteristics have provided the basis for regulatory interventions. To the extent that these rationales do not apply—or have not yet been applied—to the realm of social media, public-interest regulatory frameworks remain sidelined. In some cases, however, there is some potential for traditional regulatory rationales to translate to the new context.
Consider, for instance, the public-resource rationale. As noted previously, the fact that certain components of the spectrum belong to the public has provided a key rationale for imposing a public-interest regulatory framework on broadcasters. Social media platforms do not utilize spectrum. However, they have built their business model upon monetizing large aggregations of user data. Many privacy advocates have argued that our user data should be thought of as our property—an argument that was revitalized to some extent in the wake of the Cambridge Analytica scandal. However, this perspective has never really taken hold in the United States, in terms of either policy or industry practice.74 But building on this perspective, perhaps another way to think about aggregate user data is not as private property but as public property. Perhaps aggregations of user data (such as those utilized by social media platforms) should be thought of as a public resource, akin to spectrum.
Such an argument echoes the core premise of the public-resource rationale—that the public “owns” the airwaves, and this public ownership confers a certain amount of public-interest regulatory authority. If we think about aggregations of user data in a similar vein—if the public has some ownership stake in their user data—the large-scale aggregations of data accumulated by social media platforms can perhaps be seen as representing a similar collective public resource, thereby triggering public-interest regulatory oversight.
This perspective represents a shift from the traditional policy arguments around user data, which tend to focus on the need for limitations on data gathering and sharing, and granting users greater control over the data that are gathered and how they are used. Legal scholar Jack Balkin, for instance, has compellingly argued that digital platforms should be categorized as information fiduciaries—individuals or organizations that deal in information and thus are required by law to handle that information in a responsible manner (think, for instance, of doctors and lawyers).75 A data-as-public-resource argument does not necessarily conflict with this argument. Rather, it builds upon it by creating a rational basis for the imposition of public-interest obligations on platforms that rely on the gathering and monetizing of large aggregations of user data for their business model. These obligations could operate alongside any privacy regulations directed at the gathering and use of data. When large aggregations of user data become the lifeblood of a platform’s business model, then this information-fiduciary status could expand into a broader set of social responsibilities.
There would similarly seem to be some potential applicability of the pervasiveness rationale to the social media context. As noted previously, in the late 1990s the Supreme Court rejected efforts by Congress to apply the pervasiveness rationale to the Internet. However, much has changed in terms of how people go online, given the demise of the dial-up interface, the prevalence of always-connected mobile access, and the “push” dynamic that distinguishes social media platforms from the web as a whole. Given these characteristics, it would seem that social media platforms have become as uniquely pervasive as television or radio. When we consider that a user can suddenly, unexpectedly, and involuntarily encounter something as disturbing as a live-streamed murder or suicide in their news feed in very much the same way that a listener can unexpectedly be exposed to foul language on the radio, it does seem that a compelling case for pervasiveness could be made.
The reasonably ancillary rationale may have relevance in the social media context as well. Social media platforms operate in an ancillary relationship to other regulated media, given the extent to which they increasingly serve as the means by which content from regulated sectors reaches audiences. If your social media news feed contains video from your local broadcast stations, for example, it is operating in a way that parallels how cable became an increasingly important means of accessing those stations as cable diffused throughout the 1970s and 1980s. Cable was deemed reasonably ancillary to broadcast because it evolved into an important distribution platform for broadcast programming. As social media platforms have evolved into a mechanism through which traditional media distribute their content, a similar ancillary relationship has developed.
IMPLICATIONS
To date, policy makers have not applied any government-articulated and -implemented notions of the public interest from previous generations of electronic media to social media platforms and organizations. The danger here is that as these platforms continue to grow, evolve, and possibly become an increasingly dominant part of the media ecosystem, the values, principles, and objectives that motivate regulation will apply to an ever-shrinking proportion of that ecosystem. In the past, when a new communications technology arrived, established regulatory models were, to some extent, transferred to that new technology. For instance, while television represented, in many ways, a fairly substantial departure from radio, the fact that television also relied on the broadcast spectrum meant that policy makers transferred the entire regulatory framework that had been developed for radio to television. Something similar happened (though to a lesser degree) when cable television was introduced. In this case, as noted previously, the rationale of cable as “ancillary” to broadcast television meant that policy makers were able to transfer much—though not all—of the broadcast television/radio regulatory framework to cable.
The situation we are facing today has yet to offer any of this continuity. If the reasons (i.e., motivations) for government regulation of media apply in this context, but the rationales upon which regulations are imposed do not, then we are faced with a fundamental disconnect between the need for, and the ability to implement, media regulation and policy. More concretely, if the structure and performance of these new media platforms/institutions evolve in ways that run counter to the regulatory objectives that have governed the structure and performance of established media, but no technological rationale for addressing these problems exists, then we have a scenario in which government is powerless to solve compelling communications policy problems.
The Merger of AT&T and Time Warner
To illustrate the nature and consequences of this disconnect, it is worth considering a recently approved media merger.76 In October 2016, telecommunications giant AT&T, the leading provider of wireless, broadband, and pay TV services in the United States, announced plans to acquire Time Warner, at the time the third largest media conglomerate in the world. Time Warner owned a major movie and television studio, and was also one of the largest owners of cable networks in the United States, with holdings that included such popular networks as HBO, CNN, TNT, and TBS.
There is certainly a long history of such media megamergers in the United States, especially as media ownership regulations have been steadily relaxed over the past forty years.77 What is unique about this particular media merger, however, is that it took place without review and approval by the Federal Communications Commission, which oversees many of the industry sectors in which these two firms operate and has the responsibility for conducting a public-interest-focused analysis of proposed media mergers.
In the United States, proposed media mergers undergo an assessment by the Justice Department or the Federal Trade Commission to determine the likely effects of the proposed merger on competition in the marketplace. In addition, proposed media mergers undergo a separate public-interest review by the Federal Communications Commission.78 This separate review takes into consideration not only the economic effects of the proposed merger, but also any relevant noneconomic considerations, such as its likely social or political impact. This is where established policy concerns such as diversity of sources and viewpoints, localism, and media users’ ability to access information come into play. It is in this review that a robust notion of the public interest is supposed to be brought to bear, in which the impact of the proposed merger is considered not only in terms of the economic marketplace, but also in terms of the marketplace of ideas.
However, the case of the AT&T–Time Warner merger highlighted an important loophole in this system. To understand this loophole, it is important to recognize that the FCC’s authority to engage in this public-interest standard of review is derived from the agency’s authority to approve the transfer of broadcast licenses.79 Because the FCC oversees the allocation and renewal of broadcast licenses (which are a scarce public resource), the agency must approve any transfers that arise when one firm acquires another. If no broadcast licenses are changing hands in a proposed media merger, then no FCC-led public-interest review takes place.
The AT&T–Time Warner merger presented just such a scenario. Despite being one of the world’s largest media conglomerates, at the time the merger was proposed, Time Warner owned only one broadcast television station.80 This situation led some analysts to speculate that if the company were to sell off that station, then the FCC would have no authority to impose a public-interest standard of review on the proposed merger, and the merger would only be subject to analysis of its effects on competition.81 As predicted, Time Warner sold its one television station, relieving itself of its only terrestrial broadcast license;82 and FCC chairman Ajit Pai subsequently stated that, without the presence of such a license, the commission lacked the legal authority to review the merger.83
This scenario helps to highlight the growing disconnect between the public interest as a regulatory standard in the media sector and the very nature of the media sector toward which this standard is intended to be applied, as the media environment becomes increasingly digitized and broadcasting becomes a less central component of the ecosystem as a whole. The fact that a merger as large, wide-ranging, and impactful as that of AT&T and Time Warner can completely circumvent the public-interest standard of merger review on the basis of the absence of broadcast licenses raises questions about the adequacy of the regulatory rationales on which the public-interest standard of review is based. As this example illustrates, we have a regulatory system that is built upon—and justified by—the characteristics of an increasingly irrelevant communications technology.
The implications of this scenario for our discussion of social media and the public interest should be clear. The public-interest standard has no relevance for social media platforms, which are now arguably the most powerful intermediaries in the production, distribution, and consumption of news and information. A potential merger between Facebook and Twitter, for example, should trigger a degree of regulatory scrutiny that goes beyond economic effects of the proposed merger and considers the social and political implications. And this issue extends beyond the context of media mergers. More broadly, the fact that the public-interest standard has no regulatory foothold in either the structure or behavior of social media platforms means that we have a growing disconnect between regulatory motivations and rationales that needs to be addressed. This situation reinforces the dynamic described in the introduction to this book, in which companies that fundamentally are media companies are treated as if they are not.
PUBLIC-INTEREST OBLIGATIONS FOR SOCIAL MEDIA?
It is worth considering some of the contexts in which these unregulated social media platforms engage in activities that reflect regulatory concerns that have traditionally characterized electronic media. These instances have become frequent enough that they have begun to raise questions about whether this regulatory status quo is appropriate. Media scholars Mike Ananny and Tarleton Gillespie label these instances “public shocks,” which they describe as “public moments that interrupt the functioning and governance of these ostensibly private platforms, by suddenly highlighting a platform’s infrastructural qualities and call it to account for its public implications. These shocks sometimes give rise to a cycle of public indignation and regulatory pushback that produces critical—but often unsatisfying and insufficient—exceptions made by the platform.”84
The list of such “public shocks” is rapidly accumulating, and includes such high-profile instances as Facebook’s conducting of “emotional contagion” research on users of its platform; the dramatic discrepancy across Facebook and Twitter in the reporting of the violence taking place in Ferguson, Missouri; the live-streaming on Facebook of the police shooting of Philando Castile; revelations that Facebook’s advertising interface could be used to target anti-Semitic audiences; accusations of the suppression of conservative news stories on Facebook; the subsequent indications that fake news ran rampant across platforms such as Facebook, Twitter, and YouTube, and that these platforms passively facilitated the dissemination of fake news; the apparent role of foreign agents in using these platforms to disseminate both fake news and targeted political advertisements in their efforts to influence the outcome of the 2016 U.S. presidential election; and the revelations about the mishandling of user data by Facebook and Cambridge Analytica.85 These “public shocks” have produced moments of attention to the issue of the governance of social media platforms, and have prompted responses from these platforms. The public shocks have also accumulated to such an extent that policy makers have begun to examine whether existing governance mechanisms are adequate and/or in need of substantive revision (as reflected in the many congressional hearings that took place in 2017, 2018, and 2019).
Not all of these “public shocks” necessarily merit regulatory intervention. Nor would they all have been prevented had these digital media platforms operated under a public-interest regulatory framework reflective of other electronic media. In the aggregate, however, they do raise questions about if and how the notion of the public interest is being—or should be—applied within the context of social media. Drilling down into some of these occurrences illustrates specific disconnects between existing regulatory and legal structures directed at legacy electronic media and the operation and evolving function of these digital media platforms.
Consider, for instance, the live-streaming on Facebook of violent acts such as murders and suicides. This has happened on a number of occasions. Violent expressions of hate speech have also become prominent on social media platforms. Facebook has internal guidelines and procedures for policing its platform for such content and preventing its dissemination—despite the fact, it should be noted, that it is under no legal obligation (at least in the United States) to do so. Indeed, Facebook’s own content filtering extends well beyond violence and hate speech, addressing other areas such as nudity and explicit sex.86 From this standpoint, speakers on Facebook operate under a content regulatory model (applied by Facebook) that is more akin to the regulatory model that the FCC applies to broadcasting (with its various intrusions on speakers’ free speech rights in the name of protecting children and other vulnerable groups from harmful or adult content).
However, the exact nature of this oversight and its associated procedures has been quite opaque, and when aspects of the process have been made public, the implications have been troubling. It was revealed, for instance, that Facebook’s content-moderation guidelines contained a number of aspects that can be construed as “favor[ing] elites and governments over grassroots activists and racial minorities.”87 For instance:
One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.
The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.88
These instances raise questions about whether the self-governance framework for potentially harmful and offensive content on social media is adequate. Outside of the United States, we have begun to see moves toward more direct government oversight, with Germany, for instance, passing legislation that imposes fines on social media platforms that fail to take down posts that meet the government’s standards of hate speech within a specified period of time.89
These actions by social media platforms to police their content—and their apparent shortcomings and biases in doing so—highlight what could be seen as a disconnect between established regulatory motivations and existing regulatory authority. Clearly, there is a perceived need to impose the kind of adult/harmful content restrictions that have characterized legacy electronic media in this newer media context, as evidenced by the fact that these platforms are voluntarily taking on this responsibility, though perhaps in ways that sometimes run counter to the public interest. Unless we see efforts to apply a regulatory rationale such as pervasiveness to the Internet (or to social media in particular), we are again faced with a disconnect between compelling motivations for regulatory intervention and access to effective regulatory rationales. I am not arguing that a government-imposed regulatory framework is necessarily the most appropriate and viable solution to the problem of the dissemination of violence and hate speech via social media; rather, I am illustrating that this is another situation in which, even if such action were deemed desirable, no established regulatory rationale exists to justify it.
It is also worth considering the potential disconnect between regulatory motivations and rationales in relation to what became perhaps the defining issue of the 2016 presidential election—the prominence and potential impact of fake news. Within traditional media contexts, there are some established (though limited) legal and regulatory frameworks in place intended to distinguish between true and false news and information, and at least discourage the production and dissemination of falsity. In social media, so far, there are none.
First, as was noted in chapter 3, in the United States, media outlets are subject to libel laws that prohibit the knowing and malicious publication of false information that is damaging to an individual’s reputation. These laws operate across all media technologies and in no way vary in their applicability in accordance with the characteristics of individual media. But, as chapter 3 also noted, false information that does not adversely affect individual reputations (e.g., Holocaust denial) does not fall within the prohibitions associated with U.S. libel laws.
However, some general restrictions on falsity have found their way into U.S. broadcast regulation. Specifically, FCC regulations prohibit broadcast licensees from knowingly broadcasting false information concerning a crime or catastrophe, if the licensee knows beforehand that “broadcasting the information will cause substantial ‘public harm.’”90 This public harm must be immediate and cause direct and actual damage to the property, health, or safety of the general public, or divert law enforcement or public health and safety authorities from their duties.91 From the standpoint of contemporary debates about disinformation, these restrictions would apply only to a miniscule proportion of what might be considered identifiable fake news.
However, since the late 1960s, the FCC has also maintained a more general policy that it will “investigate a station for news distortion if it receives documented evidence of such rigging or slanting, such as testimony or other documentation, from individuals with direct personal knowledge that a licensee or its management engaged in the intentional falsification of the news.”92 According to the commission, “of particular concern would be evidence of the direction to employees from station management to falsify the news. However, absent such a compelling showing, the Commission will not intervene.”93 Indeed, news distortion investigations have been rare (especially since the deregulatory trend that began in the 1980s), and have seldom led to any significant repercussions for broadcast licensees.94
These limitations in the scope and actions directed at falsity in the news reflect the First Amendment tradition that has both implicitly and explicitly protected the production, dissemination, and consumption of fake news across an array of subject areas—unlike, for instance, the prohibition found in Canadian broadcast regulation.95 The Canadian Radio-Television and Telecommunications Commission (CRTC, Canada’s version of the FCC) has a blanket prohibition against the broadcasting of false or misleading news, though the CRTC has never taken actions against a station for violating this rule.96
Of course, as discussed and widely acknowledged, it is the digital media realm, rather than broadcasting, that has become the primary breeding ground for fake news. Unfortunately, even the limited public-interest protections described earlier do not apply to increasingly important sources of news and information such as search engines and social media platforms. This brings us back to section 230 of the Telecommunications Act of 1996, which was developed primarily to grant online content providers immunity from various forms of legal liability for content produced or disseminated on their platforms by third parties. Thus, aggregators and curators such as search engines, social media platforms, and video-hosting sites are immune from legal liabilities that might arise from their curation or dissemination of obscenity, hate speech, or libelous/false content that is produced by third parties. This immunity applies even if the platform operators actively engage in various forms of editorial discretion, such as content filtering, curation, and suppression based on internal standards and economic considerations.
Essentially, these digital media platforms possess the editorial authority of a publisher while simultaneously possessing an immunity from liability more akin to that of a common carrier. Critics of section 230 describe it as “a law that specifically mandates special treatment for Internet service providers and platforms that no other communications medium has,”97 suggesting that once again technological characteristics are playing a determinative role in the construction of legal and regulatory frameworks—but, in this case, a legal and regulatory framework that is utterly disconnected from the public interest.
Implications
In this chapter, I have made the case that the public-interest principle has been systematically diminished and marginalized within the context of social media governance. These processes of diminishment and marginalization have been a function of the much narrower, technocentric orientations of these technology firms that have taken on the functionality—though not the responsibility—of media organizations. It has also been a function of the technological particularism that has guided the construction of regulatory rationales. However, the problems and concerns being generated by the operation of social media platforms fit squarely within traditional notions of the public interest in media governance.
All of this leads to the question of what happens if the rationales for media regulation, which reflect the specific technological contexts out of which they were generated, fail to apply to new technological contexts in which the motivations may be equally compelling. If the accompanying rationales do not translate to these newer technological contexts, then we are ultimately left with a decision to make: what matters more, the motivations for media regulation or the rationales? This question becomes particularly acute when we consider that the regulatory rationales discussed earlier are, for the most part, inapplicable to increasingly dominant social media and search engine platforms, which play a growing role in the dissemination, consumption, and financial health of the media organizations that do fall within these regulatory rationales. In this way, the regulated and the unregulated sectors of the media ecosystem are increasingly intertwined—not unlike, it should be noted, the relationship that served as the catalyst for the “reasonably ancillary” rationale discussed earlier.
In 2011, the Federal Communications Commission conducted an extensive and ambitious proceeding dedicated to the complex topic of whether community information needs were being met in the digital age.98 After producing a massive report on the topic, which illustrated the many threats to informed communities, despite (and, to some extent, because of) the diffusion of digital media, the commission proposed only modest exercises of existing regulatory authority, such as requiring broadcasters to make their public inspection file available online, and called for other institutions, such as foundations, to step in and help solve the problem.99 The underlying message of these proposals was clear: the regulatory rationales that delineate the scope of the FCC’s regulatory authority allowed the agency to, at best, nibble around the edges of a substantial communications policy problem—a problem that strikes at the heart of the effective functioning of the democratic process. In many ways, the outcome of this proceeding is a microcosm of the issue presented in this chapter—that a much-needed, more robust public-interest framework for social media governance has yet to emerge from either the relevant institutions or policy makers.
My discussion of the public-interest principle’s traditional role in media governance is a reminder of how the norms of public service and a commitment to enhancing the democratic process have long been ingrained, and explicitly articulated, in the governance frameworks for traditional news media. However, many would argue that public-interest norms have long been inadequately represented in media governance, as decades of deregulation, along with increased commercialization and sensationalism, have led to a media ecosystem that has come to prioritize commercial imperatives over public service. This is a central theme of an extensive and compelling body of journalism and policy criticism.100 It is difficult to dispute this perspective.
However, it is also important to remember that times of dramatic technological change—specifically, times when core infrastructures of communication have undergone transition—have provided opportunities to revisit, revise, and perhaps even rehabilitate the public-interest frameworks that apply to our media ecosystem. We can look back, for instance, to the late 1990s, when the United States was in the midst of the transition to digital television. This transition involved migrating television broadcast signals from analog to digital transmission systems, including new production and transmission equipment for television producers, stations, and networks, as well as new reception equipment (new televisions and/or set-top boxes) for many consumers.
An important, but somewhat forgotten, aspect of this transition was the associated efforts by policy makers to revisit the public-interest regulatory framework that governed television, in an effort to identify and address inadequacies and correct mistakes embedded in the regulatory framework that had become institutionalized over the previous seventy years. Toward these ends, in March of 1997, the Clinton administration established the Advisory Committee on the Public Interest Obligations of Digital Television Broadcasters.101 The committee, a mix of industry executives, academics, and public-interest advocates, was charged with the task of “determining how the principles of public trusteeship that have governed broadcast television for more than 70 years should be applied in the new television environment.”102 The committee was essentially tasked with rethinking what the public interest could—and should—mean in the digital age, with the technological transition providing the impetus for this reconsideration.
After a series of reportedly contentious meetings, as well as consultations with outside experts and members of the public, the committee submitted a report to the White House.103 The report suggested a number of modest modifications to the established public-interest framework. By far its most radical and controversial recommendation was the proposal that all political candidates receive five minutes of free television time each night in the thirty days leading up to an election.104 The report also led to an FCC proceeding that was initially to include the free-air-time proposal but, under pressure from Congress, ultimately did not (incumbents generally oppose any proposal that could potentially level the playing field with their challengers).105 Thus, while the digital television transition ultimately did not produce a dramatic reconfiguration of the public-interest framework that applies to television, it illustrates how a period of dramatic technological change can trigger a reexamination of the normative foundations of media governance.
One could argue that we are seeing something similar today within the journalism field. The disruption to well-established journalistic practices and business models brought about by technological change has triggered profound reexaminations of journalistic norms and the nature of the relationship between news organizations and their audiences.106 News organizations are rethinking, for instance, their traditional, somewhat paternalistic relationship with their audience (see chapter 2).107 Under this model, journalists and editors maintained more or less exclusive authority to determine what was important for news audiences. Today, that relational dynamic is being reconsidered, as news organizations explore more collaborative and directly engaged relationships with their audiences, in an effort to find a more viable model for sustainability in the digital age.108 The place of the public in journalism’s efforts to best serve the public interest is thus being reexamined, and this reexamination has been triggered in large part by the damaging and disruptive effects of social media and search engines as means of distributing and consuming news.
Media historian Robert McChesney has described such periods of technological change as key components of what he terms critical junctures: periods of time when the opportunities for dramatic change in the media system are at their greatest.109 It is worth noting that other conditions McChesney identifies as contributing to such critical junctures include public discrediting of media content and major political crisis.110 Based on these criteria, McChesney points to the Progressive Era (late 1890s to early 1900s) and the late 1960s/early 1970s as other critical junctures in American media history.
Today, these conditions appear to have converged once again. The rise of social media in the news ecosystem represents technological change; the increasing prominence of fake news represents a public discrediting of media content; and perhaps unprecedented levels of political polarization, along with the election in 2016 of a president who seems to deviate from the norms of the office in a number of ways that appear dangerous to democracy, represent political crisis. By these criteria, it seems reasonable to suggest that we are in the midst of such a critical juncture right now, and that it should compel a reexamination of our system of media governance. This is the focus of the next chapter.