Law, Borders, and Speech: Real Power, Real Outcomes, Realpolitik - Platforms and the Law

This piece is excerpted from the Law, Borders, and Speech Conference Proceedings Volume. The conference, convened by Stanford's Center for Internet and Society, brought together experts from around the world to discuss conflicting national laws governing online speech -- and how courts, Internet platforms, and public interest advocates should respond to increasing demands for these laws to be enforced on the global Internet. For two weeks in January 2018, we will be posting these materials on the CIS Blog. The Proceedings Volume itself contains these and other resources, including reading lists, conference slides, and results of participant surveys. It is Creative Commons licensed for re-use in teaching materials and elsewhere.

Panel Summary by Daphne Keller

Panelists:

  • Anupam Chander - Martin Luther King, Jr. Professor of Law, UC Davis
  • Juniper Downs - Global Head of Policy, YouTube
  • Min Jiang - Associate Professor of Communication, UNC Charlotte; Secretariat Member, Chinese Internet Research Conference
  • Peter Stern - Policy Manager, Facebook
  • Emma Llansó - Director, Free Expression Project, Center for Democracy & Technology

Agenda:

Sometimes, the most powerful forces shaping Internet content removal decisions don’t come from the law. Companies’ own discretionary Terms of Service or Community Guidelines often prohibit far more speech than the law does. How do these discretionary rules relate to national law—do they effectively displace it? Does public pressure from powerful countries, including their governments, shape content policies applied to speech around the world? The Council of Europe Human Rights Commissioner has said that States exercise authority, and must respect limitations grounded in human rights, when they pressure private Internet platforms to “voluntarily” remove content. Is this really a legal issue, or only a political one?

Summary:

This panel considered issues of national jurisdiction in relation to Internet platforms’ voluntary content removal policies. These policies, typically set forth in Community Guidelines (CGs) or similar documents, prohibit content based on the platforms’ own rules or values—regardless of whether the content violates any law.

Content removal based on CGs can raise important questions about the overall power of platforms to shape the information available to their users. Law Professor Anupam Chander captured this concern well in his panel contribution, which discussed Facebook’s decisions to remove widely supported legal content, such as breastfeeding images. Chander’s presentation was titled, aptly, “Should Mark Decide?” Removals based on CGs complicate the relationship between State and private power. Platforms typically set and enforce the same policies worldwide, meaning that users from different cultural backgrounds—Stockholm versus rural India, for example—all operate under the same rules. This may flatten out regional differences in law and culture, displacing local values about speech, privacy, sexuality, and other significant topics.

At the same time, platforms’ willingness to remove users’ expression globally for violating CGs can be a source of global leverage for powerful States, as panelist Emma Llansó of the Center for Democracy and Technology pointed out. When governments convince a platform to ban or suppress content under CGs, they effectively achieve global enforcement of their own national norms, values, or laws.

In addition to Llansó and Chander, who provided framing observations, the panel included two company representatives, Facebook’s Peter Stern and Google’s Juniper Downs. Both discussed the platforms’ internal practices and decision-making with respect to CGs. Communications Professor Min Jiang added a description of government practices constraining Internet content within China—practices which may be increasingly common in the rest of the world as more countries embrace a territorialized Internet.

The two company spokespeople, Downs and Stern, fleshed out the internal thinking and processes used in enforcing CGs. Stern noted that Facebook’s CGs are informed by law and human rights principles, but not grounded in the specific laws of any country. Rather, both he and Downs emphasized the companies’ own founding missions as a source of direction and purpose for their policies. Both described extensive internal processes to set CG standards, and discussed the difficulty of balancing conventionally broad free expression protections with what Downs called the “freedom to belong.” As both she and Stern noted, if CGs permit unfettered speech by all users, hostile or bullying voices may effectively prevent others from participating—which in itself reduces the array of viewpoints voiced on the platform. Downs also discussed the differences in policies across Google products. For example, because completeness is important in the Web Search and Maps products, CG removals there are rarer than in more community-oriented products like YouTube.

In an audience question, David G. Post pressed these speakers to consider alternatives to the internal standard-setting process. Expanding on a recurring theme of the conference, he asked about the potential for meaningful self-governance by the community of platform users. Might they, rather than companies or States, be consulted and relied on in establishing CGs? Both platform representatives talked about their existing efforts to listen to thought leaders and ordinary users, and to look to civil society groups as a proxy for user perspectives. As Stern pointed out, though, the diversity of users and preferences would make for a wide array of input—likely leaving companies to decide CGs at the end of the day in any case.

Chander, too, raised more pointed questions about internal processes, focusing on the enforcement of CGs. As an example, he identified Facebook’s decision not to remove a post from then-candidate Donald Trump, which users had flagged as violating the company’s hate speech policy. The company explained that the post would remain accessible because of its newsworthiness. As reported by the Wall Street Journal, the decision came from CEO Mark Zuckerberg himself. This and similar decisions led one newspaper editor to call Zuckerberg “the world’s most powerful editor.”

As Chander outlined, this power to shape available speech on a key Internet platform establishes an odd relationship between privately created CGs and publicly enacted law. Conceived as a Venn diagram, he said, this relationship could go one of three ways: CGs could prohibit only a subset of the speech prohibited by law; law could prohibit only a subset of the speech prohibited by CGs; or the circles representing law and CG rules could overlap, with each set of rules prohibiting some content that the other permits. Pressures on platforms, governments, and Internet users vary depending on this configuration.

The company representatives expanded on this relationship, discussing the role of law and CGs in their internal practice. As described by Stern, national law becomes relevant to Facebook’s content removal decisions only in cases where the law prohibits more speech than the CGs do. The company’s first step is always to vet a removal request against the CGs. Only if the CGs do not require removal does Facebook proceed to consider national laws—looking to questions like the nature of the issuing authority, due process, and whether the order is directed to the creator of the content. This is a rigorous process, he said. Where national law truly requires removal, the company attempts to be “noisy” about compliance, and blocks the content only in that country. Describing Google’s process, Downs emphasized that CG and legal removals follow separate internal tracks. For national law violations, Google, too, removes content for specific countries rather than the entire world.

Another question raised by Chander focused on the connection of CGs to civil law. In particular, he asked whether CGs set forth in companies’ Terms of Service are binding contractual terms, enforceable through civil litigation. As he noted, if national courts enforced CGs to mandate removal of lawful content, this would create a troubling role for the law in delegitimizing and prohibiting lawful speech. This function of the law would be particularly concerning if the risk of civil damages led platforms to preemptively remove users’ posts. On the other hand, problems might arise if the law prevented platforms from removing content based on their CGs. A key US intermediary liability law, Section 230 of the Communications Decency Act, was enacted precisely in order to encourage platforms to take down content they deem inappropriate. As Chander explained, Congress enacted the law in response to a case that effectively punished an early platform for trying—but failing—to enforce CGs against offensive and defamatory speech. Fearing that the ruling would discourage platforms from voluntarily removing offensive material, Congress spelled out immunities for platforms that did so, as well as immunities for platforms that leave user content online. As a result, Chander noted, US law would be unlikely to support either mandatory CG “take-downs” or mandatory “leave-ups.”

An audience question tied this analysis to questions of jurisdiction and national legal difference. If CGs are actually contractual provisions under platforms’ Terms of Service, does that mean that CGs are subject to interpretation by national courts? As several speakers pointed out, the Terms of Service themselves are drafted to preclude this outcome, reserving ultimate discretion to the platforms. However, given national differences in consumer protection and contract law, such reservations of authority may not be enforceable in all countries. Grounding CGs in the Terms of Service, and exposing them to national contract enforcement, could effectively reintroduce local legal variation and undermine the global effect of CGs.

Turning from civil law to criminal law, Llansó walked the audience through the increasing importance of CGs as a tool for national law enforcement. As she explained, counter-terrorism police units known as “Internet Referral Units” (IRUs) have been established in some EU countries and at Europol. IRUs review online content, identify material that is potentially terrorism-related, and report it to platforms for removal based on the platforms’ CGs. One law enforcement officer publicly noted that relying on CGs—and not national law—is advantageous, because it allows police to seek removal of information that is not actually illegal.

As Llansó noted, IRUs have faced extensive criticism from civil society organizations. Many question whether using State resources and power to silence lawful speech is appropriate—or even consistent with constitutional and human rights of Internet users. Other critics have expressed alarm about law enforcement relying on private enforcement by platforms to effectively bypass national courts. When platforms resolve difficult questions about the balance between speech rights and public safety, societies may miss out on the important public conversations and policymaking needed to grapple with these very issues. Similar concerns arise, Llansó noted, regarding another important recent development: the 2016 Hate Speech Code of Conduct agreed on by the European Commission and four major platforms. As explained by Llansó, the Code of Conduct requires platforms to prohibit violent or hateful content under their CGs, and to accept removal requests on that basis. This too, she said, is an exercise of State power that causes private actors to suppress lawful speech.

In Llansó’s analysis, even if much of the affected speech were actually unlawful, these programs would still raise an issue of users’ rights to remedy and “oversight at scale.” Errors by law enforcement or platforms are inevitable—but it is unclear what redress users have if the error came from enforcement of discretionary CGs. Transparency about law enforcement and platform removal efforts, and access to appellate review, are key to protecting users’ rights. Remedies may be particularly important given the global scope of CG removals, which effectively amplify one country’s law enforcement actions to users around the world.

Min Jiang identified a similar shift toward “voluntary” platform content removal in China. There, she said, content removal is increasingly initiated by platforms themselves, rather than by the government. The mesh of laws and operating licenses governing their operations gives platforms strong incentives to internalize law enforcement goals and proactively remove potentially illegal information. At the same time, Jiang said, legal authority is often highly fragmented among different government actors and sources of law. A vast array of overlapping national and regional authorities has some say over Internet content issues. In addition, many of the most important laws come from local regulation or other “lower level” sources of law—with only about ten percent of relevant law for platforms coming from sources like statutes, national regulations, or court decisions.

In addition, China famously has preserved bordered Internet access for citizens, using the “Great Firewall” and other technical and legal tools. Jiang explained the Chinese Internet of today as the product of a long and deliberate government policy of “Internet sovereignty.” As early as 1995, the Chinese telecommunications minister stated the country’s intention to preserve territorial borders online, comparing online communication to international travel: when you cross borders, you must go through customs, show your passport, and abide by national laws. “There is no contradiction at all,” he wrote, “between the development of telecommunications infrastructure and the exercise of State sovereignty.” Following this policy, China has, in Jiang’s words, “painstakingly grafted borders onto the Internet.”

A striking feature of the Chinese story, Jiang pointed out, is the economic success of Chinese Internet companies in recent years. Does the Chinese model tell us that nations can have their cake and eat it too—maintain a bordered, regulated Internet without sacrificing a flourishing Internet commercial sector? This question may become all the more pressing given, as other conference participants pointed out, the apparent interest of Russia and other countries in following a similar path.

In her closing remarks, Jiang described a shift toward a “re-nationalized,” bordered Internet – with erosion of liberties and increasing surveillance and filtering of online communications. As she noted, this trend is by no means limited to China. Lawmakers around the world are increasingly troubled by online content that violates their laws, and increasingly aggressive about enforcement. A similar observation was voiced by Google’s Juniper Downs. Describing her own conversations with governments, she observed “a real inflection point” regarding platforms’ role in policing user-generated content. Andrea Glorioso of the European Commission, who was participating in his personal capacity, weighed in from the audience to echo the observation. The Internet industry, he said, severely underestimates the emerging climate of localism in both the EU and the US. The internationalism that has long characterized Internet policy is waning, and the demand from national governments for Internet companies to solve problems created by online content is growing.

This shift in government and public expectations is importantly connected to platforms’ voluntary enforcement of CGs as a basis for removing online content. Are CGs the new de facto source of international norms – resolving regional differences not through transnational cooperation of governments, but through unilateral action of private companies? If platforms’ choices are not truly unilateral but shaped by pressure from powerful State actors, should that raise concern about extraterritorial action by national governments? If the alternative to “harmonization” of national laws via private platform CGs is an increasingly bordered and fragmented Internet, which do we actually prefer? Our evolving answers to these questions will determine the real-world power of State and private actors, and the real-world choices of Internet users seeking information or exercising expression rights online, in the coming years. 

 
