Two important current trends in Internet law go together in ways that aren’t getting enough attention. They should, though, because the overlap is well on its way to messing up the Internet further.
One trend involves what I call “must-carry” claims – Internet users’ assertion of rights to have their accounts or posts included on major platforms, even if the platforms don’t like it. The U.S. seems to have a new right-wing poster child for this claim every week. Their claims don’t pass legal muster in the U.S. But they might in other countries.
The other trend is national governments’ assertion that they can make platforms take down content everywhere in the world. French privacy regulators are making this claim to the EU’s highest court this week, saying Google should remove search results globally based on French “Right to Be Forgotten” laws – even if they constitute protected free expression in other countries. The court is also considering a similar global removal claim involving Facebook. A global takedown order in either of these cases would protect EU citizens’ rights under their own national laws in the short run. In the long run, though, it would speed the day when those rights may be curtailed by global orders from courts in Russia, Turkey, or China.
I had an op-ed in the New York Times about global removal claims this week. It wasn’t the right place to get into the deep doctrinal connections between those cases and the must-carry cases, so I’m doing that here. The links come from wonky areas of jurisdiction law that are boring even to lawyers. The upshot is this: The more legal freedom a platform has to reject must-carry claims and take down speech it doesn’t like, the more power governments have to make platforms enforce speech-restrictive laws everywhere in the world. If courts don’t find doctrinal tools to decouple the must-carry issue from the global-takedown issue, we are headed in a bad direction.
Here’s a rundown of the two issues, and why they go together.
Question 1: Can you use “must-carry” claims to force a platform to show people your speech?
Answer 1: Not in the U.S., but maybe in Brazil or Germany.
This first question is playing out, for now, in Washington D.C., Berlin, and São Paulo. In D.C., political leaders from the President on down are saying that Twitter and other platforms unfairly silence conservative speakers. This “censorship,” they say, should be illegal – megaplatforms like Twitter, Facebook, and YouTube should have to carry users’ posts, even if they don’t want to. Critics across the political spectrum share conservatives’ underlying uneasiness, and question whether private Internet companies should control speech in what the Supreme Court has called the “modern public square.”
Whatever the philosophical merits of these must-carry claims, they fail under U.S. law. Two dozen or more plaintiffs have tried suing platforms for taking down their posts or accounts, and the platforms have won every case. For starters, platforms’ Terms of Service and statutory immunities under CDA 230 protect them from having to host speech they disagree with. More importantly, courts have consistently held that platforms’ own First Amendment rights protect them from laws that would force them to host or index content against their will. That means that even the must-carry legislation that some politicians have threatened to pass probably wouldn’t survive a Constitutional challenge.
In Germany and Brazil, though, this question has been playing out differently. At least two Brazilian courts have ordered YouTube to reinstate parody videos, holding that by taking them down, YouTube violated the parodists’ right to free expression. (1, 2.) Both YouTube and the copyright holders who told YouTube to take the parodies down were ordered to pay damages. Germany has had cases like this, too. At least one appeals court has ordered Facebook to reinstate content that the platform said violated its Community Guidelines. So have a couple of lower courts.
It’s hard to fathom where these cases are supposed to lead us in the long run. If platforms have to host all legal speech, are they effectively nationalized? Germany’s NetzDG law also requires platforms to take down “manifestly” unlawful content within 24 hours of being notified, and other illegal content within 7 days. So, is the idea that platforms can’t err on the side of over-removal or under-removal, and must instead perfectly interpret and enforce the law – doing what courts do, but much faster and with less information? Are platforms no longer allowed to define social norms for the communities they host, or to weed out offensive-but-lawful speech – which in many countries includes things like racial slurs, bullying, or anorexia how-to videos? Or are we heading for some strange new legal regime in which platforms are allowed to take down some legal speech, but not all of it? That would be a radical departure from current free expression law, putting legislatures or courts in charge of defining new and more restrictive speech rules for the Internet. These and other problems raised by the Brazilian and German courts’ orders are massively complex. They may be unsolvable.
But these rulings do, perhaps, change the legal landscape for the courts considering global removal claims. That brings us to the second question.
Question 2: Can one country force a platform to globally take down speech that is legal in other countries?
Answer 2: It might depend on the answer to Question 1.
The seemingly separate issue of global content removal orders is mostly playing out in Europe and Canada, for now. In Europe, the CJEU is hearing both France’s claim that Google must apply “Right to Be Forgotten” rules globally, and also Austria’s claim that Facebook should globally remove a post calling a politician “corrupt” and a “traitor” – although the post would plainly be lawful political speech by many countries’ standards.
If the CJEU follows conventional international law principles, it may well follow in the footsteps of the Canadian Supreme Court – which, in 2017, ordered Google to remove links from search results everywhere in the world based on Canadian trade secret law. The Canadian court followed fairly standard principles (variously framed as rules of jurisdiction, conflict-of-law, international public law, or comity), which the CJEU will likely also consider. These principles tell courts to hold back – to not order extraterritorial enforcement of their laws – if, as the Canadian court put it, such an order would “require [the defendant] to violate the laws of another jurisdiction.”
Applying this standard, the Supreme Court of Canada decided that ordering Google to take down search results globally was reasonable. But it left open the possibility that Google could modify the order by showing that it conflicted with another country’s laws. Google tried that, getting a U.S. court to declare Canada’s order unenforceable here. The U.S. order didn’t require Google to reinstate the links, though – it wasn’t a must-carry order. For that reason, a lower Canadian court said there was no real legal conflict with U.S. law. As of now, Canada’s global removal order still stands. (Disclosure: I worked on earlier stages of this case when I was at Google.)
Global takedowns and platforms’ immunity from must-carry claims go together in U.S. courts’ treatment, too. Two U.S. district court cases make that explicit. In Zhang v. Baidu, democracy activists said that Baidu, China’s main search engine, had violated their rights by excluding their speech from search results. Baidu did so, they said, at the behest of the Chinese government. The court rejected their claim. Even if the Chinese government did prompt Baidu’s actions, it concluded, Baidu’s own First Amendment rights allowed the company to exclude whatever speech it wanted. In Sikhs for Justice v. Facebook, similarly, human rights organizations said the social network had silenced them at a foreign government’s behest – and the court said Facebook was free to do so. The widely misunderstood 2000 Yahoo France case has the same lesson. The French court’s order to remove Nazi memorabilia from auction listings may have been inconsistent with the First Amendment, but that didn’t matter, because the French plaintiffs weren’t asking U.S. courts to enforce it. An appellate court in California ultimately dismissed the case, and Yahoo voluntarily – and quite legally – changed its global practices to align better with French law.
The lesson of all these cases is this: As long as a platform can take down user speech in Country A, nothing stops courts in Country B from compelling the platform to do so. That’s a troubling outcome for the citizens of Country A. They may have constitutional rights to prevent their own government from suppressing their speech. But they are helpless when a foreign government makes a private, multinational platform do it.
Putting Questions 1 and 2 together: What is to be done about global takedowns?
Odds are, the CJEU and other courts will grant global content removal orders like the ones France and Austria want in at least some cases. This piecemeal approach will be a step in the wrong direction – and one that will be emulated by courts in other countries.
It doesn’t have to be that way, though. Both courts and legislatures have ample latitude to choose the direction the law takes. The applicable legal doctrines are ambiguous, and the future is not written. Courts could follow recommendations from human rights lawyers, who say that one country shouldn’t overrule another’s free speech rules unless they fall outside the range permitted by international human rights principles. They could adopt jurisdiction scholar Dan Svantesson’s “scope of remedial jurisdiction” test, and apply it either early in a case (when assessing jurisdiction, venue, choice of law or comity) or late in the case (when assessing remedies). They could accept companies’ necessarily-imperfect attempts to comply with national laws on nationally targeted versions of their services, and not insist that any risk of forbidden content crossing borders via VPNs and other tools justifies enforcing one country’s speech laws everywhere. They could find grounds within the data protection, trade secret, or other substantive bodies of law to refrain from global enforcement. The CJEU in particular has the leeway to strike out in the direction of its choice – and its choice will influence courts around the world.
Or then again… Perhaps the must-carry rulings in Brazil and Germany will cause something much more interesting and unpredictable to happen. The CJEU could conclude that Brazil’s and Germany’s embrace of must-carry obligations for platforms has changed things. After all, we now know of at least two places where compelling platforms to take down legal speech would require them to violate local law. That “hard conflict” makes it much harder, under conventional international law principles, to justify global takedown orders like the one Canada issued.
Digression 1: Some nerdy speculations about Brazil, Canada, and intellectual property law (with parentheticals)
How would the “your order requires us to violate the law in Brazil” argument play out in Google’s Canadian case? (That case has been quiet but is, as far as I know, ongoing.) We probably shouldn’t expect Google to raise it, since its position is that the Brazilian courts are wrong. But if the Brazilian courts are right, then the two YouTube rulings are uniquely relevant because of what they tell us about intellectual property law. The Canadian courts have been assuming, not unreasonably, that other countries’ trade secret law looks much like Canada’s. (They also put the burden on Google, which hasn’t even been accused of wrongdoing in the case, to prove otherwise. Hopefully that’s some special lex googealis, and not a rule for less abundantly lawyered litigants.)
The thing is, copyright law is supposed to be even more similar across borders than trade secret law. Copyright is harmonized by global treaties. (DMCA notice and takedown isn’t, though – it’s other parts of the DMCA that came from a WIPO agreement.) For that reason, Internet jurisdiction specialists routinely name copyright law as an area where global removal orders might make sense. But now, Brazil has surprised everyone (except Brazilian lawyers, who tell me they expected this) by saying YouTube was prohibited from taking down a video, even when it received an infringement notice from the copyright holder. That’s quite a wrinkle in this supposedly harmonized field of law. It should give pause to any court or litigant that thinks it knows how Internet content removal cases would come out in foreign jurisdictions.
Digression 2: France isn’t China
Courts and governments aren’t the only decision-makers here, of course. Internet companies choose which laws they will be subject to when they choose which national markets to enter. If making money in China, Vietnam, Russia, or Turkey means being legally required to participate in censorship – and becoming a vector for that censorship to affect the whole world – maybe it’s the wrong thing to do.
But there’s a limit to that reasoning. Plenty of human-rights-respecting countries prohibit speech that is legal in other human-rights-respecting countries. It’s not a moral failing for Google to do business in France, just because France outlaws speech the U.S. allows – or for it to do business in the U.S., just because the U.S. permits privacy violations that are illegal in France. We need a set of rules to make Internet communication work when basically-reasonable countries don’t have the same laws. Only governments can create those rules.
Back to the Big Picture
Zooming out a bit, it seems clear that courts should find a way to decouple the must-carry question from the global-content-removal question. Country A should be free to define its own free expression laws – within the parameters of human rights agreements – without effectively inviting Country B’s courts to set the rules for online speech.
More broadly, we need to get better at seeing the ways that government power and Internet platform power are connected. Too often, we are told that the two are in opposition. In the recent past, platforms like Twitter were seen as checks on over-reaching state power, and celebrated as instruments of liberation. The current political climate takes that simplistic model and reverses it. Now platforms are the menace, and pundits welcome government intervention to constrain them.
Those formulations are seriously incomplete, and they are naïve. They ignore a dozen ways that state and platform power can work together – to the detriment of ordinary Internet users’ rights. (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12) The global enforcement and jurisdiction questions are just one instance of a larger problem we need to address. The current debate about Internet content regulation won’t go anywhere good until we do.