This piece is excerpted from the Law, Borders, and Speech Conference Proceedings Volume. The conference, convened by Stanford's Center for Internet and Society, brought together experts from around the world to discuss conflicting national laws governing online speech -- and how courts, Internet platforms, and public interest advocates should respond to increasing demands for these laws to be enforced on the global Internet. For two weeks in January 2018, we will be posting these materials on the CIS Blog. The Proceedings Volume itself contains these and other resources, including reading lists, conference slides, and results of participant surveys. It is Creative Commons licensed for re-use in teaching materials and elsewhere.
Panel Summary by Albert Gidari
- Albert Gidari - Director of Privacy, Stanford Center for Internet and Society
- Jennifer Granick - Director of Civil Liberties, Stanford Center for Internet and Society
- Nathaniel Jones - Assistant General Counsel, Microsoft
- Andrew Woods - Assistant Professor, University of Kentucky College of Law
Lawyers and activists concerned with law enforcement, surveillance and privacy have long debated the rules that should govern cross-border requests for Internet platforms to disclose user data to law enforcement. The Microsoft Ireland case and mutual legal assistance treaty (MLAT) and Electronic Communications Privacy Act (ECPA) reform discussions in the US have added new urgency to this issue. Do lessons and insights from that discussion help us to think through the cross-border content regulation issues raised in this conference? How should the debate over cross-border data requests be informed by a broader understanding of the problems of international content regulation?
The Law, Borders, and Speech conference at Stanford’s Center for Internet and Society asked the important question: Which countries’ laws and values will govern Internet users’ online behavior, including their free expression rights? The conference used the landmark article written in 1996 by David G. Post and David R. Johnson to examine whether, twenty years on, their conclusions still held true. Post and Johnson had concluded that “[t]he rise of the global computer network is destroying the link between geographical location and: (1) the power of local governments to assert control over online behavior; (2) the effects of online behavior on individuals or things; (3) the legitimacy of the efforts of a local sovereign to enforce rules applicable to global phenomena; and (4) the ability of physical location to give notice of which sets of rules apply.” They proposed that national law must be reconciled with self-regulatory processes emerging from the network itself.
The conference panels addressed how we reconcile differences in national laws governing speech today, and asked how we should be reconciling them; what the responsibilities of Internet speakers and platforms are when faced with diverging rules about what online content is legal; and whether users have relevant legal rights when their speech, or the information they are seeking, is legal in their own country. For those interested in content regulation and intermediary liability issues, these topics are well-traveled and often discussed, but the conference added a panel on law enforcement access to user data—a topic that raises many of the same jurisdictional and prudential concerns as the assertion of power to remove or regulate content globally, but which is seldom discussed in the same breath or in the same room as content regulation.
The goal of the panel was to look at the current practices and procedures of sovereigns demanding access to user data, and how providers respond to such legal demands, to glean any lessons applicable to the content regulation world.
The discussion opened with the observation that there is a century or more of established, territorially based practices and procedures for cross-border evidence gathering. Our existing treaties and conventions all recognize the doctrine of territoriality when it comes to evidence gathering. That is, one country doesn’t seize evidence from within the borders of another country without the searched country’s permission and cooperation. From that simple principle arose mutual legal assistance treaties and international procedures for procuring evidence abroad that stood the test of time—until the Internet broke them.
No doubt, the rise of the Internet, Internet platforms, social networks and cloud computing has created significant problems for law enforcement (LE). It is obvious, really—the victim of a fraud may be local, but the perpetrator is likely in one or more countries, using one or more access providers to get online and one or more applications to commit the fraud. The evidence of the crime and the perpetrators are likely outside the victim’s place of residence, presumably the locus of the crime. This is not unlike the most difficult content regulation problems, where the content may be hosted in one country, legally displayed by a publisher located in a second country where it is legal to speak the content, while the party harmed by the content is located in a third country where there is no practical remedy under that nation’s laws.
There are international processes in place for LE to obtain evidence of the crime or to identify the perpetrators, but it is acknowledged that these avenues are at best cumbersome, slow, bureaucratic and inefficient. Going directly to providers in other countries does not work—most of the providers of interest have historically been in the US, though increasingly that is no longer the case. The US, like most countries, has blocking statutes that prohibit disclosure of content other than to domestic government authorities. And post-Snowden, even if there was previously room for voluntary cooperation, that door has closed. This gives rise to frustration for the investigatory agencies, as the evidence often is necessary or fundamental to prosecution.
But just as with content regulation, LE is not sitting on its hands waiting for international law to develop to solve the problem. Instead, we see the extraterritorial assertion of power to compel disclosures by providers located abroad. Law enforcement agencies, sometimes assisted by local courts, assert power and demand cooperation from providers—including by asserting that a global platform is using facilities within the country by merely having a service that is accessible and therefore actually “within the jurisdiction,” arresting employees who are present in the jurisdiction, and even blocking access to service to force cooperation.
In short, these agencies act extraterritorially. This is not a problem just for requests coming from outside the US, affecting only US providers. US law can also be the source of extraterritorial demands in other countries. Amendments to Rule 41 of the Rules of Criminal Procedure would permit a warrant to issue from any US court to remotely access a computer where its location is unknown. The same issue exists under the Cybercrime Convention. Article 32b of the Convention is an exception to the principle of territoriality and permits unilateral transborder access without the need for mutual assistance under limited circumstances. Article 32b permits LE to access or receive, through a computer system in its territory, stored computer data located in the territory of another Party, if LE obtains the lawful and voluntary consent of the person who has the lawful authority to disclose the data to LE through that computer system.
Some countries have also acted locally. They require data localization to facilitate lawful access to user information. Russia has been very aggressive of late in requiring local storage of data by online providers, but even local storage solutions are not ideal or fully effective—not all users whose data might be subject to local storage reside within the country, for example. What choice or notice will those users have when corresponding or interacting with a user where localization applies?
The current system of mutual legal assistance treaties is inadequate to the task and in need of reform. Providers cannot be in the position of deciding daily which nation’s laws are going to be broken in responding or not responding to legal demands. Not all providers can rely on the kind of argument raised by Microsoft in its ongoing case against DOJ—that data stored in Ireland cannot be compelled for production by US process served on a US provider, because that process lacks extraterritorial application.
But legal victories based on territorial limitations of the sovereign’s power may in the long run be a “log on the fire” of data localization. Governments will not be deprived of the evidence necessary to investigate and prosecute crimes, and the lack of an effective system to balance the needs of users, platforms and government agencies may yield worse precedents.
It is interesting to think about technology as a solution to the “problem”—such as the use of encryption for stored content. But governments see such “solutions” as obstruction of justice, just as they see claims of jurisdictional protection by providers as avoidance of responsibility. Government mandates in the end can affect the lawfulness of the technology just as they can affect the disclosure of the data. As one panelist noted, “jurisdiction is a hack” to solid encryption, but that doesn’t mean that technology should be ignored as a solution.
Just as with content regulation, interoperability among differing legal systems, and not harmonization, may be the more desirable goal. But at what cost to which principles? Probable cause and free speech in the US are values not shared globally, nor are they values that always trump other valid concerns of other sovereign interests.
Over the course of presentations and conversation, panelists identified a number of points that may distinguish cross-border LE requests for user data from cross-border content removal demands.
- Legal obligations: National law affirmatively obliges Internet companies to protect users’ privacy, including against foreign LE requests in some cases, potentially creating a conflict with the law of the country whose LE is seeking the data. Under MLAT agreements, this situation may arise where the act under investigation is a crime in the LE’s country, but not in the company’s. By contrast, a company facing a foreign content removal demand almost never has legal obligations to protect the speech rights of users, and thus is free to comply with the request even in cases where the speech is protected under the company’s national law.
- Sources of law: LE requests for user data are governed by long-established—if increasingly archaic—laws and treaties governing data disclosure, and establishing territoriality as a governing principle. No comparable history or source of law exists for content removal.
- Available information: Companies may have to respond to LE requests without knowing for sure where the data sits, what the user’s nationality is, or where the user may be physically located. Content removal requests rarely arise in such an informational vacuum.
- Technological differences: Tools like geoblocking may permit companies to comply with content removal demands on one nationally-targeted version of their service, while keeping the content available in other countries. Such territorially limited compliance does not have an analog in the LE context.
- Risk: While both LE data requests and content removal requests can affect Internet users’ human rights, the worst case scenario for improper data disclosure—wrongful arrest and abuse of innocent people—may be considerably worse than that for improper content removal.
At the same time, the panelists identified a number of areas of similarity.
- Centralization exacerbates conflicts: As online information is increasingly processed by a relatively small number of intermediaries, these companies become chokepoints for information control and centralized repositories of data about user activity. These companies and the governments of their home countries will face increasing pressure to reach accommodations with governments around the world, both for content removal and user data disclosure.
- Lack of public information: The processes followed by companies in response to both kinds of requests are relatively opaque to the public, and may be unknown even to the affected user.
- Political consequences of non-compliance: Companies rejecting or disregarding foreign legal demands of both sorts risk offending government actors in those countries. The resulting political fall-out, such as data localization requirements, may harm both the Internet companies and their users.
- Courts are the wrong forum: Resolving these complex issues through litigation is unlikely to lead to sound policy solutions. Not all affected parties or interests will likely be heard, and parties must shape their arguments to existing, flawed law—rather than promoting more sensible balances that might be achieved through legislation or treaty negotiation.
Perhaps in the end there are more similarities between content regulation and cross-border evidence demands than practitioners in both areas might have imagined. The jurisdictional questions are largely the same; the pressures on platforms to be solution providers are enormous; and government frustration with provider push-back is the same in both worlds. It seems clear to those who deal with cross-border evidence collection that in the absence of an agreed-upon international framework with safeguards to permit lawful access to data, more and more countries will take unilateral action and extend law enforcement powers to remote transborder searches, either formally or informally, with unclear safeguards. The same is true with content regulation.
Microsoft Ireland: In the Matter of a Warrant to Search a Certain E-Mail Account Controlled and Maintained by Microsoft Corporation, 829 F.3d 197 (2d Cir. 2016), reh'g en banc denied, No. 14-2985, 2017 WL 362765 (2d Cir. Jan. 24, 2017).