William Barr and Winnie the Pooh

Right now, Chinese users of WeChat, an app that includes text, video, and picture messaging plus a Facebook-style news feed (among many other features), can't message each other a meme of Winnie the Pooh. Why not? Because, being short and rotund, he supposedly evokes an unflattering comparison to President Xi Jinping. So, at the behest of the Chinese government, WeChat censors pictures of a beloved children's character in order to crack down on government criticism. Here in the U.S., if the Attorney General gets his way, Facebook and other U.S. services will be able to do the same to your private chats.

Late last week, Attorney General William Barr and the acting secretary of Homeland Security joined British and Australian officials in a letter to Facebook head Mark Zuckerberg that asked Facebook not to go forward with its plan to implement end-to-end encryption across its messaging services. The October 4 letter coincided with an event held by the Department of Justice (DOJ) that day, which featured Barr, the letter’s British and Australian co-authors, and FBI Director Christopher Wray, among others. Both the letter and the event focused on the use of online communications platforms for the transmission of child sexual abuse material (CSAM), warning that the roll-out of end-to-end encryption for messaging would risk stymying law enforcement efforts to detect, investigate, and prosecute that activity. The letter and event came hot on the heels of a New York Times article about the problem of CSAM on online platforms like Facebook. Barr’s demand may be a precursor to the anti-encryption legislation rumored to be coming out of the Senate Judiciary Committee soon, more than three years after the embarrassing debacle over a bill proposed by Senators Richard Burr and Dianne Feinstein (who sits on that committee).

This is a significant escalation in the current Crypto Wars. The U.S. government has not gone so directly head-to-head over encryption with a specific company since its showdown with Apple in early 2016, when the government blinked first. (Well, it hasn’t done so in public, anyway.) The suddenness of this new push is alarming. Also noteworthy is that the main reason to demonize encryption is now CSAM, with terrorism and other ills playing second fiddle. Even as recently as late July 2019, when Barr revived his predecessors’ habit of castigating encrypted service providers, it was drug cartels he invoked. But CSAM is now the dominant focus, suddenly and thoroughly.

It is beyond question that CSAM is a real and serious problem for Facebook (and every tech company that has ever given users the ability to upload, store, send, share, post, or otherwise communicate files). It is radioactive, it is illegal everywhere, and no legitimate company wants it on its servers. Nevertheless, this new single-minded focus on CSAM in the revived anti-encryption push feels like an exceedingly cynical move on the part of the U.S. government. Out of the Four Horsemen of the Infocalypse (terrorism, drug trafficking, CSAM, and organized crime), terrorism didn’t work to turn public opinion against encryption, so the government has switched horse(men) midstream.

It also feels like cynical exploitation of the “techlash,” as I’ve observed (a year ago, and a year before that). The techlash has made it more politically palatable to pick on tech companies -- particularly Facebook. Never mind that people distrust Facebook because of its privacy screw-ups; that is exactly why they should be glad Facebook is adding end-to-end encryption to more of its services, because it will make Facebook less able to invade users’ privacy. It’s not important, for Barr’s purposes, that average people (or congressmembers) actually understand what Facebook’s end-to-end encryption plan will do; what matters is that they create a mental link between encryption and crime, and another between the problem of criminal activity on Facebook’s platform and the problem of Facebook’s own repeated privacy misdeeds, such that the privacy-related distrust carries over into distrust of the end-to-end encryption plan.

Who is the antagonist to be bested in this fight against Facebook’s effort to enhance the security and privacy of over a billion people? Not pedophiles -- or at least, not just pedophiles. The “problem” that Barr, Wray, and their counterparts are trying to solve is that of people being able to talk to each other privately without government ability to snoop on them. This was made plain in the October 4 letter. It stated, “Companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes.” All well and good so long as the focus stays on crimes, right? But later, the letter called on Facebook “and other companies” to “[e]nable law enforcement to obtain lawful access to content in a readable and usable format.” All content should be accessible to law enforcement. To get at evidence of crime, law enforcement must be able to get access to everything. Every text, every private message, every call. Every communication you make with another person through an electronic medium like Facebook.

Of course, as is the norm in government exhortations to the tech industry, the letter doesn’t say how Facebook should go about doing that. Governments have been wary of making concrete suggestions ever since the failure of the Clipper Chip in the ‘90s. But when they have ventured concrete suggestions in recent years, there has been a shift. As I wrote in a whitepaper last year, Wray and former Deputy AG Rod Rosenstein both advocated around late 2017 and early 2018 for some kind of key escrow scheme. More recently, in November of last year, GCHQ (the UK’s NSA) made what’s called the “ghost proposal” for silently adding the government as a party to encrypted conversations. This reflects an evolution: by and large, government officials now understand that if they are going to make some sort of actual suggestion (rather than stating their goal of access to plaintext and leaving it to the tech companies to figure out how to get there, as the Oct. 4 letter does), rule #1 is now “don’t touch the crypto.” If you can say “this proposal isn’t a ‘backdoor,’ it doesn’t require breaking the encryption,” then that changes the proposal’s security impact -- and most law enforcement officials presumably do sincerely want to minimize adverse impact on user security. (Most of them.) So it changes the response by information security professionals. It also changes the optics of the proposal in terms of public relations, since the public learned from the Apple vs. FBI showdown that “breaking encryption” and “backdoors” are bad news.

Enter “content moderation.” One proposal for enabling law enforcement access is to build a system where the provider (Facebook) would check content, such as a photo attached to a message, before it’s encrypted and transmitted to another user -- i.e., while the content is on the sender’s device, not traveling through the provider’s server -- to try to figure out whether that content is, or might be, abusive material such as CSAM. Jonathan Mayer has just published a very good short first-draft discussion paper about what content moderation for end-to-end encrypted messaging might look like. This is a technical paper. It is not a policy paper. Mayer expressly says that he is not claiming that the concepts he describes “adequately address information security risks or public policy values, such as free speech, international human rights, or economic competitiveness.”
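For concreteness, here is a minimal sketch (mine, not Mayer’s and not Facebook’s actual design) of what client-side, pre-encryption screening could look like in its simplest form: an exact-hash check against a blocklist, run on the sender’s device before the message is ever encrypted. Everything in it -- the blocklist, the encryption and transport stubs, the reporting hook -- is a hypothetical placeholder, not any real service’s API.

```python
import hashlib

# Hypothetical blocklist of hashes of known prohibited images. A real system
# would likely use a perceptual hash (PhotoDNA-style) so near-duplicates also
# match; an exact SHA-256 set is used here only to keep the sketch self-contained.
BLOCKLIST: set[str] = set()


def e2e_encrypt(plaintext: bytes, recipient: str) -> bytes:
    # Placeholder for the real end-to-end encryption step (NOT real cryptography).
    return plaintext[::-1]


def transmit(ciphertext: bytes, recipient: str) -> None:
    # Placeholder for the messaging transport.
    print(f"sending {len(ciphertext)} encrypted bytes to {recipient}")


def report_match(digest: str, recipient: str) -> None:
    # Placeholder for the provider-side reporting hook.
    print(f"blocklist hit {digest[:12]}...; message to {recipient} blocked and reported")


def screen_then_send(attachment: bytes, recipient: str) -> bool:
    """Scan on the sender's device BEFORE encryption; only unflagged content is sent."""
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in BLOCKLIST:
        report_match(digest, recipient)
        return False  # the attachment is never encrypted or transmitted
    transmit(e2e_encrypt(attachment, recipient), recipient)
    return True
```

The point of the sketch is structural, not cryptographic: the scan happens before the encryption, so the “end-to-end” guarantee applies only to whatever the filter lets through, and the provider holds a blocklist it can change at any time.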

So, allow me to state the obvious: There is no way in hell that Facebook or anyone else could introduce content moderation for end-to-end encrypted messaging without it inevitably sliding into abuse. It would start with CSAM, but it would not stop there. The predictable result is surveillance and censorship, a chill on privacy and free speech. No, client-side pre-encryption content moderation “doesn’t touch the encryption,” in keeping with snooping governments’ new rule #1 for proposals to “solve” the encryption “problem.” But that doesn’t put it in the clear (and, again, Mayer is emphatically not suggesting it does). As Jon Callas of the ACLU said in response to the GCHQ ghost proposal: this “proposal would not ‘break’ encryption, but it would nonetheless have the same effect by creating a situation in which people are no longer confident they are securely talking to their partners.”

A variant of this content moderation already happens in various contexts. Facebook already scans for attempts to upload and share CSAM on the parts of its service that are not (yet) end-to-end encrypted -- that’s the visibility that government officials are worried would go away if Facebook proceeds with its plan. Email service providers scan your email attachments against a hash database of known CSAM, as the Times article describes. Upload filters are also already in use for other purposes besides interdicting CSAM: for example, upload filters that are intended to prevent copyright-infringing material from being posted to YouTube. Upload filters have also been proposed for preventing the posting and sharing of “violent extremist” content such as the Christchurch shooting video. Indeed, as my colleague Daphne Keller explains, it appears that filtering requirements of some sort will now be the law of the land in the European Union thanks to a defamation case, though nobody knows what that filter is supposed to look like, exactly. So already, we are seeing CSAM, plus defamation, copyright infringement, and violent extremism (the latter three being much harder to accurately spot on sight than child sex abuse), as the driving forces behind existing and government-demanded filters on people’s ability to engage in “one-to-many” speech online, through such mediums as YouTube or Facebook.

And already, “upload filters are inherently inconsistent with fundamental freedoms.” It’s a problem as-is from a fundamental-rights standpoint when filters are applied to interdict attempts to share content broadly to many people, through a channel that is not end-to-end encrypted. But it is even more troubling when the same idea is applied to flag blacklisted content (be it words or images) in a one-on-one or small-group conversation -- something we reasonably consider private. Particularly where the interlocutors are using end-to-end encryption to try to ensure that their conversation is private (rather than broadcast to the world à la YouTube). And it is especially troubling if the provider designs its messaging service so that this scanning for blacklisted content happens automatically, for every single user’s conversations, not just those of users who are reasonably suspected of crime and for whom a wiretap order has been issued for their electronic communications.

I understand that the approaches Mayer describes include technical measures intended to respect the privacy of conversations as much as possible and winnow down the amount of unencrypted content that is ever actually reviewed by a human (though the potential false positive rates are very troubling given the criminal consequences). Designing privacy-enhancing technologies to deal with the trash fire that is the Internet is certainly an interesting, if depressing, research area. And I understand that ostensibly we are talking about systems that are only for CSAM, at present. But when you’re checking content against a blacklist (or fuzzily trying to predict whether content your system hasn’t seen before should be blacklisted), ultimately you are talking about a system that keeps a list of things that must not be said or shared, and that monitors people and reports them if they say or share those things.

Interdicting and reporting unencrypted content pre-transmission surely sounds like a good idea when applied to CSAM (content the recipient is unlikely to report as abusive, if the content is being sent from one pedophile to another). Or to malicious attachments that could do harm if you opened them — content that you, the recipient, might think you wanted to look at and wouldn’t report as abusive because you didn’t realize it was abusive (until it was too late).

But we do not live in a world where that system always stays tightly confined to CSAM, or malware scanning, and doesn’t end up enabling censorship of individuals’ private personal conversations with other people over content that is not illegal or harmful. That already happens in China (which is increasingly an object of envy for U.S. law enforcement). China uses its online censorship capabilities to keep its citizens from using WeChat to talk about Winnie the Pooh or “Tiananmen Square.” An end-to-end encrypted messaging system that would do client-side scanning of content against a blacklist before it’s encrypted and report the positive hits? China would rush to fund that work, and likely already has.

The affinity for censorship is not limited to China. Here in the U.S., Hollywood, whose copyright supramaximalist views have long found favor in Congress, would be all too glad to have your private conversations filtered. Other Western democracies such as the European Union countries and New Zealand would want your end-to-end encrypted messages to be pre-scanned for “violent extremist content” and defamation. Never mind how hard it is to define “violent extremist content,” much less accurately identify it without false positives, and the fact that as a concept it covers speech that is not illegal in many countries. And the censorship demands won’t be just for images, but also for text. The recent EU court decision that Daphne discusses imposes a requirement to filter for defamatory textual phrases. 

And from CSAM, copyright claims, “violent extremist content,” and defamation, the blacklist will keep expanding. Tired of getting unwanted dick pics? Fine, the nudity filters Facebook would be called upon to implement in its end-to-end encrypted messaging apps might help you in some circumstances. But don’t be surprised when Facebook deploys its Nipple Detection Systems, which have long come under fire for censoring Facebook and Instagram posts, to keep you from sending a nude to your romantic partner over Messenger or WhatsApp.

And on and on. “Hate speech” is impossible to define, but that won’t stop the calls to censor it so that even willing recipients can’t get it, not just the people who would otherwise be abused by receiving such speech. There will be demands to stop and report any user who tries to send a picture of a swastika, followed by demands to do the same for the Confederate flag. Again, China is instructive: in the latest version of iOS, the soft keyboard no longer includes the Taiwan flag for users in Hong Kong and Macau. That’s a more extreme version of not allowing the user to transmit a message containing the flag—which seems so reasonable by comparison, doesn’t it?

When a government prevents you from saying certain things or depicting certain images, it’s called prior restraint, and with narrow exceptions it is unconstitutional. When a platform does it at the behest of government, as Facebook might do if Barr had his way, we call it “content moderation.” That anodyne phrase obscures the evil at work here: government ordering a private third party to censor speech that is, or under any human rights-respecting regime should be, legal. Yes, CSAM is and should be illegal everywhere. No one disputes that. But it is staggeringly naive to believe that, even in the United States of America, client-side pre-encryption “content moderation” would stop at CSAM.

And lest we forget, those measures won’t catch all the content they’re intended to interdict. As Mayer notes, users could still encrypt their content separately and then send it. That means pedophiles can encrypt CSAM before transmitting it — just as they can now on services that are not end-to-end encrypted. So, getting Facebook to implement client-side pre-encryption content moderation would catch the pedophiles who are bad at opsec, but the rest would adjust, evolve their techniques for evasion, and teach those strategies to each other (which, again, they do already).
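A tiny, purely hypothetical illustration of why (placeholder code again, not anyone’s real system): exact-hash screening like the sketch above only recognizes bytes it has seen before, so a sender who wraps a file in their own layer of encryption first hands the filter bytes whose hash matches nothing on the blocklist.

```python
import hashlib
import secrets


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


# A file whose hash is on the blocklist.
flagged_file = b"bytes of some blocklisted image"
blocklist = {sha256_hex(flagged_file)}

# Sent as-is, the client-side scan matches and the message is blocked.
print(sha256_hex(flagged_file) in blocklist)  # True

# Wrapped in the sender's own encryption first (a throwaway XOR stands in for
# real crypto), the same file hashes to something the blocklist has never seen.
key = secrets.token_bytes(len(flagged_file))
wrapped = bytes(a ^ b for a, b in zip(flagged_file, key))
print(sha256_hex(wrapped) in blocklist)  # False: it sails through the filter
```

Perceptual hashes and classifiers raise the bar somewhat, but they share the same basic limit: they can only judge what the sender lets them see.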

Meanwhile, Hollywood would make damn sure you can’t just send someone a meme over WhatsApp unless you go to the extra effort of separately encrypting it first. Everyone’s perfectly legal speech would be burdened and chilled — because who wants to spend time separately encrypting everything? It’s easier just to not say the thing you wanted to say, to not send the picture that would be worth a thousand words, to express yourself in some other way. Some way that won’t trip the censorship filter. Sure, you’ll find new ways, as the Chinese did by coming up with Winnie the Pooh as a stand-in for Xi. And then, as with Pooh, the filter will be updated, and you won’t be able to say that either. So you stop saying the forbidden words or sharing the forbidden images. And then, eventually, you stop thinking them too.

If you are willing to accept Facebook (or Google, or Apple, or any other encrypted messaging service provider Bill Barr bullies into compliance) censoring all your private text conversations — and everyone else’s — because it might make it a little easier for the government to catch the most inept pedophiles, then I’m not sure I’ve got a lot else to say to you. But if this idea bothers you — if you don’t like the thought that before very long, you won’t be able to say what you please in private discussions over text, while pedophiles learn how to continue operating without detection — then I hope you’ll see Barr’s demand to Facebook for the grave danger it is. If so, let Facebook know. More importantly, let your congressional representatives know.

Now, this post isn’t a careful position paper like the one Jonathan Mayer wrote. All of the above is what is known as a “slippery slope” argument, and it’s easy to dismiss as hysterical. “Of course we would never do Y just because we are doing X,” platforms and the government would assure you. Then, once mission creep inevitably happens — which it always, always does — the official line would switch to: “Of course we would never do Z just because we are doing Y.” Slippery slope arguments might sound hysterical at the top of the slope; from the bottom, they sound premonitory.

Let’s look to China again. The highly intrusive surveillance of Uighurs in China was “just” for Uighurs in Xinjiang at first. Then it was “just” for them and people who visited Xinjiang, regardless of the visitors’ own religion or ethnicity. Then it was “just” for them and, oh, also Tibetans, a totally different ethnic and religious group that China is fond of persecuting.

The ratchet of surveillance has a pronounced tendency to only go one way. End-to-end encryption is one of the best measures we have for pushing it back and maintaining our security and privacy. But while end-to-end encryption may be necessary to protect those rights, it is not sufficient, as proposals for measures like client-side pre-encryption moderation of private conversations demonstrate.

The rationale may change — national security and terrorism one day, and if that doesn’t work, child abuse the next — but the goal is the same: for governments to have the ability to eavesdrop on your every conversation, the legal power to require that all your conversations be recorded, and the authority to make private-sector providers do their bidding in the process. To have total control. And, if they really succeed, they will reach the ultimate goal: to not even need to exert that control to restrict what you say and do and hear and think — because you’ll do that yourself. You will save them, and Facebook, a lot of time.

It starts with something nobody could possibly oppose: reducing the scourge of child sex abuse. It will not end there. That is the slippery slope.

I don’t pretend to have the answer for how to fight CSAM without simultaneously opening the door to mass surveillance and censorship. I’m not sure there is one, but I appreciate the efforts of the technologists who are trying to find one, or at least to elucidate different technical approaches to different aspects of the encryption debate (such as Jonathan Mayer, who is hardly pro-surveillance). And I know that as long as I offer no affirmative proposals of my own, just objections to others’, I am easy to dismiss as just another hysterical absolutist zealot. That is unfortunate, because, as some of my academic colleagues have privately observed, there is far more nuance to information security experts’ and civil libertarians’ positions in the debate than it might often appear from the outside, or than Bill Barr wants you to think there is.

That said, this is not the most nuanced of blog posts. I find everything I’ve said above to be painfully obvious. And yet I feel it will keep needing to be said as long as the Attorney General keeps pretending this debate is only about universally reviled conduct such as terrorism and child sex abuse. After all, he is the same Attorney General who was chosen to be, basically, the capo to a mob boss, one who wants Barr to investigate his political opponents. The sitting Attorney General of the United States is the last person we should trust with the ability to read everyone’s messages. We cannot afford the polite fiction that the nation’s law enforcement officials, even those at the very top, are all “the good guys.”

Those who work for providers, in academia, or in civil society may be tempted to start down the slippery slope we can all see ahead of us, partially out of the commendable desire to help children, partially to show the U.S. government how “reasonable” and “adult” and “mature” we are when it comes to the encryption debate. Let me be clear: It is not reasonable for any government to demand that platforms build the ability to surveil and censor everyone’s private communications. You do not have to help brainstorm, design, build, rationalize, or excuse a system for pervasive surveillance and censorship. Technologists must design and build systems that acknowledge the uncomfortable truth: China is much closer than we think.
