The European Commission, for One, Welcomes Our New Robot Overlords

This is my third and most intemperate blog post about the European Commission’s recent Communication on platforms and illegal content. The first two point out serious factual problems with the Commission’s claims. I hope to write one more post making lawyerly points about why the Communication conflicts with EU and human rights law. This post, though, I write as a pessimist and consumer of dystopian novels. The Commission is asking for an Internet police state on par with Minority Report or 1984.

The Commission says companies that host online expression, from Facebook to your local news forum, should “proactively detect, identify, and remove” anything illegal that passes across their servers. And they should use automated filters to do it, with or without human review. That means scanning every word we say, algorithmically identifying if any of it is illegal, and erasing it – and then reporting it to the police. Alternatively, the police (not courts – police) can tell the companies when a post, image, or video is illegal. Then the companies are supposed to use algorithms to make sure no one sees it or says those things again. Lest anything escape the dragnet, platforms should share databases of illegal content, so they can all identify the same speech and enforce the same rules. The inevitable resulting errors and deletion of lawful and important speech are to be corrected, per the Communication, by having platform employees review grey-area removal decisions and by allowing users whose expression has disappeared to challenge the platform's decision, using a "counternotice" process.

One problem with this vision is simply that filters fail in predictable ways. To take one example, they set out to block ISIS, and wind up silencing Syrian human rights organizations instead. Review by platform employees also has real problems. That's the system we have for most notice and takedown now, and companies routinely err on the side of caution, removing lawful speech. Counternotice, while very important, corrects only a fraction of improper removals.

But perhaps the bigger problem is that perfect, universal enforcement of rules to govern our public speech and our private communications is a terrifying concept. The Commission’s proposal is Orwellian, but with better technological control. Its merger of state and private power is like something from David Foster Wallace or Neal Stephenson, but considerably darker. Few of us would really want this kind of supervision and control even from the most benign and trustworthy governments. But none of us live under those kinds of governments anyway. And in any case, the Commission's proposal is that private, mostly American-owned companies should do it.

Here are some choice passages:

  • “Platforms and law enforcement authorities are … encouraged to develop technical interfaces that allow them to cooperate more effectively in the entire content governance cycle.”
  • “Fully automated deletion or suspension of content can be particularly effective and should be applied where the circumstances leave little doubt about the illegality of the material, e.g. in cases of material whose removal is notified by law enforcement authorities[.]”
  • “[O]nline platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences … Evidence of criminal offences obtained in the context of illegal content removal [i.e., what the person said] should be transmitted to law enforcement authorities[.]”
  • “Preventing known illegal material from being disseminated across platforms requires closer co-operation between online service providers... It is also important to increase the cooperation by law enforcement authorities with small, less resilient companies, who may become the preferred platform of choice by criminals and other persons involved in infringing activities online if they are deemed more vulnerable than others…. Access to databases that are used to automatically match and identify reappearing illegal content should be available to all online platforms.”

The Commission’s goals are understandable. Tackling dangerous content online is important. But the Internet is where we keep pictures of our kids, and embarrassing old emails, and health records. It’s where teenagers keep their diaries and activists coordinate protests and fledgling rappers post their rhymes. We don’t want an Internet that subjects all of us to a constant, automated, privatized “content governance cycle.”

Some ways out

The Commission’s preferred future hasn’t come to pass – yet. Here are a few ideas about how to avoid it.

  • First, and best, EU citizens should tell politicians they don’t want this. A lot of the important activism on this may come at the Member State level – from groups like Bits of Freedom in the Netherlands, La Quadrature du Net in France, Open Rights Group and OpenMedia in the UK, or the Helsinki Foundation in Poland. EDRi is an important organizer at the EU level and is closely tracking this issue. If you are European and don’t like the Commission’s proposal, take a look at these groups’ work and see what you can do.
  • Second, the Communication is full of caveats saying enforcement must be consistent with data protection law. It’s really hard to figure out what that means as a legal matter -- the data protection analysis would be an unholy mix of free expression exemptions and derogations, government exemptions, unresolved questions about hosts’ role as Controllers of data in user posts, and more. I like the idea that this could bollox up the whole proposal, though. It would be a beautiful thing to see data protection law save the open Internet.
  • Third, governments actually can’t just outsource judicial and policing functions to private companies and use that as an excuse to ignore their own constitutional and human rights obligations to citizens. States still need to protect free expression and information rights, privacy, and rights to fair judicial process -- even when they make private platforms do the dirty work. If the Commission’s plan really comes to pass, there’ll be plenty to litigate.
  • Finally, going back to the sci fi… Suppose this thing really becomes law, and the worst case scenario comes to pass. Suppose all the little platforms go out of business, because they can’t afford to comply; and the huge platforms become entrenched quasi-state entities. That would be the end of the Internet as we know it. But innovators and tinkerers and subversives are still out there. They know what the Internet can be, they remember how they built it last time, and they have some new ideas. Some of them have been working on evading government speech controls for years. They will surely build something new and interesting and, at the beginning, outside of state control.

As William Gibson wrote in an essay about Orwell’s 1984, “[w]e've missed the train to Oceania, and live today with stranger problems.” In solving those problems, we must be clearsighted about unintended consequences. The consequences of the Commission’s proposal – unintended or not -- are all too apparent. You don’t even have to be a sci fi reader to see them.


Originally posted Oct 12, 2017; Updated Dec. 8, 2017
