The European Commission, for One, Welcomes Our New Robot Overlords

By Daphne Keller on October 12, 2017

This is my third and most intemperate blog post about the European Commission’s recent Communication on platforms and illegal content. The first two point out serious factual problems with the Commission’s claims. I hope to write one more post making lawyerly points about why the Communication conflicts with EU and human rights law. This post, though, I write as a pessimist and consumer of dystopian novels. The Commission is asking for an Internet police state on par with Minority Report or 1984.

The Commission says companies that host online expression, from Facebook to your local news forum, should “proactively detect, identify, and remove” anything illegal that passes across their servers. And they should use automated filters to do it, with or without human review. That means scanning every word we say, algorithmically identifying whether any of it is illegal, and erasing it – and then reporting it to the police. Alternatively, the police (not courts – police) can tell the companies when a post, image, or video is illegal. Then the companies are supposed to use algorithms to make sure no one sees it or says those things again. Lest anything escape the dragnet, platforms should share databases of illegal content, so they can all identify the same speech and enforce the same rules. The inevitable resulting errors and deletion of lawful and important speech are to be corrected, per the Communication, by having platform employees review grey-area removal decisions and by allowing users whose expression has disappeared to challenge the platform’s decision through a “counternotice” process.

One problem with this vision is simply that filters fail in predictable ways. They set out to block ISIS, for example, and wind up silencing Syrian human rights organizations instead. Review by platform employees also has real problems. That’s the system we have for most notice and takedown now, and companies routinely err on the side of caution, removing lawful speech. Counternotice, while very important, corrects only a fraction of improper removals.

But perhaps the bigger problem is that perfect, universal enforcement of rules governing our public speech and our private communications is a terrifying concept. The Commission’s proposal is Orwellian, but with better technological control. Its merger of state and private power is like something from David Foster Wallace or Neal Stephenson, but considerably darker. Few of us would really want this kind of supervision and control even from the most benign and trustworthy governments. But none of us live under those kinds of governments anyway. And in any case, the Commission’s proposal is that private, mostly American-owned companies should do it.

Here are some choice passages:

The Commission’s goals are understandable. Tackling dangerous content online is important. But the Internet is where we keep pictures of our kids, and embarrassing old emails, and health records. It’s where teenagers keep their diaries and activists coordinate protests and fledgling rappers post their rhymes. We don’t want an Internet that subjects all of us to a constant, automated, privatized “content governance cycle.”

Some ways out

The Commission’s preferred future hasn’t come to pass – yet. Here are a few ideas about how to avoid it.

As William Gibson wrote in an essay about Orwell’s 1984, “[w]e've missed the train to Oceania, and live today with stranger problems.” In solving those problems, we must be clear-sighted about unintended consequences. The consequences of the Commission’s proposal – unintended or not – are all too apparent. You don’t even have to be a sci-fi reader to see them.

---

Originally posted Oct 12, 2017; Updated Dec. 8, 2017