Stanford CIS

Making Google the Censor

By Daphne Keller

Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.

The fears and frustrations behind these proposals are understandable. But making private companies curtail user expression in important public forums — which is what platforms like Twitter and Facebook have become — is dangerous. The proposed laws would harm free expression and information access for journalists, political dissidents and ordinary users. Policy makers should be candid about these consequences and not pretend that Silicon Valley has silver-bullet technology that can purge the internet of extremist content without taking down important legal speech with it.

Platforms in Europe currently operate notice-and-takedown systems for content that violates the law. Most also prohibit other legal but unwelcome material, like pornography and bullying, under voluntary community guidelines. Sometimes platforms remove too little. More often, research suggests, they remove too much — silencing contested speech rather than risking liability. Accusers exploit this predictable behavior to target expression they don’t like — as the Ecuadorean government has reportedly done with political criticism, the Church of Scientology with religious disputes and disgraced researchers with scholarship debunking their work. Germany’s proposed law increases incentives to err on the side of removal: Any platform that leaves criminal content up for more than 24 hours after being notified about it risks fines as large as 50 million euros.

European politicians tout the proposed laws as curbs on the power of big American internet companies. But the reality is just the opposite. These laws give private companies a role — deciding what information the public can see and share — previously held by national courts and legislators. That is a meaningful loss of national sovereignty and democratic control.

Read the full piece at The New York Times.