Your Speech, Their Rules: Meet the People Who Guard the Internet

Publication Type: Other Writing
Publication Date: February 27, 2019

When Facebook started 15 years ago, it didn’t set out to adjudicate the speech rights of 2.2 billion people. Twitter never asked to decide which of the 500 million tweets posted each day are jokes and which are hate speech. YouTube’s early mission wasn’t to determine if a video shot on someone’s phone is harmless speculation, dangerous conspiracy theory, or information warfare by a foreign government. Content platforms set out to get rid of expression’s gatekeepers, not become them.

Yet here we are. Controversial content takedowns are regular news. In August 2017, Cloudflare withdrew its DDoS protection service from the Daily Stormer, an American neo-Nazi online publication. A year later, Apple, Facebook, YouTube, and Twitter removed content by conspiracy theorist Alex Jones. Late in 2018, Apple pulled Tumblr from the iOS App Store, reportedly because of child pornography. Tumblr in turn banned all adult content and is now back in the App Store. Like tariffs on companies that get passed on to consumers, restrictions on platforms flow downstream to silence users — writers, trolls, bigots, activists, shitposters, people who like porn, people who like politics.

We want platforms to provide tools that expand expression while protecting us from the harms caused by that newly enabled and amplified expression. We want to protect speech, protect people, and protect society — and we disagree wildly over how to do these all at once. Meanwhile, as policymakers, academics, nonprofits, and private companies work on solving this very hard problem, someone has to wake up, go to the office, sit at their desk, and make these decisions every day. What stays up — and what comes down? What conduct is encouraged, what is tolerated, and what will get you banned (and for how long)? Who decides?

This is where the trust and safety team comes in. Most companies operating an online platform have one. It sometimes goes by other names — “content policy” or “moderation” — and comes in other flavors, like “community operations.” Whatever the name, this is the team that shapes and enforces a platform’s social norms. They write the rules and apply them. They are at once the judges and janitors of the internet.

Read the full piece at Medium