Alex Jones was just banned from YouTube, Facebook and iTunes. Here’s how he managed to survive until now

Publication Type: 
Other Writing
Publication Date: 
August 6, 2018

In the last 24 hours, Apple, Facebook and YouTube have banned content from conspiracy theorist broadcaster Alex Jones. Last week, I interviewed Tarleton Gillespie, whose new book, “Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media,” discusses how big Internet platforms have tried to deal with Jones and other controversial sources of content. Here’s what he had to say.

Henry Farrell: Facebook has just banned Alex Jones [notorious for pushing conspiracy theories about the Sandy Hook Elementary School shooting] from using his personal Facebook account for 30 days, while Twitter has been criticized by President Trump for purportedly “shadow-banning” conservatives. Your new book argues that decisions over how to moderate content are crucial to how social media platform companies like Facebook and Twitter work. Why is moderation so important, and why has it been overlooked?

Tarleton Gillespie: Facebook, Twitter and the other social media platforms began with a clear promise to their users: Post what you want, it will circulate; search for what you want, it will be there waiting. It was the fundamental promise of the Web, too, but made easier and more powerful.

And for most social media users in most cases, it can seem true. But underneath that promise, social media platforms have always had rules. They have always removed content and users who violate those rules, and they have always struggled about how to do it — not just what to remove, but how to justify those removals and still seem open, connected and meritocratic.

The truth is, these are machines designed to solicit your participation, monetize it as data and then make choices about it: what gets seen by whom, what goes viral and what disappears, and what doesn’t belong at all. Peel away that apparent openness, and what you’ll find is a massive apparatus for solving the central problem of social media: how to take in everything but deliver only some of it. And that includes making value judgments about the most troubling aspects of public speech: deciding what’s hateful, pornographic, harassing, threatening and fraudulent. As platforms draw those lines, they are quietly asserting the new contours of public speech and priming the political battles that will test them.

Read the full piece at The Washington Post