
Platform Justice: Content Moderation at an Inflection Point

By Danielle Citron

On Thursday, Sept. 6, Twitter permanently banned the right-wing provocateur Alex Jones and his conspiracy-theory website Infowars from its platform. This was something of the final blow to Jones's online presence: Facebook, Apple, and YouTube, among others, blocked Jones from using their services in early August. Cut off from Twitter as well, he is now severely limited in his ability to spread his conspiracy theories to a mainstream audience.

Jones has been misbehaving online for a long time. Following the Sandy Hook mass shooting in 2012, he spread theories that the attack had been fabricated by the government, ginning up harassment against the parents of the murdered children to the extent that one couple has been tormented, threatened, and forced to move seven times. So why was he banned from these platforms only now?

In the wake of Russian interference in the 2016 election campaign, technology companies are facing unprecedented scrutiny from the media and from government. Companies like Facebook and Twitter, which previously took a largely hands-off approach to content moderation, have shifted, though reluctantly, toward greater involvement in policing the content that appears on their platforms. Even three years ago, it would have been unthinkable that Jones could be blocked from almost every major platform across the internet. But by the late summer and early fall of 2018, the bulk of public and media outrage over Jones's banning was not that technology companies were silencing his voice and limiting speech; it was that Jones had not been banned earlier.

Read the full post at Lawfare.