Empirical Evidence of “Over-Removal” by Internet Companies under Intermediary Liability Laws

By Daphne Keller

NOTE: I periodically update this post.  Last update was May 8, 2020.

The “Over-Removal” Issue

Most intermediaries offer legal “Notice and Takedown” systems – tools for people to alert the company if user-generated content violates the law, and for the company to remove that content if necessary.  Twitter does this for tweets, Facebook for posts, YouTube for videos, Google for search results, local news sites for user comments, etc.  National laws vary in what content must be removed, but some version of Notice and Takedown exists in every major market.  Companies receive a remarkable mix of requests – from those identifying serious and urgent problems, to those attempting to game the Notice and Takedown system to silence speech they disagree with, to those asserting wildly imaginative claims under nonexistent laws.

What do companies do with these removal requests?  Many of the larger companies make a real effort to identify bad-faith or erroneous requests, in order to avoid removing legal user content.  (I worked on removals issues for Google for years, and can attest to the level of effort there.)  But mistakes are inevitable given the sheer volume of requests – and the fact that tech companies simply don’t know the context and underlying facts behind most real-world disputes that surface as removal requests.

And of course, the easiest, cheapest, and least risky path for any technical intermediary is simply to process a removal request without questioning its validity.  A company that takes an “if in doubt, take it down” approach to requests may simply be a rational economic actor.  Small companies without the budget to hire lawyers, or those operating in legal systems with unclear protections, may be particularly likely to take this route.

Much of the publicly available information about over-removal by intermediaries is anecdotal.  But empirical evidence of over-removal – through error or otherwise – keeps trickling in from academic studies.  This data is important to help policy-makers understand which intermediary liability rules work best to protect the free expression rights of Internet users, as well as the rights of people with valid claims to removal.  This post lists the studies I have seen.

These studies were mostly conducted by academics or advocates with a particular interest in protecting user free expression and ensuring that legal content remains available online.  One day I hope we will see more data from the other side – advocates for rightsholders, defamation plaintiffs, or other groups harmed by online content that violates their legal rights.  That could help build a more complete picture of the over-removal issue as well as any related under-removal problem – intermediaries failing to remove content when notified, even though applicable law requires removal.

The Studies

More studies and data sources surely exist, or will exist.  I have heard in particular of one from Pakistan, but have not found it so far.  If you know of other sources, please let me know or post them in the Comments section so this page can become a more useful resource for people seeking this information.