Stanford CIS

Counter-Notice Does Not Fix Over-Removal of Online Speech

By Daphne Keller

This is the first of three posts about the Commission's Communication on Tackling Illegal Content Online. Post Two addresses problems with relying on filters to identify unlawful content, and Post Three addresses dystopian aspects of the Communication.

* * *

The European Commission recently released a Communication on Tackling Illegal Content Online. It concludes that platforms have a responsibility to develop filtering technologies, in order to identify illegal content ranging from copyright infringement to hate speech. In some cases, it said, “fully automated deletion or suspension of content” with no human review is appropriate.

This recommendation seems to rest on dangerously flawed assumptions about both available filtering technology and real-world notice and takedown. I wrote about problems with the first assumption here. Algorithms cannot assess what speech is legal (and platform employees don’t do a great job either). Mandatory filters would lead to removal of lawful, important expression.

The Commission’s second assumption is that notice and takedown procedural protections, like counter-notice from users whose content was deleted, can correct for over-removal. But counter-notice does not fix most over-removal problems, even under current laws. It certainly could not offset the avalanche of improper removals that would result from the Commission’s plan.

Annemarie Bridy and I discussed the efficacy of counter-notice in a 2016 filing with the US Copyright Office. Our submission addressed counter-notice under the US Digital Millennium Copyright Act, which includes detailed counter-notice provisions. Relevant parts of our submission are reproduced below. A key takeaway is that while improper or questionable notices are common (one older study found questionable claims in 31% of notices; another found 47%), reported rates of counter-notice were typically below 1%. That’s over 30 legally dubious notices for every one counter-notice.

Another study, which came out after we finished our filing, offers more up-to-date numbers. Researchers found questionable notices to Google web search in 28% of cases, of which 4% simply identified the wrong content. (Google later reported that a stunning 99.95% of web search notices are for pages that were never in web search in the first place.) Researchers also said that, outside of the major platforms, most smaller OSPs reported taking down content “even when they are uncertain about the strength of the underlying claim.” But many of them, including those processing thousands of removals a year, never got counter-notices at all.

Similar data is not available, to the best of my knowledge, for removals based on non-copyright claims like defamation or hate speech. Anecdotally, though, I have heard that counter-notice for such removals in Europe is rare, even among Internet users who appear to have good legal arguments in support of their expression.

The Commission’s proposal, if enacted into law, would represent a drastic shift in Internet information policy. As I’ll discuss in other posts, it would entrench the economic positions of the current major platforms, make life difficult or impossible for smaller ones, and erode the distinctions between big platforms and the state. It would encourage or mandate content removals using filters that are guaranteed to misapply the law. That is not a change to make without an incredibly compelling reason, including a strong factual basis. Real-world research, like that reproduced here, is essential.

From Annemarie Bridy and Daphne Keller’s submission to the U.S. Copyright Office 512 Study, April 1, 2016:

Question 16. How effective is the counter-notification process for addressing false and mistaken assertions of infringement?

We have not seen studies or significant public data on this question, though there will be useful information in the study just published by Urban, et al.[1] Based on our own experience and discussion with other practitioners, we believe that it is rare for users to file counter-notices. Counter-notices certainly appear to be far less common than the improper removals that they are intended to counteract.

A handful of companies track counter-notices in their transparency reports. These companies don’t appear to aggregate the data over time and, in some cases, they track it using non-parallel categories so that comparison is difficult. For example:

These tiny percentages are dwarfed by the portion of dubious DMCA removal requests that researchers have identified. (See studies reported in Appendix B.) Even if the studies are off by an order of magnitude in their estimates, the number of potentially mistaken or malicious notices still vastly exceeds the number of counter-notices.[6]

Importantly, the companies issuing detailed transparency reports may be unusual among small intermediaries in their commitment to protecting users and offering them a chance to counter-notice. It is unclear whether the thousands of other companies that have registered DMCA agents with the Copyright Office assume similar costs and inconveniences to provide a viable counter-notice process.

The ineffectiveness of the DMCA counter-notice process may be attributable to a number of causes:

Collectively, these factors constitute a meaningful deterrent to counter-notice. The point we make here is not that Congress lacked the intent or policy basis for establishing the detailed hurdles for counter-notifiers in section 512(g). The problem is that, because counter-notice has not been an effective corrective for wrongful notices, section 512(g) alone cannot adequately protect Internet users from having their legal speech removed. For that reason, the other procedural protections for users in section 512, such as form-of-notice requirements and declarations of good faith by copyright owners, play a more important role than Congress may have foreseen. Robust interpretations and enforcement of those protections by the courts and the Copyright Office are critical to maintain the DMCA’s carefully structured balance. A more detailed discussion of these other protections is included above in response to Question 12.

In (weak) defense of section 512(g), the transparency and expectation of procedural fairness created by the counter-notice process may be acting as a deterrent for some bad faith removal requests. It is possible, however, that the value of counter-notice is far exceeded by the value of public transparency about particular removals, such as those posted through Lumen or noted by the OSP on the page from which content has been removed. This transparency allows the identification of erroneous DMCA notices to be crowd-sourced across interested individuals online. To our knowledge there are no public datasets that would allow us to test this hypothesis.

Question 17. How efficient or burdensome is the counter-notification process for users and service providers? Is it a workable solution over the long run?

Our response to Question 16 above addresses the burden for users. For OSPs, providing a counter-notice process at any kind of scale is unavoidably somewhat burdensome. That said, requiring such a process is appropriate given the impact of wrongful removals on individual speakers. As stated above in response to Questions 1 and 12, the DMCA’s counter-notice provision is a key to the balance Congress intended to establish through the legislation. It functions as a visible and concrete, even if largely symbolic, acknowledgment of the importance of users’ expressive rights in the digital environment. Its inclusion brings some aspects of due process to the extrajudicial DMCA removal system.

As an operational matter, the counter-notice process requires a commitment of personnel and internal tracking systems—whether ad hoc or specially built by the company. Tracking multiple communications with copyright owners and counter-notice providers can be difficult in practice, since parties may send multiple communications by different channels, without including consistent identifying information. At small scale, this is a manageable problem. At large scale, it requires bespoke internal tools, significant time commitment by employees, or both.

Removing and restoring content also likely requires a time commitment from engineers, whose time may be, in the company’s eyes, more valuably spent developing the company’s products. Lawyers who easily persuade their clients to dedicate engineering resources to DMCA removals may have a harder time explaining why restoration on counter-notice is a high priority, largely because the value of the safe harbors for OSPs lies more or less exclusively in their limitation of liability vis-à-vis copyright owners, not users. Designing and implementing a counter-notice process may be a particular challenge for small companies, since they are unlikely to have built any dedicated tools for DMCA compliance.

As discussed in our response to Question 16 above, restoration on counter-notice alone is not adequate to correct for the pattern of over-removal under the DMCA. That said, it is an important element of online speakers’ procedural protections.


[1] See Jennifer Urban, et al., Notice and Takedown in Everyday Practice (2016), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[2] Twitter, Transparency Report, Copyright Notices (2015), https://transparency.twitter.com/copyright-notices/2015/jul-dec.

[3] Tumblr, Copyright and Trademark Transparency Report (2015), http://static.tumblr.com/zyubucd/0uWntp2iw/iptransparencyreport2015a_upd....

[4] GitHub, Transparency Report (2014), https://github.com/blog/1987-github-s-2014-transparency-report.

[5] Automattic, Intellectual Property (2015), https://transparency.automattic.com/intellectual-property/intellectual-p....

[6] This calculation assumes that the rate of counter-notice for the data sets discussed in Appendix B is similar to the rates reported in the transparency data discussed above. We see no reason to expect otherwise.

[7] See, e.g., Song Fi, Inc. v. Google, Inc., 2015 WL 3624335 (N.D. Cal. June 10, 2015); Lewis v. YouTube LLC, 2015 WL 9480614 (Cal. App. Ct. Dec. 28, 2015); Sikhs for Justice “SFJ,” Inc. v. Facebook, Inc., 2015 WL 7075696 (N.D. Cal. Nov. 13, 2015).