The GDPR’s Notice and Takedown Rules: Bad News for Free Expression, But Not Beyond Repair

Cross-posted to the Internet Policy Review News & Comments and Inforrm blogs.

This is one of a series of posts about the pending EU General Data Protection Regulation (GDPR), and its consequences for intermediaries and user speech online.  In an earlier introduction and FAQ, I discuss the GDPR’s impact on both data protection law and Internet intermediary liability law.  Developments culminating in the GDPR have put these two very different fields on a collision course -- but they lack a common vocabulary and are in many cases animated by different goals.  Laws addressing concerns in either field without consideration for the concerns of the other can do real harm to users’ rights to privacy, freedom of expression, and freedom to access information online.

Disclosure: I previously worked on "Right to Be Forgotten" issues as Associate General Counsel at Google. 

-----

The pending EU General Data Protection Regulation (GDPR) changes the processes companies use to handle legal complaints about content online.  These procedural changes have a very substantive impact, undermining safeguards for free expression and increasing the likelihood that lawful online content will be erased. 

Internet intermediaries such as Facebook or Google look to processes created under intermediary liability law to tell them what to do when someone alleges that content put online by their users is illegal.  These laws -- like national laws implementing the eCommerce Directive in the EU, or the copyright-specific DMCA in the US -- are the basis for existing notice and takedown processes used by Internet companies.  As protections for online expression and information, existing rules and processes are far from perfect.  But they are vastly better than the processes the GDPR will impose.  Most existing rules permit -- or require -- procedural checks and balances to protect rights of both the accused who put the content online, and the accuser who believes that the content violates her legal rights.

The GDPR changes that.  It legislates a broad basis for content erasure under the “Right to Be Forgotten,” and tells intermediaries to follow a novel process to carry out removals.[1]  As this post will detail, the new process is highly problematic for Internet users’ rights of free expression and access to information.  And because the GDPR imposes extremely high fines on intermediaries who fail to erase content when they should have -- draft figures range from up to 0.5% to up to 5% of annual global turnover -- the intermediary has every incentive to assume requests are valid, even when they challenge potentially important and legal content. (Art. 79)

It doesn’t have to be this way.  The GDPR could easily incorporate widely-used notice and takedown principles and procedures, designed to balance rights of both the person posting content online, and the person who wants it taken down.  Importantly, it could do so without weakening the other core privacy rights created by the GDPR.   The GDPR’s notice and takedown problems arise because, in Articles 17 and 19, the Regulation treats two very different situations as if they were the same.  By separating them and handling them differently, the Regulation could improve protections for free expression rights as well as privacy rights.

The first situation addressed in the GDPR's erasure provisions occurs when an Internet user seeks to delete data collected by companies about her online behavior -- data typically held in back-end storage systems such as logs, and used for profiling or similar activity by the company. When a user wants to erase this kind of data, two sets of rights are implicated: those of the requesting data subject, and those of the company. Presumably the requester’s rights will usually prevail. The GDPR’s erasure provisions seem broadly reasonable for this two-party situation.  

The second situation is the one I am concerned with here: when the requester asks the intermediary to erase another person’s online expression.  This request is very different, because it affects at least four parties: the requesting data subject; the intermediary; the person who posted the content online; and other Internet users who want to view the content.  Procedures designed for back-end data deletion and a two-party interaction are not adequate to protect and balance the rights of these four very different parties.  Unsurprisingly, when rules designed for the two-party scenario are applied to this more complex situation, the rightsholders left out in the cold are the ones exercising free expression rights by posting content, or exercising information access rights by seeking and viewing it. 

By effectively leaving the free expression and information-access rights of Internet users out of the equation, the GDPR creates a badly flawed notice and takedown process.  It’s a process that will likely have consequences far beyond data protection law.  The GDPR’s new, low bar for protection of free expression on the Internet will shape future cases before data protection authorities (DPAs) and courts, and will likely also affect legislation in other areas.  If the EU’s current Digital Single Market initiative leads to reconsideration of intermediary liability laws, the GDPR’s new notice and takedown process will be seen as a model to be emulated for copyright, defamation, and other removal claims. 

GDPR drafters should incorporate procedures to protect expression and information access now, to ensure that processes designed to streamline removals for data subjects with genuine claims do not also become tools for targeting and deleting lawful information online. 

A separate post walking through the GDPR’s removal process in detail and parsing Regulation language is here. More analysis of the GDPR’s removal process, including its relation to other EU laws and legal issues, will appear in my next post.

Why Notice and Takedown Rules Matter

A good notice and takedown process does two things. It takes unlawful content down, and it keeps lawful content up.  Procedural checks are critical for the second goal, to limit what legal scholars call “collateral censorship.”  As Yale Law School professor Jack Balkin explains, relying on intermediaries to enforce laws about expression creates a structural problem:

Collateral censorship occurs when the state holds one private party A liable for the speech of another private party B, and A has the power to block, censor, or otherwise control access to B’s speech.  This will lead A to block B’s speech or withdraw infrastructural support from B.  In fact, because A’s own speech is not involved, A has incentives to err on the side of caution and restrict even fully protected speech in order to avoid any chance of liability. (P. 2309)

This is a real issue, not a theoretical one.  Notice and takedown processes are widely misused to target lawful content, even when they do incorporate procedural checks against abuse.  And, as multiple studies confirm, intermediaries often take the path of least resistance by simply acquiescing to removal requests, even when they are improper.  Some companies do put real effort and resources into identifying and rejecting unfounded removal requests.  I am proud to say that I was part of this effort during my own time at Google.  But both anecdotal and statistical evidence tell us that such efforts, alone, are often not enough to avoid removal of legal and valuable content.

It’s important to appreciate the numbers behind this issue.  Intermediaries receive a lot of groundless removal requests.  In the "Right to Be Forgotten" context, Google says that currently 59% of incoming requests fail to state valid legal claims.  Privacy regulators seem to agree: the Article 29 Working Party, reviewing cases brought to DPAs, concluded that “in the great majority of cases the refusal by a search engine to accede to the request is justified.”  Empirical data about copyright removals shows a similar pattern of requests targeting legal content.[2] Scholars reviewing copyright-based removals from Google web search in 2006 found that almost a third of successful requests raised questionable legal claims.  If unsuccessful requests were included, that number would be much larger. 

Notice and takedown laws also exist to protect people who are harmed by online content.  But protecting those people does not require laws to prioritize removal with little concern for the rights of online speakers and publishers.  A good notice and takedown process can help people with legitimate grievances while incorporating procedural checks to avoid disproportionate impact on expression and information rights.  Content that, but for these checks -- importantly including transparency about what has been removed -- would already be gone under existing laws spans religious, political, and scientific material, along with consumer reviews. Crafting the law to better protect this kind of content from improper removal is both important and possible.

What’s Wrong with the GDPR’s Notice and Takedown Process

The GDPR content removal process doesn’t track legal requirements established under the eCommerce Directive.   And it bears almost no relation to the gold standard for notice and takedown, developed by civil society groups and published in the Manila Principles.  Instead, it creates a new, untested system in which it is very easy to get content taken down, and very hard to identify or correct wrongful removals.  In a separate post I walk through the GDPR’s exact requirements in much more detail.  They are very convoluted and hard to piece together,[3] but appear to work as follows.  All citations to draft GDPR provisions refer to versions available in this comparative table.

1. An individual submits a removal request, and perhaps communicates further with the intermediary to clarify what she is asking for.

2. In most cases, prior to assessing the request’s legal validity, the intermediary temporarily suspends or “restricts” the content so it is no longer publicly available.[4]

3. The intermediary reviews the legal claim made by the requester to decide if it is valid.  For difficult questions, the intermediary may be allowed to consult with the user who posted the content.[5]

4. For valid claims, the intermediary proceeds to fully erase the content.  (Or probably, in the case of search engines, de-link it following guidelines of the Costeja “Right to Be Forgotten” ruling.)  For invalid claims, the intermediary is supposed to bring the content out of “restriction” and reinstate it to public view -- though it’s not clear what happens if it doesn’t bother to do so.

5. The intermediary informs the requester of the outcome, and communicates the removal request to any “downstream” recipients who got the same data from the controller.

6. If the intermediary has additional contact details or identifying information about the user who posted the now-removed content, it may have to disclose them to the individual who asked for the removal, subject to possible but unclearly drafted exceptions. (Council draft, Art. 14a)

7. In most cases, the accused publisher receives no notice that her content has been removed, and no opportunity to object.  The GDPR text does not spell out this prohibition, but does nothing to change the legal basis for the Article 29 Working Party’s conclusions on this point. 

The deviation from standard notice and takedown processes here is significant.  The most extreme examples -- temporary restriction before review in step 2, disclosure of the publisher’s information in step 6, and the lack of notice to the publisher in step 7 -- have the greatest adverse effects on online expression and information-access rights, and are discussed in turn below.  

            Temporary, Pre-Review Content Restriction

One of the biggest issues with the GDPR process is the immediate, temporary removal of content from public view, “pending the verification whether” the removal request states a legitimate basis for permanent removal.  The GDPR calls this “restriction.”  Although intermediaries can in theory skip this step in special cases, the GDPR provides no clear guidance on what those cases are, and levies debilitating fines for failure to restrict content when appropriate.  The restriction provisions shift an important default: from a presumption that online expression is permitted until proven otherwise, to a presumption that its challenger is right.  This is dangerous because intermediaries receive many, many groundless requests -- recall the 59% figure for Google. Rather than leave that lawful content up, the GDPR would take it down.

As a matter of fair process even in run-of-the-mill cases, this automatic restriction right is troubling.  An allegation made in secret to a private company should not have such drastic consequences.  In other, less common -- but all too real -- scenarios, it is flatly dangerous.  Instant, temporary removal of Internet content is a tool begging for use by bad actors with short-term issues: the disreputable politician on the eve of election; the embezzler meeting a new client; the convicted abuser looking for a date. Mandating it by law is a big mistake.

Automatic restriction is also inconsistent with the eCommerce Directive.  Neither it nor any other intermediary liability law I’ve seen makes intermediaries remove content they have not even seen or assessed.  The Directive requires an intermediary to remove when it has “knowledge” of unlawful content.  At a bare minimum, that means the intermediary has no removal obligation until it knows what content is at issue and why it violates the law.  Some European courts have held that the “knowledge” standard can set a much higher bar for removal.  For difficult removal questions, these courts say, intermediaries can’t know whether content is illegal, and therefore are not obliged to remove it, until the matter has been adjudicated in court.[6]  Courts in other parts of the world -- including the Supreme Courts of both India and Argentina -- have reached similar conclusions.  Courts arrived at these rulings, and erected high procedural barriers to wrongful content removal, in order to protect fundamental rights of Internet users to seek and impart information.  Where the GDPR’s removal process applies to expression posted by Internet users -- as opposed to data only stored and used by companies -- it should do a much better job of factoring in those same rights.

            Disclosure of Publisher’s Personal Information

A second glaring problem with the GDPR process is its requirement that companies disclose the identity of the person who posted the content, without any specified legal process or protection.[7]  This is completely out of line with existing intermediary liability laws. Some have provisions for disclosing user identity, but not without a prescribed legal process, and not as a tool available to anyone who merely alleges that an online speaker has violated the law.  It’s also out of line with the general pro-privacy goals of the GDPR, and its specific articles governing disclosure of anyone’s personal information -- including that of people who put content on the Internet. 

The GDPR provision requiring disclosure does make confusing reference to other laws on point, which may be intended to incorporate protections against improper disclosure of the publisher’s information.  If so, the GDPR should express that protection much more clearly to avoid wreaking havoc with rights to anonymous speech.  Hopefully revision in this section is politically feasible -- the disclosure requirement is so obliquely phrased, and so bizarre in a pro-privacy regulation, that I can only assume it is unintentional.  If it is not remedied, the provision creates yet another incentive for the GDPR takedown process to be abused, and puts yet another very serious burden on rights to free expression online. (See new footnote[8] for updated analysis as of Nov. 17, 2015.)

            No Standard Notice or Opportunity for the Publisher to Contest Removal

Finally, the GDPR creates considerable procedural unfairness for the user who posted the disputed content, in most cases excluding her from the entire process.  The GDPR does not spell out when an intermediary may tell an online publisher about legal challenges to her material.  But it does nothing to change the conclusion reached by the Article 29 Working Party for Google’s "Right to Be Forgotten" delistings, that Google may not routinely tell webmasters when their content has been delisted.  In special cases, the Working Party conceded, Google may communicate with the webmaster before removing. 

That exception will do very little to deter over-removal in the GDPR context, however.  One of the main purposes of providing broad notice to users whose content has been removed is to let affected parties correct the intermediary’s errors.  Intermediaries already over-remove legal content, because assessing claims is time-consuming and expensive, and standing up for user content exposes companies to legal risk. The GDPR’s vague rules and high fines make incentives to remove all challenged content even stronger.  This problem cannot be cured by asking already risk-averse companies to take on the costly task of identifying difficult cases and re-routing them for special internal processing. Routinized notice puts the opportunity for error-correction in the hands of the person best motivated and equipped to use it: the content’s publisher.  Leaving the determination entirely in the hands of a technology company simply cannot substitute for involving the publisher as a mechanism to reduce improper removals.

From a pure data protection perspective, leaving the accused publisher out of the loop makes a sort of sense: if an individual has the right to make the company stop processing data about her, that should also preclude the company from processing it even further by talking to the publisher about it.  This “when I say stop, I mean stop” reasoning may be sensible in the traditional data protection context, where an individual wants stored, back-end data such as logs or accounts deleted.  But when the free expression rights of another individual are at stake, systematically depriving that individual of any opportunity to defend herself is a serious denial of fairness and proportionality.  In this respect, too, the GDPR departs from intermediary liability laws and model rules, which in many cases provide for notice to the publisher and an opportunity for objection via “counternotice.”  Legal systems without this formal process would still generally consider the publisher’s defenses relevant to the “knowledge” that triggers removal. None systematically preclude knowledge or input by the person whose free expression rights are at stake.

Conclusion

The point of intermediary liability law and notice and takedown processes is to take illegal content down and keep lawful expression up.  It is to protect the rights of all parties involved -- including the person asserting a removal right, the person whose expression is being challenged, and the person seeking information online.  The GDPR process falls far short of this goal and will lead to the wrong outcomes, potentially in the majority of complaints.  It can be greatly improved by incorporating procedural checks in cases where one person’s request for an intermediary to erase content affects a third party’s right to seek or impart information online.

-----

References

The introduction to this series and an FAQ about the GDPR and intermediary liability are here.  A detailed walk-through of the GDPR notice and takedown process is here.  In my next post, I will discuss how the GDPR relates to other laws and how it got to this point, as a matter of political process and legal evolution.


[1] As discussed in the Introduction to this series, there is an argument that hosts like Twitter or YouTube are not covered by “Right to Be Forgotten” removal obligations because they are data processors rather than controllers.  This series assumes that hosts will be covered.  The law on point is open to interpretation, but I doubt that DPAs and courts will ultimately choose to exempt such important Internet actors.

[2] Most data and anecdotal evidence of over-removal comes from the copyright field, because it generates a high volume of removals and has been subject to considerable transparency for many years. The “Right to Be Forgotten” in Europe is poised to become what copyright is in the US: an important claim to enforce legitimate rights, but also the tool of choice for people who don’t have a valid legal claim and want online content suppressed anyway.

[3] In the introductory post to this series, I asked for comments and criticism from others reviewing this material. That request is especially relevant for interpretations of the GDPR’s requirements. If you think I got something wrong, please tell me.

[4] Provisions covering this have moved and been redrafted quite a bit, but are mostly at Articles 4(3a), 17, 17a, and 19a.

[5] Pursuant to the Article 29 Working Party’s guidelines.  The GDPR itself does not specify.

[6] The Davison case in the UK; Royo v Google in Spain (Barcelona appellate court judgment 76/2013 of 13 February 2013, holding that knowledge requires a court order unless the validity of the claim is manifest). 

[7] This provision appears in Council draft Article 14a.  Possibly relevant exceptions are set forth at sections 4(c) and (d) of that Article.  This provision seems clearly designed for a different situation, in which one company gets data from another company -- an insurance company and a data broker, for example.  But since the individual putting content online is probably a controller for that data, it’s her information that would be disclosed.  CJEU case law and Article 29 opinions suggest that in most cases, an individual in this situation could not escape controller status by claiming a household use exemption.  See the Ryneš and Lindqvist cases, and discussion in Bezzi et al., Privacy and Data Management for Life, pp. 70-71.

[8] November 17, 2015 update: Miquel Peguera has pointed out that 14a(1) appears to require the intermediary controller to disclose its own contact information.  He is right, as usual.  Language I should have cited instead appears at 14a(2)(g) and 15(1)(g) of the Council draft. 

14a(2)(g) requires the intermediary to disclose “from which source the personal data originate, unless the data originate from publicly accessible sources.” Language at 14a(2) may limit this disclosure based on “circumstances and context,” but does not clearly address the privacy concerns of the Internet user whose information would be disclosed.

15(1)(g) additionally provides the data subject with the right to obtain information, including “where the personal data are not collected from the data subject, any available information as to their source.”
