The “Right to Be Forgotten” and National Laws Under the GDPR

The EU’s new General Data Protection Regulation (GDPR) will come into effect in the spring of 2018, bringing with it a newly codified version of the “Right to Be Forgotten” (RTBF). Depending on how the new law is interpreted, this right could prove broader than the “right to be de-listed” established in 2014’s Google Spain case. It could put even more decisions about the balance between privacy and free expression in the hands of private Internet platforms like Google. National lawmakers have an opportunity to shape the platforms’ processes, and to ensure that both privacy and expression rights get a fairer hearing. This post reviews the issues. Another post, here, lists eight specific GDPR articles that affect RTBF de-listings and erasures.

The GDPR’s “erasure” provision says that data controllers can reject some RTBF claims if necessary to protect expression and information rights. (Art. 17.3) Individual EU Member States are responsible for fleshing out this exception, providing national laws to reconcile the GDPR with free expression and information rights. (Art. 85) They can also adopt legislation about the specific RTBF articles in order to protect both data subjects and “the rights and freedoms of others.” (Art. 23.1(i)) Any such adjustments in national law must be necessary and proportionate, in accordance with fundamental rights defined in the EU Charter and the European Convention on Human Rights.

RTBF laws primarily affect two fundamental rights: the data subject’s right to privacy, and other people’s rights to seek and impart information. National laws define the scope of information rights, and help determine which interest should prevail for any given RTBF request. Those laws can also protect procedural fairness when RTBF claims conflict with the interests of publishers and ordinary Internet users. Without adequate procedural protections, these Internet users will almost certainly find their online expression erased or de-listed in more cases than the GDPR’s drafters or national legislators intended.

National laws addressing this and other aspects of the GDPR are being drafted now. (If you read this by May 10, 2017, you can respond to the UK’s Call for Views on its legislation.) Lawmakers should take this opportunity to protect citizens’ information and expression rights under the private notice-and-takedown process for RTBF requests.

I explore the GDPR’s new RTBF rules in detail in my new article, focusing on the nuts and bolts – the operational steps that private platforms like Google are supposed to carry out when “adjudicating” RTBF requests. Like civil or criminal procedural rules in a court, these matter a lot. Must a claimant make a particular showing of fact, or provide particular information, before a platform is obligated to honor her RTBF request? Is the affected Internet user notified or consulted? Who can appeal the platform’s decision, and under what circumstances? Procedural questions like these can determine the real-world outcome of RTBF requests when platforms, regulators, or courts balance privacy and information rights.

Research and common sense tell us that when platforms face legal trouble for failing to remove user expression, they are likely to remove too much. Claimants consistently ask platforms to remove more information than the law requires: studies say that 38% of copyright removal requests to Google Image Search raise invalid legal claims; Google and Bing both report that over 50% of RTBF requests do as well. But as the studies show, platforms often err on the side of caution, taking down lawful or lawfully processed information. Incentives to play it safe and simply comply with RTBF requests are strong under the GDPR, which permits penalties as high as 4% of annual global turnover or €20 million, whichever is higher. (Art. 83) National law should account for this dynamic, putting procedural checks in place to limit over-removal by private platforms. Civil society recommendations like the Manila Principles offer a menu of options for doing just this. For example, the law can penalize people (or businesses, governments, or religious organizations) if they abuse notice-and-takedown to target other people’s lawful expression.

The GDPR does not provide meaningful procedural barriers to over-removal. In many cases, it appears to strongly tilt the playing field in favor of honoring even dubious RTBF requests – like the ones Google received from priests trying to hide sexual abuse scandals, or from financial professionals who wanted their fraud convictions forgotten.

Better and more balanced GDPR interpretations are possible, but realistic avenues to make those interpretations part of accepted law are rare: individuals whose rights are affected by RTBF removals will not have the opportunity to ask Data Protection Authorities (DPAs) or courts to clarify the law, and the platforms will generally not have the incentive to do so. That makes proactive clarification by lawmakers or regulators important, to reduce platforms’ incentives to simply erase or de-list information upon request.

This post will not try to set forth specific legislative interventions under any country’s national law, but will identify key concerns arising from the GDPR’s new RTBF provisions. Each is explored in more detail in the article.

Will Facebook and Other Social Media Platforms Have to Honor RTBF Requests?

This is a key question, with no clear answer in current law or in the GDPR (though interesting litigation on point is brewing in Northern Ireland). Litigating this issue is a risky choice for platforms, because if a DPA or court decides the platform is a data controller for user-generated content, the platform must take on extensive (and expensive) new legal obligations, in addition to RTBF compliance. For small or risk-averse platforms, simply complying with RTBF requests is far safer and easier.

Applying RTBF to platforms like Facebook, Dailymotion, or Twitter would be a big deal for Internet users’ expression and information rights. RTBF in its current form under Google Spain only covers search engines, and only requires “de-listing” search results – meaning that users will not see certain webpage titles, snippets, and links when they search for a data subject by name. Regulators have said that the RTBF is reconcilable with information and expression rights precisely because information is only de-listed, and not removed from the source page. But if social media or other hosts had to honor RTBF requests, much of the information they erased would not merely be harder to find – it would be truly gone. For ephemeral expression like tweets or Facebook posts, that might mean the author’s only copy is erased. The same could happen to cloud computing users or bloggers like artist Dennis Cooper, who lost 14 years of creative output when Google abruptly terminated his Blogger account.

Expanding the list of private platforms that must accept and adjudicate RTBF requests would directly affect users’ expression and information rights. But it is hard to pinpoint quite which GDPR articles speak to this issue. Is it purely a question of who counts as a controller under the GDPR’s definitions (Art. 4)? Might it be, as I have argued in other contexts, a question about the scope of objection and erasure rights (Arts. 17 and 21)? Do national expression and information rights shape a platform’s “responsibilities, powers and capabilities” under the Google Spain ruling (para. 38)? These are difficult questions. The answers will, in a very real way, affect the expression and information rights that Member State legislatures are charged with protecting.

Will Ordinary Internet Users and Publishers Have Redress if Their Expression is Wrongfully Erased or De-Listed?

The Article 29 Working Party has said that search engines generally shouldn’t tell webmasters about de-listings, and the Spanish DPA recently fined Google €150,000 for doing so.  The data protection logic here is understandable. When a data subject tells a controller to stop processing her data, it seems perverse for the controller to instead process it more by communicating with other people about it.

But excluding the publisher or speaker from the platforms’ behind-closed-doors legal decisions puts a very heavy thumb on the scales against her. It effectively means that one private individual (the person asserting a privacy right) can object to a platform’s RTBF decision and seek review, while the other private individual or publisher (asserting an expression right) cannot. Other procedural details of the GDPR tilt the balance further. For example, a platform can reject an RTBF request that is “manifestly unfounded,” but only if the platform itself – which likely has little knowledge about or interest in the information posted by a user – assumes the burden of proof for this decision. (Art. 12.5)

This lopsided approach may be sensible for ordinary data erasure requests, outside the RTBF context. When a data subject asks a bank or online service to cancel her account, the power imbalance between the individual and the data controller may justify giving her some procedural advantages. But RTBF requests add important new rights and interests to the equation: those of other Internet users. Procedural rules should not always favor the data subject over other private individuals.

A similar imbalance occurs in the public process for reviewing platforms’ RTBF decisions. Data subjects can “appeal” the platforms’ decisions to government institutions – DPAs – which are charged with helping them. Internet users whose expression is de-listed or erased generally have no regulators on their side. Even in courts, data subjects have clear standing to enforce their rights, while people whose expression has been erased or de-listed likely do not.

Tilting the scales so strongly in favor of one party would be harmless if private platforms decided every RTBF request correctly. Far too many public discussions about RTBF seem to turn on the idea that they will – that “Google is doing a good job.” I used to be on the inside of Google’s RTBF de-listing process, and I believe they are trying hard. But that’s not the same as always getting it right. And since DPA review only happens when a data subject wants more de-listing, there is no public correction mechanism for cases where Google actually should de-list less. Whatever we think of Google and Bing’s work in this area, we certainly should not expect similar efforts and expenditures from smaller and less wealthy platforms, if RTBF obligations are extended to them. Absent better procedural rules, we should expect over-removal.

Will All the GDPR Provisions About Personal Data Really Apply to Publicly Shared Information?

Internet platforms and data protection law have always been an odd fit.  For example, it is not clear what basis Google as a data controller has for processing “sensitive” data, such as celebrity pregnancy gossip, from indexed websites. (The CJEU will soon hear a case on this general question.) Rules governing data controllers often seem to be designed for databases and other kinds of “back-end” processing, not for the public exchange of information. But under Google Spain and in the GDPR, databases and public expression are governed by the same rules.

Under the GDPR, one odd result comes from provisions requiring controllers to tell data subjects “from which source the personal data [about them] originate” and “any available information as to their source[.]” (Arts. 14.2(f) and 15.1(g)) Applied to Google, this would seem to mean RTBF claimants can learn whatever the company knows about the webmaster whose page is targeted by the RTBF request. That could be anything from the webmaster’s communications with the company to the contents of her Gmail account. Similarly, if Twitter were deemed a controller for tweets, it seemingly would have to freely disclose the identity of anonymous speakers. Surely this was not the intention of the GDPR’s drafters. But it is hard to find grounds for other interpretations in the GDPR.

As another example, if the “accuracy of the personal data is contested by the data subject,” then the controller must restrict public access to the data “for a period enabling the controller to verify [its] accuracy.” (Art. 18.1(a)) If Twitter were a controller, for example, it might have to delete tweets on this basis – unless the platform itself could somehow prove that users’ tweets are truthful. This would seemingly displace existing defamation law, along with the notice-and-takedown rules under laws like the eCommerce Directive or the UK’s carefully calibrated 2013 Defamation Act. Then again… maybe that’s not what the GDPR means. The “restriction” requirement has an unclear exception “for the protection of the rights of another natural or legal person,” which might excuse compliance when expression or information rights are on the line.  That’s the kind of thing that lawmakers can make clear – so users are not dependent on the platforms to adopt an interpretation that protects their rights.

Conclusion

The examples discussed here are just a starting point. My longer article lays out more, and suggests specific legal fixes that lawmakers could adopt. For example, they could commit up front not to assess fines against platforms that, in good faith, reject RTBF requests. This could considerably ease pressures on small platforms to comply with improper requests. Options like this should be on the table as lawmakers weigh their powers and responsibilities to protect information and expression rights under the GDPR. 
