Cross-posted from Wired.
Co-authored with Evan Selinger.
As soon as news spread that the European Court of Justice now requires search engines like Google to allow people to “be ‘forgotten’ after a certain time by erasing links to web pages,” critics in the U.S. worried that the decision would break the internet.
While there’s good reason to disagree with the European ruling, we should avoid being too self-congratulatory. As the recent Snapchat debacle illustrates, the language that’s driving key privacy discussions worldwide is fostering false expectations and diverting attention away from what should be the focal point: the proper way to enhance or preserve obscurity.
We get it. “The Right to Be Forgotten” sounds catchy. And, yes, the language of “erasure” laws and “disappearing” messages is captivating. Unfortunately, these popular words are fatally inaccurate in the privacy context. As a result, critics risk tackling irrelevant arguments about unattainable perfection while advocates and consumers are invited to place their hopes in a technology that is doomed never to be fully successful.
Impassioned defenses of the “right to be forgotten” all too easily conjure up images of tweaking Google to create collective amnesia. It’s as if removing a Google link douses the world in Forget-Me-Now pills or zaps us with Men in Black neuralyzers, preventing us from recalling that the man who started the E.U. lawsuit, Mario Costeja González, once had debt problems.
Having Google delete links to information can’t make anyone forget anything. It can make it harder for people who didn’t know that González was a debtor to discover that fact. And it can make it harder for people who once knew this fact to recall it at a later date. But getting Google to forget isn’t the same thing as getting people to forget.
This isn’t to say widespread use of online technology doesn’t affect our memory. Relying on search engines changes “transactive memory.” Just as you don’t need to strain to memorize birthdays if your spouse can do so easily, you can turn to Google to reliably locate extensive information. Thanks to cognitive offloading, Google Maps lets us get by much of the time without memorizing travel routes.
But the very idea of a right to be forgotten suggests a different view of the relationship between technology and memory. It can mislead us into thinking of links on Google as essentially like phone numbers stored on digital contact lists. True, if we don’t back those lists up, then losing them effectively entails forgetting. But that is a different situation: losing our own stored records is not the same as losing access to new information, such as Google linking to disclosures of other people’s financial troubles that we were never aware of. The differences between recall and discovery, and between invisibility and non-existence, need to be salient in this debate.
Indeed, the predictable coverage of Mario Costeja González’s legal struggle proved that the right to be forgotten is a misnomer that’s doomed to fail. Critics of the court’s decision found it ironic that a spotlight was cast on the objectionable information that the plaintiff wanted to go away. John Oliver pointed this out brilliantly on Sunday night. If a full-fledged right to be forgotten existed, it would be incompatible with the risk of the Streisand effect that everyone faces when pushing for momentous legal reform. Among other things, forgetting would require the E.U. court to extract González’s name from the legal records and prohibit Google from “linking to the search results containing the opinion or any news articles about the opinion.” University of Chicago law professor Eric Posner thus hits the nail on the head when he asserts that the right to be forgotten is “not a right to be purged from the memory of people who know you, but rather to control how information about you appears online.”
The same weaknesses that plague the phrasing of the right to be forgotten also haunt a new breed of promising ephemeral media, such as Snapchat. The Federal Trade Commission recently settled a dispute with Snapchat over allegations that the company falsely represented messages sent through the app as disappearing forever after a user-set timer expires. According to the FTC, these messages actually persisted on mobile devices and could be retrieved in numerous ways with minimal hassle.
Misdescribing that expiration function led some users to believe their messages were fading into oblivion, when they were simply becoming invisible to the recipient while still accumulating on the devices involved in the exchange. Of course, this limitation doesn’t mean that the technology is useless. Quite to the contrary, Snapchat is ideal for sharing information that isn’t overly sensitive yet, in the aggregate, might raise privacy concerns. Critics who lament that the app isn’t foolproof tend to overlook this point and act as if all the personal information we have a privacy interest in is radioactive, like nude selfies.
Nude selfies aren’t the only kind of thing a person might not want easily available for people to see.
A similar mistake is made by critics of “eraser” laws like the one recently enacted in California, which gives minors the right to delete information they previously posted but doesn’t apply to “re-posts” by third parties. Naysayers insist the law can’t meet its goal because it won’t “erase” the “infamous” viral posts many users would come to regret.
Technically speaking, this criticism is true. But it overlooks why the law is still effective. In a non-trivial way, it makes it harder to discover information online that minors find problematic. This point about legislation providing some (but not absolute!) protection would have been easier to appreciate had the “Erasure Button” law been given a different title, maybe the “Reducing the Likelihood of Harm” law. Yes, this isn’t a catchy description, but it’s significantly more accurate. As are terms like “invisible in the user interface,” instead of “disappearing messages,” and “the right to make information harder to find,” instead of “the right to be forgotten.”
This debate is not and should not be about forgetting or disappearing in the traditional sense. Instead, let’s recognize that the talk about forgetting and disappearing is really a concern about obscurity and its role in protecting our personal information. So how about we make a pact to resist using language like “the right to be forgotten,” “eraser” laws, and “disappearing” messages, and instead develop messages, laws, and technologies that focus on enhancing obscurity?
As we’ve previously written, “[o]bscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn’t mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.” By accurately identifying the obscurity issues at stake where privacy is being discussed, we can move beyond nirvana fallacies to answer the difficult but critical policy questions.
For example, how easy should it be to find personal information about you? Does it matter what kind of personal information it is? Which obscurity strategies are acceptable and which ones are problematic? Most people don’t have a problem with privacy settings, delete buttons for user profiles, robots.txt files, scraping prohibitions in terms of use agreements, and other commonly deployed technologies that foster obscurity. Yet the European Court of Justice’s recent decision providing for a right to have information removed from certain search results has struck many as going too far. Could obscurity be leveraged to carve out a “right to fail”? Would a compromise be degrading the page rank of certain results? And is that a bad thing, or a valuable protection in itself? As Harvard law professor Jonathan Zittrain points out, “The second page [of search results]…might as well be in Siberia.” Poor search visibility is, in itself, an extremely valuable obscurity protection because of the high transactional cost of finding information that isn’t easily uncovered by a simple search.
The admitted weakness of couching privacy protection in terms of the probability of obscurity is that you’re forced to recognize the inevitability of some information slipping through the cracks of obscurity and going viral. This admission is necessary, however, because it not only allows those who are disclosing information to better tailor their expectations, but it also encourages the public and policy makers to demand an elevated response when mere probabilities won’t do. Course correcting won’t occur overnight. But to begin, the most important thing we need to forget is the word itself.
- Publication Type: Other Writing
- Publication Date: 05/20/2014