Q&A: How Do You Define ‘Privacy Harm’?

Ryan Calo, a senior research fellow at the Center for Internet & Society, is interviewed by Jennifer Valentino-DeVries of the Wall Street Journal's Digits blog about "privacy harm" and topics covered in his forthcoming paper, "The Boundaries of Privacy Harm":

In debates about online privacy, one question always seems to crop up: What’s the harm? How can harm come from a breach of privacy if there’s no fraud and the information isn’t used for, say, identity theft? When the only thing that seems to be wrong is a feeling of “creepiness,” what should that be called?

Ryan Calo, senior research fellow at the Center for Internet and Society at Stanford Law School, has been trying to answer that question. This summer, he released a draft of a paper titled "The Boundaries of Privacy Harm" that is set to be published in the Indiana Law Journal next year.

Calo spoke with Digits about privacy harm and how it applies in the digital world. His condensed comments are below.

Why do we need to define privacy harm?

If you look at regulations of abortion or sodomy or contraception, the Supreme Court looked at these as privacy issues. But a lot of people would say you can’t regulate sex between two people of the same gender, not because it happens in private but because it’s an equality issue. … In order to surface these values, we need to draw a line and say that not everything is privacy.

Then there are times when we look at a situation and … we say, “Ah, this looks weird,” but we don’t know what is wrong and, more importantly, don’t know how to deal with it, but it’s privacy. One example we use is the targeting of spam, or the targeting of elderly people with “sucker lists” who then get ads for gold coins. People are making lists on the basis of vulnerability characteristics, or, in the case of spam, combing the Internet for information in the form of an email address.

So what is a privacy harm?

In the paper I use the analogy of assault and battery [to describe two types of harm, called “subjective” and “objective”]. Assault is apprehension about getting hit, and it’s a separate claim in court: you can go after someone for making you feel like they’re about to hit you. Battery is actually getting hit.

Subjective privacy harm can be triggered if you’re creeped out. … Objective privacy harm is when a person’s information is used against that person [as in a denial of a job or a malicious attack].

In your paper you distinguish what you call a “privacy violation” from what you call a “privacy harm.” What’s the difference there? The term “privacy violation” still sounds pretty harmful.

It might be easiest to look at this by considering an analogy. It’s obvious why you’re required to stop at a red light: If you run the light and hit someone, that’s clearly harm. But in the middle of the night, if there is nobody there, you can run it and there’s no harm. But technically you violated the law; it’s a violation.

If people do not know about being watched, there could be a privacy violation but no privacy harm.

There also are plenty of times when there is a harm and no violation, no one to blame. I can think of incidents where somebody suffers privacy harm, but it’s entirely incidental.

In the paper I even go so far as to say that a person who is paranoid, delusional and believes they are constantly being observed (I realize that sounds absurd) still might feel concern and be harmed.

Where would all this go in terms of the actual law?

In writing an academic piece I felt the need to own the logical conclusion of my theory; I wouldn’t say you should legislate a privacy violation when a paranoid schizophrenic’s belief is delusional.

But many privacy claims fail for lack of harm. Harm has often operated as a hurdle because courts have a very difficult time articulating what the harm is. They just are not sure that psychological perceptions constitute a harm.

There was a case where the government engaged in massive surveillance, but the plaintiff failed to articulate just how it affected them. These are real harms, though, and they actually are measurable. People don’t think they are measurable, but there’s a lot of social science out there that indicates they are.

Does privacy harm require that personally identifiable information be used?

Both types of harm in my theory can be triggered without that.

The standard industry defense is that, yes, we do track consumers, but we don’t know it’s them, so who cares? To some extent it makes sense.

But take the example of someone who goes through a messy divorce and suddenly gets ads for singles. They might start to think, “Oh my God, does Facebook know I’m single? Do they think I need to improve? Who has this information?” But then you go to Facebook, and Facebook says, “We didn’t know it was you. The advertiser doesn’t know. We just know someone has gone through a messy breakup.”

But imagine a government system that just combed through everything and sent [messages] every time people mentioned they used marijuana, and you went to them and said, “This is a horrible invasion of my privacy,” and they just said, “We didn’t know it was you.” It doesn’t matter!

On the objective [harm] side, it only matters to the extent that the information provides a key to [acting against] the person.

What about situations we see where the information is public but people feel there is a privacy harm?

One of the biggest problems in privacy is the notion of private and public. According to the law, if you go outside and something happens to you and someone records it, you don’t have any expectation of privacy.

But people have agitated about that, and there are so-called upskirt laws [related to inappropriate photos], so there’s some recognition that there’s nuance there. My hope is that my theory does some work to add more rigorous nuance to the notion of private and public.

So even if you’re in public, one factor is the extent to which the monitoring is unwanted. Another is whether it is invasive or excessive: if there are CCTV cameras everywhere, the monitoring might not be noticed or unwanted, but it is still invasive and excessive.

But also there are many cases where people suffer low-level privacy harm, but we don’t want to throw the law at the other person.

What about if people post something online and then feel harm later?

Perhaps user-generated content is like privacy in public. Since you put the photo or comment out there, the level of “unwantedness” … is low. But there could still be a very invasive use, for instance, if I created a packet of your worst moments across multiple websites … and sent it to potential employers or dates. Maybe we’d still want to call that a privacy harm.

If I upload something I’m perfectly comfortable with at the time but then later on I start to get concerned and attempt to take it down and can’t, then in my theory I begin to experience subjective privacy harm.

My opinion on this is that it’s a shared responsibility between the user and the platform. And for the platform, the considerations are: 1) Does your interface help people anticipate consequences? And 2) do you make it possible to change them?