“Tool Without A Handle”: Privacy and Regulation – An Expanded Rationale

In the previous blog, I noted that the logical next step in understanding “privacy as fairness” was to examine whether it is possible to identify principles that would effectively guide policy and regulation aimed at such fairness.  As discussed, there is often no broad consensus as to what “fairness” entails in a given situation. Common principles of “privacy as fairness” (such as an expectation that data collectors should offer notice and choice) are, in effect, simply applications of a more basic injunction (“do what is fair”), rather than principles in themselves.

For example, more accurate privacy guidance could be stated as “offer notice and choice, except where fairness does not require it, such as when consent can fairly be implied or where compliance with a warrant affords no choice.”  My guess is that exploring regulation of “privacy as fairness,” then, will necessarily involve more of a balancing of interests than an application of universal principles.

At the core of privacy concerns is the state interest in the well-being of citizens

First, though, I offer an observation as to why consensus concerning “privacy as solitude” is more strongly established.  That is because the animating core of privacy law is concern for personal well-being, including emotional and physical health, a concern to which “privacy as solitude” is closely linked. 

The core interest animating the “right to be let alone” is only partially expressed through concepts such as “autonomy” or “personal control.”  We need to ask further: why do we believe “personal control” matters?  Plenty of societies, not all of them totalitarian, have ranked other interests more highly.  I believe a more direct statement of why we care about privacy (both “privacy as solitude” and “privacy as fairness”) is the state interest in the well-being (emotional and physical) of citizens and thus in the nation as a whole.  This state interest is expressed in a variety of ways, from civil sanctions against intentional infliction of emotional distress to promotion of personal welfare and opportunity.[1]

For example, undue infringement of liberty interests protected by the Constitution (e.g., personal, civil, political) is thought of as psychologically harmful.[2]  At the center of Constitutional values we find, unsurprisingly, an interest in individual human flourishing as an end that the state is bound to protect (and to protect primarily by not unnecessarily infringing on it itself).  This insight – that the function of the law is to protect the health of an “inviolate personality” – is at the core of what Warren and Brandeis intended in identifying privacy as a legal concern distinct from protection of property or reputation interests.[3]

Similarly, criminal laws against child abuse images (discussed in an earlier blog)[4] have not been legislated on the basis of either property or defamation.  The fundamental basis for such laws is the legislative interest in protecting children’s emotional, mental and physical health and the well-documented harms to health such abuse creates.  And, as courts have held, this alone is a sufficient basis for criminal sanctions on child abuse image distribution.[5]  More recently, scholars have called for expanding criminal sanctions to nonconsensual publication of adult personal, sexual images, taking clear cognizance of the harms to individual health that likely arise from such publication.[6]

These sanctions are also examples of the foundational legal principle that there is no right without a remedy.  The importance of access to remedy for privacy intrusions is illustrated well in the international human rights context.  In the case of K.U. v. Finland, the European Court of Human Rights held unanimously there had been a violation of Article 8 of the European Convention on Human Rights (right to respect for private and family life) following a non-consensual posting of a sexual ad about a child on an Internet dating site. 

In that case, what caused the failure of Finnish law to adequately protect private and family life was, perhaps ironically, a privacy regulation.  In particular, Finnish law lacked sufficient authority to compel an Internet service provider to disclose the identity of the party posting the advertisement.  Because neither the aggrieved party nor the state could compel disclosure of identifying data concerning the party who posted the ad, the child and his family were without effective remedy at law.  In that decision, the right to “respect for private life” was interpreted as rooted in concern for the potential threat to the boy’s physical and mental welfare, rather than concepts of property, reputation or “unfairness.”[7]

So, foundationally, privacy law is an expression of state power aimed at protecting personal health and well-being.  From this foundation, we can then examine the case for official legal protection of “privacy as fairness” interests, including collection and use of personal information involved in commerce, and in particular, privacy laws aimed at balancing power among economic actors, and prohibiting misuse of personal data for harmful discrimination. 

Some “privacy as fairness” regulations have broad support

In my last post[8] I listed four examples of “privacy as fairness” regulation that have broad support, including a right to have credit or employment decisions based on accurate data, prohibitions on secret sharing of data with third parties, and sanctions for inadequate data security.  Here, I explain how (and to what extent) such regulation is justified because the regulation can be shown to relate directly to protection of personal well-being.

In the case of credit and employment, for example, both are fundamental to economic opportunity and, in turn, the state recognizes that economic opportunity is fundamental to human flourishing.  Government has both enacted laws and located protections in the Constitution for equal access to such economic opportunities.  In turn, we can find examples of regulations that enjoy a broad consensus of support, are animated by promotion of fair economic opportunity, and which require fairness in the use of personal data. 

The use of credit reports serves an important public purpose: distributing data about overextended borrowers to lenders helps prevent imprudent lending and defaults, and thus increases in the costs of borrowing that impact society generally.  But regulation governing credit reports was adopted in part as a reaction to abuses that involved not only false entry of negative information but also compilation and use of data with a questionable connection to creditworthiness (e.g., sexual orientation).[9]  This, in turn, was identified as harmful to individual well-being and opportunity.[10]  The Fair Credit Reporting Act (“FCRA”) enjoys relatively broad support as a benchmark against which both consumers and industry can measure what constitutes reasonable steps to achieve “privacy as fairness” in this context.[11]

Other privacy regulations are far more controversial

Contrast the FCRA with the discovery of a “right to be forgotten” in the European Data Protection Directive, and in particular its application in the Costeja decision in the European Court of Justice. The Costeja decision clarifies that the protections of the Directive apply to search engines, including the obligation to remove personal data on the request of the subject.  The policy behind this decision is to allow data subjects to redress uses of inaccurate data (similar to rules aimed at accurate credit reports); it is best seen as a “fairness” concern, rather than a right to solitude.  Among other things, newspapers or other parties that are not “data processors” are not subject to the same obligation, and so the law readily allows publicity with regard to the exact same information.[12]

This decision proved controversial, to say the least.[13]  One reason the decision is controversial is that while the court acknowledges there are balances to be drawn in the case of certain data, it nonetheless officially intervenes in the distribution of truthful information, newsworthy information, and subjective information, none of which are at issue in the case of laws governing credit reports. Furthermore, the ruling of the court obliges a private party to be the arbiter of what constitutes unfairness in order to comply with the law, an approach which commercial parties and advocacy groups both find uncomfortable.[14]

This is regulation far removed from that animated by concern for personal well-being, in that it extends far beyond publication of information which is demonstrably harmful to emotional health.  This regulation extends to personal data which is merely “inadequate, irrelevant or excessive.”  In addition, it elevates the interests of the data subject without considering impacts on the well-being or emotional health of others with an interest in the information. 

For example, if a party committed a crime that was reported in the news, what weight should be given to the emotional reaction of a victim’s family if such news is removed from online search indices?  Such concerns, being personal to the family, might fall outside the “public interest” that the court stated should inform decisions of whether or not to remove content.  The expanse of such regulation, and the balancing tests it therefore requires to implement, is a legitimate reason for many to doubt its wisdom.[15]

I suggest caution in regulating the balance of power among various economic actors

Similarly, caution is warranted when considering official legal intervention aimed at balancing power among economic actors – particularly where imbalances in power are not yet well understood or remain debatable.  Credit reports are an established factor in the opening (or closure) of certain economic opportunities.

In the case of data used for advertising profiles, however, there are important empirical questions as to whether these impact economic opportunity at all: whether personal data is effective at targeting advertising or other marketing techniques, whether marketing techniques actually impact consumer decisions, and whether consumer autonomy is so restricted that any such impacts rise to the level of harms that justify legal intervention in commercial markets.[16]

Where existing laws are broadly accepted, a key factor is that they address scenarios where consumers have little ability to directly affect what a custodian could do with their personal data, or to avoid harmful impacts of data misuse.[17]  While a consumer may also have little control over what sort of advertising profile is created to define him or her, consumers do have a substantial degree of choice as to which advertisements they find attractive and, more importantly, which goods they ultimately purchase.

Concerns about “paternalistic” misuses of personal data through profiling are legitimate concerns, but at the same time one is hard pressed to imagine something more paternalistic than legislating against non-misleading advertising on the basis that consumers may not conduct their shopping wisely, or in alignment with behaviors the state considers healthier or more productive.[18]

As I discussed in the earlier blog, regulation to address economic imbalances would also not fit neatly within the “right to privacy” articulated by Warren and Brandeis.  In those authors’ view, the right to privacy would only govern truly private data, and would not cover acts done in a “public or quasi-public capacity.”  Certainly some (though clearly not all) economic actions of an individual are in a quasi-public capacity.  Additionally, privacy regulation can protect the rights of one at the expense of another, or at the expense of democratic society at large.  For example, economic data on geographies and consumer profiles associated with purchases of tobacco could be used, in aggregated or de-identified form, for important public health purposes, which would not be possible if concerns over imbalances between consumers and merchants led to prohibitions on the collection of such data.[19]

These objections are not intended to categorically rule out any regulation based on “fairness.” Rather, they are intended to suggest that debate about such regulations should be oriented along the lines I’ve suggested: 1) “privacy as fairness” should be linked back to concerns about human well-being; 2) regulations should enjoy a broad consensus as to both the harm to be addressed and the effectiveness of the proposed rule; and 3) “privacy as fairness” should not be confused with the protection of privacy as “solitude.”

It’s unfortunate to hear debates that discuss whether we should “regulate to protect privacy” without clearly defining the rationale for such regulation.  Whether regulation aims to protect personal mental and physical well-being, or simply aims to protect personal reputation or economic interests, is hugely important.  This is particularly so if one believes that the direction of policy and regulation is primarily oriented by interest group politics.[20]

In the next blog, I’ll return to the theme of metaphor to identify how such debates could be oriented more productively through consideration of newer metaphors to understand commercial data flows.  In short, we tend to think of data as “goods” – that is, as physical objects which move in space from consumer to commercial providers, to ad brokers and to governments, and which should be secured against unauthorized access through guarded facilities through metaphorical padlocks and keys.

But data is not necessarily a good like a carton of milk or a pallet of bricks.  These physical world metaphors cannot fully guide us. Among other things, unlike a physical good, the same data can exist in multiple places (with no loss of utility for either possessor).  Particularly with digital data, which moves at the speed of light, it seems more effective to think of data as energy - a wave or a flow rather than a static entity.  In short, the question I’ll explore next:  what can principles of quantum theory tell us about modern privacy?

[1]A scholarly (i.e., fancy Greek) term for this goal is “eudaimonia,” often translated as “human flourishing.”

[2]This characteristic of constitutional jurisprudence has been persistent even while judicial views as to the scope of such liberty rights have changed.  For example, amici in Bowers v. Hardwick, 478 U.S. 186 (1986), argued that Georgia’s anti-sodomy law should be struck down because of its negative impact on the psychological health of individuals.  http://www.apa.org/about/offices/ogc/amicus/bowers.pdf.  In Lawrence v. Texas, overturning Bowers, the Court explicitly referenced the psychological impact of an officially sanctioned “stigma” against homosexual persons that is associated with anti-sodomy laws. Lawrence, 539 U.S. 558, 575 (2003); id., at 578 (“petitioners are entitled to respect for their private lives.”) https://supreme.justia.com/cases/federal/us/539/558/case.html

[3]To be sure, Warren and Brandeis also were concerned about collective, social harms – referring for example to the “blighting influence” of gossip.  It’s notable that, without much analysis or experimentation, one can imagine modern attempts to regulate publication of otherwise private facts on the basis that gossip corrupts society and diverts from nobler endeavors of the mind would be met rather skeptically.  It is only on the basis of harms to individuals, or violation of individual rights, that we find a truly strong modern consensus for privacy regulation.

[5]See New York v. Ferber, 458 U.S. 747 (1982); online at: https://supreme.justia.com/cases/federal/us/458/747/case.html. Such laws have, in some cases, also been justified on grounds of “obscenity” (content so offensive against community standards that it falls outside of First Amendment protections), but at the core of objections to “obscenity” is the idea that such content is personally harmful.  Moreover, analysis of such laws is likely to end up ultimately identifying a state interest in emotional health and well-being.  Laws such as the Child Online Protection Act that failed to meet First Amendment standards were based on the premise that pornographic content was “harmful to minors.”  See, e.g., Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002); see also R. v. Labaye 2005 SCC 80, [2005] 3 S.C.R. 728 (Supreme Court of Canada finds harm is an essential element of obscenity).

[6]See, e.g., Danielle Citron and Mary Anne Franks, “Criminalizing Revenge Porn,” Wake Forest Law Review, Vol. 49 (2014); online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2368946.

[9]For a longer discussion of the concerns animating the Fair Credit Reporting Act (and its relative success today) see Jim Harper, Reputation under Regulation:  The Fair Credit Reporting Act at 40 and Lessons for the Internet Privacy Debate, online at http://www.cato.org/sites/cato.org/files/pubs/pdf/PA690.pdf

[10]As a parallel example, the core holding in the landmark Supreme Court decision on equal access to education is based on the impact of unequal access on the psychological well-being of those individuals assigned to segregated schools.  See Brown v. Board of Education of Topeka, 347 U.S. 483 (1954), referring to the “sense of inferiority” denoted by racially-based segregation in public schools. 

[12]Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, Case C-131/12; http://curia.europa.eu/jcms/upload/docs/application/pdf/2014-05/cp140070en.pdf

[14]See, e.g., “Google is Not a Legitimate Arbiter,” Reporters Sans Frontières, http://en.rsf.org/union-europeenne-google-is-not-a-legitimate-arbiter-26-09-2014,47025.html; Official Google Blog, “Searching for the Right Balance,” http://googleblog.blogspot.com/2014/07/searching-for-right-balance.html

[16]The World Economic Forum’s paper on this topic is chock full of such questions: http://www3.weforum.org/docs/WEF_RethinkingPersonalData_ANewLens_Report_2014.pdf.  

[17]See, e.g., the Privacy Act of 1974 (5 U.S.C. § 552a), the Gramm-Leach-Bliley Act (15 U.S.C. §§ 6801-6809), the Fair Credit Reporting Act (15 U.S.C. § 1681 et seq.), and the Children's Online Privacy Protection Act (15 U.S.C. §§ 6501-6506).

[18] Questions about “paternalistic” uses of personal data – and these are good questions – are humorously treated here: https://medium.com/message/dada-data-and-the-internet-of-paternalistic-things-7bb4321d35c4.  But one wonders: in the absence of equally paternalistic health regulations forcing the purchase of such products, what consumer who objects to a refrigerator that can lock out the beer cabinet would ever buy one?

[19]See also Evgeny Morozov, “The Real Privacy Problem,” MIT Technology Review http://www.technologyreview.com/featuredstory/520426/the-real-privacy-problem/ (noting the importance of data to citizen debate). 

[20]See, e.g., Lior Strahilevitz, "Toward a Positive Theory of Privacy Law" (Coase-Sandor Institute for Law & Economics Working Paper No. 637, 2013) available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2230151 (examining median voter models and public choice theory to explain the content of U.S. privacy law)
