Tool Without a Handle: Quantum Paradox #3

This installment on the quantum paradox extends the privacy analysis to the adjacent field of free expression and content moderation.  Free expression issues are in many respects related to “privacy as fairness” – at the core of free expression questions is the fair treatment of personal content (or pseudonymous content) that is expressive.  Whether the content is artistic, political, or simply intended to amuse, and whether the author is identified or not, personal expression goes to the heart of a person’s identity.  A person’s artistic expression is in many ways more personal than other personal data, such as a zip code.

Additionally, understanding the quantum paradox in the context of free expression can help shed light on a wide variety of controversial issues.  There has been considerable discussion of the content moderation policies of major online platforms (including search engines, social media, Internet service providers, and operating systems).  Issues include the obligations of platforms to restrict child abuse,[1] obscenity,[2] or adult pornography;[3] the discretion of platforms to filter content deemed offensive in certain countries (e.g., Nazi or racist content,[4] or content restricted under lese-majeste laws[5]); the policies that restrict pseudonymous use;[6] the ability of law to address harassing speech;[7] and the market power of platforms as it impacts civil liberties and political discourse,[8] to name just a few.

Moderating content online (editing or blocking objectionable or illegal material) implicates the quantum paradox of privacy.  To effectively moderate a user, you must know who the user is, and give the user a fixed identity.  That identity can of course be pseudonymous, but even then some degree of privacy, some degree of invisibility, is lost.  Accepting that this loss is necessary if there is to be any effective form of content moderation lets the discussion move past unworkable solutions and better frame the trade-offs involved in different approaches.

Here are two illustrations of the quantum paradox in contexts that implicate free expression.  First, consider how to integrate commercial rights to free expression with “privacy as fairness” concerning the use of online behavioral data – i.e., attempts to manage ad preferences through the use of online cookies.  Ad preferences are a user-side form of content moderation – the user sets a policy that determines what content the service provider may display.[9]  The service provider enables this through technology that captures user decisions.

In more detail: an online cookie (code placed on a user’s browser) can provide signals to websites about ad preferences, i.e., whether a user’s data may be used to tailor the content rendered (including advertisements) to the perceived interests or dislikes of the user associated with the browser.  This implicates the expressive rights of advertisers, and the networks that support them, as well as the privacy interests of online users.  Policy decisions must therefore strike a balance between control by users and the limits such control places on the expressive rights of advertisers.  In general, because these expressive rights involve the use of personal data, it does seem reasonable to allow some form of user control over data use.

Nonetheless, this issue introduces a quantum paradox.  A “do not track” or “do not use my data” signal is, in quantum terms, an act of observation.  Webpages loaded by the browser observe the cookie and therefore determine something important about the browser user.  Put differently, in order to know not to gather or to use information about the user, the technology must gather and use information about the user.   

The user, then, cannot be completely “private” and at the same time have his or her ad choices respected.  Moreover, attempts by the user to create greater privacy by clearing cookies from the browser would render the protections null and void, because doing so deletes the very indicators that signal not to collect or use certain behavioral data.
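
To make the mechanics concrete, here is a minimal sketch of the opt-out paradox, written in TypeScript against Node’s built-in http module.  The cookie name (ad_optout) and the branching logic are illustrative assumptions, not any vendor’s actual implementation:

```ts
import * as http from "http";

// Parse the Cookie request header into name/value pairs.
function parseCookies(header: string | undefined): Map<string, string> {
  const cookies = new Map<string, string>();
  for (const pair of (header ?? "").split(";")) {
    const [name, ...rest] = pair.trim().split("=");
    if (name) cookies.set(name, rest.join("="));
  }
  return cookies;
}

const server = http.createServer((req, res) => {
  // To honor "do not use my data," the server must first READ state from
  // the user's browser -- itself an act of observation.
  const cookies = parseCookies(req.headers.cookie);
  const optedOut = cookies.get("ad_optout") === "1";

  if (req.url === "/optout") {
    // The opt-out signal is itself a cookie: clearing cookies deletes it,
    // and the user silently reverts to the default (tailored) treatment.
    res.setHeader("Set-Cookie", "ad_optout=1; Path=/; Max-Age=31536000");
    res.end("opt-out recorded");
    return;
  }

  // Absence of the cookie is ambiguous: "never opted out" and
  // "opted out, then cleared cookies" look identical to the server.
  res.end(optedOut ? "generic ads" : "tailored ads");
});

server.listen(8080);
```

Note the design bind: the opt-out is recorded in the same fragile store (the cookie jar) that the user clears in pursuit of privacy, and an absent cookie is indistinguishable from one that was never set.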

The paradox remains even with other forms of unique identifiers that indicate advertising preferences.  For example, in iOS version 6.1 Apple replaced its previous unique ID with an “Advertising Identifier” capable of being reset by the user.[10]  Android took a similar step in 2013, starting with the “KitKat” version.[11]  Users concerned about having their personal usage habits tracked by apps and advertisers can reset the Advertising ID so that, to an app or ad network, they will appear as a new user, with no record of prior activity available (since such records were associated with the previous identifier).  Even with these innovations, it is still necessary to know something about the user in order not to gather and use other information.[12]  In fact, these developments are best seen as recognition of the paradox, and an attempt to manage around it thoughtfully.[13]
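
The logic of a resettable identifier can be sketched in a few lines.  The interface below is purely hypothetical (it mirrors neither Apple’s nor Google’s actual APIs), but it illustrates why some identifier must still exist, and be readable, for the preference to be honored:

```ts
import { randomUUID } from "crypto";

// Hypothetical device-side state; the names below are illustrative and do
// not mirror Apple's or Google's actual APIs.
interface AdSettings {
  advertisingId: string;    // resettable, non-persistent identifier
  limitAdTracking: boolean; // user preference honored by ad networks
}

let settings: AdSettings = {
  advertisingId: randomUUID(),
  limitAdTracking: false,
};

// Resetting severs the link to prior activity: to an app or ad network,
// the device now appears to belong to a brand-new user.
function resetAdvertisingId(): void {
  settings.advertisingId = randomUUID();
}

// ...yet any app or network must still read SOME identifier, plus the
// user's preference, in order to know what not to collect and use.
// The paradox is managed, not eliminated.
function getAdInfo(): Readonly<AdSettings> {
  return { ...settings };
}
```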

In the context of user content moderation by platforms, the same paradox applies: to effectively moderate a user (e.g., a user posting harassing comments or other unwelcome content), you must know who the user is, and give the user a fixed identity so blocked users can’t instantly re-appear (or continue to post restricted content under other names).  To illustrate further, imagine a person who feels their privacy is harmed by the non-consensual posting of their private photos by a platform user, and who asks the platform to take action against the poster (and to protect the victim’s account from such content in the future).  The platform must assign a fixed identity to both the poster and the victim: to prevent the poster from re-appearing on the platform and committing the same wrong, and to identify the victim and honor any “block” signals he or she may set regarding the poster.
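
In software terms, enforcing bans and blocks means keying moderation state to a stable internal account ID rather than to a changeable display name.  A hypothetical sketch in TypeScript (all names are illustrative):

```ts
// Each account carries a stable internal ID, even if its public
// display name is a pseudonym.
interface Account {
  id: string;          // fixed identity, enforced by the platform
  displayName: string; // may be pseudonymous and user-changeable
}

class ModerationService {
  private banned = new Set<string>();              // banned account IDs
  private blocks = new Map<string, Set<string>>(); // victim ID -> blocked poster IDs

  ban(poster: Account): void {
    // Keyed to the stable ID: a banned poster cannot simply return
    // under a new display name.
    this.banned.add(poster.id);
  }

  block(victim: Account, poster: Account): void {
    // Honoring a "block" signal requires fixing BOTH identities.
    if (!this.blocks.has(victim.id)) this.blocks.set(victim.id, new Set());
    this.blocks.get(victim.id)!.add(poster.id);
  }

  canSee(viewer: Account, author: Account): boolean {
    if (this.banned.has(author.id)) return false;
    return !(this.blocks.get(viewer.id)?.has(author.id) ?? false);
  }
}
```

The display name can change freely (pseudonymity survives), but the id field must not, or the ban and block records become unenforceable.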

This is not to say users must post content associated with their birth name – pseudonymity to the public is possible, provided that a unique identity is assigned and enforced by the software that powers the service.  There is an argument that online platforms should be open to pseudonymous use (for example, to allow users to publicize matters of public interest without undue fear of retaliation).  There is, of course, also an argument that better behavior will follow if users may engage with the platform only under the name by which they are socially recognized (most often a person’s birth name).

Whichever argument one prefers, securing the rights of individuals and the quality of the online community requires some form of identity.  Identity enables protection for victims and accountability for online users who post certain disfavored types of content, such as:

·       private material (such as sexually explicit photos) without the consent of the subject;[14]

·       false personal information (e.g., malicious libel);

·       false public information (e.g., rumors intended to manipulate stock prices);

·       propaganda directly intended to incite imminent violence;[15]

·       intellectual property used in ways that exceed the bounds of “fair use” and harm the rights holder.

In an earlier blog, I noted that privacy law derives from a state interest in protecting personal well-being, an interest that requires identity online, and the accountability it enables, if protections are to be sufficient.[16]  This is particularly so if content moderation is needed to effectuate a remedy granted under law by a court.  As with other aspects of privacy, this requires recognizing that the paradox is unavoidable and managing around it, so that discussions can be rooted in more productive possibilities.



[1] See, e.g., 18 U.S.C. § 2258A (obligation of electronic communication service providers to report child abuse content if it is discovered on their systems).

[2] See Nitke v. Gonzales, 413 F. Supp. 2d 262 (S.D.N.Y. 2005) (upholding the constitutionality of restrictions on online obscenity).

[3] Reno v. American Civil Liberties Union, 521 U.S. 844 (1997) (holding restrictions on online indecency unconstitutional); see also ACLU v. Gonzales, No. 98-5591 (E.D. Pa. 2007) (final adjudication upholding injunction against enforcement of the Child Online Protection Act, 47 U.S.C. § 231 (“COPA”), on grounds the law is unconstitutional), online at http://www.paed.uscourts.gov/documents/opinions/07D0346P.pdf

[9] Various forms of parental controls and pornography filters/blocking software are similarly user-side content moderation.  Where these operate at the election of the individual user (or of a parent, where the user is a minor child), they do not present the same concerns as content moderation by platforms, which hold a different degree of power.  Nonetheless, they present the same paradox – to gain control, the user must give up some degree of anonymity; there is no other way.

[11] For technical details as to how the Android Advertising ID can be used by app developers, see https://support.google.com/adxbuyer/answer/3221407?hl=en

[12] A similar point has been made with respect to legislation that would cabin surveillance in a way that promotes respect for foreign laws – the privacy trade-off seems to be that service providers must know and track the citizenship of their users.  See http://thehill.com/policy/technology/232649-web-giants-warn-email-privacy-bill-would-undermine-protections

[13] Similarly, “best practices” in this area must take account of the paradox.  For example, the California AG’s recommendations on mobile app privacy do not (for they cannot) recommend cessation of all unique identifiers; instead, they recommend use of app-specific or other non-persistent device identifiers rather than a persistent, globally unique identifier.  See http://oag.ca.gov/sites/all/files/agweb/pdfs/privacy/privacy_on_the_go.pdf, p. 9.

[14] See https://cyberlaw.stanford.edu/blog/2012/12/tool-without-handle-%E2%80%9Ckittens-cities-and-creepshots%E2%80%9D (arguing against allowing publication of “creepshots”: public photos, and associated comments, of women taken without the consent of the subject).

[15] Even under the strong free speech protections of the U.S. First Amendment, as expanded in more recent Supreme Court cases, legal sanctions remain available for speech that constitutes incitement to “imminent lawless action.”  Brandenburg v. Ohio, 395 U.S. 444 (1969).

 
