“Tool Without A Handle” – “Justified Regulation”
This blog post picks up (finally) on the topic of regulation – in particular to discuss cases where the issue is universally understood as worthy of regulation, so much so that variation in regulatory approaches is less desirable. One example of tool use that is worthy of sanction is the non-consensual public distribution of private, sexually explicit images.[1]
One need not be an actor or celebrity to suffer a cognizable personal and privacy harm in such circumstances. In contrast to certain debates where privacy harms are less widely agreed upon, in this case the harms are concrete. In the case of images of children, moreover, regulation is justifiable not only for the direct actor but also for intermediaries who, while they have no direct role in creating the harm, are in an economic and technical position to contribute significantly to addressing it, and can do so with few offsetting costs (provided the regulation is carefully designed and targeted). This post articulates the reasons why such regulation is justifiable.
As I described in my last blog post,[2] in many cases it makes sense for governments to withhold regulatory force or to experiment with different types and levels of regulation, with variations driven by the degree of social consensus as to a preferred approach. For example, I believe jurisdictions should experiment with whether websites should provide prior notice/consent options, or simply general disclosures, about the use of cookies. This makes sense given the diversity of opinion about the value of such disclosures, particularly where the cookie use is for ordinary website operation and not long-term tracking.[3]
In other cases, though, there is a very strong consensus about both the decision to regulate and the regulatory approach. The primary concerns then become how to improve interoperability across diverse legal and technical systems so as to strengthen enforcement, which actors should be subject to regulation, what methods users, providers, and distributors should employ to accomplish the regulatory objective, and how to preserve other values such as commercial innovation and free expression.
In the case of child abuse online, for example, US and EU laws regulate both criminal perpetrators and intermediaries; laws oblige Internet service providers (“ISPs”) to report child abuse images they detect.[4] The decision to regulate intermediaries in this case is rooted in a strong social consensus about harms. Harms related to distribution of sexual images of children are well documented in the legislative history of such laws.[5]
There is also value in regulation of intermediaries here, rather than just self-regulation by responsible providers, because society is worse off if some providers are required to report and others are not. Such variation creates weak points in the system that abusers can use to traffic in abuse images.[6] And there is strong social consensus as to the balance of responsibility: online service providers are not required to actively search for abuse images and are not subject to liability for their presence; they are obliged only to report such content if they find it.[7]
Additionally, there is strong social consensus that the regulation is appropriately tailored. Codifying restrictions on the basis of an objective criterion (sexual images of children) enables action by regulated parties without regard to context.[8] Limiting the rule to content that apparently violates existing child abuse laws also tailors the regulation to afford regulated entities both a bright line and a practical level of cost burden.
This suggests some core criteria that should be present whenever any regulation of tool providers is considered: 1) strong social consensus that there are concrete and significant harms to be addressed; 2) strong consensus that obligations should apply equally across all providers; and 3) strong consensus that the regulation is appropriately tailored and enforceable as a technical and practical matter.
In many cases, the third criterion is the most important (and the most complex). Some techniques can reduce the flow of child abuse online but also create other issues, leading to a lack of strong social consensus. For example, ISP filtering against a blacklist can create problems in both policy consistency and Internet functionality, especially if not tailored carefully. Transparency of the list is complicated, because publishing it would advertise where illegal content can be found, and there is also a risk of considerable imprecision in whether a given address actually points to illegal content. This has led to strong opposition to blacklist proposals.[9]
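To illustrate the imprecision concern, here is a minimal sketch of address-based blocking, assuming a hypothetical blacklist entry and invented URLs (nothing below reflects any actual ISP's implementation). Blocking at the level of a host sweeps in every page on that host, not just the offending one.

```python
# Minimal sketch of host-level blacklist filtering (hypothetical data only).
# Blocking by hostname stops the targeted page, but it also blocks every
# other page that happens to share that host.
from urllib.parse import urlparse

BLACKLISTED_HOSTS = {"shared-hosting.example"}  # hypothetical blacklist entry

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blacklist."""
    return urlparse(url).hostname in BLACKLISTED_HOSTS

print(is_blocked("http://shared-hosting.example/illegal-page"))    # True (intended)
print(is_blocked("http://shared-hosting.example/unrelated-blog"))  # True (overblocked)
print(is_blocked("http://other-site.example/anything"))            # False
```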
Some techniques of filtering are more advanced than others in dealing with such externalities. “PhotoDNA” is a technique developed by Microsoft and researchers at Dartmouth that creates a unique signature for a digital image, which can be compared with the signatures of other images to find copies of that image. Signatures are generated for a base set of known criminal images maintained by the National Center for Missing and Exploited Children (“NCMEC”), and candidate images can then be matched against that set.[10] If more points on the network applied PhotoDNA, it would further reduce the ability to exchange abuse images. Because any action taken to block or remove content found with this technique is taken against the precise image itself, PhotoDNA solves many of the overbreadth problems involved in filtering based on web addresses.[11]
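For readers curious how signature-based matching works at a high level, the sketch below uses a simple “average hash” as a stand-in – it is not PhotoDNA, whose algorithm is proprietary, and the function names, threshold, and hash set are illustrative assumptions. The point is that the match is made against a signature of the specific image rather than against the address where it happens to be hosted.

```python
# Illustrative sketch only: PhotoDNA itself is proprietary. This uses a simple
# "average hash" as a stand-in to show the general idea of matching an image's
# signature against a set of signatures of known illegal images.
# Requires Pillow (pip install Pillow).
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit signature

def average_hash(path: str) -> int:
    """Grayscale the image, shrink it to 8x8, and set one bit per pixel
    that is brighter than the mean; small edits leave most bits unchanged."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two signatures differ."""
    return bin(a ^ b).count("1")

def matches_known_set(path: str, known_hashes: set, threshold: int = 5) -> bool:
    """True if the image's signature is within `threshold` bits of any
    signature in the known set (a hypothetical stand-in for a hash list
    such as the one NCMEC maintains)."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

Unlike the host-level blocking sketched above, a near-duplicate of a known image is flagged wherever it appears, and unrelated content sharing the same host or address is untouched.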
Non-consensual distribution of sexual images of adults is a different matter. I believe penalties for those who engage in such actions are warranted under existing law – laws against online harassment, invasions of privacy, and intentional infliction of emotional distress, among others. But this is a case where there is a weaker rationale for regulation of intermediaries. The weaknesses relate primarily to the ability to tailor regulation, not to a lack of social consensus on the seriousness of harms from such behavior.[12] Arguments for regulating online providers in such cases (and there are some) should recognize important differences in principle and practice between laws requiring online service providers to report apparent online child abuse they discover, and a hypothetical law requiring reporting (or takedown) of apparent harassment if discovered (or upon notice).[13]
A first set of issues concerns whether the content intermediaries would be required to report can be identified objectively. Given enough facts, harassment can be identified with workable precision. There are ample common law and statutory precedents for regulating various types of threatening or disorderly conduct, all of which pre-date modern digital networked information technologies. The challenge, as I note below, is whether providers will have all the facts.
Laws against harassing speech can, to be sure, raise free expression concerns.[14] Yet both prosecutors[15] and academics[16] have outlined principled distinctions between regulation of speech and regulation of harassing behavior. And regulation could, like the reporting requirements for child abuse images, be tailored to avoid encroaching on protected expression by limiting itself to apparent violations of other law (e.g., an obligation to report apparent violations of 18 U.S.C. § 875(c), which prohibits online threats).
There remains, though, the question whether regulation of intermediaries could be appropriately tailored. Violations of laws against online threats can come through speech as well as images (indeed, it may be hard to discern whether an image is a cognizable threat), and speech must be interpreted in context, including knowledge of the identities and intentions of speaker and recipient. Because of this, such a rule would not afford regulated entities an equally clear bright line as to what must be reported. Sexual images distributed in jest, fiction, or satire (or for commercial purposes) could all be unintentionally swept in.
Additionally, determining whether a sexual image was posted on a consensual or non-consensual basis would be challenging. Particularly if the image is not of a well-known person, determining whether it invades a given person's privacy would require authenticating the identity of the party depicted (or the party making a claim). Competing privacy interests could be infringed if personal conversations were reported to law enforcement.
Again, in this example, the question is not whether an online provider is obligated to remove the images where an individual depicted takes the trouble to prove identity and to establish, in court, a claim of harassment, but whether online providers should be penalized for failure to remove and/or report images they happen to detect. Such a rule would not afford regulated entities a practical level of cost burden, given the volume of content distributed and the inability to automate the contextual judgments required.
Finally, in the case of illegal images, NCMEC is funded and equipped to manage the intake of such reports, while in the case of harassment there is no central organization that could handle such reports productively.
This is not to say, of course, that online service providers shouldn't undertake their own efforts to enforce terms of use, create safer online communities, and sanction harassing uses of the tools they offer. Many do, and can do so effectively. Market forces provide incentives to do so – those that do not are likely to be less attractive commercially.[17] As I’ve noted earlier, the metaphor of networked information technologies as a utopian/anarchist “frontier” has less explanatory power in a modern context where so much of public and private life is conducted through such technologies.[18]
Regulation of the online distribution of child abuse content, and carefully tailored laws against online harassment, are good examples of regulation of the uses of networked information technologies that makes sense – both as a matter of principle and of practice. Even so, there are important differences between the two that, in turn, lead to different conclusions about whether regulation should extend to intermediaries. Moreover, such laws illuminate core principles about privacy regulation, and in particular about identifying cognizable privacy harms that justify regulation. I’ll pick up that topic in a following blog post.
[1]A timely topic – see, e.g., http://www.theatlantic.com/entertainment/archive/2014/09/leaked-photos-nude-celebrities-abuse/379434/
[2]http://cyberlaw.stanford.edu/blog/2013/12/tool-without-handle-%E2%80%9Cgetting-grip%E2%80%9D
[3]See, e.g., http://diginomica.com/2014/02/10/eu-cookie-law-proving-useful/#.VAIofPldV8E (informal poll indicates 87% of website visitors ignore the cookie disclosure; only 0.73% opt out of cookies).
[4]See, e.g., 18 U.S.C. § 2258A; Directive 2000/31/EC (“E-Commerce Directive”), at para. 46 (June 8, 2000). Some variation as to what gets reported is necessarily introduced by the fact that child abuse content has different legal definitions in different countries – what has strong consensus is the requirement to report content that appears to be criminal in the jurisdiction in which the online service provider operates.
[5]See legislative findings at Pub. L. 109–248, title V, § 501, July 27, 2006, 120 Stat. 623.
[6]This is not to say this regulation eliminates all weak points. Trafficking in child abuse content often occurs through private networks, or peer-to-peer, where individuals share files directly and bypass intermediary detection. See, e.g., http://www.theguardian.com/technology/2013/nov/18/microsoft-google-summit-halt-child-abuse-images. Accordingly, coalitions have formed to address tracking through other points, such as financial payments for abuse content, at both the international and national levels. http://www.icmec.org/en_X1/pdf/FCACPBackgrounder1-13.pdf; http://www.financialcoalition.se.
[7]See supra note 4.
[8]Unlike sexual images of adults, there is no category of “consensual sexual images of children” that can exist as children lack the capacity to meaningfully consent to either the image creation or the distribution. Also, unlike sexual images of adults, harm is created at the moment of image creation, whereas for adults there may be no emotional harm in image creation or in private sharing of such images among trusted partners – the harm inures when images are shared out of context.
[9]See, e.g., http://globalvoicesonline.org/2008/12/11/australia-rallies-to-stop-the-clean-feed/
[10]See http://www.missingkids.com/Exploitation/Industry and https://www.microsoft.com/en-us/news/presskits/photodna/. Because the PhotoDNA technology operates with a base set of known unlawful images, it is not applicable to other cases of illegal content – such as online copyright infringement – for two reasons: 1) there is no known authoritative base set of illegal content maintained by a credible provider and 2) in the case of copyright infringement, whether a given image offends the law or not depends on context and may not be determinable from inherent qualities of the image.
[11]NCMEC also offers industry a URL-based resource for voluntary action against child abuse content. http://www.missingkids.com/Exploitation/Industry. While the URL list is developed carefully by NCMEC, the imprecision involved in URL-based filtering is a good reason to keep use of the URL list voluntary. Industry has also contributed to technology that assists NCMEC in identifying, managing, and analyzing illegal images: http://gcn.com/articles/2014/08/27/image-analysis-exploited-children.aspx (describing "Project Vic").
[12]See http://www.withoutmyconsent.org/quick_link/what%E2%80%99s-real-harm (harms related to non-consensual sharing of sexual images of adults); see also Ellison, L., & Akdeniz, Y., “Cyber-stalking: the Regulation of Harassment on the Internet,” http://www.academia.edu/943428/Cyber-stalking_the_Regulation_of_Harassment_on_the_Internet (review of concerns with legal regulation of harassment).
[13]http://www.slate.com/articles/news_and_politics/jurisprudence/2014/09/celebrity_photo_leak_it_won_t_stop_until_there_are_legal_repercussions_for.html (calling for changes to 47 U.S.C. § 230 to incentivize online providers to remove non-consensual sexual images of adults). Of course, a statute could also penalize online providers for failure to report and/or remove such images, if found, even without changing liability under Section 230.
[14]See http://www2.law.ucla.edu/volokh/harass/substanc.htm for a review of arguments that harassment laws do not constitute an identifiable exception to the First Amendment.
[15]http://www.justice.gov/usao/briefing_room/cc/internet_stalking.html
[16]See, e.g., Danielle Citron, “Free Speech Does Not Protect Cyberharassment,” http://www.nytimes.com/roomfordebate/2014/08/19/the-war-against-online-trolls/free-speech-does-not-protect-cyberharassment; and Hate Crimes in Cyberspace http://www.hup.harvard.edu/catalog.php?isbn=9780674368293
[17]For example, as part of a recent business acquisition, Ask.com hired a full-time safety officer and worked with both leading safety consultants and state regulators to address concerns with cyber harassment, noting the commercial incentives to do so. See, e.g., http://www.forbes.com/sites/larrymagid/2014/08/14/iacs-ask-com-buys-ask-fm-and-hires-a-safety-officer-to-stem-bullying/
[18]https://cyberlaw.stanford.edu/blog/2012/10/tool-without-handle; see also http://www.transformationsjournal.org/journal/issue_11/article_04.shtml (contrasting the utopian rhetoric of early Internet policy discussions and making the case for a concept of free expression rights that, like international standards, integrates free expression with freedom from defamation and harassment).