Chuck Cosson is Director, Legal Affairs, Privacy & Security, at T-Mobile US, based in Bellevue, WA. At T-Mobile, Chuck oversees privacy compliance programs and provides legal guidance on mobile Internet, location services, incident response, and other privacy, security, and business issues. Chuck spent 7 years at Microsoft leading that company’s public policy work on human rights, free expression, and child online safety. He has also worked in Washington, D.C. on telecommunications policy and regulation. His engagement with Stanford focuses on the role of metaphor as a guide for contemporary technology law and policy: a conception of the Internet not as a “place you go” but as a “tool you use.”
On 26 June 1997, in Reno v. ACLU, the US Supreme Court decided the fate of the Communications Decency Act (“CDA”), insofar as it criminalized the intentional transmission of "obscene or indecent" messages or information. In doing so, the Court not only found that this provision of the CDA violated the First Amendment, but also applied an approach to Internet cases with clear implications for cases the Court faces today.
Reno established that the Court must recognize the difference between the measured pace of judge-made law and the blistering pace of technology’s evolution, a point the Court still cites today. It also recognized that the capabilities and availability of the tools at issue have an important role to play in the constitutional analysis. As the Court continues to address Internet and technology-related constitutional cases, this attention to the capabilities of Internet tools may well prove the most lasting legacy of Reno.
“Tool Without a Handle: Mutual Transparency in Social Media”
“I wish that for just one time
You could stand inside my shoes
And just for that one moment
I could be you”
Bob Dylan – “Positively 4th Street”
In my previous blog on propaganda, I noted that private information, when stolen and published, can prove useful for propaganda efforts. This post develops that concept in more detail, with an emphasis on privacy considerations.
I agree with interpretations of the First Amendment that find important protections for the publication of private information without consent. And I concur that, as a matter of principle, the public interest can justify such publication. But too often the “public interest” defense is a post hoc rationalization rather than a reasoned justification.
Contemporary analyses give insufficient weight to privacy and information security interests. Such interests often outweigh the public interest value of making private information public without the consent of the owner or data subject, yet that weight frequently goes unrecognized. Possible reasons include media business models that reward clicks and attention, increased partisan polarity (and the utility of such disclosures for propaganda), mistrust of government, and insufficient enforcement of laws on cybercrime.
“Tool Without a Handle: Trustworthy Tools”
“’What is truth?’ said jesting Pilate, and would not stay for an answer.” – Francis Bacon, “Of Truth,” Essays, Civil and Moral (1625)
This blog previously dealt with one flavor of “fake news”: provocative fictions that can prompt panic and violence. In this post, I’ll deal with the related issue of propaganda. My conclusion: propaganda achieves its harmful effects through the meaning readers assign to the content. As such, responses to propaganda should focus on the process by which readers assign meaning, and how that process leads to anxiety or anger and then, in turn, to harmful action.
In this blog, I address one type of “fake news” - content that causes tangible harm; provocative fictions that can prompt panic and violence. The “PizzaGate” events are a case in point: fictional accusations that a restaurant was being used for child abuse prompted a case of assault with a deadly weapon. In this context, President Obama recently referred to the “dust cloud” of false information online. The metaphor is apt, as a dust cloud a) obscures; b) interferes with intended functionality; c) appears to come from no single origin; and d) can be harmful to life and property.
As with other problematic uses of Internet tools, technology and innovation are often responsive to public concerns, and self-regulation can best integrate liberty and safety interests in responding to “fake news” concerns. That’s not to say government action against certain types of “fake news” is completely out of the question. Government action can also take the form of enabling private action, such as civil suits for defamation.
Further, though, responses to fictional provocations should look for opportunities to reconnect to shared beliefs. Of course “fake news” is false, but what to do with that fact? Too often, human psychology will conflate receipt of corrective facts with a challenge to one’s motivations, or even an attack on one’s moral agenda. A “mic drop” moment feels really good, but by definition it ends dialogue. As with effective responses to violent extremism online, effective responses to "fake news" will recognize this and offer corrections accordingly.