Chuck Cosson is Director, Legal Affairs, Privacy & Security, at T-Mobile US, based in Bellevue, WA. At T-Mobile, Chuck oversees privacy compliance programs and provides legal guidance on mobile Internet, location services, incident response, and other privacy, security, and business issues. Chuck spent 7 years at Microsoft leading that company’s public policy work on human rights, free expression, and child online safety. He has also worked in Washington, D.C. on telecommunications policy and regulation. His engagement with Stanford focuses on the role of metaphor as a guide for contemporary technology law and policy - a conception of the Internet not as a “place you go” but as a “tool you use.”
In my previous blog on propaganda, I noted that private information, when stolen and published, can prove useful for propaganda efforts. This post develops that idea in more detail, with an emphasis on privacy considerations.
I agree with interpretations of the First Amendment that find important protections for the publication of private information without consent. And I concur that, as a matter of principle, the public interest can justify such publication. But too often the “public interest” defense is a post hoc rationalization rather than a reasoned justification.
Contemporary analyses give insufficient weight to privacy and information security interests. Those interests often outweigh the public interest value of publishing private information without the consent of the owner or data subject, yet that weight frequently goes unrecognized. Possible reasons include media business models that reward clicks and attention, increased partisan polarization (and the utility of such disclosures for propaganda), mistrust of government, and insufficient enforcement of laws on cybercrime.
“Tool Without a Handle: Trustworthy Tools”
“’What is truth?’ said jesting Pilate, who did not stay for an answer.” – Francis Bacon, “Of Truth,” Essays, Civil and Moral (1625).
This blog previously dealt with one flavor of “fake news”: provocative fictions that can prompt panic and violence. In this post, I’ll deal with the related issue of propaganda. My conclusion: propaganda achieves its harmful effects through the meaning readers assign to its content. Responses to propaganda should therefore focus on the process by which readers assign meaning, and on how that process leads to anxiety or anger and then, in turn, to harmful action.
In this blog, I address one type of “fake news” - content that causes tangible harm: provocative fictions that can prompt panic and violence. The “PizzaGate” events are a case in point: fictional accusations that a restaurant was being used for child abuse prompted an assault with a deadly weapon. In this context, President Obama recently referred to the “dust cloud” of false information online. The metaphor is apt: a dust cloud a) obscures; b) interferes with intended functionality; c) appears to come from no single origin; and d) can be harmful to life and property.
As with other problematic uses of Internet tools, technology and innovation are often responsive to public concerns, and self-regulation can best integrate liberty and safety interests in responding to “fake news.” That is not to say government action against certain types of “fake news” is completely out of the question. Such action can also operate through private enforcement: civil suits for defamation.
Further, though, responses to fictional provocations should look for opportunities to reconnect to shared beliefs. Of course “fake news” is false, but what follows from that fact? Too often, human psychology conflates the receipt of corrective facts with a challenge to one’s motivations - or an attack on one’s moral agenda. A “mic drop” moment feels really good, but by definition it ends dialogue. As with effective responses to violent extremism online, effective responses to “fake news” will recognize this and offer corrections accordingly.
This post continues my thoughts on qualities of digital tools that have helped make political and artistic expression more subjective, accessible and fluid. In the previous post, we looked at the searchability of text. In this post I examine the impact of mobility: the ability afforded by digital tools to access vast troves of information, to communicate, to record, and to create from virtually anywhere on the planet.
There are at least three significant capabilities of digital technologies that have been shaped by portability: mobile commerce, access to news and information, and visual communications. Each of these capabilities accelerated significantly with the development of the “smartphone” - in particular the Apple iPhone in 2007 - but each was inherent in mobile technologies from the start. Below, I discuss each capability in turn and identify some of its impacts.
This blog has, to date, primarily focused on the qualities of networked information technologies and regulatory responses to them – in particular qualities that raise issues of privacy and free expression. This installment of “Tool Without a Handle” looks at the qualities that render these tools influential on artistic and political discourse. In this first part, I will look at one particular quality: searchable text.
The searchability of digital content may contribute to confirmation bias and the “filter bubble” phenomenon. A preconceived preference for particular news or information can easily be bolstered with references found via targeted searches. At the same time, a distinct value of search technology is that it has enabled a vast number of entertainment and creative works to find audiences: a library of programming the size of Netflix or YouTube is simply not navigable with the rotary dials that graced the television set of my childhood.