Tool Without a Handle: “A Dust Cloud of Nonsense”
On October 30, 1938, “The Mercury Theatre on the Air” broadcast a radio drama adaptation of H.G. Wells’ science fiction tale “The War of the Worlds,” featuring news bulletins narrated by Orson Welles in a realistic style. The broadcast is famously believed to have caused mass panic among listeners who mistook the drama for a real Martian invasion.
While some research suggests that actual panic was modest (the program’s audience was small and, presumably, many listeners doubted that hostile intelligent life actually lived on Mars), the episode has nonetheless come to symbolize the hazards of mass media influence, hazards now under contemporary discussion under the heading of “fake news.”
Before any further consideration is given to the topic of “fake news,” we should parse out different considerations. I consider here fictional items similar to (or even more malevolent than) the “War of the Worlds” broadcast: content intended to arouse, entertain, and/or provoke, published without full consideration of its impact on the gullible or uninformed, and rendered just plausible and realistic enough to be indistinguishable from factual news and commentary.
There are other items that some have identified as “fake news”: poorly sourced and shoddy journalism; outright fraudulent stories submitted as journalism (see Janet Cooke and Jayson Blair, among other examples); and knowingly or negligently false statements by politicians and their supporters. These are also unfortunate, and contribute to the corrosion of shared, objective truth. In a future post, I’ll address propaganda and privacy violations that undermine democracy and the rule of law.
In this blog, though, I’m most concerned with “fake news” that causes tangible harm: provocative fictions that can prompt panic and violence. The “PizzaGate” events are a case in point: fictional accusations that a restaurant was being used for child abuse prompted threats, harassment of its owners, and one case of assault with a deadly weapon.
In this context, President Obama recently referred to the “dust cloud” of false information online. The metaphor is apt: a dust cloud (a) obscures; (b) interferes with intended functionality; (c) appears to come from no single origin; and (d) can be harmful to life and property.
First, I consider whether there is a basis to address such provocative fictions through law or regulation. Generally, there is not – especially with respect to media platforms regulated as “interactive computer services” in the Communications Act. There may, however, be opportunities for government support of industry self-regulation, limited cases for regulation of broadcast licensees, and in some cases private rights of action for defamation.
As with current “fake news” events, the “War of the Worlds” occasion was taken as an opportunity to chide broadcasters about the media’s responsibility to the truth and to the public. That question of broadcasters’ obligations, then as now, rested legally within the jurisdiction of the Federal Communications Commission (“FCC”).
The FCC’s authority in the late 1930s was set against a movement toward greater free speech protections, even for potentially harmful speech. Between the Radio Act of 1912 and the Communications Act of 1934, the federal statutory sanction against false broadcasting was narrowed from a prohibition on “any false or fraudulent signal of any kind” to a prohibition on false distress calls.
Thus the FCC’s response to concerns about the “War of the Worlds” broadcast effectively landed on self-regulation (e.g., voluntary acts by CBS to improve its disclaimers) as the optimal solution. The Commission sought, then as now, to balance promoting the use of broadcasting for the dramatic arts against the public interest in preventing undue alarm.
Over time, the FCC has considered other instances, generally with the same result, although in a few cases reprimands were issued to stations whose broadcasts caused alarm. Broadcast rules effective today do prohibit “hoaxes” – specifically false information about a crime or catastrophe where the licensee knows the information is false, harm is foreseeable, and harm results.
A possible example would be a broadcaster who repeated the PizzaGate child-abuse allegations as true (despite having evidence of the Democratic officials’ innocence), with reason to know that some citizens would take up arms to investigate, where (as did occur) a weapon was then discharged in the restaurant. Enforcement is, of course, at the discretion of the FCC, but broadcasters are subject to this narrow regulation on hoaxes.
The policy issues regarding hoaxes are the same for contemporary media platforms: how can public policy best balance fostering free expression with ameliorating the public anxiety caused by widely published provocative fictions? Numerous discussions of “fake news” touch on these issues, rightly focused on social media platforms. One analysis found that the number of people who get their news from social media is double the number who do so from print newspapers. Another found that more than half of millennials use Facebook as their primary source for news about government and politics.
At the same time, social media platforms are properly not regulated as broadcasters; instead, under Section 230 of the Communications Act, they are not treated as the publishers of the content they distribute. The regulation of radio and TV broadcaster “hoaxes” is ostensibly justified by the scarcity of the airwaves (and the licensing scheme that follows from it). That rationale does not apply to online platforms.
In particular, Congress opted (wisely, in my view) to remove disincentives for Internet tool providers to monitor and restrict certain forms of content, such as harassment or pornography. Section 230 removed disincentives created by a prior court case suggesting that such editorial decisions would subject services to liability as a “publisher” of the content they did otherwise allow. In fact, an early Internet law case applying Section 230 presented facts similar to the PizzaGate scenario: someone pranked Kenneth Zeran by putting his phone number in ads for tasteless products making light of the tragic Oklahoma City bombing, resulting in extensive harassment of and threats against Zeran. The court nonetheless found that, under Section 230, AOL was not subject to liability for the hoax ads posted to its service.
Despite Section 230’s advantages, it’s understandable that some would be unsettled by the absence of a legal remedy for Mr. Zeran (or Mr. Alefantis, owner of the “PizzaGate” restaurant) against social media platforms, and that this would lead to calls for regulation of intermediaries to address “fake news” issues. However, most varieties of government regulation of the intermediaries through whom “fake news” is published are likely to be imprecise, contrary to existing statutory law, and possibly unconstitutional. Moreover, awareness of online harassment and safety has grown considerably since 1995, and most of the major intermediaries have been responsive to public concerns, recognizing that their scale and importance create a responsibility to address the matter.
For example, Facebook recently announced plans to down-rank content flagged as untrue and misleading and to make it easier for users to flag content as a hoax. This approach is similar to that of Wikipedia, which fosters free expression by allowing article edits from nearly anyone while also allowing users to flag articles as disputed (or as failing to meet certain criteria). Platforms have also responded by restricting access to their advertising services for publishers who generate false news content for profit, adopting ad content policies that disfavor misleading “clickbait” headlines, funding fact-checking services, and exploring other options for user feedback. Additional options will undoubtedly be developed; these could benefit from both social science and psychology research. As with other problematic uses of Internet tools, technology and innovation are often responsive to these problems, and self-regulation can integrate liberty and safety interests in responses to “fake news” concerns.
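Platforms’ actual ranking systems are not public, so as a rough illustration only, the “down-rank, don’t remove” approach described above can be sketched as follows. Every name, weight, and formula here is a hypothetical assumption for the sake of illustration, not a description of any real platform’s implementation:

```python
# Hypothetical sketch of flag-based down-ranking. All names, weights, and
# the scoring formula are illustrative assumptions; real platform ranking
# systems are far more complex and are not publicly documented.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    base_score: float  # engagement-derived ranking score
    hoax_flags: int    # number of users who flagged the post as a hoax
    impressions: int   # number of users who have seen the post


def ranked_score(post: Post, penalty_weight: float = 0.9) -> float:
    """Suppress, but never remove, posts in proportion to how often
    viewers flag them as hoaxes."""
    if post.impressions == 0:
        return post.base_score
    flag_rate = post.hoax_flags / post.impressions
    # A high flag rate reduces the score but cannot make it negative,
    # reflecting down-ranking rather than deletion.
    penalty = penalty_weight * min(1.0, flag_rate * 10)
    return post.base_score * max(0.0, 1.0 - penalty)


posts = [
    Post("a", base_score=10.0, hoax_flags=0, impressions=1000),
    Post("b", base_score=12.0, hoax_flags=50, impressions=1000),  # heavily flagged
]
feed = sorted(posts, key=ranked_score, reverse=True)
```

The design point of such a scheme, whatever its details, is that flagged content remains accessible (preserving expression) while losing the amplification that makes provocative fictions harmful at scale.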
That’s not to say government action against “fake news” is completely out of the question. Government action can take the form of authorization for private action, e.g., civil laws against defamation. Private parties who publish content are not granted the same immunity as the platforms through which content is published, and those harmed by defamatory publications can, in some cases, obtain relief through legal action. Even under the appropriately exacting standards of New York Times v. Sullivan, false remarks are actionable where the speaker acted with “reckless disregard” of whether a defamatory statement about a public official was false or not, and there is sufficient evidence to support a finding that the speaker seriously doubted the truth of his publication.
Under both the Sullivan standard and FCC broadcast regulations, foreseeability and intent are key to deciding whether legal sanction is acceptable and/or constitutional. In particular, policy should consider what distinctions can be made where anxiety or even violent reactions are an unintended result of clever presentation, where they are unintended though foreseeable, and where they are intentional.
The Onion is a classic example of the first category: a website that cleverly satirizes both the form and content of news. Much of its material is a humorously exaggerated version of reality, and by and large readers recognize it as satire rather than actual news, though there have been some exceptions. While its parody or satire articles may unintentionally mislead readers into taking them seriously, legal action on the basis of reader over-reaction would be inappropriate; the answer is simply for readers to be more alert to context.
Even where harmful impacts from “fake news” are foreseeable or intended, the First Amendment imposes restrictions on legal action based on false speech. For example, as noted above, broadcast regulations against “hoaxes” are triggered only where actual harm results. This is consistent with recent Supreme Court decisions finding that false speech is still worthy of protection, including under strict scrutiny where rules target the content of speech. Several states have laws against false campaign speech, though in two cases those have been ruled unconstitutional.
Accordingly, while defamation claims may have merit in some cases, the optimal responses are based on self-regulation by the relevant platforms. Coupled with this should be better understanding of the human characteristics that make “fake news” attractive, and which can frustrate productive discourse and civic engagement.
Much political attention has also been given to combating violent extremism online. I’ve addressed these issues in a previous blog. The concerns there are similar: that false, misleading, and/or provocative “news” incites violence, dehumanizes an ethnic, political, or religious group, or amplifies an extremist ideology. These concerns are well placed regardless of whether the proponents are Islamic extremists, white nationalist extremists, a political party, or a national agenda. White supremacists and ISIS may use online propaganda in different ways, but the issues are common to both.
In combating violent extremism online, strategies include personal narratives, feedback tools, and similar methods that put false and/or provocative content in context. Approaches like these, which seek to correct understanding without challenging the egoic defenses of those who post “fake news,” are similarly likely to be more successful.
Provocative fictions can lead to panic in part because those seeking to prove them wrong can, in their fervor, drive believers into further defenses, leading to cyclonic emotional conditions. On one hand, the well-grounded can understandably get anxious and frustrated when others believe in provocative fictions (such as that President Obama is a Muslim who founded ISIS), while on the other, “fact-checking” can make those who believe such fictions simply more inclined to stand their ground.
It’s a mistake to assume that having correct facts is enough to orient misguided people. Being “against” the Weathermen only fueled their conspiracy theories, and being “against” the Branch Davidians led to deaths. Too often, human psychology conflates the receipt of corrective facts with an attack on one’s moral agenda. A “mic drop” moment feels really good, and it makes for great entertainment (which is what much “real news” has become), but by definition it ends dialogue and admits no further understanding.
It is preferable, I think, for all of us to seek to speak differently, particularly through Internet tools. One aspect of communicating differently online involves addressing incorrect premises differently than incorrect goals, and not conflating errors of fact with ethical faults in motivation. Commentary and counter-speech against false provocative content is likely to be more effective where it recognizes these distinctions.
In the PizzaGate context, for example, the individual who brought a rifle to the restaurant was clearly incorrect on the facts (there was no child abuse ring), and misguided in judgment (even if there had been a child abuse ring, vigilante justice with an assault rifle is not the right reaction), but from his perspective, he apparently felt it moral to defend the well-being of children. And of course, it is moral to defend the well-being of children. This individual was “wrong” on his facts and his judgment, but one can see why he might well believe he was engaged in sound moral reasoning.
In other contexts, conflicts have been de-escalated by reference to a trusted, institutional source. Scrabble games are nearly impossible to complete without a dictionary. “Fake news” issues could once have been resolved by reference to trusted institutions: Walter Cronkite, the Supreme Court, one’s parents. The breakdown of trust in institutions is also related to “fake news” issues, and that trust has declined in many ways.
In the next post, then, I’ll take up this variety of “fake news”: content that undermines the foundational institutions of democracy. Propaganda and misinformation that undermine elections, the rule of law, prohibitions against corruption and self-dealing, respect for science and economics, civil debate, transparency, and accountability are likely more dangerous to the long-term health of people and society than fictional provocations that may lead to short-term panic but are, in most cases, quickly (and thankfully) extinguished.
http://www.slate.com/articles/arts/history/2013/10/orson_welles_war_of_the_worlds_panic_myth_the_infamous_radio_broadcast_did.html; see also http://www.dailymail.co.uk/news/article-2048091/BBC-1926-radio-bulletin-Bolsheviks-attacking-Palace-Big-Ben-destroyed.html (recounting a 1926 spoof radio broadcast that reportedly caused mass panic in the UK).
http://www.slate.com/articles/technology/technology/2016/12/stop_calling_everything_fake_news.html (noting the unhelpful tendency to “carelessly blur the lines between fabricated news, conspiracy theories, and right-wing opinion by lumping them all under the fake news banner…”); see also https://storify.com/zittrain/of-fake-news-and-filter-bubbles (Jonathan Zittrain distinguishing between the “shallow problem” of fake news (clickbait) and the “deep problem” (biased stories from established sites).
See W. Joseph Campbell, Getting It Wrong: Ten of the Most Misreported Stories in Journalism (University of California Press, 2010), p. 188.
Compare Debs v. United States, 249 U.S. 211 (1919) (conviction under the Espionage Act upheld for speech protesting the war) with Near v. Minnesota, 283 U.S. 697 (1931) (overturning an injunction issued under a Minnesota law establishing it a “public nuisance” to publish a “malicious, scandalous and defamatory newspaper”; J.M. Near published a newspaper with anti-Semitic overtones and allegations of public corruption, content not dissimilar to some provocative Internet “news” services under current discussion).
Levine, Justin (2000), “A History and Analysis of the Federal Communications Commission’s Response to Radio Broadcast Hoaxes,” Federal Communications Law Journal, Vol. 52, Iss. 2, Article 3. Online at: http://www.repository.law.indiana.edu/fclj/vol52/iss2/3
See Levine, n.7 supra at p.289.
See, e.g., http://www.huffingtonpost.com/entry/bernie-sanders-could-replace-president-trump-with-little_us_5829f25fe4b02b1f5257a; http://wapo.st/2hyFSCr; https://shift.newco.co/im-sorry-mr-zuckerberg-but-you-are-wrong-65dbf8513424#.cidxrjnf5; http://slate.me/2eWZkXE ; http://slate.me/1MOCM4q; http://www.economist.com/news/science-and-technology/21710228-our-deputy-editor-tom-standage-weighs-debate-about-false-news-aftermath-americas?fsrc=scn/tw/te/bl/ed/; https://www.scu.edu/ethics/internet-ethics-blog/fake-news-on-the-internet/; http://www.politico.com/story/2016/11/obama-fake-news-231565; https://medium.com/@dangillmor/facebook-google-twitter-et-al-need-to-be-champions-for-media-literacy-a58ecea5edbe#.3utpeoafy; http://www.nytimes.com/2016/11/15/opinion/mark-zuckerberg-is-in-denial.html
This article also points out, importantly, that this means 61% of millennials have a primary news source customized to appeal to the interests of themselves and their personal network.
47 USC § 230.
See Robert Cannon, “The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway,” Federal Communications Law Journal, Vol. 49, Issue 1 (1996), online at: http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1115&context=fclj
Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997), opinion online at: http://techlawjournal.com/courts/zeran/71112opn.htm ; see also Remarks of Kenneth Zeran at 15th Anniversary Conference of 47 USC § 230 (“Cultural anthropologists often refer to human evolutionary periods based upon the development of tools. I submit that, ‘Digital technology’ is a decisive point of demarcation in the human timeline. We are experiencing the dawn of that tool in the hands and minds of human beings”), online at: http://www.kennethzeran.com/zeran_sec_230_commentary.html.
See, e.g., http://www.independent.co.uk/voices/editorials/the-facebook-fake-news-scandal-is-important-but-regulation-isnt-the-answer-a7419386.html; but see http://www.reuters.com/article/us-germany-facebook-hatespeech-idUSKBN13C29A?il=0 (German Justice Minister proposes regulating Facebook as a media company).
https://en.wikipedia.org/wiki/Wikipedia:Accuracy_dispute. “Fake news” commentators have also noted, generally, the importance of collective governance to addressing the issue. See https://medium.com/@McDapper/gawker-facebook-governing-truth-7ef747d9841e#.m9mzqej31. Collective input to signal truth value can take various forms, in addition to the one recently announced by Facebook. One could also craft, for example, a browser add-on that classifies posts (https://devpost.com/software/fib) and then shows the user the relative level of verified content in his/her feed.
For example, if consumer selections are made subconsciously, how could those insights apply to user news feed interactions? See, e.g., https://www.martinlindstrom.com/; http://adage.com/buyology/pdf/Buyology_Symposium_Brochure.pdf
See, e.g., https://medium.com/@SunilPaul/we-can-fix-it-saving-the-truth-from-the-internet-7bec83df150d#.3z6ujvlkj (comparing “fake news” (sensationalist, low-value content) to spam (also sensationalist, low-value content)). See also the design solutions working paper posted here: https://docs.google.com/document/d/1OPghC4ra6QLhaHhW8QvPJRMKGEXT7KaZtG_7s5-UQrw/edit#
St. Amant v. Thompson, 390 U.S. 727 (1968)
See, e.g., http://www.thedailybeast.com/articles/2012/09/29/fooled-by-the-onion-8-most-embarrassing-fails.html Notably, both the New York Times and Fox News are among those who reportedly treated content from “The Onion” as true. See http://www.huffingtonpost.com/2011/04/25/new-york-times-fooled-onion_n_853151.html and http://www.rawstory.com/2010/11/fox-nation-readers-confuse-onion-article-real-news/
See United States v. Alvarez, 132 S. Ct. 2537, 2549 (2012) (plurality opinion) (finding the Stolen Valor Act overbroad because violations of the law do not result in a cognizable harm).
See Care Comm. v. Arneson, 638 F.3d 621, 635 (8th Cir. 2011), cert. denied, 2012 WL 2470100 (June 29, 2012); Rickert v. State, Pub. Disclosure Comm’n, 168 P.3d 826, 827 (Wash. 2007); Washington ex rel. Pub. Disclosure Comm’n v. Vote No! Comm., 957 P.2d 691, 693 (Wash. 1998)
See, e.g., https://www.csis.org/features/turning-point; https://www.fosi.org/policy-research/violent-extremism-new-online-safety-discussion/; see also Extremist Content and the ICT Sector: A Global Network Initiative Policy Brief November 2016,” online at: http://bit.ly/2h9N8EN
In 2008, Senator McCain responded to a woman who was persuaded that Senator Obama was Muslim. To his credit, he both corrected her and addressed her as “ma’am,” signifying that she was entitled to respect notwithstanding her misguided beliefs, and he legitimized the aspects of her concerns that had merit. That is, she was not wrong to believe that if a candidate for President were in fact subject to undue influence from an anti-American power, that would indeed be cause for concern. She was wrong on the facts that formed her premise, but not wrong in the logic of her reasoning. Online at: https://www.youtube.com/watch?v=MRq6Y4NmB6U
Gilad Lotan, “Fake News Is Not the Only Problem,” https://points.datasociety.net/fake-news-is-not-the-problem-f00ec8cdfcb#.av2o6kqmy