Tool Without A Handle: A Duty of Candor

By Chuck Cosson

The law and legal professional ethics require of counsel a duty of candor in the practice of law.[1]  This includes a duty to not knowingly make false statements of fact, to not conceal controlling legal authority, and to not offer evidence the lawyer knows to be false.[2] These principles are considered essential to maintaining both substantive fairness for participants in the process, and trust in the integrity of the process for those outside of it.

Users of information tools in public contexts are not, of course, subject to the same duties.  And publication of false information is generally protected by the First Amendment, unless it falls into one of the defined exceptions.[3]  We cannot, legally, require the same duty of candor for discourse outside of the formal legal environment, including in the conduct of political campaigns or in professional journalism.[4]  There is, to be sure, a cognizable government interest in such a restriction on speech, given the volume and impact of misinformation spread through information tools.[5]  I’m doubtful, though, that a law against publication of false information would be sustained.

It is, however, perfectly acceptable for most information technology platforms to adopt such a policy and seek to enforce it as best they can.  That is, platforms could create and enforce rules against publication of information known to be false.  A recent publication from the NYU Stern Center for Business and Human Rights contends platforms should do so.[6]  This post concurs: subject to some limitations, private platforms can and should take a position that use of their services to intentionally or carelessly spread false information violates terms of service.

For reasons of concision, I don't fully deal with the limitations in this post, but I will note two.  First, determining what constitutes "false" information, and acting on that determination at scale, are non-trivial challenges.  Second, application of such a rule by major platforms could simply lead false content to migrate elsewhere, which is also to say that platform terms-of-use policies may not be the ideal focus for addressing misinformation concerns.

This is not to say that platforms have not already applied policies against the use of their services to publish false information.  Some platforms have adopted “fact-checking” approaches, flagging information as disputed or supplying more information to put posts in context.[7]  Platforms may seek to reduce the distribution of false information as part of their algorithmic approach to prioritizing content, such as in displaying news feeds or search results.[8]  Platforms also, of course, already act on false information that violates other rules of the platform, such as those against spam, harassment, privacy violations or hate speech,[9] or that uses false information to attract visitors to sites distributing malware.[10]  Sites can be taken down on some platforms (but not all) because they are registered using fake names.[11]  But I am not aware of any platform that considers it a violation of its community rules to knowingly post false information.[12]

A first objection to a rule against false information is that it would give platforms inordinate power to control information flows on controversial topics, including those where facts are in dispute.  One response to that objection is that such rules would sanction only intentional or negligent disinformation, not opinion.  Another response is, in a sense, to accept the point: rather than expanding the already complex and immense challenges of content moderation and fact-checking, a policy prohibiting the intentional spread of false information could be leveraged to reduce the incentives to spread it, and the demand for it.

That is, a community policy against publication of false information can embody a social consensus that such acts should be prohibited, which in turn provides support for others to report such wrongs, to discourage them, and to be alert to their own susceptibility to them.  In this way, the prohibition is not the only tool in the box but supplements other approaches.  It becomes a reflection of the community rather than an external force upon it.

This helps avoid placing too much weight on the platform’s policing power, and not enough on the human capacity to resist knowing or careless sharing or consumption of misinformation.  The well-known slogan “only you can prevent forest fires” is an illustrative analogy.  Forest management and forest rangers, as well as laws against arson, are important for preventing fires.  But when fires can be started with small acts, including careless ones, and the territory to police is too vast for human monitoring to be everywhere, campaigns to prevent forest fires also focus on individual actors and their roles and responsibilities.[13]  In a sense, peer pressure becomes the fulcrum that helps prevent fires.  So too, campaigns to prevent harms from misinformation could help support policy prohibitions.

Another challenge is how to distinguish unintentional acts that lack candor from intentional ones.  Sometimes one doesn't know the truth and speculates.  Sometimes one believes something is true when it's not.  This can be due to deception on the part of the proponent that is very difficult to uncover (spoofed identities), or due to deception that is less difficult to uncover if one takes the time (political claims that can be fact-checked).

It can also be because someone knows something isn't true but wants to believe it is; that, too, lies on a spectrum, with unconscious bias on one end and conscious bias on the other.  One can argue something is true while believing that it is (ignorance) or while believing that it is not (gaslighting)[14]; it’s only the latter that my proposed policy would subject to more severe penalties.

A Scientific American article on the types and impacts of misinformation used a graphic from a Council of Europe report to illustrate the distinction between intent to harm and the mere distribution of false information.[15]  It’s an important distinction.  But an effective rule should create disincentives for both deliberate fabrication and unintentional mistakes.  This means some understanding of psychology is needed, both to inform how to ascertain which actions are intentional and to signal users how to avoid unintentional mistakes.

Which is to say that an environment in which such a rule could be both accepted and effective would likely require an evolution in consciousness.  I’m well aware the phrase “evolution in consciousness” has both a bit of presumptuousness and a bit of unreality about it.  Yale Law professor Charles Reich is better remembered (usually skeptically) for his treatise The Greening of America, describing his view of three stages of consciousness, roughly: 1) individualism and self-reliance; 2) technology and bureaucracy; and 3) a shared quest for understanding and a nonviolent sense of cooperation.[16]

That Reich is better remembered, skeptically, for this work than for his influential legal scholarship is a testament to the extent to which the counterculture of the 1960s is held up to ridicule.  Some of that ridicule is certainly deserved, but the prospect of a change in consciousness, aimed at creating an environment of understanding and cooperation (across valid differences of opinion), was not a foolhardy idea, then or now.  One can, in fact, see evidence of it occurring.[17]

Accordingly, it’s worth thinking about changes in consciousness that could address misinformation concerns.  I use the term “consciousness,” well, consciously.  Changes in behavior are best rooted in changes in consciousness.  For a prohibition on intentional or negligent distribution of false information to work, it needs to be accompanied by such a change.  Simply providing factual context has not been effective.  And, as I’ve argued in previous posts, effective rules require a strong social consensus.[18]

One can certainly start by observing behavior, and from there seek to identify the nature of the consciousness behind it, in particular the rewards motivating that behavior.  But as with so many things for which some extent of social change in behavior has occurred (such as smoking, the fair treatment of women, drunk driving, or racial attitudes), the root was a change in consciousness, in particular in the patterns and beliefs about what would yield emotional reward.

As the Council of Europe authors found, “[t]he most ‘successful’ of problematic content is that which plays on people’s emotions, encouraging feelings of superiority, anger or fear.”[19]  Superiority, anger, fear (to which one could add cynicism[20] and contempt[21]) are at the root of both the incentives for and the causes of misinformation and disinformation.  They are also perennial human emotions, and evolution, even of consciousness, can move slowly.

It is a reasonable objection to say my argument is too simplistic in blaming human nature for disinformation, and too unrealistic in expecting such nature to change.  Humans are, the argument goes, simply too susceptible to conspiracy theories, and the media system too disintermediated.[22]  Thus a flat-out duty of candor enforced by information technology platforms could be prone to concentrating power that is abused against a minority, and equally prone to furthering cynicism and contempt because it would be incompletely enforced.

These objections feel valid insofar as they point out likely challenges and identify undesirable outcomes.  But there are reasons to believe that an evolution in consciousness is possible, one that would lead to measurable declines in the attractiveness of false information and thus reduce the behavior subject to the power of enforcement.  The Economist leader I cited on cynicism observes one top-down reason for optimism: politicians who forsake outrage for hope can and do find success.

A first non-cynical step could be to recall that, only recently, Americans elected a President iconized in a poster featuring the word “hope.”[23]  Millions in Hong Kong have taken to the streets to press for democracy and the rule of law.  The politics of outrage, successful as it may be in recent years,[24] is by no means the only, or even the most effective, path to mass influence.

Additionally, the same mind that generates inclinations toward distributing conspiracy theories, false (but intriguing) news stories, and outrage-inducing anecdotes can serve to discourage such things as well.  Conspiracy theories can be attractive because they provide both drama and explanation, salving emotions in response to occasions of fear, anxiety and loss.  But their falseness means those salves don’t satisfy.

Like eating a packet of chips, or indulging in an extramarital affair, or throwing a vase, pushing a conspiracy theory may feel good for a moment, but the gaps in facts or logic or virtue mean it lacks true nutrition, and is ultimately incomplete at addressing the negative emotions at which it aims.  That many individuals cannot break free of these remedies doesn’t mean they wouldn’t wish for fuller relief, especially if it were equally convenient.

Aristotle contended it was human nature to “desire to know.”[25]  Aquinas, building on that, believed a desire to know the truth was among the five core inclinations of human beings.[26]  These inclinations are, indeed, subject to obstacles, including biases.  But for every instance of confirmation bias there is a book, essay, training module or conversation that seeks to blunt its effects.

Whether one is interested in being a better stock trader, manager, parent, spouse, or simply being influential on social media, a track record of making (and acting on) defensible statements is a desirable accomplishment. And the current level of attention paid to the challenges of disinformation is evidence that this inclination to know the truth is motivated by more than self-interest.[27]

This quest for success, the attraction of perfecting ourselves and our democratic union, is at least as strong as the quest for the emotional zing of gossip, rumor or secret knowledge.  And over the long term, I believe, it is likely to be stronger.  Nicotine in cigarettes, after all, is demonstrably addictive.  And, as a consequence, cigarette smoking is still common despite years of documented health hazards.

But trends in smoking show continual decline,[28] and bans on smoking in public places are not only widespread, but widely supported and effective.[29]  A ban by private platforms on the intentional or negligent use of their services to spread false information, addictive as such information may be, could well prove a similar success.

[1]See, e.g., American Bar Association Rule 3.3: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_3_3_candor_toward_the_tribunal/

[2]This duty of candor is not the same as a duty to present a balanced case; indeed, the legal profession requires precisely the opposite:  a zealous presentation of arguments for one’s client.  The duty of candor simply says one may not knowingly make false statements about the applicable law or the facts of the case.

[3]For example, the tort of defamation, which requires negligent publication of a false statement purporting to be fact, which causes injury. https://www.law.cornell.edu/wex/defamation.  See also https://www.theatlantic.com/ideas/archive/2019/08/free-speech-cliches-media-should-stop-using/596506/ (explaining that “First Amendment exceptions are few and well established”).

[4]Journalists can, of course, create codes of self-regulation for their profession, see, e.g., Society of Professional Journalists, “Code of Ethics,” online at: https://www.spj.org/ethicscode.asp.  And news organizations are free to impose punishments or rewards for observation of ethics in journalism.

[5]For illustrations of the extent and impact of false information shared through information technology, see research of the American Press Institute’s “Fact Checking Project,” which found false information on Twitter overpowers efforts to correct it by a ratio of about 3 to 1. https://www.americanpressinstitute.org/fact-checking-project/new-fact-checking-research-false-information-floods-twitter-many-americans-confidently-wrong/

The API’s research contends that fact-checking can be “remarkably successful at eliminating false beliefs, even among those who were very confident in their misperceptions.”  Id.  Other studies are not so optimistic.  Research published in the Public Library of Science contends that confirmation bias, which facilitates echo chambers, usually wins out over fact-checking efforts.  Fabiana Zollo et al., “Debunking in a World of Tribes,” (July 24, 2017), online at: https://doi.org/10.1371/journal.pone.0181821

[6]“Disinformation and the 2020 Election:  How the Social Media Industry Should Prepare,” NYU Stern (Sept 2019), online at: https://issuu.com/nyusterncenterforbusinessandhumanri/docs/nyu_election_2020_report?fr=sY2QzYzI0MjMwMA

[7]See, e.g., https://blog.google/products/search/fact-check-now-available-google-search-and-news-around-world/; https://newsroom.fb.com/news/2018/06/hard-questions-fact-checking/; https://blogs.bing.com/Webmaster-Blog/September-2017/Bing-adds-Fact-Check-label-in-SERP-to-support-the-ClaimReview-markup.

[8]See, e.g., https://blog.google/products/search/our-latest-quality-improvements-search/.  Platforms can also “de-monetize” pages or publishers that repeatedly publish false information by removing the display of ads on those pages.  See, e.g., https://newsroom.fb.com/news/2017/08/blocking-ads-from-pages-that-repeatedly-share-false-news/

[9]See, e.g., https://newsroom.fb.com/news/2018/05/hard-questions-false-news/; https://newsroom.fb.com/news/2018/06/hard-questions-fact-checking/; see also https://www.facebook.com/help/publisher/182222309230722 (guidance for publishers)

[10]See, e.g., https://blogs.microsoft.com/on-the-issues/2018/08/20/we-are-taking-new-steps-against-broadening-threats-to-democracy/

[11]This appears to be the basis upon which Facebook acted against accounts it identified as being part of Russian organized disinformation efforts.  U.S. Senate Select Committee on Intelligence, Testimony of Colin Stretch (November 1, 2017); online at /content/files/sites/default/files/documents/os-cstretch-110117.pdf.  Independent tools can “audit” a Twitter account to provide an estimate of how many followers of a given account are fake accounts (my Twitter account has only mild popularity but those who do follow are real accounts).  See, e.g., https://www.twitteraudit.com/ChuckCosson

[12]See, e.g., Facebook news release, May 28, 2018 (“Although false news does not violate our Community Standards, it often violates our policies in other categories, such as spam, hate speech or fake accounts, which we remove”).  https://newsroom.fb.com/news/2018/05/hard-questions-false-news/

[13]See https://www.adcouncil.org/Our-Campaigns/The-Classics/Wildfire-Prevention.  At the time of the campaign’s creation, the United States was at war and consequently many other resources were deployed elsewhere, and a sense of national and public responsibility was strongly felt.  See https://smokeybear.com/en/smokeys-history/about-the-campaign.  It’s not excessive to appeal to that same sense of national and public responsibility today, in order to incent individuals not to knowingly spread false news and to be more alert to signs of it.

[14]https://en.wikipedia.org/wiki/Gaslighting

[15]https://www.scientificamerican.com/article/misinformation-has-created-a-new-world-disorder/ Credit: Jen Christiansen; Source: Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking, by Claire Wardle and Hossein Derakhshan. Council of Europe, October 2017, online at:  /content/files/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.pdf (“Council of Europe Information Disorder Report”).

[16]The Greening of America (Random House: 1970); see also Obituary, June 18, 2019, https://wapo.st/2MzWTiX; Rodger D. Citron, “Charles Reich’s Journey From the Yale Law Journal to the New York Times Bestseller List: The Personal History of The Greening of America,” New York Law School Law Review, Vol. 52 (2007/08), online at: https://bit.ly/31YkJsn

[17]Examples include changes in attitudes towards LGBTQ persons and their eligibility for the legal institution of marriage.  See, e.g., https://www.pewforum.org/fact-sheet/changing-attitudes-on-gay-marriage/; see also Obergefell v. Hodges, 576 U.S. ___, 135 S. Ct. 2584 (2015), Roberts, dissenting (“Supporters of same-sex marriage have achieved considerable success persuading their fellow citizens—through the democratic process—to adopt their view”).  Evolution in consciousness is occurring even if its spread is by no means uniform.  Some changes in conscious understanding of the world, including on biological evolution, have taken firm root among those who observe scientific methods, while remaining in flux among the broader public.  See National Center for Science Education, “Views on evolution among the public and scientists,” online at: https://ncse.com/news/2009/07/views-evolution-among-public-scientists-004904; David Masci, “Essay: Darwin in America- the evolution debate in the United States,” Pew Research Center, (February 6, 2019); online at: https://www.pewforum.org/essay/darwin-in-america/

[18]“Tool Without a Handle:  Justified Regulation,” online at: https://cyberlaw.stanford.edu/blog/2014/09/tool-without-handle-justified-regulation

[19]Council of Europe Information Disorder Report, p.7.

[20]The Economist, Leader, “Democracy’s Enemy Within,” https://www.economist.com/leaders/2019/08/29/the-corrupting-of-democracy (“Cynicism drags democracy down...[t]he riposte to cynicism starts with politicians who forsake outrage for hope”).

[21]Arthur Brooks, Love Your Enemies: How Decent People Can Save America from the Culture of Contempt (Broadside Books, March 12, 2019); see also https://arthurbrooks.com/love-your-enemies/ (series of short videos on key themes of the book).

[22]For a study identifying both the attraction of conspiracy theories and the role of alternative media channels, see Kate Starbird, “Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter,” online at:  /content/files/kstarbi/alt_narratives_icwsm17-cameraready.pdf

[23]https://en.wikipedia.org/wiki/Barack_Obama_%22Hope%22_poster

[24]https://www.realclearpolitics.com/epolls/other/president_trump_job_approval-6179.html

[25]Aristotle, Metaphysics, I.1, 980a21-27, W.D. Ross, Translator; online at: http://classics.mit.edu/Aristotle/metaphysics.1.i.html.

[26]St. Thomas Aquinas, Summa Theologiae I-II.94.2, (Benziger Bros. edition, 1947) Translated by Fathers of the English Dominican Province; online at: http://www.ccel.org/a/aquinas/summa/FS/FS094.html

[27]And platforms used to spread disinformation can generate social networks aimed at combatting it.  https://www.facebook.com/factcheckbeforesharing

[28]Centers for Disease Control and Prevention, Fact Sheet, “Current Cigarette Smoking Among Adults in the United States,” https://bit.ly/2hQsVrr; K. Michael Cummings and Robert N. Proctor, “The Changing Public Image of Smoking in the United States: 1964–2014,” online at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3894634/

[29]See, e.g., Mojtabai et al., “Clean indoor air laws, cigarette excise taxes, and smoking: Results from the current population survey-tobacco use supplement, 2003-2011,” online at: https://www.ncbi.nlm.nih.gov/pubmed/31173803 (“Participants living in states and counties with higher excise taxes and more comprehensive clean indoor air laws had a higher likelihood of quitting and lower likelihood of everyday smoking”).