The current public health crisis makes effective management of misinformation more than just a theoretical or constitutional law exercise. Misinformation costs lives, both in the aggregate and in individual cases, particularly where the misinformation lands in a susceptible mind that may be prone to poor judgment. The World Health Organization has expressed concern about an “infodemic.” So it is altogether timely to again consider some principles for such management.
In the February 2020 issue of “Wired,” a series of articles contends, as this blog often has, that current problems with misinformation will not be solved with technical innovation alone, but require insight as to why humans use information tools as they do. In particular, one article by Gideon Lewis-Kraus observes we need to understand why misinformation and rancor are so popular, rather than expect algorithms to sort this content for us.
Lewis-Kraus is no apologist for large platform companies, but he hits on a key observation: “[t]he case for corporate blame is, at any rate, probably more expedient than it is empirical.” That is, while we should consider platform company rules and behavior critically, we often fail to apply that same critical thinking to human agents. In truth, platforms have radically de-centralized the generation, distribution, and consumption of news and entertainment. And they have therefore radically amplified the extent to which user preferences for content drive events.
For further insights on managing misinformation, then, it makes sense to look to human factors. These include ways in which humans form identity through imitation, purge enmity through scapegoating, and may lack the ability to internally generate a clear sense of preferences or to make choices that align with them.
This occasional failure to act in accordance with our own values has been a feature of humans for millennia. Group identity formation mechanisms, including violence and its proxies (e.g., snark and ridicule), also well predate modern information tools. Add to this good faith disagreements in values and priorities for regulating online content, and the illusion of a single “silver bullet” solution to misinformation is shown for what it is. Misinformation, like many other tech policy questions, cannot be solved by an algorithm that will correctly answer “yes” or “no” in a way that is universally valid.
Accordingly, it makes sense to think of these human characteristics, and their consequences, as less problems to “solve” (at least in our lifetimes) than realities to manage. Still, because of the centrality of the human agent, managing those characteristics may prove more fruitful than demanding platforms create better algorithms.
One characteristic worth analyzing is the human tendency to project events into the future or to assign trajectories to immediate observations. This tendency contributes to misinformation problems as it assigns undue weight to both the ability of the predictor and the probability the prediction will come to pass.
In addition, because the number of data points needed to establish a “trend” is entirely at the discretion of the writer, a “trend” can be claimed with either light evidence or persuasive data; the weak claims are not necessarily distinguished from the strong ones. Correctly anticipating future trends is difficult, and authors claiming a trend therefore have strong incentives to insulate themselves from accountability for wrong predictions.
Accordingly, "trend stories" about important matters such as public health, economic and political matters, or technology policy are at times presented like fashion “trends” – a set of claims, linked by an untested hypothesis, destined to come and go as part of passing entertainment, untested for any permanent truth. Because "trend stories" don't require a particular standard of probability, they are simple to create. And so we see article after article on “trends to watch,” each one easier to write than the one before it.
In addition to being insulated from reality testing, and simple to create, trend articles are appealing. Each trend article speaks to, among other things, a desire to belong to the right group, and to be (or at least to seem) knowledgeable. As readers are acutely social animals, inclined to orient their actions based on cues from other people, trend stories are catnip to readers. And trend stories that reinforce pre-existing beliefs about "where the world is going" can be very appealing, particularly if there is an eschatological bent to them.
Finally, trend stories operate to reduce uncertainty about the future and its attendant anxiety. Yet as Professor Mark Lilla explains in a New York Times commentary, the pursuit of antidotes to uncertainty is a fraught exercise. Instead, a dose of humbly accepting uncertainty would do us good.
This digression into “trend stories” illustrates that while we reflect on what algorithms and other technology are doing to us that fosters misinformation problems, we may be failing to ask what we are doing to technology to spread misinformation. Our predilection for "trend stories," however improbable the trend asserted, leads to both the creation and further consumption of such information, which is often wrong or at least deserving of more scrutiny.
Now, all of this inquiry presupposes some elasticity in how humans take in information. If human beings are incapable of improving their discernment, then insights on how to do so would be less than fruitful. I’ve previously argued humans are “meaning makers,” and I do believe they are capable of stepping back from first assumptions and making more thoughtful choices about the meaning of information, including misinformation (and disinformation).
Propaganda, though, does present challenges to this faith in human discernment. Theologian Jacques Ellul, for example, argued “it is not true that [humans] can choose freely with regard to what is presented to him as the truth,” and “[p]ropaganda furnishes objectives, organizes the traits of an individual into a system, and freezes them into a mold.” And, it may be of some comfort to St. Paul to know there is evidence that the choices we make are indeed not necessarily in our conscious control.
Even if those critics are right, I'd respond that doing right demands we protect the right of humans to so choose, even if it means humans may choose poorly. Just because free choice comes with a bit of illusion (subconscious beliefs that steer our thinking, and thus our actions, without our immediate awareness) doesn't mean a government, corporation, or other collection of humans should be granted total control over such choices.
Doing so is part of a consistent ethic advocating protection for the entire human person. And it is consistent with voluminous studies from economists, psychologists, marketers, and counselors regarding how beliefs are formed, change, and are influenced. If propaganda is indeed as ineluctably effective as some contend, then certainly its methods could influence positive change in how persons interact with information and technology.
For example, misinformation flourishes in environments where shared perspectives are weak. Hence the first targets of authoritarians are independent media, churches, and artists. Just as misinformation can induce undue skepticism of experts, authorities can bolster their trustworthiness through unifying messaging. And just as propaganda may seek to drive group identity through scapegoating, art can drive group identity through shared experiences.
“The play’s the thing,” famously said Shakespeare’s Hamlet, facing his own crisis of misinformation and propaganda from the reigning King of Denmark. This bit of wisdom, self-serving as it is for a dramatist, nevertheless illustrates how the antidote to propaganda may well be art. Art can help illustrate, in ways that argument and evidence cannot, shared qualities of experience and perspective.
Equally, freer societies can produce more opportunity for genuine understanding across political divides. Focusing on what humans do to technology can throw into relief, appropriately enough, what art can do to humans. In that light, I’ll close with words from Cate Blanchett, writing about her performance as the arch-conservative Phyllis Schlafly:
“[A]rt, in its ability to cross social, partisan and even temporal gaps can help foster a shared sense of understanding. It can bring us together physically and emotionally. And it can teach us about one another, inspiring empathy rather than anger. Art matters because it lets us engage with our complex social fabric, allowing us to cross divides and work toward a safer and more meaningful existence together.”
This is a point of agreement among those who otherwise have deep disagreements as to whether the risks of the coronavirus itself, or the harms resulting from social distancing rules aimed at containing it, are harmfully over-amplified by misinformation. See Steve Hilton, “Fear-mongering media's dangerous, divisive coronavirus misinformation has a human cost,” https://www.foxnews.com/opinion/steve-hilton-fear-mongering-medias-dangerous-divisive-coronavirus-misinformation-has-a-human-cost (“This misinformation, this fear they stoke, it's not just empty words. It has a real-world impact -- the human costs of the shutdown pushed by the shutdown fanatics on TV”); Jonathan Capehart, “Susan Rice on Trump’s coronavirus response,” https://www.washingtonpost.com/opinions/2020/04/06/susan-rice-trumps-coronavirus-response-he-has-cost-tens-thousands-american-lives/ (“He has cost tens of thousands of American lives”). And likely many on both sides of the “when to reopen” debate would agree there is nothing helpful about flat-out untruths, such as the proposition that the design of the new £20 note contains a symbol representing a “5G tower” and the coronavirus, and that 5G contributes to illness. See https://fullfact.org/online/5g-coronavirus-20-note/.
See, e.g., NBC News, “Man Dies After Ingesting Chloroquine in Attempt to Prevent Coronavirus,” https://www.nbcnews.com/health/health-news/man-dies-after-ingesting-chloroquine-attempt-prevent-coronavirus-n1167166.
Natasha Kassam, “Disinformation and coronavirus,” https://www.lowyinstitute.org/the-interpreter/disinformation-and-coronavirus.
Gideon Lewis-Kraus, “The Anatomy of Desire,” WIRED, February 2020, p. 76; online at: https://www.wired.com/story/polarization-politics-misinformation-social-media/
See, e.g., Romans 7:18–19 (“For I have the desire to do what is right, but not the ability to carry it out. For I do not do the good I want, but the evil I do not want is what I keep on doing….”), https://www.biblegateway.com/passage/?search=Romans+7%3A15-20&version=ESV
As with many such concepts, there is a fancy German word for it: the “Entscheidungsproblem” – a problem that asks for precisely such an algorithm. See https://en.wikipedia.org/wiki/Entscheidungsproblem
Indeed, this then posits the question “better at what?” If we mean “better at reflecting the preferences of users,” this hypothesis suggests that such “better” algorithms make matters worse. There seems to be little appeal for the alternative, though, which is to make the algorithms better at showing content that is “good for us”: that would place undue responsibility on the platform to define what information serves the flourishing of each particular user, and would push the model closer to the broadcaster/publisher business and further from the distributed tool that social media, and the Web before it, aimed to be. Not to mention economic impacts: posit, e.g., a grocery that refused to sell soda, chips, alcohol, or other unhealthful food items competing with one that did.
Famous examples include the record label that declined to sign The Beatles (https://www.independent.co.uk/arts-entertainment/music/news/the-man-who-rejected-the-beatles-6782008.html); Microsoft’s dismissal of the potential market share for the iPhone (https://arstechnica.com/information-technology/2007/04/ballmer-says-iphone-has-no-chance-to-gain-significant-market-share/); and the many writers (yours truly included) dubious that wireless would ever replace traditional phone service. See Charles D. Cosson, You Say You Want a Revolution? Fact and Fiction Regarding Broadband CMRS and Local Competition, 7 CommLaw Conspectus 233 (1999), online at: https://scholarship.law.edu/commlaw/vol7/iss2/3/
See, e.g., Rob Henderson, “The Science Behind Why People Follow the Crowd,” Psychology Today (May 2017), https://www.psychologytoday.com/us/blog/after-service/201705/the-science-behind-why-people-follow-the-crowd
See https://www.inc.com/travis-bradberry/11-ways-successful-people-overcome-uncertainty.html (“Our brains give us fits when facing uncertainty because they're wired to react to it with fear”).
Mark Lilla, "No One Knows What's Going to Happen," NY Times, May 22, 2020, https://www.nytimes.com/2020/05/22/opinion/sunday/coronavirus-prediction-future.html
See Jacques Ellul, Propaganda: The Formation of Men's Attitudes (Konrad Kellen & Jean Lerner trans., 1973), p. 160.
See, e.g., Dan Ariely, “Are We In Control of Our Own Decisions?,” online at: https://www.ted.com/talks/dan_ariely_are_we_in_control_of_our_own_decisions?
The modern Catholic Church, for example, teaches that it is the duty of every person to seek the truth, and, relatedly, that no one is ever permitted to coerce anyone to accept the Catholic faith against that person’s conscience. See Code of Canon Law, Book III, Canon 748, online at: http://www.vatican.va/archive/ENG1104/_P2H.HTM
Cate Blanchett, “I’m Not ‘Mrs. America.’ That’s the Point,” NY Times, May 21, 2020, https://www.nytimes.com/2020/05/21/opinion/cate-blanchett-art-mrs-america.html