“Tool Without A Handle”: Tools and the Search for Meaning – Part II

If you are distressed by anything external, the pain is not due to the thing itself, but to your estimate of it; and this you have the power to revoke at any moment. ~ Marcus Aurelius

The last post in this series observed that optimal policy thinking aims at allowing people sufficient control over technologies so that they may use them to apply their own capacities and, in that process, find meaning.  This post explores that point further; in particular, how an emphasis on technology, rather than on people, falls short of that aim.

What Technology Is Doing to Us Is Up to Us

I see too often an assumption that technology is itself the problem.[1]  Blaming technology for problems such as misinformation, online harassment, and publication of hate speech is both insufficient (people use the same tools to spread facts, encouragement, research, and spiritual wisdom) and unsatisfying (the tools exist, so there’s little point in arguing they shouldn’t).  Internet technology is not like other tools (say, chemical weapons) whose very nature invites harmful uses and admits of few others.[2] The uses to which the tools are put are far more impactful than the tools themselves.

It’s also incomplete, though, to say tools are only as good as the people using them.[3]  User choices matter a great deal, but so do the choices of those who design the tools, adding or limiting their capacities.  Those are also human decisions.  In fact, arguments against treating Internet technology as a neutral tool tend to collapse into this point:  that the impact of tools is inextricably linked to the intentions of the persons designing them.[4]  As I explain further below, though, the impact of tools also involves much more than design choices.

Another argument against technology as a tool in itself involves a similar shift: technology is not neutral because it can have non-neutral consequences.[5]  Technological affordances, introduced into a societal context, create new opportunities for use, and those opportunities may favor certain people or groups.[6]  Again, this illuminates the role of humans, including the economic and governmental systems they’ve created.

The import of both arguments, then, is less that technology itself creates harms such as misinformation or harassment than that human choices about both use and design matter.  Rather than say “design matters,” it’s preferable to say “designers matter.”  Cynical designers would use psychology to deceive or manipulate; optimistic designers would use it to afford opportunities for flourishing.  But design frequently involves trade-offs:  accepting certain undesirable uses to enable desirable ones, or accepting that design features can have both desirable and undesirable impacts, in part because of the variety of choices available to users.

For example, Internet technologies can extend the impact of privacy harms through permanent digital records and broad potential audiences.[7]  But those same qualities of permanence and breadth can foster accountability, potentially increasing online trust.[8]  And let’s be sure to recognize who is included among “designers.”  Generative Internet technologies, particularly online platforms, are by design used in ways not limited to the designs of their designers.[9]

What we ultimately have to accept, and wrestle with, is that freedom of choice means there is no perfect design that precludes all potential harms and enables all (or only) positive uses.  More often, Internet policy questions involve designers making choices between reasonable but competing alternatives.  So we cannot solve privacy or expression concerns with Internet tools solely by better design.  The application of technology, and the interpretation of information, occur at the level of the makers, the users, and the recipients. We must consider the nature of the humans building and using tools as well.  And humans, designers and consumers alike, are invariably meaning makers.

We Are Meaning Makers

Design choices and business priorities are motivated by a search for meaning. Self-actualization is at the peak of Maslow’s famous pyramid of human needs.[10]  Inventing something that is beautiful, easy to use, and makes a difference to both investors and society is an immensely important form of self-actualization.

Users of technology are also meaning makers.  Users do not always exercise this capacity effectively, and may well have evolved inclinations toward assigning meanings that are familiar rather than accurate.[11]  But many take comfort in learning, and in growth, facilitated by information technology. In any event, users are not automatons.

A Brookings study found that 57% of those surveyed had seen “fake news” during the 2016 election, and 19% said it influenced their vote.[12]   So 38 of those 57 percentage points, roughly two-thirds of those who saw fake news, were not influenced in their voting.  I’m not saying misinformation isn’t a problem or that technology designers shouldn’t respond.[13]  But something useful is shown where, for a majority of those exposed, misinformation does not ultimately influence voting.
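To make the arithmetic explicit (a quick sketch, on the assumption that the survey’s 19% figure is measured against all respondents, so that everyone reporting an influence falls within the 57% who saw fake news):

\[
57\% - 19\% = 38\%, \qquad \frac{38}{57} \approx 67\%
\]

That is, roughly two-thirds of those exposed to fake news reported that it did not influence their vote.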

The meaning assigned to information is more determinative than the information itself.  Daniel Goleman, among many others, has described the processes of the mind involved in forming emotions, which, in turn, guide behavior.  In particular, he’s described how events are assigned meaning by the mind based on past memories (in some cases outdated or only crudely related memories) stored in the amygdala.[14]

As another psychologist emphasized, “no event has an inherent meaning because any event could have a multitude of meanings… [m]eaning exists only in the mind, not in the world.”[15]  Alan Watts famously illustrated this with a parable of a Chinese farmer who, when faced with various events, wisely declined to accept the popularly suggested meaning and simply deferred any determination as to whether an event was good or bad.[16]  And, as this post’s Stoic epigraph shows, this wisdom is not unique to Asia, nor to modernity.

It is, therefore, clearly within the human capacity to cultivate emotional intelligence, both with respect to the design of technology and the interpretation of the information whose distribution technology facilitates.  I agree with critics, such as Neil Postman, who voiced concern over the disconnection of information technology from human purposes.[17]  Where I diverge, though, is in advocating that the proper response is more human purpose, not less information technology.

For example, Jason Pontin published three principles for technology that put humans at the center, including “design technologies to swell happiness.”[18]  I agree that the best design principles put humans at the center.  What the discussion of meaning-making shows, though, is that technology design itself cannot “swell happiness.”  Happiness is a choice, a meaning made by a person, not by a tool.[19]  Technology designers can certainly lose trustworthiness by their choices, but it is only humans, as meaning makers, who can create a sense of trust for themselves.

To reiterate, then, this post is not an argument against focusing on design, but rather an argument for focusing on the meanings that designers assign to their choices, and for recognizing that some perceived harms are not the result of insufficient forethought but of difficult choices between design possibilities.  As an additional example, obscurity and anonymity are, in many contexts, “privacy-related values,” but those design qualities also frustrate law enforcement and intelligence goals, can reduce cybersecurity, and can contribute to harassment and abuse.[20]  Indeed, in an earlier blog post, I noted how anonymity online can violate human rights.[21]  There will never be a single perfect design solution to online harms.

To generate greater trust in information technology, then, it’s important to focus also on emotional intelligence, civic education, and improved critical thinking among technology users, who are themselves assigning meaning to the posts they share, the information they receive, and the services they use.  This focus can certainly include regulation of user actions,[22] as I’ve noted before.[23]

Crime and abuse will always be among the universe of user choices.  Rather than blame technology itself, though, or even technology designers, for misinformation and similar harms, let’s also account for human users and their role in determining meaning, and hold them accountable for their part in perceived harms.  This is as it should be. This is the gift, and the burden, of freedom.

 

[1]For example, in this interview with Jaron Lanier, the interviewer starts the conversation by asking Lanier for his thoughts on “what technology has done to our spiritual health,” online at: https://www.wired.com/story/interview-with-jaron-lanier/.  Lanier, correctly, goes on to discuss the role of human beings:  both those using and those designing technology.  In another example, an article discusses the role of “mindfulness technology” in saving us from other technology that distracts. Implied, but insufficiently stated in the article, is the fact that the “mindfulness technology” was invented by humans, in order to address a perceived market for tools to help resist distractions. Molly McHugh, “Will Mindful Technology Save Us From Our Phones—and Ourselves?,” The Ringer (Oct 25, 2018), online at: https://www.theringer.com/tech/2018/10/25/18022246/mindfulness-technology-smartphones-palm-light-phone-mindful-tech

[2]One can readily identify harmful and unethical uses of facial recognition tools, for example.  Yet those same tools admit of uses that are “positive and even potentially profound,” such as location of missing children, identification of terrorists, and assistance for the visually impaired.  See, e.g., Microsoft on the Issues, “Facial Recognition and the Need for Public Regulation and Corporate Responsibility,” online at: https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.  So even there, the question is not “should facial recognition technology exist,” but “what are the principles that should govern its use?”  For one candidate set of such principles, see ACLU, “An Ethical Framework for Facial Recognition,” online at: https://bit.ly/2B2rJKg

[3]See Bret Stephens, “How Plato Foresaw Facebook’s Folly,” New York Times (Nov 16, 2018), online at: https://www.nytimes.com/2018/11/16/opinion/facebook-zuckerberg-investigation-election.html?

[4]See, e.g., Melissa Gregg and Jason Wilson, “The Myth of Neutral Technology,” The Atlantic (Jan 13, 2015), online at: https://www.theatlantic.com/technology/archive/2015/01/the-myth-of-neutral-technology/384330/.

[5]Melvin Kranzberg’s first “law of technology” holds that “technology is neither good nor bad; nor is it neutral.”  See Michael Sacasas, “Kranzberg’s Six Laws of Technology, a Metaphor, and a Story” (August 25, 2011), online at: https://thefrailestthing.com/2011/08/25/kranzbergs-six-laws-of-technology-a-metaphor-and-a-story/ (“Sacasas”); see also https://twitter.com/tweetinjules/status/1063080722959724544

[6]See Sacasas, supra n.5 (what Kranzberg meant by non-neutrality is that “[t]echnology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”).

[7]See, e.g., Danielle Citron, Hate Crimes in Cyberspace (Harvard University Press, 2014), p. 5.

[9]See Jonathan Zittrain, “The Generative Internet,” Harvard Law Review, Vol. 119 (2006), online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=847124

[10]Abraham Maslow, “A Theory of Human Motivation,” Psychological Review, Vol. 50, No. 4 (1943), pp. 370–96; see https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs

[12]Darrell West, Brookings Series on AI and Emerging Technologies (October 23, 2018), online at: https://brook.gs/2R0v4z9

[13]Indeed, in that survey, 45% said fake news is very much a threat to democracy, and over 55% said both government and technology firms should be doing more to address it.

[14]Daniel Goleman, Emotional Intelligence (New Delhi: Bloomsbury, 1995), pp. 14–29.

[15]Morty Lefkoe, “Why Do We Need to Create Meaning?” online at:  http://www.mortylefkoe.com/why-create-meaning/

[17]Neil Postman, “Science and the Story That We Need,” First Things (January 1997), online at: https://www.firstthings.com/article/1997/01/science-and-the-story-that-we-need

[18]Jason Pontin, “Three Commandments for Technologists,” Wired, online at: https://www.wired.com/story/ideas-jason-pontin-three-commandments-for-technologists/.  His other two commandments are: enact reasonable laws that limit the potential damage of a new technology (at least until further evidence is forthcoming), and prioritize technologies that have utility but also provide fresh scientific insights.

[19]Indeed, some philosophers would disagree that there is any material difference between the person and the experiences themselves.  Views vary from the Buddhist concept of Anattā (no-self), to John Locke’s memory theory of personal identity, to more modern existentialist and phenomenological views.

[20]See Woodrow Hartzog, Privacy’s Blueprint (Harvard University Press, 2018), p. 25.

[21]Tool Without a Handle, “Privacy and Regulation: An Expanded Rationale,” https://cyberlaw.stanford.edu/blog/2014/12/%E2%80%9Ctool-without-handle%E2%80%9D-privacy-and-regulation-%E2%80%93-expanded-rationale (citing K.U. v. Finland, European Court of Human Rights (December 2, 2008), online at: https://www.crin.org/en/library/legal-database/ku-v-finland).

[22]In some cases, online actions warrant application of the criminal justice system.  See Marlisse Silver Sweeney, “What the Law Can (and Can’t) Do About Online Harassment,” The Atlantic (Nov 12, 2014), online at: https://www.theatlantic.com/technology/archive/2014/11/what-the-law-can-and-cant-do-about-online-harassment/382638/

[23]“Tool Without A Handle: A Dust Cloud of Nonsense,” http://cyberlaw.stanford.edu/blog/2016/12/tool-without-handle-dust-cloud-nonsense

 
