
Tool Without A Handle: The Dark Side

By Chuck Cosson


“Chaos, control.  Chaos, control.  You like, you like?”  -  John Guare, Six Degrees of Separation

Every tool can be misused, and the potential harms from such misuse are often checked through force of law.  Information technology is no different.  To consider regulating information technology through the tool metaphor, we first need to consider the dynamics of information technology regulation in general.  This blog post addresses a key aspect of information technology regulation:  how regulation is at once incapable of fully addressing the potential harms that flow from misuse of information technology, yet powerful nonetheless in shaping how information technology tools are built and used.

I accept, as given, that the pace of innovation will always outstrip the responsive speed of regulatory ideas.  But, like the proverbial tortoise, regulation balances the speed of innovation with the determined strength of legal sanction.  If regulation were truly ineffective, innovators would not seek to avoid it, and incumbents would not seek to foist it on challengers.  Moreover, regulation carries not only the force of formal law,[1] but also the force of social consensus.  Regulation, in short, represents a broad (if rough) social consensus that a problem exists, that it won’t be solved absent rules, and that there is reasonably stable agreement on what those rules should be.

The strength of that force is proportionate to the degree of consensus among, and the political influence of, those who hold it.  If the consensus is strong, regulatory ideas that become embedded can be a nearly immovable influence both on how technology is used and on how innovators and service providers govern themselves.  In this fashion, regulation serves as an impetus for action, removing uncertainty that might otherwise hang over innovation and slow the path from idea to implementation.

Regulation of indecency and advertising provides useful examples to frame this discussion.  With respect to indecency, the Communications Decency Act (“CDA”), as amended, represents a rough social consensus about the types of actions online service providers should take with respect to blocking certain types of content if found.[2]  While the precise boundaries are rightly the subject of regular debate,[3] the CDA nonetheless established that online service providers need not be tasked with program-control obligations such as those held by TV broadcasters.  This provided relief from uncertainty, enabling rapid growth in online content offerings and in content hosts and exchange media.

At the same time, the CDA established that online providers should respond to “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content in certain cases (and provided a safe harbor for them to do so).[4]  With respect to child abuse images, a later law in fact requires action:  online providers face fines for failing to report child abuse content found on their systems.[5]

The case of child abuse content leads, naturally, to the recognition that regulation is necessary in certain contexts.  It is simply not acceptable to treat network information technologies as a “territory” unreachable by legal sanction given the harms to person and property that people can inflict using the technology.  While I know of no way one could commit homicide directly over the Internet (perhaps poison delivered via Google Nose),[6] the violation of privacy and dignity associated with the re-publication of explicit photos using Internet tools is undoubtedly a grave harm.

Regulation of advertising demonstrates the same principles in a context in which the harms of misuse of the tool are ordinarily less grave.  Unsolicited bulk commercial email – “spam” – illustrates three key characteristics of Internet regulation:  1) law and regulation follow only after community-based and technical measures have been implemented; 2) enforcement is inevitably incomplete, and often wisely so, given the other interests at stake; and 3) not every harm is immediately apparent.

Advertising has been essential to commerce for centuries, and it may have struck salesman Gary Thuerk as both productive and efficient to use the ARPANET computer network’s directory of users to send an unsolicited commercial message about his company’s new computer system.  Non-commercial researchers, operating from a different perspective than salesmen, objected strongly to this use of the network.  At the end of the day, Thuerk’s company also reportedly sold more than twenty of the computer systems, at a million dollars apiece.[7]  In other words, unsolicited commercial email was both disfavored and successful.

The CAN-SPAM legislation reflects these varied interests.  It requires email marketers to honor opt-out requests for marketing email messages, and to include true contact information so recipients can make such requests.  It does not ban unsolicited commercial email outright.  The regulations go only as far as Congress could agree in imposing limits on a marketing method that is effective for many businesses; the goal, as one testifying witness put it, was to “kill the dark side without killing the promise.”[8]  But while anti-spam laws go only so far, behind the law are important ideas that incent voluntary behavior:  innovations in spam filters and “we won’t spam you” marketing messages.

So it is with “the dark side” of information technology.  We cannot legislate away the harmful misuses of these tools, nor would we want to – any more than we would want government dictating every aspect of any form of human behavior.  In a subsequent post, I’ll look at how thinking of information technology as a “tool” informs particular approaches to regulation.  A key point will be this:  with tools that are widely distributed and over which consistent enforcement of rules-of-use would be difficult, regulation against misuse draws much of its power from the social consensus that both creates, and is created by, the process of regulation; that consensus in turn informs individual choices about how to use the tools.


[1]Self-regulation, such as industry codes of conduct, has an important role to play, and I consider it included in “regulation,” as discussed here.  Self-regulatory systems often contain binding rules and private enforcement mechanisms and are, in some cases, enforceable by governments under unfair trade laws.

[2]I’m referring to the version of the CDA amended in 2003 to remove the indecency provisions struck down as unconstitutional restrictions of speech.  See Reno v. ACLU, 521 U.S. 844 (1997).

[3]For example, Abraham Foxman and Christopher Wolf’s recent book, Viral Hate, calls on online service providers to address racist speech through enforcement of private Terms of Service.  Jillian York, writing in Slate magazine, argues that private companies should not block offensive speech because they lack the capacity to do so fairly and transparently, and to make determinations as to what speech is worthy of sanction.

[4]47 USC § 230(c)(2)(A).

[5]18 USC § 2258A, online at http://www.law.cornell.edu/uscode/text/18/2258A

[6]http://www.google.com/landing/nose/.

[7]“Damn Spam – the losing war on junk e-mail,” Michael Specter, The New Yorker, August 6, 2007, http://www.newyorker.com/reporting/2007/08/06/070806fa_fact_specter.

[8]Testimony of Jerry Cerasale, Direct Marketing Association, at Hearing on “Spam and its Effects on Small Businesses.” https://bulk.resource.org/gpo.gov/hearings/108h/93042.pdf.