“Tool Without a Handle”: Tools for Terror; Tools for Peace
This blog has addressed principles and challenges in countering odious online content – both content which transgresses the law and content which, while odious, is nonetheless protected free expression.[1] In particular, I’ve touched on regulation of such content, noting principled distinctions between regulation of protected speech and regulation of justifiably restricted content that is illegal even though it may have an expressive element.[2]
These principles and challenges apply equally to online content – and activity – aimed at promoting and recruiting for terrorism. This includes “radical Islamic terrorism” (connoting a radical vision of Islam at odds with much of Islam’s history and mainstream practice), as well as terrorism rooted in other ideologies, including racial animosity,[3] irreligious ideologies, and those fueled by economic and social fears.[4] Domestic “militia” movements are also gaining adherents,[5] and use networked information technologies to recruit followers and to plan violent activity.[6]
The characteristics of networked information tools make them particularly useful for terrorist recruitment and organization – and thus make the tools equally useful to groups and individuals with other agendas, including those promoting peace, tolerance, and justice.[7] This illuminates a broader point – not only the obvious point that tools can be used for good and for ill – but that an effective response to terrorist recruitment online starts with upholding the principles we wish to promote as alternatives to extremism. Illiberal constraints on privacy and expression do not bode well for discouraging an illiberal political and social agenda.
Three characteristics that make networked information technologies particularly useful are: 1) near-global connectivity and broad availability of both publishing and access methods; 2) pseudonymity and non-traceability of actors; and 3) multi-media and interactive capabilities.[8]
1) Broad availability of access means radical ideologies can reach receptive individuals in areas unserved by print newspapers and television stations, and can be published directly, without prior permission from gatekeepers or intermediaries.
Extremist messages may gravitate to the Internet because in many jurisdictions commercial (or state-owned) television broadcasters are unlikely to air such views, particularly where the terrorist agenda is (as it often is) at odds with that of the incumbent government.
By the same token, the ability to publish without prior permission from intermediaries facilitates distribution of counter-terrorist information and messages.
2) Pseudonymity and non-traceability enable communications by terrorists that are difficult to link to their source, and enable interested recipients to engage with or view material without revealing their identity as adherents. That same capability, of course, enables undercover agents to report covertly on terrorist activities, and others to monitor terrorist communications in secret.
Importantly, these characteristics also allow activists and journalists to publish with reduced fear of retribution, and ordinary citizens to organize. Accounts of the ‘Arab Spring’ gave considerable (perhaps excessive) credit to the affordances offered by online tools.
3) Multi-media and interactive capabilities make Internet tools especially rich for activism, persuasion, and communication. Creating a narrated film is very simple and often does not require capturing original content – merely reassembling (and in many cases radically decontextualizing) other content available online.
With this in mind, consider some key questions. Below are the “core discussion areas” reported to be the topics of a recent meeting between technology executives and government officials.[9]
a. How can we make it harder for terrorists to leverage the internet to recruit, radicalize, and mobilize followers to violence?
b. How can we help others to create, publish, and amplify alternative content that would undercut ISIL?
c. In what ways can we use technology to help disrupt paths to radicalization to violence, identify recruitment patterns, and provide metrics to help measure our efforts to counter radicalization to violence?
d. How can we make it harder for terrorists to use the internet to mobilize, facilitate, and operationalize attacks, and make it easier for law enforcement and the intelligence community to identify terrorist operatives and prevent attacks?
Some observations on these questions follow.
“Closing the Internet”:
First, “closing” the Internet in regions where terrorist groups are active is likely to be ineffective at addressing any of these goals. Notably, advocates of this approach have been vague as to methods and criteria for areas warranting closure – an indication that those issues are difficult to surmount.[10]
There are two obvious problems with this approach. First, removing access to these capabilities frustrates all uses – good, bad, and neutral. Second, it is not technically possible, at least not without the cooperation of Internet service providers in those regions. Worse, if imposed from outside, closure would radicalize additional persons and amplify the messages of persecution that terrorists covet.
Targeted content takedowns
Other proposals have been a bit more focused – selective takedowns of terrorist recruitment and propaganda websites.[11] Application of content moderation (human or algorithmic) is warranted in some cases. I’ve noted before that there are legitimate bases for measures such as domain name seizure, website blocking, and criminal penalties for certain uses of websites (such as child abuse and harassment). And individuals may well have privacy claims that justify content removal (e.g., a family requests removal of a video depicting a family member’s beheading).
To the extent technology is being actively used to plan an illegal act, including acts of violence, there are legitimate reasons both to interdict these uses and to surveil those involved. Offensive cyber capabilities could be effective in some cases at interdicting or tracking such uses. But such overtly terroristic content is unlikely to be out in the open – the question really arises with respect to a “takedown” strategy for terrorist recruitment communications, which may disguise violent agendas among religious and social criticism (or promises of earthly rewards). To address extremist recruitment, a targeted takedown effort by either the public or private sector needs to address some key considerations.
First, there is the “whack-a-mole” objection: content taken down can easily reappear. But this is true of any type of content, and so is not a unique objection. Similarly, questions of scope and definition are not unique to this problem, and can be addressed in several ways.
In terms of scope, “advocacy of political violence” is by itself too broad a basis for content removal. A principle is needed to distinguish recruitment solicitations for terrorist groups from propaganda for guerilla armies and, in turn, from US military recruiting.[12] Refinements could include whether the content advocates imminent violence, whether it advocates violence on the basis of race, religion, etc., or whether it depicts gruesome violence. The fact that judgment calls are involved in enforcing laws, regulations, or social media terms of use is no reason not to enforce such rules. But some principled limit is required, and should be expected to be tested and debated.[13]
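To illustrate the kind of granularity such a principle demands, here is a minimal sketch of moderation criteria encoded as explicit, reviewable rules. It is a hypothetical illustration built from the refinements named above, not any platform’s actual policy; the field names and rule logic are my own assumptions.

```python
# Hypothetical sketch: encoding takedown criteria as explicit, reviewable
# rules rather than a bare "advocacy of political violence" standard.
# All field names, and the rule logic itself, are illustrative assumptions,
# not any platform's actual policy.

from dataclasses import dataclass


@dataclass
class Assessment:
    advocates_political_violence: bool  # does the content advocate political violence?
    violence_is_imminent: bool          # does it call for imminent violence?
    targets_protected_class: bool       # violence on the basis of race, religion, etc.
    depicts_gruesome_violence: bool     # does it depict gruesome violence?


def warrants_removal(a: Assessment) -> bool:
    """Advocacy alone is too broad a basis for removal; require one of
    the narrower refinements discussed above before taking content down."""
    if a.depicts_gruesome_violence:
        return True
    if a.advocates_political_violence:
        return a.violence_is_imminent or a.targets_protected_class
    return False
```

Encoding criteria this explicitly does not eliminate judgment calls – a human still has to assess each field – but it makes the principled limit concrete, testable, and open to the debate anticipated above.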
Finally, there is the question of who decides to remove such content: government enforcement, government legislation guiding the private sector, or the private sector acting independently? For example, Twitter has opted to act on certain forms of terrorist recruitment messages through private enforcement of its terms of use.[14]
There are pros and cons to each approach. An official government finding has added credibility in at least two ways: first, such actions are limited by the First Amendment and so both require ample justification and are subject to judicial oversight; second, knowledge of specific terrorist organizations is more concentrated in government hands than in the private sector.
On balance, though, there are good reasons to leave such matters to the discretion of the private sector. Doing so avoids several problems, and helps strike a better balance between freedom of expression and the social goal of reducing political violence. The key is to provide clarity in guidance for moderators and transparency in guidance for users.[15] Granularity and specificity are particularly important – among the reasons is that private sector content takedowns can be susceptible to illiberal social pressures, which can disproportionately impact minority voices.
In theory, content takedowns could be ordered by government action – for example, a finding that the party holding a social media account is a barred terrorist organization.[16] Under US law, it is illegal to “knowingly provide material support or resources” to a designated foreign terrorist organization.[17] However, this law presently excludes providing telephonic or personal communications,[18] and expansion of this regulation would be both complex to administer (given current approaches to identity online) and challenging to coordinate internationally.
Also, it would be important for government direction to the private sector to stay at arm’s length – child abuse prosecutions have faced questions as to whether a service provider relaying evidence was an agent of the government and therefore subject to constitutional limitations; here, the First Amendment could limit private sector takedowns if the provider were found to be a state agent. So long as the service provider acts voluntarily, even if based on statutory guidance, such a claim is likely to be unsuccessful, but it is a boundary to bear in mind.[19]
Which brings me to the option of private action under statute (as with online harassment), where law provides guidance to the private sector but does not dictate specific takedowns (or, as in the case of copyright, leaves takedown requests to private parties). For terrorist content, a law expressly encouraging private sector takedowns seems to combine the risks of government restriction of expression with no real advantage for the private sector and no guidance as to what should be taken down, and thus seems less attractive than discretionary action by the private sector.
That is not to say there are no cases where government enforcement actions could be effectively targeted for application by the private sector. For example, use of online technologies to collectively further the purposes of terrorism could constitute predicate crimes under the Racketeer Influenced and Corrupt Organizations Act (“RICO”). The RICO statute affords both a civil cause of action and criminal penalties for those proven to have engaged in a pattern of criminal activity undertaken as part of an enterprise.[20] A takedown of the resources, accounts, etc., of a party found, after due process, to have violated this law could be sufficiently targeted to avoid these objections.
Alternative Approaches
In a following blog post, I’ll explore active use of technologies to dissuade potential recruits from aligning with extremist groups. In particular, I’ll cover three areas:
1) Countering misinformation
- Transparent and credible material, targeted to potential extremist audiences, providing not only facts but history and perspective;
- Personal narratives, especially from those with direct knowledge and personal experience, can be particularly powerful at countering misinformation;
2) Active recruitment to alternative missions
- To the extent lack of educational and employment opportunities is a motivating factor for people to align with extremist groups, online technology can provide such opportunities, and allow for organizations with alternative agendas to flourish.[21]
3) Areas beyond communication
- Terrorist organizations use information technology (and social media in particular) in ways that are subtler and more varied than simply communicating out propaganda. For example, the ADL has documented ways in which ISIS seeks to game Twitter’s hashtags and algorithms to spread its reach.[22]
- Social media and search engines can, and should, make their own business decisions as to how their algorithms promote (or demote) content. This is itself a form of free expression on the part of those firms, and these algorithms are frequently adjusted to improve relevance or desirability for consumers. There is nothing unusual in such firms opting to adjust these calculations to demote terrorist content; a minimal sketch of what such a demotion might look like follows this list.
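The sketch below shows demotion as a simple multiplier applied to an ordinary relevance score once a flag exceeds a confidence threshold. It is a hypothetical illustration of the concept only, not any platform’s actual ranking system; every name and value (Item, base_relevance, flag_confidence, DEMOTION_FACTOR) is an assumption.

```python
# Hypothetical sketch of algorithmic demotion: flagged items are pushed
# down in ranking rather than removed. Every name and value here
# (Item, base_relevance, flag_confidence, DEMOTION_FACTOR) is an
# assumption for illustration, not any platform's actual ranking system.

from dataclasses import dataclass

DEMOTION_FACTOR = 0.1   # assumed penalty multiplier for flagged content
FLAG_THRESHOLD = 0.8    # assumed confidence level at which demotion applies


@dataclass
class Item:
    item_id: str
    base_relevance: float   # score from the ordinary relevance model
    flag_confidence: float  # 0.0-1.0 confidence (human or classifier) that
                            # the item is terrorist recruitment content


def ranking_score(item: Item) -> float:
    """Return the ordinary relevance score, demoted when the terrorist
    content flag exceeds the confidence threshold."""
    if item.flag_confidence >= FLAG_THRESHOLD:
        return item.base_relevance * DEMOTION_FACTOR
    return item.base_relevance


def rank(items: list[Item]) -> list[Item]:
    # Highest score first; flagged items sink but are not deleted.
    return sorted(items, key=ranking_score, reverse=True)
```

Note the design choice the sketch embodies: flagged items sink rather than disappear, which reduces amplification while leaving the harder removal questions, discussed above, to the takedown process.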
In general, the core of the best responses may turn out not to be tools, but people, and those people may well not be makers of tools, but their users. When the concerns were recruitment of the easily influenced (i.e., the young) to illicit sexual relationships, it became plain to many, including a leading group of experts assembled by the Berkman Center to advise state attorneys general, that “technology can play a helpful role, but there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors.”[23]
Similarly, there is unlikely to be a solely technological solution to the problem of radicalization or its products, including the planning of terrorist attacks. The best resources for countering radicalization are, in my view, individuals who have been radicalized and turned away from it, or who understand the cultural, social, and emotional factors that lead to it.[24] Tools can amplify their voices, and in turn encourage others who would help turn the susceptible away from terrorism, or who would provide information about those engaged in it. To quote an observer from an article on this topic, “the lens we use to look at things like radicalization… improves dramatically when we have more people from that community."[25]
[1]https://cyberlaw.stanford.edu/blog/2013/06/tool-without-handle-dark-side; https://cyberlaw.stanford.edu/blog/2012/12/tool-without-handle-%E2%80%9Ckittens-cities-and-creepshots%E2%80%9D
[2]https://cyberlaw.stanford.edu/blog/2014/09/tool-without-handle-justified-regulation
[3]See, e.g., the 2012 attack on a Sikh temple https://www.splcenter.org/fighting-hate/intelligence-report/2012/sikh-temple-killer-wade-michael-page-radicalized-army and the 2015 murders of black parishioners in South Carolina http://www.cnn.com/2015/07/22/us/charleston-shooting-hate-crime-charges/
[4]See Department of Homeland Security, “Rightwing Extremism: Current Economic and Political Climate Fueling Resurgence in Radicalization and Recruitment,” /content/files/irp/eprint/rightwing.pdf
[5]https://www.splcenter.org/news/2016/01/04/antigovernment-militia-groups-grew-more-one-third-last-year
[6]http://www.adl.org/press-center/press-releases/extremism/adl-report-examines-state-of-white-supremacy-america.html#.VpFoIPkrJeg
[7]For simplicity’s sake, I refer here to ‘characteristics,’ though I agree that there is a rich vein of understanding to be applied to law and technology questions through James Gibson’s theory of ‘affordances’ - /content/files/courses/cs137/readings/gibson-aff.pdf. For an application of that theory to questions of privacy and surveillance, see Ryan Calo, “Can Americans Resist Surveillance?,” online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2635181
[8]See /content/files/documents/australia/publicaffairs/establishing_end_to_end_trust.pdf for a useful discussion of the first two of these characteristics (connectivity and non-traceability) and online security.
[9]http://www.theguardian.com/technology/2016/jan/07/white-house-summit-silicon-valley-tech-summit-agenda-terrorism
[10]See, e.g., http://arstechnica.com/tech-policy/2015/12/trump-wants-bill-gates-to-help-close-that-internet-from-terrorists/
[11]See, e.g., https://www.washingtonpost.com/news/the-switch/wp/2015/11/17/one-gop-lawmakers-plan-to-stop-isis-censor-the-internet/; http://arstechnica.com/tech-policy/2015/11/congressman-to-stop-isis-lets...
[12]Candidates for such distinctions could be, for example, that the content recruits for service in an organization recognized as affiliated with an established UN member nation, that the content recruits for an organization that does not advocate for violence or discrimination based on race, religion, ethnicity, or sexual orientation, or that the content recruits or promotes an organization subject to certain international treaties, such as the Geneva Convention. Even these criteria, though, will allow for disputes as to whether a given content takedown is ‘fair.’
[13]See https://www.eff.org/press/releases/onlinecensorshiporg-tracks-content-takedowns-facebook-twitter-and-other-social-media for discussion of an advocacy group project on content takedowns on social media.
[14]https://blog.twitter.com/2016/combating-violent-extremism. Twitter acknowledged, in doing so, that “there is no ‘magic algorithm’ for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgement calls based on very limited information and guidance.”
[15]See http://concurringopinions.com/archives/2012/03/actualizing-digital-citizenship-with-transparent-tos-policies-facebooks-leaked-policies.html
[16]See http://www.state.gov/j/ct/list/ for a US government list of designated organizations
[17]The term "material support or resources" is defined in 18 U.S.C. § 2339A(b).
[18]See 31 CFR § 595.206
[19]See http://cyb3rcrim3.blogspot.com/2010/06/state-action-and-4th-amendment.html; see also United States v. Keith, No. 11-0294 (D. Mass. 2013) (holding that AOL was not acting as a state actor, though the National Center for Missing and Exploited Children (“NCMEC”) was conducting a Fourth Amendment-protected search).
[20]18 U.S.C. §§ 1961–1968. See Zvi Joseph, The Application of RICO to International Terrorism, 58 Fordham L. Rev. 1071 (1990), available at http://ir.lawnet.fordham.edu/flr/vol58/iss5/8
[21]See, e.g., http://www.muslimsforpeace.org/ and http://www.gainpeace.com/; see also http://www.nytimes.com/2012/02/24/us/gain-peace-in-chicago-aims-to-counter-anti-muslim-sentiment.html (NY Times article illustrating efforts of Muslim groups to counter misinformation).
[22]http://www.adl.org/combating-hate/international-extremism-terrorism/c/isis-islamic-state-social-media.html?#.VtkZVPkrJeg; https://www.chathamhouse.org/event/waging-digital-counterinsurgency
[23]/content/files/sites/cyber.law.harvard.edu/files/isttf_final_report-executive_summary.pdf
[24]This is my own view, but reports suggest there are others (more influential than I) who agree. See http://www.theguardian.com/technology/2016/jan/20/facebook-davos-isis-sheryl-sandberg
[25] http://www.buzzfeed.com/sheerafrenkel/inside-the-obama-administrations-attempt-to-bring-tech-compa#.aiXz4W5AQ