Tool Without a Handle: “Tools for Terror, Tools for Peace,” part II

This post continues the analysis of how to respond to terrorist activity (including recruitment and planning of attacks) conducted through network information technology, in particular social media.  As noted earlier, I think the promising avenues to investigate fall into three areas:

1) Countering misinformation

2) Active recruitment to alternative missions

3) Areas beyond communication – e.g., algorithmic adjustments by social media platforms

In each of these areas, an effective response to terrorist recruitment online starts with upholding the principles we wish to promote as alternatives to extremism.  It cannot be said enough that illiberal responses do not bode well for discouraging an illiberal agenda.  Accordingly, we must accept that a certain amount of content advocating extremist agendas will be created, shared, and consumed through information technologies.  The point is to counter the extremist agenda itself: its expression through social media content is the symptom, not the root, and censoring the symptoms may do little to eradicate the root.

My own expertise does not extend to extremist groups, but I do have extensive personal experience with campaigns aimed at reinforcing positive online behaviors and discouraging negative ones (especially in children).[1]  As a recent statement from the White House noted, experience with other Internet policy concerns, such as cyberbullying, scams, gangs, and sexual predators, is informative of and consistent with a strategy for responding to extremism.[2]

From that work, I know that empirical analysis is important, because what may seem a common-sense, effective approach may in fact not be.  I know that the personal narrative is often more moving than the abstract, that alternatives are more appealing than simple prohibitions, and that misinformation can spread rapidly even among well-intentioned policy analysts.  And I know that effective methods will, by design, not interfere with lawful Internet use or the privacy and civil liberties of users.

This is not to say there is no place for restraints on content.  Content can be restrained by governments, where it crosses boundaries created to protect privacy and dignity, and by private sector companies, where it crosses policies created to improve the user experience for all.  But even private sector sanctions should be carefully, fairly, and consistently applied.

As with many objectives, it is difficult to achieve results if we are unclear what successful results look like. I posit that success looks like a measurable spread of liberal beliefs and values such that they crowd out extremist beliefs and movements before those can gain traction.  Beliefs such as “religious diversity is good,” “peaceful dispute resolution is the only acceptable option,” and “the future is bright” are already widespread, but clearly not widespread enough to curb the appeal of extremism to desired levels.  The tools of social media afford the capability to measure the spread of such statements and online interactions, and can even be used to attempt to infer beliefs.[3]
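To make “measurable” concrete, below is a minimal sketch, in Python, of the kind of volume-based “share of voice” comparison such monitoring can perform.  The phrase lists are entirely hypothetical placeholders; real monitoring pipelines use curated, multilingual lexicons and far more robust matching than simple substring tests.

```python
# Hypothetical phrase lists -- stand-ins only, not real monitoring lexicons.
LIBERAL_PHRASES = ["religious diversity", "peaceful dispute resolution"]
EXTREMIST_PHRASES = ["armed struggle", "apostate regime"]

def tally_mentions(posts, phrases):
    """Monitoring by volume: count posts containing any of the phrases."""
    return sum(1 for post in posts if any(p in post.lower() for p in phrases))

def share_of_voice(posts):
    """Crude ratio of 'liberal' mentions to all matched mentions in a sample."""
    liberal = tally_mentions(posts, LIBERAL_PHRASES)
    extremist = tally_mentions(posts, EXTREMIST_PHRASES)
    total = liberal + extremist
    return liberal / total if total else None

# Tracking this ratio over time (per week, per region) is one way to make
# the "spread of beliefs" measurable, subject to the caveats in note [3].
posts = ["Religious diversity is good for our city", "The future is bright"]
print(share_of_voice(posts))  # 1.0 for this tiny sample
```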

In other words, the immediate objective is not to eliminate extremism (though I agree with John Lennon that it is important to imagine what that would be like, it is a vision, not an action item).   Rather, the action item is to expand the scale and depth of certain social beliefs so that calls to extremism and violence are seeds that more often fall on dry soil and do not grow.  This has been done before, with behaviors such as smoking and drunk driving (which are also both addictive and deadly).

So this is a process, and while that process plays out there will be some extremists (and extremist groups) who will not part from their ways, and most certainly will not be so parted by reason.[4]  But over time we should be able to make measurable progress: fewer actual recruits to terrorist organizations, fewer supporters of their political objectives, and more natural, frequent, and unobjectionable content expressing alternatives to the extremist agenda.[5]

With that perspective in mind, some suggestions:

1) Countering misinformation

There are several tactics, already commonly understood, to bring to bear on misinformation:

Creation of material that not only provides history and perspective, but is narrowly oriented to speak directly to the personal experience of potential extremist recruits.  For example, to address extremist propaganda that rewrites history to foment a sense of injustice or to scapegoat particular groups, consider sponsoring local-language content, authored by local figures, that provides a personal, factual history of events in a city or town: a counter-narrative to extremist stories.  Among other things, this recognizes the importance of the local perspective, and the essential role that local organizations play in countering extremism.[6]

Where safety considerations do not require anonymity, narratives from those with personal experience of participation in, and then disillusionment with, an extremist organization can be powerful, particularly with young people.  Youth can be vulnerable to misinformation for many reasons.  Those from very religious households may feel spiritually lost between the strict messages of childhood and the natural expansion of their connections to a larger, secular world.  Young people are in the process of developing a personal identity, or looking for an adventure to create personal purpose. As part of breaking free from parental and school authorities, they may come to question established authorities and doctrines of all kinds.  Content from peers who have had similar feelings but found other outlets, or who have found their way to extremist groups only to regret it, can be more effective than abstract or clichéd statements in reaching such audiences.[7]

2) Active recruitment to alternative missions

To the extent lack of educational and employment opportunities is a motivating factor for people to align with extremist groups, online technology can provide such opportunities and allow organizations with alternative agendas to flourish.  As President Obama noted, poverty neither creates nor excuses violent extremism.  Nonetheless, it makes sense to address the grievances terrorists exploit, including economic grievances, and education in online technology skills should be part of this effort.[8]

I agree with this approach, but it illustrates an important caveat: this policy is, to some extent, a statement of faith in market economics (that acquiring skills will lead to opportunities) and a statement of faith in human nature (that those who acquire employable skills and opportunities will feel less aggrieved and put those skills to good use).

A more ruthless approach would seek to slow extremists’ use of online technology by restricting their access to such technology and by withholding information about how to design, build, and use such tools, rather than providing skills training.  That path, though, seems likely to perpetuate the grievances that lurk at the roots of extremism’s appeal, to be ineffective since the skills and tools are widely available, and to harm those who would use information technology skills for positive ends, frustrating the counter-narrative just discussed.  Moreover, extremism may be fueled less by poverty or lack of opportunity than by the emotional sense of injustice that such conditions are artificially imposed by foreign or illegitimate powers, an emotional state unlikely to be quelled by sanctions on skills training.[9]

Rather, education in online technology skills should be coupled with active recruitment to use those skills for alternative, positive missions.  Both the skills and the opportunities to use them should be part of this strategy.  Efforts at creating opportunities could include macro-level measures such as economic development and education reforms, and targeted programs such as youth exchanges and internships.  We should not forget that economic opportunity also requires effective rule of law and effective policing, both so legitimate businesses can thrive and so those looking for opportunities are deterred from finding them in hijackings, trafficking in drugs and other contraband, and other crimes.

3) Management of online content

In my previous post, I noted how terrorist organizations use information technology (and social media in particular) in ways that are subtler and more varied than simply broadcasting propaganda.  Some, for example, have attempted to hijack popular hashtags (e.g., #worldcup) to drive display of their messages.  These forms of manipulation are precisely what social media and search engine product teams seek to manage (unless such manipulation proves positive for users, in which case it may be converted into a feature…).
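As a rough illustration of how a product team might detect such hijacking, the sketch below measures topical drift in a hashtag’s stream.  The term lists are hypothetical placeholders; real platforms rely on learned topic models and behavioral signals rather than hand-built keyword lists.

```python
def hijack_score(posts_with_tag, reference_terms, suspect_terms):
    """Estimate how far a hashtag's stream has drifted from its usual topic.

    posts_with_tag: texts containing the trending tag (e.g., #worldcup)
    reference_terms: words expected in the tag's legitimate topic
    suspect_terms: words associated with known propaganda campaigns
    All three inputs are illustrative stand-ins, not a production design.
    """
    on_topic = off_topic = 0
    for post in posts_with_tag:
        text = post.lower()
        if any(term in text for term in suspect_terms):
            off_topic += 1
        elif any(term in text for term in reference_terms):
            on_topic += 1
    matched = on_topic + off_topic
    return off_topic / matched if matched else 0.0

# A sudden spike in this score for a trending tag would be one signal that
# the tag is being hijacked to piggyback unrelated messaging.
```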

Social media and search engines can, and should, make their own business decisions as to how their platforms promote (or demote) content.  This is itself a form of free expression on the part of those firms; algorithms are frequently adjusted to improve relevance or desirability for consumers.  There is nothing unusual in such firms opting to adjust these calculations to demote terrorist content.  Nor should private sector social media shy away from using its own editorial discretion to manually remove content that praises or supports terrorism, provided these editorial policies are explained publicly and applied consistently.
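To make “adjusting these calculations” concrete, here is a hedged sketch of demotion-based re-ranking.  The policy_classifier callable and demotion_factor value are hypothetical stand-ins for a platform’s actual policy model and tuning, which are of course proprietary.

```python
def ranked_results(items, policy_classifier, demotion_factor=0.1):
    """Re-rank items, demoting (not removing) those flagged by policy.

    policy_classifier: hypothetical callable returning True when an item
    violates the platform's published policy on terrorist content.
    demotion_factor: illustrative multiplier applied to flagged items.
    """
    def adjusted(item):
        score = item["relevance"]
        return score * demotion_factor if policy_classifier(item) else score
    return sorted(items, key=adjusted, reverse=True)
```

The key design choice here is that demotion leaves content accessible but less prominent, a lighter-touch intervention than removal, and one that is easier to apply consistently and explain publicly.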

There are, nonetheless, legitimate concerns about the extent to which political views – of any kind – should enter into algorithmic or manual editing of social media platforms, and, understandably, debate about where such editing crosses over from legitimate business discretion into illegitimate political bias.  Collective approaches seem best positioned to address these concerns.

Industry has collaborated on principles to guide effective responses and preclude them from crossing over into private-sector censorship.  For example, the Anti-Defamation League worked with leading online companies to formulate principles for responding to online cyberhate.[10]  The “Dangerous Speech” project has developed guidelines to identify when speech presents concrete risks of catalyzing violence.[11] The Global Network Initiative principles and guidelines remain relevant in the way in which they address non-voluntary content removal, i.e., government demands to limit online content.[12]

This variety of principles and views, though, shows that algorithmic or automated methods to remove extremist content would be difficult to implement consistently.  This distinguishes them from tools that can systematically block child abuse images, such as the PhotoDNA tool.[13]  Such images are universally illegal and have fixed characteristics that place them in the illegal category; extremist content lacks both qualities.
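To see why fixed characteristics matter, consider a generic perceptual “average hash,” sketched below.  This is only an illustrative analogue: PhotoDNA’s actual algorithm is proprietary and far more robust to alteration.  The point is that matching works because the target set of images is fixed and known, a property extremist content does not share.

```python
from PIL import Image  # third-party Pillow library

def average_hash(path, size=8):
    """Generic 'average hash': shrink to a size x size grayscale thumbnail,
    then set one bit per pixel brighter than the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def matches_known(candidate_hash, known_hashes, max_distance=5):
    """Match against a database of hashes of known images by Hamming
    distance; small distances tolerate re-encoding and minor edits."""
    return any(bin(candidate_hash ^ h).count("1") <= max_distance
               for h in known_hashes)
```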

In general, the core of the best responses may turn out not to be tools, but people.  Countering violent extremism (“CVE”) online will depend on people who have the knowledge to rebut attempts to rewrite history, people with personal knowledge of disillusionment with extremist agendas and organizations, people with the ability to provide skills training and opportunities, and people who choose to make the most of those opportunities rather than apply information technology skills to bad ends.

And, it will depend upon people who design and build tools and processes that moderate content fairly and consistently, people who use good judgment in reporting abuse of online tools, and people who are fair and pragmatic in their analyses of both industry and policymakers seeking to counter extremism and spread more positive messages effectively through information technology.

Success will require focus and persistence, but these thoughts sketch a vision of a more effective response to violent extremism online, and, as the song says, imagining is, in fact, easy.

 

[1] For example, I participated in the Internet Safety Technical Task Force convened by the Berkman Center at Harvard Law School in 2008 (see https://cyber.law.harvard.edu/research/isttf), helped manage safety and citizenship campaigns for Microsoft, and served on the board of the Internet Content Rating Association, whose tools to manage content ultimately failed to gain traction, but whose work created insights that led to the now successful Family Online Safety Institute; see https://www.fosi.org/icra/

[3] The tools for tracking sentiment via social media are wide and varied, and several can be applied at once to yield a fuller picture.  For example, a UNICEF study that aimed to track the spread of false anti-vaccination beliefs on social media noted four different techniques for social media monitoring: 1) monitoring by volume (number of mentions, likes, posts, etc.); 2) monitoring by channels (mapping and examining the various networks that users use to exchange content); 3) monitoring by engagement (measuring the number of users who respond to, like, share, and participate with content); and 4) monitoring by sentiment analysis (a qualitative approach that uses word libraries to detect positive or negative attitudes by users toward an issue).  See “Tracking Anti-Vaccine Sentiment in Eastern European Social Media Networks” (April 2013), online at http://uni.cf/1TdxLu9.  Dynamic methods have been proposed so that trends in sentiment can be analyzed over time and in reaction to shifts in events.  See, e.g., He, Lin, Gao and Wong, “Tracking Sentiment and Topic Dynamics from Social Media,” http://www.aaai.org/ocs/index.php/ICWSM/ICWSM12/paper/view/4496/5038.  These techniques necessarily measure concrete actions and may support inferences about beliefs, though it is challenging to measure the subjective state of a given person (which is naturally fluid in any event).
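As a toy illustration of the “word libraries” technique, the Python sketch below scores text against hypothetical positive and negative lexicons; real sentiment libraries are far larger, curated, and language-specific.

```python
# Hypothetical word libraries -- illustrative only.
POSITIVE = {"safe", "protects", "effective", "trust"}
NEGATIVE = {"dangerous", "toxic", "conspiracy", "harmful"}

def sentiment_score(text):
    """Lexicon-based sentiment: +1 per positive word, -1 per negative word,
    normalized by the count of sentiment-bearing words found."""
    words = text.lower().split()
    hits = [1 if w in POSITIVE else -1
            for w in words if w in POSITIVE or w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("Vaccines are safe and effective"))     # 1.0
print(sentiment_score("This vaccine is a toxic conspiracy"))  # -1.0
```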

[4]And thus there will always be a need for both law enforcement and national security-oriented surveillance, and for actions based on that intelligence, including those that are lethal.

[5]Content that is “against” extremism is, by itself, insufficient.  Angry voices against an extremist agenda simply amplify the forces of anger rather than promote alternatives.  Being “against” the Weathermen only fueled their conspiracy theories, and being “against” the Branch Davidians led to deaths (an experience that has informed law enforcement training and tactics ever since).  Alternatives to extremism should be empathetic and rational.

[6]See, e.g., “Strategic Implementation Plan for Empowering Local Partners to Prevent Violent Extremism in the United States,” https://www.whitehouse.gov/sites/default/files/sip-final.pdf

[7]For example, the group WiredSafety has created a Teen Angels program for youth to become trained in online safety and help educate their peers on safe online tool usage, including responding to bullying and harassment.  See http://teenangels.org/

[8]“Remarks by the President at the Summit on Countering Violent Extremism,” (February 19, 2015), State Department, Washington, D.C. https://www.whitehouse.gov/the-press-office/2015/02/19/remarks-president-summit-countering-violent-extremism-february-19-2015

[9]See, e.g., Hafez Ghanem, “Economic inclusion can help prevent violent extremism in the Arab world,”  http://www.brookings.edu/blogs/up-front/posts/2015/11/10-economic-inclusion-violent-extremism-arab-world-ghanem (“the usual statements about violent extremism— “it’s all about poverty,” or “we should focus on job creation,” or “we need to fix the education system”—are probably wrong. Or at least still need to be validated”).

 
