Tool Without A Handle: Guerilla Information Warfare

“The enemies of liberal democracy hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy.” – Yuval Noah Harari[1]

Most of the discussion of the report of the Office of the Special Counsel[2] has, understandably, focused on the political implications; interestingly, for a report prompted by foreign intelligence operations, it is not really a counterintelligence report.[3]  But Volume I does identify some interesting points about uses of networked information tools for expression that, in many respects, constitute hostile military/intelligence activity – or at least uses that blur the line between the military activity of a foreign government and the private activity of foreign persons.

The Internet, when considered under the “place you go” metaphor, has been likened to the Wild West[4] or an anti-sovereign kind of “place.”[5]  But recent events illustrate the value of considering information technology as tools rather than as a “place” or even a medium.  These events include the intelligence activities carried out by Russian forces during the 2016 election, attacks using tools known as Stuxnet[6] and WannaCry,[7] and an attack by Hamas (to which Israel responded with an air strike, marking perhaps the first armed response to a cyberattack).[8]  What these events illustrate is that there is no “Wild West” free from the interests or activities of national sovereigns or their militaries.  There is only the physical space we naturally occupy and electronic tools to send and receive information, for various purposes including warfare and related intelligence work.[9]

These events also illuminate some important points about, among other things, the ongoing challenge of asymmetry in cybersecurity.  Asymmetry refers to the fact that attackers need only succeed once, while defenders must succeed at every engagement.  With the resources of national militaries behind efforts that already enjoy asymmetric advantages, achieving a desired level of security becomes more difficult and more expensive.  And information tools that can be used with malice are far easier to obtain than physical, chemical or biological weapons.[10]
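
To see how quickly those odds compound, consider a minimal sketch (in Python, with invented numbers rather than figures from any study): a defender who repels 99 percent of independent attempts is still more likely than not to be breached over a hundred attempts.

    # Toy model of attacker/defender asymmetry.
    # Assumes independent attack attempts, each repelled with
    # probability p_defend; both numbers below are invented.
    def breach_probability(p_defend: float, attempts: int) -> float:
        """Probability that at least one attempt succeeds."""
        return 1 - p_defend ** attempts

    for n in (10, 100, 500):
        print(n, round(breach_probability(0.99, n), 3))
    # Prints approximately: 10 0.096 / 100 0.634 / 500 0.993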

It’s encouraging that I’ve seen recent usage of the term “cybertools.”  I don’t know, unfortunately, whom to credit for the coinage, but it has appeared in articles on National Security Agency hacking tools,[11] government technology for cybersecurity,[12] and the use of AI and the Internet for offensive military efforts.[13]  Considering technologies as “tools” that can be wielded by anyone, military forces included, produces a better mental picture, one more likely to prompt an appropriate sense of urgency.

A “place” connotes something with boundaries, and possibly with some organizing principles or shared expectations of behavior.  “Tools,” in contrast, prompt an understanding that there are no inherent limits on their use other than those imposed by their designers and their users.[14]

The issues concern not just “cyberwarfare” but also “information warfare” – a battle for dominance, or at least security and stability, among ideas and information.  I’m grateful for a Lawfare article sent by a friend that described a “Jesuitical” debate within the Air Force in the 1990s as to whether networked information technologies required a “cyber” strategy or an “information” strategy.[15]  That is, should the military focus on the security of private information systems or the security of public information systems?

Hint: it’s both, but the Air Force (and others) were slow to catch on to the latter strategy.  Enemies, of course, also saw the potential of public information systems.  As that article notes, this is the topic of a recent book by Peter Singer and Emerson Brooking, who observed that “social media is being mobilized as an adjunct to kinetic combat.”[16]

To be sure, the article also notes that military and political efforts aimed at disinformation and the disruption of public opinion are not new.  During the Revolutionary War, Benjamin Franklin created “fake news” intended to undermine British public support for colonial governance.[17]  “Disinformation” campaigns were a feature of Cold War espionage, as well as of fictional treatments of it.[18]

In the political context, the election of 1800 between Thomas Jefferson and John Adams was peppered with scandalous and often false rumor-mongering in an attempt to win a battle of information and ideas.[19]  More recently, political operative Dick Tuck’s stunts – fake mimeos claiming candidate Barry Goldwater was upset at fluoride in his water – were a fairly benign form of political misinformation campaign, but Tuck’s envelope-pushing played a role in encouraging others to take up more aggressive misinformation efforts.[20]

These political efforts, like the Russian activity in the 2016 election, show the challenges of asymmetry in the information warfare arena.  It takes only one person to publish a false narrative that becomes widely spread and changes collective opinion for the worse.  As the quip often attributed to Churchill goes, “a lie gets halfway around the world before the truth has a chance to get its pants on.”
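
A back-of-the-envelope branching model (a sketch in Python, with invented parameters) illustrates that asymmetry: if each exposed user reshares with probability q to k followers, expected exposure grows by a factor of q·k per wave, so a single seeded falsehood can reach thousands before any correction catches up.

    # Toy expected-reach model of a single false post.
    # q (reshare probability), k (followers per user) and the number
    # of waves are invented for illustration; real diffusion is messier.
    def expected_reach(q: float = 0.3, k: int = 10, waves: int = 5) -> float:
        exposed = k            # the seed post reaches k followers
        total = 1 + k          # the author plus that first audience
        for _ in range(waves):
            exposed *= q * k   # expected exposures in the next wave
            total += exposed
        return total

    print(round(expected_reach()))  # 3641 expected users from one post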

The nature of information warfare also illuminates what may be an under-appreciated fact about intelligence work, which is that much of it presents as ordinary and benign activity.  Attending a business conference, touring a factory, supporting a political cause – all valuable (and ordinarily First Amendment protected) activities – can be reconnaissance and/or influence opportunities in an information war.  The sentencing memo in the recent case of Russian operative Maria Butina illustrates plainly why ordinary activity can be of counterintelligence concern.[21]

One practical conclusion from this analysis is that the security of our news and media information systems matters as much as the security of personal and commercial information systems.  Information warfare shows that harms can arise even when there is no unauthorized access, when the tools are used as intended, and when no user privacy settings are compromised.  In both domains, the threats are asymmetric, the tools are readily available and useful for many productive and positive purposes, and threats are easily disguised as benign, whether in the form of a business conference used for economic espionage or a product order form email used to inject malware.

Fortunately, there are a variety of interesting proposals to respond to information warfare.  Social media platforms have experimented with novel ways of “fact checking” – putting information in context, as reputable journalists attempt to do.[22]  Cass Sunstein has proposed applying libel law concepts.[23]  Professor Philip Howard is, among others, doing important work to understand the role of automated information tools (e.g., “bots”) in information campaigns.[24]
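
As a crude illustration of the kind of behavioral signal that research examines, here is a sketch (in Python; the threshold and sample data are hypothetical, and production systems combine far richer features) of flagging accounts whose posting rate is implausibly high for a human.

    # Toy heuristic: flag accounts posting implausibly often.
    # The 50 posts/hour threshold and the sample log are invented;
    # real bot detection uses many behavioral and network signals.
    from collections import Counter

    def flag_suspects(post_log, threshold_per_hour: int = 50):
        """post_log: iterable of (account_id, hour) pairs."""
        counts = Counter(post_log)
        return {acct for (acct, hour), n in counts.items()
                if n > threshold_per_hour}

    log = [("bot_7", 14)] * 120 + [("alice", 14)] * 3
    print(flag_suspects(log))  # {'bot_7'}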

These are important topics of conversation because there are few easy answers: networked information tools remain important for those who seek to advance human rights, freedom and democracy, notwithstanding their use against such causes,[25] and even well-meaning responses to information warfare can threaten the very rights and democratic values they seek to protect.[26]

My own contribution, as I’ve noted in this blog several times, is that the most reliable forms of defense involve critical and informed users.  Robust resistance to phishing and social engineering, as well as defense against “fake news” and disinformation campaigns, is unlikely to be fully achieved through refinements to the tools (including information platforms) alone.  It will require designers and users, to the extent possible, to acquire a greater understanding of themselves as sorters and interpreters of information.[27]

Indeed, ideally, designers and users of tools would develop a stronger sense of themselves as the source of experience and thus of knowledge.[28]  Effective responses to cyberattacks and “information warfare” are thus very much the province of psychologists, as well as technologists and legal experts.  If there is a territory to explore in addressing issues of information warfare, let it be the territory of the mind.

[3]Benjamin Wittes explains that counterintelligence information, analysis, and conclusions were left within the FBI, rather than handled by the Special Counsel’s office.  See https://www.lawfareblog.com/notes-mueller-report-reading-diary#Introduction%20to%20Volume%20I

[4]See, e.g., Stratfor, “How the Wild West of the Internet Will Be Won,” (July 26, 2017) https://worldview.stratfor.com/article/how-wild-west-internet-will-be-won

[5]See, e.g., “John Perry Barlow: Is Cyberspace Still Anti-Sovereign?,” California Magazine (March/April 2006), https://alumni.berkeley.edu/california-magazine/march-april-2006-can-we-know-everything/jp-barlow-cyberspace-still-anti

[6]David Kushner, “The Real Story of Stuxnet,” IEEE Spectrum (Feb 26, 2013), https://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet

[8]“Israeli Military Strikes and Destroys Hamas Cyber HQ in World First,” Forbes, May 6, 2019; https://www.forbes.com/sites/zakdoffman/2019/05/06/israeli-military-strikes-and-destroys-hamas-cyber-hq-in-world-first/#57bfb28aafb5

[10]See, e.g., Robert Vamosi, Security Week, “Guerilla Cyber Warfare:  Are We Thinking Defensively?” (September 01, 2011) https://www.securityweek.com/guerilla-cyber-warfare-are-we-thinking-defensively (noting the phenomenon of asymmetric threats)

[11]“N.S.A. Denies That Its Tool Was Used in Hacking of Baltimore, Lawmaker Says,” NY Times, June 2, 2019, https://www.nytimes.com/2019/05/31/us/nsa-baltimore-ransomware.html  As the NY Times points out, it's an interesting question how the N.S.A. came to any knowledge that its tool was not involved, assuming the details of the forensic investigation by the City of Baltimore and its consultants have been kept confidential to those parties.

[12]See “NASA Official Credits DHS’ Cyber Tools with Transforming Its Cyber Stance,” NextGov.com, May 22, 2019, https://www.nextgov.com/cybersecurity/2019/05/nasa-official-credits-dhs-cyber-tools-transforming-its-cyber-stance/157204/

[13]See, e.g., Maryann Lawlor, “AI May Pose More Questions Than Answers,” The Cyber Edge, May 31, 2019, https://www.afcea.org/content/ai-may-pose-more-questions-answers

[14]See “Tool Without A Handle: Tools for Meaning, Part 2,” https://cyberlaw.stanford.edu/blog/2018/11/tool-without-handle-tools-meaning-part-2 (discussing the importance of design in policy outcomes).

[16]Id.; see Peter Singer and Emerson Brooking, LikeWar: The Weaponization of Social Media (Eamon Dolan/Houghton Mifflin Harcourt, 2018).

[17]See Hugh T. Harrington, “Propaganda Warfare: Benjamin Franklin Fakes a Newspaper,” (November 2014), https://allthingsliberty.com/2014/11/propaganda-warfare-benjamin-franklin-fakes-a-newspaper/.

[18]See, e.g., Adam Taylor, “Before Fake News There Was Soviet Disinformation,” Washington Post (November 26, 2016), https://www.washingtonpost.com/news/worldviews/wp/2016/11/26/before-fake-news-there-was-soviet-disinformation/.  The premise of Ian Fleming’s novel From Russia, with Love is a Soviet attempt to bring about not only the death of James Bond, but a phony scandal that would be picked up by worldwide media and bring disrepute on MI6.

[19]See, e.g., “Adams vs. Jefferson: The Birth of Negative Campaigning in the U.S.,” Mental Floss, http://mentalfloss.com/article/12487/adams-vs-jefferson-birth-negative-campaigning-us

[20]See https://www.washingtonpost.com/wp-srv/national/longterm/watergate/stories/tapes.htm (“H.R. Haldeman led a parade of White House aides who publicly tried to explain away [their campaign efforts at] sabotage as ‘an attempt to get a Dick Tuck capability’”).

[21]See https://assets.documentcloud.org/documents/5972875/4-19-19-US-Sentencing-Memo-Butina.pdf  (“Butina was not a spy in the traditional sense of trying to gain access to classified information to send back to her home country. She was not a trained intelligence officer. But the actions she took were nonetheless taken on behalf of the Russian Official for the benefit of the Russian Federation, and those actions had the potential to damage the national security of the United States”).

[22]Facebook News Release, “Hard Questions: How Is Facebook’s Fact-Checking Program Working?,” June 14, 2018; https://newsroom.fb.com/news/2018/06/hard-questions-fact-checking/

[23]Cass Sunstein, “Act Now to Head Off Looming ‘Deepfakes’ Disasters,” Bloomberg, May 29, 2019; https://www.bloomberg.com/opinion/articles/2019-05-29/pelosi-video-russia-threat-and-deepfake-threat

[24]Philip Howard, “Computational Propaganda,” February 17, 2018; https://www.oii.ox.ac.uk/blog/computational-propaganda-2/ ; see also Marianna Spring and Lucy Webster, “European elections: How disinformation spread in Facebook groups,” BBC Newsnight, May 30, 2019; https://www.bbc.com/news/blogs-trending-48356351 (describing sociological research to track disinformation via Facebook groups).

[26]See, e.g., Riana Pfefferkorn, “Democracy’s Dilemma,” Boston Review, May 15, 2019; https://bostonreview.net/forum/democracys-dilemma/riana-pfefferkorn-dont-put-anonymous-speech-on-the-chopping-block; https://cyberlaw.stanford.edu/blog/2019/05/essay-boston-review (noting value and tradition of anonymous speech).

[27]See, e.g., Walter Veit, “How to Avoid Falling Victim to Fake News,” Psychology Today, May 14, 2019; https://www.psychologytoday.com/us/blog/science-and-philosophy/201905/how-avoid-falling-victim-fake-news; Jeff Grabmeier, “Tech Fixes Can’t Protect Us from Disinformation Campaigns,” Science Daily, April 25, 2019, https://www.sciencedaily.com/releases/2019/04/190425115634.htm

[28]See, e.g., Dr. Kevin Perry & Bettie J. Spruill, “A Life That Matters:  Re-Thinking Responsibility as Intimacy and Wholeness.” https://ontologicalliving.com/blog/generating-life-matters-re-thinking-responsibility-intimacy-wholeness/

 
