Tool Without A Handle: Cybersecurity Paradoxes

"I think I can safely say that nobody understands quantum mechanics." —Richard Feynman[1]

Earlier I wrote about certain paradoxes in data privacy that have features in common with quantum physics.  For example, I noted that collecting personal data is required in order to allow data subjects to remove personal data.[2]  I also noted that personal data is “entangled” – e.g., when my data is breached on one system, that breach affects the integrity of my data on all other systems.[3]

In this post, I identify similar quantum puzzles in cybersecurity.  In particular, I discuss paradoxes in defining cybersecurity goals, and in dealing with nation-state threat actors.  These paradoxes share some of the quantum-like qualities found in data privacy. That is, more effective solutions to certain cybersecurity challenges can be counter-intuitive, and often have quantum-like non-dualistic qualities.[4]

I discuss here two illustrative cases of paradoxical puzzles in cybersecurity: 

1) To reduce failures, aim at having some failures;

2) To get better international cybersecurity, have fewer rules and limit prosecutorial-type enforcement.

Cybersecurity Goals – To Reduce Failures, Aim at Having Failures

Traditionally stated, cybersecurity aims to ensure managed data retains three qualities:  confidentiality, integrity, and availability, known colloquially as the “CIA Triad.”[5]  That is, the goal is to prevent access by those who should not have it, maintain the properties, values, and reliability of the data over time and in context, and ensure data is available to those properly authenticated and authorized.[6]

The NIST Cybersecurity Framework (“NIST CSF”) defines cybersecurity goals in more of a functional way, i.e., by defining functions every cybersecurity program should have.  Specifically, the NIST CSF defines functions that organize information, enable risk management, address threats, support business continuity, and foster continuous improvement.[7]  This practical set of goals is not inconsistent with the CIA triad approach; indeed both are useful.  For example, while the NIST CSF identifies continuous improvement as a goal, the CIA triad helps answer the question “continuous improvement at what?”

In either case, though, the goal of cybersecurity is stated as a state of affairs where managed data is always secured, always maintained, and always accessible with proper authentication and authorization. Security functions should ideally operate like clockwork:  repeatedly, predictably, and without failures.[8]

There are at least two paradoxes involved here, however.  The obvious one is that cybersecurity maturity typically proceeds asymptotically.  Even the best cybersecurity programs will approach these desired states but never reach the ideal at which they aim.  Aiming for a goal that’s ultimately unattainable can lead to misplaced emphasis on advanced solutions, rather than rigor in basic security.  And it can de-motivate rather than inspire better cybersecurity hygiene.[9]

More sophisticated approaches to cybersecurity embrace paradox (or, if you will, irony).  One salient example is the concept of “zero trust,” where, in effect, cybersecurity never sleeps.  Zero trust accepts that a perfectly secured perimeter will never be achieved, and thus aims to manage risk by applying authentication and similar verification controls to all components of an information ecosystem.
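
To make the idea concrete, here is a minimal sketch (in Python) of zero-trust-style request handling, in which every request is authenticated and authorized regardless of where on the network it originates.  The tokens, resources, and policy below are hypothetical illustrations invented for this example, not any particular vendor’s API.

```python
# A minimal, hypothetical sketch of zero-trust request handling: every request
# is re-authenticated and re-authorized; nothing is trusted merely because it
# arrives from an "internal" network.  Tokens and policies here are invented.

from dataclasses import dataclass
from typing import Optional

VALID_TOKENS = {"token-abc": "alice"}              # hypothetical identity store
ACCESS_POLICY = {("alice", "payroll-db", "read")}  # hypothetical least-privilege grants

@dataclass
class Request:
    token: str
    resource: str
    action: str

def authenticate(token: str) -> Optional[str]:
    """Establish who the caller is; fail closed on unknown credentials."""
    return VALID_TOKENS.get(token)

def authorize(user: str, resource: str, action: str) -> bool:
    """Check this specific action against this specific resource."""
    return (user, resource, action) in ACCESS_POLICY

def handle(request: Request) -> str:
    user = authenticate(request.token)  # verified anew on every request
    if user is None:
        return "DENY: authentication failed"
    if not authorize(user, request.resource, request.action):
        return "DENY: not authorized"
    return f"ALLOW: {user} may {request.action} {request.resource}"

print(handle(Request("token-abc", "payroll-db", "read")))   # ALLOW
print(handle(Request("token-abc", "payroll-db", "write")))  # DENY
```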

Another paradox is encapsulated in Ken Kesey’s aphorism “nothing lasts.”[10]  Information systems, and human economic and political interests, are in a constant state of flux.  So even if perfect cybersecurity for a given information system could be achieved, it won’t last as changes in that system are deployed.

Also, human nature with respect to goals is, understandably, to move on once a goal has been achieved.  This means a perfect state of security would lead to worse security than would a state where cybersecurity periodically fails.  That is, a state of perfect security would breed complacency.  It is preferable to have imperfect security, where skirmishes lead to vigilance and modest occurrences of failure cultivate determination.

To be sure, there is important value in safeguards such as penetration testing and threat intelligence that help organizations stay ahead of potential failures, and organizations must aim to prevent catastrophic failures (and have backup data servers and other continuity plans in case they do occur).  But it is preferable to have minor failures periodically than to forgo the learning opportunities failure affords.  And expecting, even welcoming, minor problems also reinforces the value to the enterprise of security measures that may burden efficiency, such as encryption or segmentation.[11]

It’s well understood that risk management is at the core of cybersecurity, and rightfully so.  The question is what orientation such risk management should take.  Akin to the quantum phenomenon where particles don’t move in ways ordinary mechanics would indicate, aiming at total risk elimination may paradoxically create more risks.

Cybersecurity Law – Law Without Rules

“The code is more what you'd call 'guidelines' than actual rules.” – “Pirates of the Caribbean”

Cybersecurity risks arise from a wide range of threat actors, among them actors interested in their own amusement, actors interested in malicious activity, actors interested in profitable malicious activity, organized crime, and nation-state actors.  Nation-state actors are among the more difficult to address because, by definition, they are typically not subject to domestic law enforcement.  That is, they are akin to pirates operating outside of national and international laws, but still within a framework of rules.

Ordinarily, security (cyber or otherwise) is achieved through rules; the clearer, the more specifically applicable, and the more rigorously enforced those rules are, the better.  For example, if an organization has a rule that data of a certain classification must be encrypted in transit, it’s preferable to have a rule that lists all the data types falling into that classification, and the types and levels of encryption required, rather than a rule which leaves those criteria to guesswork.
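
By way of illustration only, here is a sketch of how such a rule might be made explicit: a simple mapping from data classification to covered data types and the required transport encryption.  The classifications, data types, and minimum standards below are assumptions invented for the example, not drawn from any organization’s actual policy or published standard.

```python
# A hypothetical, illustrative encryption-in-transit policy: each data
# classification lists the data types it covers and the encryption required.
# None of these values are drawn from a real policy or standard.

ENCRYPTION_IN_TRANSIT_POLICY = {
    "restricted": {
        "data_types": ["payment_card", "health_record", "government_id"],
        "required": "TLS 1.3 with mutual authentication",
    },
    "confidential": {
        "data_types": ["customer_contact", "internal_financials"],
        "required": "TLS 1.2 or higher",
    },
    "public": {
        "data_types": ["press_release", "published_docs"],
        "required": "encryption optional",
    },
}

def required_encryption(data_type: str) -> str:
    """Return the transport-encryption requirement for a given data type."""
    for classification, rule in ENCRYPTION_IN_TRANSIT_POLICY.items():
        if data_type in rule["data_types"]:
            return f"{classification}: {rule['required']}"
    return "unclassified: escalate for review rather than guessing"

print(required_encryption("health_record"))  # restricted: TLS 1.3 with mutual authentication
```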

But just as laws of nature that operate in predictable Newtonian ways at the ordinary levels of perception begin to do strange things at the quantum level, cybersecurity laws that apply neatly and effectively at the level of an industry can do strange things at the nation-state level. 

First, even in contexts where rules and enforcement make sense, there are downsides to rules:

- Those whom one aims to protect with such rules can be lulled into a false sense of security, believing that the rule (or contract) creates in fact the conditions it proposes in theory;

- Rules can quickly become outdated due to innovations, in which case they can become unwieldy to apply to situations not originally intended.  Rules can also be weakened by unexpected loopholes that arise when technology evolves to perform similar functions in ways that are not covered by the rules;

- Rules can help guide threat actors by mapping out the walls of the maze.  For example, if rules specify access to a given database requires conditions A, B & C, then all the threat actor has to do is determine a way to create (or spoof) those conditions.  Whereas if the rules for access are unknown, or constantly changing, the threat actor has less to work with.

Moreover, in international law there are fewer opportunities for a central authority to apply rules and mete out punishments which a party is bound to respect.  Thus responses to nation-state threat actors require far more in the way of persuasion and less in the way of simply applying due process. 

This is so even in war.  As Vladimir Putin is hopefully learning, in the long term the question of Ukrainian sovereignty may well be shaped more by international consensus, in terms of how many nations bind together to provide supplies for Ukraine, sanctions against Russia, and support for Ukrainian military responses, than by the forces applied by his cyber threat actors (or his military on the ground).

A book I returned to over and over again in law school (when the inklings of the Internet were just dawning) posits that international law must constantly wrestle with two potential criticisms:  it is either simply an apologetic for what states will do (or what powerful states will impose) and thus no law at all, or it is a utopia that describes a world that does not exist and thus irrelevant.[12]

The introduction to that book encapsulates this paradox with a quote from Salman Rushdie, who observes how these tensions play out in the children’s game of Snakes and Ladders, where the “solid rationality of ladders balances the occult sinuosities of serpents.”  Rushdie points out that, while this is a wonderfully illustrative binary, our world often operates in ambiguous, non-binary ways. 

The game of cybersecurity, especially at the international level, both presents and can benefit from ambiguity.  For it happens that when the rules of the game move into ambiguity there are more possibilities:  one can climb up a snake, and slide down a ladder.[13]

This is not, of course, to defend anarchy.  Rather it is to say that at the international level, general principles, supported by broad consensus, can be more effective than strict rule enforcement.[14]  If the behavior of actors in cyberspace (including our own) is consistently going to be ‘hunt and kill’ as necessary, then pleading for respect for rules we don’t ourselves observe (or believe in) operates in a contrary way – it invites further disrespect for the rules.[15]

At the quantum level, things appear to be interconnected, and seemingly isolated actions can have effects even at great distance.  In the context of international cybersecurity challenges, norms crafted among certain groups can influence those outside the group, because nation-states are all interconnected.[16]  Owing to this interconnection, actions within one group, however isolated they may seem, can and do have effects on nations at a distance from it.

A recent address by Chris Inglis, U.S. National Cyber Director, made this point specifically with respect to Chinese efforts at espionage (economic and otherwise):  that the U.S. does not so much seek to oppose China’s efforts at growth as to create conditions where China’s cyber espionage occurs in an international context that respects cybersecurity.[17]  This unsurprisingly aligns with remarks by U.S. Secretary of State Antony Blinken, who stressed that the United States does not seek to block China from its objectives so much as to defend and strengthen international law, agreements, principles, and the institutions that maintain them, shaping the environment in which China pursues its goals.[18]

As a practical matter, then, this approach seeks to create a change in direction by observing and naming Chinese cybersecurity violations and by shaping the environment around them.  This is similar to the “observer effect”[19] known even to a casual student of quantum physics.  It is logical to assume the observer effect occurs because the observed and the observer are somehow connected.  And even where China and the U.S. do not, for example, agree to or ratify the same international agreements, certainly China and the international community are connected.

Connections among international cyber actors exist not only via the (admittedly, Great-Firewalled) Internet but via economics, finance, culture, education, defense and, of particular importance, change in the climate of our shared planet.  They exist in the context of organizations that help support shared resources, such as Internet registries and standards bodies.  China’s awareness of this creates leverage.

In addition, by pursuing cooperation as broadly as we can among those willing to do so, rather than obedience from those who are not, we strengthen the cybersecurity rules we expect will be observed.  Father Richard Rohr’s writing well illustrates how awareness of interconnectedness yields a somewhat paradoxical approach to antagonistic relationships, one that can reduce risks of backlash, counterattack, and worsened relations.  He writes, “[w]hen Jesus commands us to love our enemy and to love our neighbor, he’s training us to [recognize that what] you do to another, you do to yourself.”[20]

Writing from a different tradition, Buddhist teacher Thich Nhat Hanh makes a similar point:  “[w]hen you begin to see that your enemy is suffering, that is the beginning of insight.”[21]  That is, one can make progress through empathy, and understanding another’s motivations, rather than indulging vitriol and attempting prosecutorial approaches.

Because these points seem paradoxical, they can often be misunderstood.  Neither principle is a call to surrender; on the contrary both statements clearly identify that one likely does, in fact, have enemies.  Nothing about this approach suggests international actors should not invest in self-defense, nor in tools to attribute attacks (as best we can) and to call out our objections to cybercrime and cyber espionage.  

Nor is this necessarily a bias in favor of carrots vs. sticks – after all, some actors may happily accept carrots and then continue on their self-interested path.  Accountability should be among the principles about which we seek to drive broad consensus. 

So while it is right to understand enemies as such, this suggests a response to our enemies that is different from the application of rules through prosecutions and punishments.  In particular, it suggests responding by managing our own actions so as to bring them under the principles we seek to advance, and by investing in consensus among others similarly inclined to manage their actions by such principles.

This approach can, paradoxically, be a more effective response to an enemy attack than responding in kind, which can undermine consensus around the very principles sought to be enforced.  That is, greater security can often be achieved where one declines to respond in kind, and thus denies an enemy the escalation they seek to provoke, while reinforcing the case for their exile.

And, to expand the circle of those who stand with locked arms, it’s often better to stand for principles that allow for some ambiguity and thus include, rather than either/or rules that exclude.  Cybersecurity is, in virtually every respect, a set of trade-offs.  Quantum physics, generally speaking, shares some of these paradoxical trade-offs. 

So too, as it happens, does quantum computing.  It is reported that quantum computing may render current encryption obsolete.  And yet the very same qualities of quantum computing afford the possibility of new, even more secure forms of encryption.[22]  In cybersecurity, as in the quantum world, not only is there a ladder for every snake; properly seen, there are no snakes and there are no ladders, for like a quantum bit each ladder can be a threat, and each threat can be a ladder.

[1]Richard Feynman, The Character of Physical Law (MIT Press: Cambridge, Massachusetts, 1995), p. 129; see also “Feynman - Nobody understands Quantum Mechanics,” https://www.youtube.com/watch?v=w3ZRLllWgHI

[3]“Tool Without a Handle,” “21st Century Privacy – A Quantum Puzzle,” online at: https://cyberlaw.stanford.edu/blog/2015/05/tool-without-handle-21st-century-privacy-%E2%80%93-quantum-puzzle

[4]Which is apt, because as it happens the advent of quantum computing poses some of the most important challenges – and opportunities – for cybersecurity policy.  Quantum computing, in brief, is computing that utilizes the properties of a quantum state to create “bits” of information (qubits) that can exist not only as 0 or 1, but also in a superposition of both, rather than in just the traditional two states.  As Microsoft explains, “[w]hile a bit, or binary digit, can have a value either 0 or 1, a qubit [a quantum bit] can have a value that is either 0, 1 or a quantum superposition of 0 and 1.” This means quantum computers can be far more effective at calculating a large number of possible combinations, notably as applied in cryptography.  See, e.g., “Understanding Quantum Computing,” 24 May 2022, online at: https://docs.microsoft.com/en-us/azure/quantum/overview-understanding-quantum-computing; see also “Quantum Computing History and Background,” 24 May 2022, online at: https://docs.microsoft.com/en-us/azure/quantum/concepts-overview.

[5]See, e.g., Fortinet, “Cybersecurity Glossary,” online at:  https://www.fortinet.com/resources/cyberglossary/cia-triad

[6]“Authentication” and “Authorization” are similar, but distinct concepts in cybersecurity.  Authentication is the process of establishing that a party is who they claim to be.  Authorization is the process of establishing the appropriate scope of access, use, or disclosure of data assuming a given party has successfully authenticated.

[7]See, e.g., GSA, “NIST Cybersecurity Framework,” online at: https://www.gsa.gov/technology/technology-products-services/it-security/nist-cybersecurity-framework-csf; the NIST Framework itself is published at: https://www.nist.gov/cyberframework

[8]See, e.g., “NIST Cybersecurity Framework Components,” describing tiers useful to measure the maturity of an organization’s cybersecurity implementation, online at: https://www.nist.gov/cyberframework/online-learning/components-framework

[9]See, e.g., Yang et al., “Why We Set Unattainable Goals,” Harvard Business Review, January 4, 2021, online at: https://hbr.org/2021/01/why-we-set-unattainable-goals; though compare Albert Camus, The Myth of Sisyphus (Vintage International, 2018 paperback edition), p. 123 (“The struggle itself ... is enough to fill a man's heart”).

[10]See Tom Wolfe, “The Electric Kool-Aid Acid Test,” (Transworld Publishers, 1989), p. 154.

[11]Data segmentation isolates data on separate systems so that, when attacks do get through, they are less likely to lead to catastrophic failures or large-scale data exfiltration.  See, e.g., https://www.datamation.com/security/data-segmentation.  It is similar to network segmentation, where functions are segregated for similar security purposes.  See, e.g., https://cybersecurity.att.com/blogs/security-essentials/network-segmentation-explained

[12]Martti Koskenniemi, From Apology to Utopia: The Structure of International Legal Argument (Cambridge University Press, 1989).

[13]Salman Rushdie, Midnight’s Children (Picador, 1981), p.149.

[14]See, e.g., The Budapest Convention on Cybercrime, online at: https://www.coe.int/en/web/cybercrime/the-budapest-convention?ref=hackernoon.com

[15]See William Golding, Lord of the Flies (1954).

[16]In particular, nation-states that are antagonistic are, in that way, more interconnected than those which are not, i.e., those which are indifferent to one another.

[17]Lowy Institute, “Address by the US National Cyber Director on Cyber Cooperation,” online at: https://www.youtube.com/watch?v=x0n0Z18v4GM

[18]Secretary of State Antony Blinken, “The Administration’s Approach to the People’s Republic of China,” May 16, 2022, online at: https://www.state.gov/the-administrations-approach-to-the-peoples-republic-of-china/

[19]See, e.g., https://en.wikipedia.org/wiki/Observer_effect_(physics); see also supra, n.2 (Quantum Puzzle – Part 2)

[20]Richard Rohr, Daily Meditations, “Non-Dual Consciousness,” online at: https://conta.cc/3a3n1Qj

[21]Thich Nhat Hanh, Peace Is Every Step: The Path of Mindfulness in Everyday Life (Bantam, 1992), p.120.

[22]See, e.g., Cade Metz and Raymond Zhong, “The Race Is On to Protect Data From the Next Leap in Computers. And China Has the Lead,” New York Times, Dec. 3, 2018, online at: https://www.nytimes.com/2018/12/03/technology/quantum-encryption.html; see also Cade Metz, “’Quantum Internet’ Inches Closer With Advance in Data Teleportation”, New York Times, May 25, 2022, online at: https://www.nytimes.com/2022/05/25/technology/quantum-internet-teleportation.html (outlining progress at capabilities to send quantum data to and from distant machines).

 
