Among the principles that govern the current state of technology, two stand out as fundamental to our understanding of the information security problem. First, software is everywhere. The technology we adopt at an exponentially increasing rate is nearly always software at its core. In addition to the obvious tech devices like PCs, tablets, and handheld supercomputers once known as "phones," our cars, home heating/cooling controls, retail payment systems, medical devices, lawn mowers, and so much more now include powerful processors as function controllers--all of them driven by software.
Second, software engineering is hard. Yes, it's possible to throw together a simple program (see, e.g., the ubiquitous "Hello, world" example) that is easy to understand and, therefore, to debug. But most useful programs are, by necessity, far larger and more complex, and contain dependencies and logic paths that can quickly become untraceable or, more properly, undecidable. This means that, even when software developers take great care, programs may contain logic flaws that remain invisible until someone--often, a user--trips over them by executing the program under very specific conditions. Vulnerabilities like buffer overflows, race conditions, and unchecked user input, while well known as a species, still pop up frustratingly often in the wild. In addition to poor coding, these bugs can arise in more subtle ways, such as through the inclusion of buggy software libraries or via a flawed compiler (the software that transforms human-readable source code into machine-readable object code).
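To make the first of those flaw classes concrete, here is a minimal C sketch of a classic stack buffer overflow. The function and buffer names (check_name, name) are invented for illustration, and the snippet deliberately omits the bounds checking and safer string APIs that careful code would use.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical greeting routine: the fixed-size buffer plus strcpy is the bug. */
static void check_name(const char *untrusted_input)
{
    char name[16];                  /* only 16 bytes reserved on the stack     */
    strcpy(name, untrusted_input);  /* no length check: longer input writes    */
                                    /* past 'name', corrupting adjacent stack  */
                                    /* memory (including the return address)   */
                                    /* with attacker-chosen bytes              */
    printf("Hello, %s\n", name);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        check_name(argv[1]);        /* e.g., a 200-character argument overflows */
    return 0;
}
```

Under very specific conditions--the right input length, the right memory layout--the overflow stops being a crash and becomes a way to redirect the program's execution, which is exactly the kind of latent flaw described above.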
These software flaws (colloquially known as bugs) sometimes manifest as mere user annoyances, but they can often be exploited to force the processor running the buggy program to execute arbitrary instructions. For example, unchecked user input on an online retailer's payment page could allow an attacker to run arbitrary commands against the retailer's back-end database to get at the customer financial information stored there. Or flaws in a point-of-sale terminal's operating system could allow attackers to upload their own software (often characterized as "malware"), designed to lurk in the background and collect sensitive data as customers use that terminal. The list goes on and on.
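A minimal sketch of that first scenario, in C with SQLite standing in for the retailer's back-end database (the table and column names, customers and card_number, are invented for illustration), shows the difference between splicing untrusted input into the query text and binding it as a parameter:

```c
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "CREATE TABLE customers (id INTEGER, card_number TEXT);",
                 NULL, NULL, NULL);

    /* Imagine this string arriving from a payment-page form field. */
    const char *user_input = "0; DROP TABLE customers; --";

    /* VULNERABLE: the input is spliced directly into the SQL text, so the
       injected statement becomes part of what the database would execute
       (passing 'query' to sqlite3_exec would run the DROP TABLE as well). */
    char query[256];
    snprintf(query, sizeof query,
             "SELECT card_number FROM customers WHERE id = %s;", user_input);

    /* SAFER: a prepared statement keeps the input in the data channel,
       so the injected SQL is treated as an ordinary, harmless value. */
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db,
                           "SELECT card_number FROM customers WHERE id = ?;",
                           -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return 0;
}
```

The fix is well understood and decades old; the point is that the vulnerable pattern still ships, which is why unchecked input keeps appearing in the wild.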
Taken together, these two core principles make our problem easy to see: We live in a world that's increasingly dependent on technology (read: software) that contains flaws that could result in the disruption of the technology's functions, sometimes with catastrophic real-world impacts. This dependence--with all of its good and bad aspects--was characterized by William Gibson as the "eversion of cyberspace": rather than our inserting ourselves into the virtual real estate of the Internet, the network inserts itself into the physicality of the real world. Our everted cyberspace becomes, therefore, a world in which technologies quickly become mere wallpaper, and where innovations are increasingly expected and unquestioningly adopted.
This problem of software quality has been recognized since the earliest days of software engineering, and an industry has grown up around the discipline of information security. Broadly speaking, this industry has developed two interrelated philosophies: defense and offense. Defense, in information security terms, is just what the term implies--research and development designed to improve software coding standards, identify and close security holes, and create best practices that minimize attack surfaces. This is important work, and it has given us software quality guidelines such as "privacy by design" and "defense in depth."
The notion of "offense" in information security research is more controversial. Offensive researchers actively seek out the vulnerabilities inherent in technologies, creating and documenting proofs of concept around these bugs, cataloging them by level of impact, and often working with manufacturers to develop fixes for these security holes. Until these vulnerabilities are disclosed, however, they represent a potential for exploitation. These undisclosed software flaws--known as zero-days or 0-days--present an ethical dilemma to the researcher. Researchers, like all of us, need to eat and pay the rent; many organizations and governments pay handsomely for 0-days, and markets (gray and black) have emerged around the sale of these undisclosed flaws.
In an attempt to address these growing markets, some technology companies have instituted "bug bounty" programs, in which offensive security researchers are rewarded for disclosing newly discovered vulnerabilities to the company first, giving the company a chance to fix the problem before the bug becomes widely known and, inevitably, exploited. These programs are designed to provide a viable income alternative for researchers who might otherwise consider selling the 0-day to the highest bidder. But bug bounties have also been criticized as a means for companies to legally gag researchers while sitting on the bugs for long spans of time, without any apparent attempt at fixing them. Further, bug bounties often can't match the prices offered by governments and organizations seeking to build a portfolio of exploits for surveillance, criminal, or otherwise shady purposes. Either way, offensive information security research can provide a nice living.
Defensive research activity, however, lags behind offensive research in this respect. Information security defense research is usually carried out by the manufacturer of the technology, and it is generally considered a cost sinkhole whose budget is to be minimized. That is, it's hard to argue a business case for defensive research beyond the minimum the industry perceives to be required. The remorseless application of cost-benefit analysis has given us a parade of horribles over the years, yet it remains difficult to argue that it makes fiscal sense to invest more in information security defense. Even such high-profile security events as the 2013 Target data breach have, in the end, generated shrugs from Wall Street, as the actual costs of the breach, while not insignificant, have been seen as a manageable cost of doing business. The incentives to increase investment in information security defense just aren't there, despite the apparent concern about the effects that the exploitation of unpatched vulnerabilities could have on society and the economy.
Which brings us to the Wassenaar Arrangement. The Arrangement, named for the town in the Netherlands where its text was written, was established to regulate the international transfer and sale of arms, munitions, and "dual-use" technologies, in order to minimize the destabilizing effects of weapons proliferation. Wassenaar has long included cryptographic protocols on its list of controlled technologies, and it recently added "information security" technologies to the "dual-use" category, with an eye toward their use as "cyberweapons." The United States Bureau of Industry and Security (BIS) has proposed a set of rules to implement Wassenaar along these lines. But these proposed rules broaden the original scope of Wassenaar, restricting technologies that could be used to develop information security weapons, a category that could conceivably include compilers and even text editors.
So where is the controversy? Surely 0-days have the capacity to become quite dangerous weapons in our everted cyberspace. Doesn't it make sense for governments to keep close tabs on the research--and researchers--that make it their business to seek out these potential weapons? The recent leak of over 400GB of sensitive internal documents and emails from Hacking Team, a notorious surveillance technology company that seems to have no qualms about selling its spyware to oppressive regimes like Sudan, quite clearly illustrates the need to keep 0-days out of the wrong hands.
While notionally true, this argument turns on the definition of "wrong hands," which is often in the eye of the beholder. This is not to imply some sort of moral relativism in which we argue over whether injuring people by exploiting a software vulnerability in their pacemakers is good or bad for society (hint: it's bad). Rather, the difficulty stems from imputing malicious character to the software tools themselves. It turns out that the technologies used by authoritarian governments to exert power over their citizens can also be used by researchers to defend those citizens from such abuses of power. Offensive research very often works hand in glove with defensive research, discovering and exploring technology vulnerabilities so that the systems and networks we depend on in our everted cyberspace can be hardened against the exploitation of these bugs.
At its core, Wassenaar is about the maintenance of power. Governments are understandably made uncomfortable by the thought of weapons moving across borders with impunity. States are made especially nervous by the vague notion of "cyberweapons," which can't be seen or touched but can exert power in the real world through our growing network of cyber-physical systems. Since the potential harms from these new weapons are very real, governments would prefer to extend their monopoly to these technologies, keeping a tight lid on offensive information security research in order to ensure the safety and stability of their citizens' networks. Problems can arise, however, when governments begin exploring the intelligence and surveillance opportunities presented by this same research.
In this case, the cure may be worse than the disease. Regulating offensive research through limits on international collaboration could very well neutralize an important component of our ongoing struggle to fix buggy code. If the true goal is to maximize information security in our everted cyberspace, the better solution is one that incentivizes defense rather than arbitrarily punishing offense. For example, it may be time to reexamine the concept of software product liability, a legal theory rendered moribund by the boilerplate limitation language found in most software licenses. Arguably, if technology manufacturers stood to lose more when their code proves unreasonably buggy (we can disagree over the definition of "unreasonable," of course), they might see more value in investing in defensive research. And that increased investment in defense can benefit from ongoing offensive research as well. It will never be perfect, but it's a start.
The unreasonable regulation of offensive information security research as part of a quixotic effort to permanently bottle up 0-days will likely serve only to hobble ongoing cybersecurity efforts. As we move through our everted cyberspace, we need to keep the two core principles described above in mind, and take care not to rush into regulatory regimes motivated by overbroad security fears.