
Tool Without a Handle: Getting a Grip

By Chuck Cosson


In my last post, I explored how law influences the use of information technology through both rules and a concomitant degree of social consensus that a particular behavior creates undue risk or otherwise warrants a response.  In this post, I’ll explore this point further in the context of two areas:  legal obligations regarding data security[1] and attempts to regulate the use of “cookies.”[2]

These regulations can be classified into two distinct types:

1) Regulations requiring data controllers to help protect the data they hold can be seen as a “duty of care” type of rule.[3]  These regulations assign a baseline expectation of responsibility for the property of others.  It’s akin to the duty of a bus company to hire and train safe drivers, or of a hotel to have regular fire drills.  In the US, there are also specific security regulations for certain types of data, such as telephone call records.

2) Regulations (such as those adopted in the EU) requiring disclosure and consent by the user for “cookies” are a type of “fairness” rule.  “Cookies” are code used by a web page or server to identify or describe a given user, device, or software interacting with it.  Generally, these rules require that a person not store information on, or gain information from, a user’s device unless the user is provided with clear information about the storage or access and has given consent.  (A minimal sketch of how that storage and access works follows this list.)
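For readers who want to see the mechanism concretely, here is a minimal sketch in TypeScript (for Node.js).  It is purely illustrative: the cookie name “visitor_id” and the server setup are hypothetical, not drawn from any regulation.  It shows the two acts these rules address, storing an identifier on the user’s device and reading it back on later visits.

```typescript
// Illustrative sketch only: the server stores an identifier on the user's
// device via the Set-Cookie header, and the browser returns it with every
// later request. That round trip is the "storage" and "access" the rules
// address. The cookie name "visitor_id" is hypothetical.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const server = createServer((req, res) => {
  const cookies = req.headers.cookie ?? "";
  const match = cookies.match(/visitor_id=([^;]+)/);

  if (match) {
    // Information is gained *from* the device: the stored identifier
    // comes back with the request, so the visitor is recognized.
    res.end(`Welcome back, visitor ${match[1]}`);
  } else {
    // Information is stored *on* the device: the browser keeps this
    // identifier and will send it with future requests to this site.
    const id = randomUUID();
    res.setHeader("Set-Cookie", `visitor_id=${id}; Path=/; Max-Age=31536000`);
    res.end("First visit: identifier stored");
  }
});

server.listen(8080);
```

The consent rules, in effect, ask that nothing like the Set-Cookie line above run until the user has been told what is being stored and why, and has agreed.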

In the framework of services and software as tools, both “duty of care” and “fairness” regulations govern the way the tools are used.  Importantly, both types of rules are capable of calibration:  what acts constitute sufficient care, or ensure fairness, can be adjusted as experience provides additional insights or reveals new problems.  Both rules are animated by a belief that the provider of the tools, rather than the user, is in the best position to achieve the desired outcome (reasonable data security, or fairness to the end user).

Data security and fairness to end users are, naturally, appropriate considerations for rules.  But two additional concerns must be taken into account before concluding that regulation is the best approach.

The first is that there will always be, to some extent, an imbalance between service provider and user:  the provider who holds the data and operates the service will naturally have greater information about, and control over, how the service works (and evolves).  There will always be aspects of the provider’s interaction with the user that are hidden from the user or difficult for a non-technical person to grasp.  This inequality, by itself, can’t be considered an automatic basis for regulation – otherwise regulation would cover more aspects of a service, and in far more detail, than business or government could possibly manage.

The second concern, also widely recognized, is that regulation has costs as well as benefits.  In addition to the direct costs of compliance imposed on service providers, other costs of regulation are borne by the public at large.  For example, regulation can reduce innovation.  To the extent there are variations among consumer preferences, imposing rules can reduce the extent to which markets capture those variations.  True, there’s no reason a competing firm can’t choose to go above and beyond the regulatory standard (and many do), but regulation nonetheless has some effect – intentionally so – of homogenizing the market.

It can be the case that trust-enhancing regulation compensates for these costs.  A Bloomberg news article reports on a recent California panel at which both academics and industry representatives agreed privacy regulation and innovation are not mutually exclusive, and in some cases privacy regulation creates the trust needed for innovations to thrive.[4]  The question this blog (and that panel) necessarily confronts, however, is which regulations would do so.  The analysis must do more than simply point out that service providers have a duty of care or should observe fundamental fairness.

So what should that analysis involve?  In my last post, I suggested a key element is that there should be a broad social consensus that:

1) The target of regulation is the correct one and should be addressed;

2) The problem should be addressed by regulation, in addition to (or in lieu of) other means;

3) There is reasonably stable agreement on what those rules should be.

The reason to require this degree of social consensus is that, with tools that are widely distributed and over which consistent enforcement is difficult, regulation against misuses of the tools will draw much of its power from the social consensus that both creates, and is created by, the process of regulation, which in turn informs individual choices about how to use the tools.

Applying these considerations to the case of a duty of care, it’s interesting to note the most recent argument in the FTC v. Wyndham case, over the jurisdiction of the Federal Trade Commission to enforce data security rules.  Regardless of which side of that argument one takes, it’s generally agreed that Congress has never enacted a broad data security law.[5]  Put differently, the FTC and its supporters would have to agree that what is being regulated is not, strictly speaking, a failure of security itself, but allegedly deceptive practices created by a failure to live up to public statements about security.

Other rules on data security govern only certain types of data, and there is only a developing consensus (and still plenty of debate) about what a national cybersecurity framework should include.[6]  The inevitable conclusion is that, except in certain narrow cases, broad social consensus about data security regulation does not yet exist.  There is still considerable debate about many aspects of data security, including:

1) Should it focus on minimum standards for data processors, or better tools to catch intruders?

2) Should it be addressed by regulation, industry self-regulation, or market choices?

3) What should the minimum rules include – e.g., baseline encryption standards; regular audits?

My point is not that data security isn’t important or that the lack of social consensus means data controllers have no obligations – to the contrary.  It is simply that, at least in the US, it does not appear that there currently exists a basis to enact a comprehensive data security law.

Required cookie disclosures, too, were the subject of considerable debate.  Clever videos depicted what could happen under a literal interpretation of the rules – an endless cascade of interruptions for consent every time some aspect of a web page changed to add a new cookie.[7]  Regulations requiring cookie disclosure were seen as the quintessential example of regulation run amok, regulation that failed to match how the technology works.[8]  For a considerable time, Europe at large (rather than just EU officials) did not agree on key aspects of the e-Privacy Directive.

I don’t mean to underplay the views of those who still have concerns with the “cookie” regulations, but Europe does seem to be moving closer towards a social consensus on this regulation.  In the run-up to the rule’s effective date, consultancies developed plans and resources to implement the Directive’s requirements.  Some commentators adopted a different tone – suggesting it was time to “just get on with it.”[9]

Regulatory authorities are also part of this refinement.  For example, this week the French data protection authority adopted a defined set of rules that appear – at first reading – to be much simpler to implement:  essentially, 1) post a banner on the page providing notice and requiring consent, and 2) provide a mechanism to accept or reject any further cookies.[10]  Unless matters change direction significantly, it’s reasonable to believe the cookie disclosure requirements in the e-Privacy Directive will, over time, be integrated into online tools subject to EU law without extraordinary cost or disruption.
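As a rough illustration of how lightweight such a mechanism can be, here is a minimal client-side sketch in TypeScript.  The function name, cookie names, and banner wording are hypothetical (they are not taken from the CNIL guidance); the sketch simply follows the two steps described above:  show a banner giving notice and asking for consent, then set any further cookies only if the user accepts.

```typescript
// Hypothetical consent-banner sketch: names and wording are illustrative only.
function showConsentBanner(onDecision: (accepted: boolean) => void): void {
  const banner = document.createElement("div");
  banner.textContent = "This site uses cookies for audience measurement.";

  const addButton = (label: string, accepted: boolean) => {
    const button = document.createElement("button");
    button.textContent = label;
    button.onclick = () => {
      // Remember the visitor's choice so the banner is not shown again.
      document.cookie = `cookie_consent=${accepted}; Path=/; Max-Age=15552000`;
      banner.remove();
      onDecision(accepted);
    };
    banner.appendChild(button);
  };

  addButton("Accept", true);
  addButton("Reject", false);
  document.body.appendChild(banner);
}

// Step 1: post the banner until the visitor has made a choice.
// Step 2: set further, non-essential cookies only if the visitor accepted.
if (!document.cookie.includes("cookie_consent=")) {
  showConsentBanner((accepted) => {
    if (accepted) {
      document.cookie = "analytics_id=abc123; Path=/; Max-Age=31536000";
    }
  });
}
```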

My point, though, is the same in both cases:  regulating the use of information technology tools effectively requires a degree of social consensus on critical points.[11]  Whether a moral or social obligation exists is only one of several considerations.  This, in turn, points out an advantage of thinking of information technology as tools, rather than as a “space” or as a thing analogous to a sovereignty unto itself.

To regulate a sovereignty, one enacts rules about acceptable behavior within the scope of that sovereignty.  But the Internet is not a single sovereignty, and is not immune from the authority of existing sovereign governments.[12]  What is really being regulated is not the “place” where connected information technology exists, but the uses of technology from within “regular” space.

As shown above, different parties will reach different views on what constitutes acceptable behavior, and on whether (and how) regulation should enforce them.  This is ultimately an advantage for innovation, as different models can be tested, and an advantage for social cohesion, since it allows for tighter agreement between the intent of the law and that of the governed.

Also, most societies understand that issues of “fairness” and of “duties of care” can be dealt with through mechanisms other than law and regulation.   Rather than a “law of cyberspace,” it seems generally preferable to approach legitimate concerns through “rules for tools” that can vary based on a degree of social consensus as to a preferred mechanism.

In a future post, though, I’ll discuss some of the cases where the issue at hand is so universally understood as worthy of regulation that this variation is less desirable.  For issues such as child abuse online, for example, the primary concern is not whether to bring the force of law to bear, but how to improve interoperability across diverse systems to strengthen enforcement.[13]


[1] See, e.g., 16 CFR § 314 (safeguards for non-public personal financial information); 45 CFR §§ 160, 162 and 164 (health information safeguards rules); 47 CFR § 64.2009 (safeguards for “customer proprietary network information”); Md. COMMERCIAL LAW Code Ann. § 14-3503 (businesses within the jurisdiction must implement and maintain reasonable security procedures to protect personal information).

[2] See e-Privacy Directive, Article 5(3) (Directive 2002/58/EC, as amended by Directive 2009/136/EC).

[3] I use this term generally.  Not all cybersecurity rules, guidelines or frameworks create a legal “duty of care.”  Court decisions are not uniform in allowing negligence claims against companies suffering a data breach, see, e.g., Bell v. Acxiom Corporation, 2006 WL 2850042 (E.D. Ark. Oct. 3, 2006), and laws generally do not allow recovery for purely “economic loss.”  State laws vary as well:  Massachusetts and Nevada have data security laws; others do not, and some regulate only data disposal practices.  Similarly, the duty is not all-encompassing.  For example, there are obvious differences between failure to protect against known weaknesses and innovative data breaches that have no known or effective defense.  See John A. Fisher, Secure My Data or Pay the Price: Consumer Remedy for the Negligent Enablement of Data Breach, 4 Wm. & Mary Bus. L. Rev. 215 (2013), http://scholarship.law.wm.edu/wmblr/vol4/iss1/7

[4] http://www.bna.com/privacy-laws-create-N17179880743/

[5] http://www.npr.org/2013/12/14/251031687/tug-of-authority-over-legal-gap-in-online-privacy (quoting Stanford CIS fellow Woodrow Hartzog).  For the FTC's complaint, see http://www.ftc.gov/enforcement/cases-and-proceedings/cases/2012/08/wyndham-worldwide-corporation-ftc

[6] http://www.nist.gov/itl/cybersecurity-102213.cfm

[7] See http://www.youtube.com/watch?v=arWJA0jVPAc

[8] See, e.g., http://techcrunch.com/2011/03/09/stupid-eu-cookie-law-will-hand-the-advantage-to-the-us-kill-our-startups-stone-dead/

[9] http://www.wired.co.uk/news/archive/2012-05/24/eu-cookie-law-moaning

[10] http://www.journaldunet.com/ebusiness/publicite/cookies-cnil-recommandations-1213.shtml

[11] Lawrence Lessig’s 1998 paper “The Laws of Cyberspace” offers a more detailed version of this point; he lists four distinct constraints relevant to developing effective rules:  law, social norms, markets, and architecture (code).  /content/files/works/lessig/laws_cyberspace.pdf

[12] See generally Goldsmith and Wu, “Who Controls the Internet?,” http://en.wikipedia.org/wiki/Who_Controls_the_Internet

[13] See generally http://cyber.law.harvard.edu/research/interoperability
