16:37
(Jake here blogging the second-to-last session. Almost everything is paraphrased... For a much better – and more accurate – report, try the audio archive later in the week.)
Jennifer Granick, Esq., Stanford CIS
Jim Duncan, Cisco
Hal Varian, Professor, University of California, Berkeley
What are the practical considerations in formulating, implementing, and enforcing vulnerability disclosure policies or best practices?
Hal Varian: Referencing June 1, 2000 NYT article, “Managing Online Security Risks,” talks about why crypto systems fail:
Ross Anderson was accused by the banks in the UK of defrauding ATM systems. The banks were able to convince UK courts that their systems were infallible, so the burden of proof was on the customer to show that a transaction was unfair. In the US, where this is not the case, banks have an incentive for better risk management. You assign liability to the party best positioned to manage the risk. This is a good model.
Strict liability is not optimal because, in many cases, there will be more than one factor contributing to the incident. If one party bears all of the cost, the other parties don’t have reason to be careful because they will be compensated if something goes wrong. Ex: Microsoft.
You want another form of liability: the negligence rule. The courts establish a level of due care. If one of the parties shows that their care exceeds the standard, then there is no liability. If not, they have to compensate the injured parties.
If you set the due care standard optimally (social benefits minus social costs), the parties will have incentive to meet the due care standard naturally. The challenge is setting the due care standard appropriately.
What’s nice is that you only have to verify the appropriate levels of care after a lawsuit, therefore the monitoring costs are relatively low. [but wouldn’t this lead to different standards?]
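Varian's negligence-rule argument can be sketched numerically. This is my toy model, not something from the talk; the cost and loss functions are invented, but the mechanism is his: set the due-care standard at the social optimum, and a self-interested party meets it on its own.

```python
# Toy illustration (invented functional forms) of the negligence rule
# with a socially optimal due-care standard.

def cost_of_care(x: float) -> float:
    # Spending on precautions (patching, audits, ...) rises with care level x.
    return 10 * x

def expected_loss(x: float) -> float:
    # Expected harm from incidents falls as care increases.
    return 100 / (1 + x)

def social_cost(x: float) -> float:
    return cost_of_care(x) + expected_loss(x)

# The court sets the due-care standard where total social cost is minimized.
levels = [i / 10 for i in range(0, 101)]
standard = min(levels, key=social_cost)

def private_cost(x: float, standard: float) -> float:
    # Under the negligence rule: meet the standard and you bear only your
    # own cost of care; fall below it and you also compensate the injured.
    if x >= standard:
        return cost_of_care(x)
    return cost_of_care(x) + expected_loss(x)

# A self-interested party's cheapest choice coincides with the standard,
# which is the "natural incentive" Varian describes.
best_private = min(levels, key=lambda x: private_cost(x, standard))
print(standard, best_private)
```

The point of the sketch is the last line: no one has to force the party to the standard; exceeding it buys nothing, and falling short exposes it to the full loss.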
Another story (really, a joke that I won’t ruin for future audiences) about moral hazard: if you are too well insured, you won’t have incentive to take care and you might have incentive to be injured.
Now talking about the mystery of publicly held firms buying insurance… if the firm is held among many members of the public, what is the big deal if it fails because it was underinsured or uninsured?
One look at insurance: Publicly held companies buy insurance because they are actually buying risk management systems from insurers. Insurers’ mandates cause you to, say, install X number of sprinklers where you might have not. “Insurance companies are somewhat playing the role of the judicial system by imposing a due care standard.” In some ways, they might be better at this than judges because they have financial incentive to ensure great care. Otherwise, they pay up.
The problem with bringing the insurance company into the picture, jump-starting the cyberinsurance business, is that the databases for the actuaries don’t exist. “They don’t have the data, so they don’t issue the policies, so they don’t collect the data.” Let’s give the actuaries something to grind away on.
Maybe in the future we will have better data and a better functioning system.
Jim Duncan: Approaching this from a different side. Before helping write policy, Jim worked on the security response team at Cisco. He built a tremendous library of policies and best practices.
Definitions:
Customers/Consumers: All too often, vendors deal with customers (those who directly purchase their products) but rarely have direct involvement with their consumers.
LEGOs (law enforcement and government organizations)
COMMUNICATIONS SECURITY: Who has a need to know? You need some transparency.
The mark of maturity for a security organization: when you tell them about a vulnerability and they won’t shirk responsibility.
Reporting information safely: All of the principles of information security apply:
-identity of recipients and sender
-confidentiality and integrity
-availability and non-repudiation
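The principles above can be made concrete with a small sketch. This is my illustration, not the panel's: an HMAC over a vulnerability report covers integrity and sender authentication (given a shared key), whereas the PGP-style public-key signatures the panel discussed also provide non-repudiation, which HMAC cannot.

```python
# Sketch: integrity + sender authentication for a vulnerability report
# via HMAC (stand-in example; the key and report text are hypothetical).
import hashlib
import hmac

shared_key = b"key agreed upon out of band"  # hypothetical shared secret
report = b"Buffer overflow in login handler, build 4.2"

# Sender attaches a tag computed over the report.
tag = hmac.new(shared_key, report, hashlib.sha256).hexdigest()

# Recipient recomputes the tag; a mismatch means the report was altered
# or the sender does not hold the shared key.
expected = hmac.new(shared_key, report, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True for an unmodified report
```

Note the limitation: because both sides hold the same key, either could have produced the tag, so the recipient cannot prove to a third party who sent it; that is exactly the non-repudiation gap that public-key signatures close.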
Most people get crypto wrong. The US government has a lot of trouble running PGP. They need to get waivers – actually, you can’t get them anymore – for using PGP.
Out on a limb: the folks at the White House are not allowed to use PGP because they are charged by Congress to archive everything. Everything is either classified or readable. PGP doesn’t fit into that model.
Consensus on Severity: Scoring vulnerabilities is very subjective, each company has its own schema. This directly affects security and handling.
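Jim's point about subjectivity is easy to see with a toy schema of my own invention (the factors and all weights are made up): two vendors scoring the same vulnerability on the same inputs can reach very different severities just by weighting the factors differently.

```python
# Hypothetical illustration of subjective severity scoring: same
# vulnerability, same raw factors, different vendor weightings.

def score(impact: float, exploitability: float, weights: tuple) -> float:
    w_impact, w_exploit = weights
    return w_impact * impact + w_exploit * exploitability

# One vulnerability: severe if exploited, but hard to exploit.
vuln = {"impact": 9, "exploitability": 3}

vendor_a = (0.8, 0.2)  # weights potential impact heavily
vendor_b = (0.3, 0.7)  # weights ease of exploitation heavily

a = score(vuln["impact"], vuln["exploitability"], vendor_a)  # ~7.8: "critical"
b = score(vuln["impact"], vuln["exploitability"], vendor_b)  # ~4.8: "moderate"
print(a, b)
```

With no shared schema, the same bug lands in different handling queues at different companies, which is why the consensus work matters.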
Consensus on Timeline: Everybody Jim works with agrees on full disclosure, but they disagree on timing. Some don’t want to report problems on workdays and holidays, for example. Quick fix vs. best fix. Multi-vendor cases are a major problem.
This is still a new area. Jim likes the fact that we’re focusing on this. We’re getting better and better as we go along. The National Infrastructure Council Vulnerability Working Group hopes to publish in January.
Jennifer Granick: Is a universal policy of disclosure impossible?
JD: “I don’t believe there is any one policy or plan… the responsibility is not directly held by vendors.” The best processes work towards responsibilities of all the parties involved.
Internal transparency is tough (check the audio file to hear Bruce heckle Jim).
“Don’t invite Bruce next time”
HV: Of course, it is not the case that you never want to assign total liability to the vendor.
JG: Commercialization of information: it could be bought by the mafia or other ne’er-do-wells. How do you see the commercialization of information as an economist?
HV: Buys the “need to know” theory. Wants information sharing among the parties most affected. The Federal Reserve is a great example of sharing information. The only problem is that there are 12 Federal Reserve banks, each duplicating processes, yet willing to work with each other due to the non-competitive nature of the business.
Audience member: Do you have faith in the ability of the judiciary to come up with a segmented approach to the negligence standard? Or will it be a one-size-fits-all kind of solution?
HV: “You’re lawyers, you should know whether you have faith in judges or not…” More seriously, judges have responsibility, but the Internet generation also has a responsibility to codify these best practices.
Audience member: Cisco has a standard to follow in regard to people who report bugs to the company. What is your stance?
JD: “What I meant to say, I think, is that there should be as much transparency as possible, and, in particular, we should strive for schemes that are as externally transparent as possible so that people can make educated decisions.”
Disclaimer: No longer a member of the product security response team. Doesn’t want them to kill him.
JG: “That’s actually one of their practices.”
Audience member: What do you do in terms of standards to reduce the risk that bugs brought to you by a researcher will end up on Bugtraq?
JD: If a submission was at all reasonable, it was investigated. But in many senses, security bakes no bread. We don’t know the benefit of security… we only know in carefully constructed environments.
Wants a quantitative view of the world, at least in the “practical world.”
HV: The fallacy of disaster accounting. You assign monetary values to a hurricane or a network outage. This is junk. Transactions take place at a different time because of the disaster.
Jennifer: Hey Peter Harver (in the back), is there some sort of analysis that insurance companies could use to create rates for cyberinsurance?
Peter Harver: They’re still making numbers and trying to get clients. One of the impediments seems to be that the management team gets it from a philosophical standpoint (you can’t manage what you can’t measure), but knowing that quantifying electronic risks doesn’t necessarily bring in more revenue makes them reluctant to stick their necks out. But as more IT gets integrated into companies, you will start seeing more data and more cyberinsurance.