Stanford CIS

Second panel

By Stanford Center for Internet and Society

09:55

Missed first 5 minutes.  Sorry.

Panel:  Chris Sprigman, Stanford CIS Fellow (moderator); Len Sassaman, Anonymizer; Chris Wysopal, @Stake

CS:  [missed the question]

CW(?): [...] Researchers could be paid through government sponsorships, but such sponsorships may come with strings attached that harm the public good.

LS:  Ultimately, vendors are not motivated to release secure products (or secure released products) unless action or inaction affects their financial bottom line.  Many exploits are currently untargeted, malicious, and haphazard.  Zero-day, targeted exploits are possible and not yet widespread.  Must address the motivations of researchers, as well - what does a particular researcher gain (or lose) by adhering to vulnerability publication guidelines or norms?  For example, researchers often desire recognition or notoriety - must take this into account.  Should structure an environment in which "good" behaviour is rewarded and "bad" behaviour results in undesirable consequences. (paraphrased)  Zero-day, highly publicized exploits can actually be good, because nobody wants a situation in which a system has been remotely compromised for weeks, months or years prior to discovery.  Balance is good.

CS:  Explore motivation.  Is there a category of researchers/programmers who are motivated by ideology?  Are there folks who simply want to prove that closed-source/commercial software (for example) is flawed?
? : Don't think there's much pure ideology, but it probably does play a role.
LS: Won't be able to stop such motivation, so it's best [...]

LS:  [lots, much rehashed]  Large entities cannot necessarily deploy patches immediately.
CW: As a responsible researcher, how much time do you allow, and how much information do you release?

CS:  How do you factor in the reputation of the vendor?
LS: Some vendors definitely have earned their bad reputations for taking too long to address important issues.  Others have earned their good reputations, as well.
CW: The #1 reason proof-of-concept code is released publicly before the vendor has released a patch is unresponsiveness on the part of the vendor.

Audience (Peter Swire): Addressing Len's comment about expecting 0-day targeted exploits; pointed out that water supplies have yet to be poisoned in the US.  Why do you (Len) think that 0-day targeted exploits will become commonplace?

LS: Less physical risk exists in virtual attacks (breaking into a bank online) than in physical attacks (breaking into a bank building with a gun).  Also, 0-day targeted exploits have already begun - the problem is that they're often undetected for some time, and not well publicized, especially in the case of banking institutions.

Audience member:  Rumour has it that spammers are paying for exploit code.  True?

CW:  Yes, from personal experience with a former colleague, I believe that it happens.

LS: Have seen spammers attempting to brute-force passwords on SASL-authenticated mail servers.
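
(My own side note, not from the panel: a minimal sketch of how one might spot that kind of SASL brute-forcing in mail server logs.  It assumes a Postfix-style log format, a hypothetical log path, and an arbitrary failure threshold - all of these are assumptions, not anything the panelists described.)

```python
import re
import sys
from collections import Counter

# Postfix-style SASL failure lines look roughly like:
#   ... warning: unknown[192.0.2.1]: SASL LOGIN authentication failed: ...
# The exact wording varies by MTA and version; this pattern is an assumption.
FAILURE_RE = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]: SASL \w+ authentication failed")

def count_failures(lines):
    """Count SASL authentication failures per source IP address."""
    counts = Counter()
    for line in lines:
        match = FAILURE_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # Usage: python sasl_failures.py /var/log/mail.log
    path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/mail.log"
    threshold = 50  # arbitrary cutoff; tune for the server's normal traffic
    with open(path, errors="replace") as log:
        for ip, count in count_failures(log).most_common():
            if count >= threshold:
                print(f"{ip}: {count} failed SASL logins (possible brute-force)")
```

(Any source IP that crosses the threshold is printed as a possible brute-force candidate; real deployments would obviously want rate limiting or fail2ban-style blocking rather than a one-off script.)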

Audience (from NIST):  Government-sponsored research in security certainly exists.  Do you want government-sponsored vulnerability research?  If so, think about how it will be used.

LS: Again, balance is good.  Protecting national internet infrastructure means protecting the whole internet - interconnectivity is king.

Audience:  How does legality come into play when anonymized reporting (etc.) is possible?
LS: If communication is truly anonymous, then of course legality becomes much less significant.

Audience:  How can the legal system deal with ideological disclosure?  (Trying to make the point that it can't.)  Financial transactions are traceable, but ideology is easily and anonymously (and nonrivalrously) shared.
LS: Pointed out that entering into the "black hat"/illegal field opens up an entirely new existence, and is therefore not necessarily limited to just the disclosure or exploitation of security vulnerabilities.
