Seth Schoen has posted a very interesting blog entry about a trend in the trusted computing research community: the view that educating users about computer security risks does not work and that, therefore, TC is needed to protect users from risks they cannot assess or are not even aware of. Here are four comments:
- As Seth acknowledges, this paternalistic approach may mean that TC features are implemented in security-sensitive areas only. One could imagine, for example, a compartmentalized computer architecture where, "on the left side", you can do anything you want, whereas, "on the right side", a paternalistic TC system controls what you can do with your computer. At the end of his entry, Seth worries that, once one accepts this approach, it becomes tempting, over time, to broaden the "right side" up to the point where the "paternalistic" TC takes over the whole architecture. Some time ago, Eugene Volokh wrote an interesting article analyzing these kinds of "slippery slope" arguments. While Seth may be right to warn of the slippery slope, I think it is important to spell out why exactly such a slippery slope is likely to occur in this context. Furthermore, there are many other policy areas where the mere existence of a slippery slope does not prevent us from making a decision that takes a first step onto it.
- Before engaging in the slippery slope argument, I think one should discuss how to agree on defining the areas that are really "security-sensitive" and therefore necessitate a paternalistic TC system. Such areas probably include viruses, Trojan horses, and the like. But what about home banking? ISP access supported by TNC? DRM? Who decides where to draw the line between the "left side" and the "right side" in the first place, even before any slippery slope comes into play?
- Seth also asks "how we know that people actually implementing security software will have the knowledge or the incentive to act in the user's interest and not in some other interest". How is a user who wants to buy hardware or software to know that a seller who presents itself as paternalistic actually is? Who controls the paternalist? Well, perhaps the market could solve the problem. Of course, users themselves cannot assess whether a particular component actually behaves in a paternalistic manner. But perhaps third parties would emerge that compare different components and provide the necessary information to users. Is the market able to provide information about paternalistic hardware and software components?
- The argument for a paternalistic, mandated TC architecture is that, otherwise, the user "might do something that the user would regret but that the user wouldn't be able to understand was wrong." This argument is reminiscent of phenomena that the behavioral law and economics literature discusses under terms such as "overconfidence bias" and "availability heuristic". This is not the place for a detailed discussion. But, under certain conditions, such biases and heuristics may justify paternalistic regulatory interventions. The complex question is, then, which intervention should be preferred: information disclosure or a technologically mandated loss of user control. In recent years, the economic analysis of IT security has attracted quite some interest. Is anyone aware of such research that incorporates insights from behavioral economics or psychology?