The FTC’s Do Not Track (DNT) proposal would essentially provide online users with a convenient universal opt-out of… well, what exactly? Ad targeting? (But then why not call it “Do Not Target”?) Online data collection? (But surely this can’t be right, given that online data collection is necessary for various legitimate purposes besides targeting ads.)
To be more controversial: should users really be provided with an opt-out at all? Arguably, their use of the amazing products and services available online today without charge comes with a price tag that everybody should recognize by now. You get services free of charge in return for the use of your data for advertising purposes. That’s the deal. And is receiving targeted ads really that harmful? Many (most?) users probably find it beneficial and would prefer targeted to non-targeted content – including ads.
The architecture of the Internet, built incrementally over the past 15 years or so, favors transparency and data sharing over privacy and data segmentation. Arguably, if we wanted to change that, we should have done so at the inception of the Internet in the 1990s. It may be too late to do so now.
To be sure, there is a “creepiness factor” to targeting and re-targeting: checking out a book on one website and then seeing an ad for the same book the next day on another website (particularly if it’s a book about Alzheimer’s). It feels like someone is following you around. Transparency is needed. Nothing justifies surreptitious monitoring or the accumulation of personal data into secret files used for vague, undefined purposes. That’s why we have data protection laws in the first place. See, for illustration, the wonderful German film “The Lives of Others”, depicting life in East Germany under the secret surveillance of the omniscient Stasi.
So far, the industry has failed to convey coherently to users the basic elements of the online transaction. Ten years of notices and privacy policies have had less impact on public awareness than a single Wall Street Journal article. While becoming increasingly aware of the value-for-data proposition, users remain largely ignorant about the inner workings of the market for personal data, with ad networks, ad exchanges, analytics and research companies, re-targeters, and other players operating in the background of each browsing session.
The industry should promote transparency. But provide an opt-out? Arguably, if you want to opt out, you should not use the services in the first place. When you get in a car and drive on the highway, you don’t get to opt out of traffic jams or red lights. They’re part of the package.
DNT is likely to launch a game of “who blinks first” between online companies and users. Industry voices argue that, faced with the choice under DNT of opting out and receiving untargeted content (or worse, web pages with “black holes” where space is reserved for targeted ads), users will cave and revert to the existing default. Privacy advocates argue the industry will blink first and provide users with the same services offered today regardless of opt-out rates. After all, Google was tremendously profitable even before behavioral tracking, back in the days of contextual, non-targeted ads.
The likely impact of this second scenario on the market is somewhat troubling, though, since the companies bearing the brunt of DNT may be small and medium-sized enterprises, which currently rely on targeted ads and cost-per-click ad models. The big players, who can afford to advertise in less finely tailored spaces such as TV and magazines, or to do so online on a cost-per-view basis, will be less affected.
What is more troubling than the use of data for targeted ads is its collection online and use in offline decisions in the fields of employment, insurance, banking and litigation. So maybe we should have “Do Not Track for Non-Advertising Purposes” (DNTNAP?). Surely insurance companies should not be allowed to price your premiums based on articles you read on Wikipedia or WebMD. Or should they? The law-and-economics argument would be: let them use any information to better price their products and enhance efficiencies. That’s how efficient markets work. Why should Lisa, a healthy non-smoker, subsidize the health insurance premiums of John, a smoking diabetic who misrepresents his medical condition to his insurer?
This is a Posner-school argument, recently echoed by Google CEO Eric Schmidt: privacy is a tool for misrepresentation in the hands of those who have something to hide. Think of the pedophile who intends to work in child care, or the alcoholic who wants to drive trains.
I would not go so far. To my mind, the line for online uses of data should be drawn around the services-for-advertising business model. Any use of online data outside the advertising ecosystem is beyond the reasonable expectation of users and can cause them real, tangible harm. When I use Google search or Facebook, I expect to be targeted with online ads, not to be turned down for a mortgage or insurance or sanctioned by my employer. Regulators should focus on setting and enforcing rules at the online-offline fault line. What happens online should stay online.