
Why Smart Phones Should Help Us Avoid Selfie Sabotage

By Woodrow Hartzog

Cross-posted from Forbes.

Co-authored by Evan Selinger.

Over at the New York Times, Farhad Manjoo argued that smart phones should be designed to better protect people from the harms that can arise when their nude selfies end up in the wrong hands. Manjoo’s proposal entails nudging, and consequently has greater moral complexity than meets the eye. We think it’s a good and important idea, and we’ll explain why in order to make the case more persuasive.

Beginning from the premise that we’re all susceptible to the “charms” of taking sensitive and potentially compromising photos, Manjoo makes two suggestions.

First, phones should be designed to detect naked photos. From a technical perspective, Manjoo declares this is a “comparatively easy task” requiring less hassle than facial recognition.

Second, phones that detect naked photos should notify users that protective options exist and are worth considering. These options can include encrypting a photo, requiring a password to access a photo, preventing a photo from being included in cloud back-ups, and even deleting it.
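To make the detect-then-notify flow concrete, here is a minimal sketch of how it might be structured on a device. The names (NudityDetector, ProtectiveAction, handleNewPhoto) are hypothetical stand-ins of our own, not real iOS or Android APIs; the point is only that classification happens on the device and a positive result triggers a prompt rather than an automatic action.

```swift
import Foundation

/// Result of running a hypothetical on-device classifier over a newly captured photo.
enum SensitivityVerdict {
    case likelySensitive(confidence: Double)
    case notSensitive
}

/// Placeholder for the on-device image classifier (Manjoo's "detect naked photos" step).
protocol NudityDetector {
    func classify(imageData: Data) -> SensitivityVerdict
}

/// The protective options Manjoo suggests surfacing when a sensitive photo is detected.
enum ProtectiveAction {
    case encrypt                 // store the photo encrypted at rest
    case requirePassword         // gate viewing behind a passcode or biometric check
    case excludeFromCloudBackup  // keep the photo out of cloud back-ups
    case delete                  // remove the photo entirely
    case keepAsIs                // the user can always decline
}

/// Detect-then-notify flow: a positive verdict only triggers a prompt;
/// the user still makes the final call.
func handleNewPhoto(_ imageData: Data,
                    detector: NudityDetector,
                    promptUser: (Double) -> ProtectiveAction) -> ProtectiveAction {
    switch detector.classify(imageData: imageData) {
    case .notSensitive:
        return .keepAsIs
    case .likelySensitive(let confidence):
        // Nudge, don't decide: present the options and respect whatever is chosen.
        return promptUser(confidence)
    }
}
```

Whether a phone maker would actually build it this way is an open question; the sketch simply illustrates that the nudge sits between detection and the user’s final decision.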

Sounds great, right? Given the ubiquity of photos taken on phones and their increasingly sensitive nature, we think such protective design features for photos are desirable, if not necessary, for mobile devices. Still, the question of how users should be notified of, enrolled in, and interact with these features deserves deep consideration.

The only objection Manjoo musters is that automating nudity detection might be “creepy.”

As one of us has argued, being creeped out in itself isn’t a decisive moral experience. Manjoo recognizes this and says the issue turns on whether being uncomfortable is outweighed by the potential harm that gets reduced.

Beyond this cost-benefit analysis and below the surface, however, lies a more fundamental issue. If society ever gets to a point where technology over-polices our behavior, we’ll lose out on important things—including the ability to think for ourselves and recognize when we’re facing situations that call for prudence and forethought.

The question, then, is whether the detection component of Manjoo’s suggestion would be a step in that degraded direction. Does depending on technology to identify highly sensitive information outsource too much of our awareness to algorithms?

Notice what this question is asking us to consider. It isn’t addressing the matter of whether the mere act of prompting people to think about what to do with their sensitive selfies will have unintended consequences, like triggering feelings of shame. Nor is it asking whether individuals should assume full responsibility for the damage that selfies can cause. The idea that technology by itself can solve the problem of non-consensual pornography is an expression of the fallacious thinking called “solutionism.” As professors Danielle Citron and Mary Anne Franks argue, new legal measures also need to be created.

The question at issue is whether folks should be responsible for learning, on their own and without technological prompting, that nude selfies can ruin reputations (or worse). It’s thus a matter of character and self-responsibility. Is it infantilizing to have technology create such a warning sign? Would such a warning sign disincentivize us from putting in the effort to proactively understand the big dangers of the digital age? At the extreme, too many warning signs can do more than undermine our initiative to learn. They can backfire by creating so much noise that people cope by tuning them out.

Let’s think this subject through by deciding which among the competing design possibilities is best. This task may seem easy, but it’s really quite daunting. Every possibility commits us to a choice architecture, which means some degree of paternalism is inevitable in designing technologies. We’ve got to decide which configuration serves the user’s best interests.

Option #1. Keep the status quo smart phone design and change nothing.

Option #2.  Add user-friendly options that make it easy to encrypt photos, password protect photos, and isolate photos from cloud back-ups. Leave out Manjoo’s proposed detection and notification features.

Option #3. Add Manjoo’s proposed detection and notification tools as opt-in features that phone users have to select if they wish to use them. Ensure that the service is easy to both enable and disable.

Option #4. Add Manjoo’s proposed detection and notification features as the default setting that phone users are automatically opted into. Ensure that it’s easy for users who don’t want the service to turn it off, as well as turn it back on at a later time, should they be so inclined.
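A rough way to see how little separates Options #3 and #4 technically: both can ship exactly the same feature, and the only difference is the initial value of a single setting. The sketch below is hypothetical (the SensitivePhotoSettings name and structure are ours, not any vendor’s), but it illustrates why the interesting question here is moral rather than engineering.

```swift
/// Hypothetical settings object: Options #3 and #4 differ only in the shipped default.
struct SensitivePhotoSettings {
    /// Whether the detect-and-notify feature is active; users can flip it either way.
    var detectionEnabled: Bool

    /// Option #3: opt-in, so the feature ships turned off.
    static let optInDefault = SensitivePhotoSettings(detectionEnabled: false)

    /// Option #4: opt-out, so the feature ships turned on.
    static let optOutDefault = SensitivePhotoSettings(detectionEnabled: true)
}
```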

Although Option #1 entails doing nothing, it’s still a moral decision. It amounts to a choice to exclude the safety features available in Option #2 and thus deprive users of them. So far as we can tell, there’s no compelling reason to refrain from giving users those attractive features if it’s relatively easy and inexpensive to do so.

So, Option #2 is preferable to Option #1 because it expands our choices and gives us a greater ability to protect ourselves, should we choose to do so. But is it better than Options #3 or #4? Are we better off without technology that identifies nude selfies as worthy of our extra attention?

Probably not. Not everyone has the same level of digital literacy. While it might seem easy to the technologically savvy to find and use protective features for intimate media, data security should be for all users. Unlike some kinds of privacy choices around which there is little consensus, it seems clear that virtually everyone would choose to avoid unauthorized access to and unintentional exposure of nude photos. Given this unanimity combined with both the sensitivity and popularity of nude photos, it’s hard to imagine any content more worthy of robust and automatic security features on a consumer technology.

Brett Frischmann, Director of the Intellectual Property and Information Law Program at Cardozo School of Law, told us that when technology prompts us to think about how to handle a nude selfie, the process can actually enhance social learning and help a new level of common sense emerge.

“Like GPS, many technologies provide helpful nudges and have anti-developmental consequences. That is, they inhibit the development of critical human capabilities. So a useful design default is to prompt users to engage in active choosing, which may be essential to social learning and in this particular case, the development of common sense. We can’t presume people know or should know how to deal with problems posed by new technology. They might not even recognize the problem as such. It may take some reflection and perhaps even deliberation with others. Common sense often works that way.”

Option #4, the opt-out detection of nude photos, seems wisest. It’s tempting to say opt-in protections are the most desirable because they are the most choice-preserving. But given the likely consensus around the protection of nude photos, the catastrophic consequences of a related data breach, and the diversity of technical sophistication, the benefits of this nudge would likely outweigh the costs—especially if the detection setting is easy to find (or even presented as an option upon the first detection) and can be disabled by simply pushing a button.

To be sure, we need to be wary of nudging. Beyond having the potential to corrode character and negate autonomy, it can be used in ways that encourage, rather than mitigate, risky behavior. For example, Facebook is letting people know how often publicly posted videos are viewed, and it isn’t hard to imagine that the desire to have that information will incline some folks to over-share. But nudges aren’t inherently bad, and Manjoo identified an important one whose time has come.
