The Problems and Promise of Terms of Use as the Chaperone of the Social Web

The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressure on Facebook to delete offensive material.

Both articles raise important questions about the proper role of massive information intermediaries with respect to content deletion, but they also hint at a related problem: Facebook and other large websites often place vague restrictions on user behavior in their terms of use, restrictions so expansive that they cover most aspects of interaction on the social web. In essence, these agreements allow intermediaries to serve as a chaperone on the field trip that is our electronically mediated social experience.

Behavior restrictions in terms of use agreements are not inherently bad. Indeed, many of them are useful, if not vital, for protecting users and avoiding chaos. Intermediaries can help curtail harms such as harassment and non-consensual pornography in ways that state actors cannot. Yet some of these terms can also leave users at a real disadvantage. Beyond the questions about the wisdom of Facebook as content censor, behavior restrictions in social media terms of use agreements can be so broad that it is nearly impossible to discern what activity is prohibited and what is allowed.

As with other boilerplate contracts, user restrictions on social media are legally consented to and rarely challenged, yet similar restrictions imposed through most other regulatory mechanisms would likely be legally suspect and vigorously opposed. For example, under Facebook’s “Statement of Rights and Responsibilities,” users are not allowed to:

  1. Use a pseudonym
  2. “[S]olicit login information or access an account belonging to someone else”
  3. “[B]ully, intimidate, or harass any user”
  4. “[P]ost content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence”
  5. “[F]acilitate or encourage any violations of this Statement or our policies”
  6. “[T]ag users or send email invitations to non-users without their consent”
  7. “[U]se Facebook to do anything unlawful, misleading, malicious, or discriminatory”
  8. “[P]rovide any false personal information on Facebook”
  9. “[P]ost content or take any action on Facebook that infringes or violates someone else's rights or otherwise violates the law” (emphasis added)

Taken together, these terms would seem difficult for any Facebook user to follow in the normal course of social interaction online. Access your partner’s Facebook account while using their computer? Breach. Tag your frenemy in an unflattering picture that you know he hates? Breach. Pretend that you drank that margarita in your recently uploaded photo, when really it was your friend who slugged it down? Breach. The literal scope of terms such as “false,” “misleading,” “intimidate,” and “harass” is expansive. While this breadth allows the terms to cover many odious practices, it also sweeps in commonplace social practices like joking, peer pressure, and exaggeration.

Facebook is not an outlier here. Social media sites like Google, Twitter, Pinterest, and Path all have similar behavioral restrictions. One problem with these restrictions as contractual terms is the lack of guidance given to users. The terms usually come with no accompanying definitions and are open to a wide range of assumptions about their meaning. (Some companies, such as Twitter, do provide additional guidance.) Social interaction is extremely messy, which makes it very difficult to pin down legally.

So users are left in a difficult position. They can either be fervently antisocial and overly cautious for fear of breaching the terms they agreed to, or they can simply roll the dice and hope to avoid the ire of the enforcers, since almost any flagged activity could be deemed a violation of the terms of use.

Of course, it’s seemingly common knowledge that these terms are sporadically, if not rarely, enforced. How many tags are added without consent to photos the morning after a wild party and remain online indefinitely? How many students are intimidated by their classmates with no recourse? How many pseudonymous profiles are obvious fakes (Santa Claus, for example), yet never deleted? While discretion allows scarce resources to be allocated effectively, an atmosphere where violations are routinely tolerated leaves users largely guessing.

Rosen noted an interesting fact that adds another layer of complexity to this story. In order to manage the massive scope of online social interaction, Facebook and many other social websites have internal policies for deciding which kinds of user activities violate the terms of use. Facebook’s internal policy was leaked earlier this year and covered by Adrian Chen in Gawker. The coverage gave users a much clearer picture of what kinds of content would likely be tolerated even though they technically violated Facebook’s terms of use.

Given this information, should users follow the broad and restrictive terms of their agreement, or should they take their guidance from Facebook’s more permissive and more clearly defined policies for enforcing those terms? In a relatively recent New York Times piece on the increasingly common practice of password sharing for video streaming services, Jenna Wortham indicated that “the companies with whom I spoke seemed to have little to no interest in curbing our sharing behavior — in part because they can’t.” If users are told that there will be no serious attempt to enforce terms restricting certain activities, can they fairly be surprised when their account is suspended for violating those terms? In other words, should any weight be given to the “operational reality” of the contract, a dynamic similar to the one identified in Quon v. Arch Wireless?

It’s important to note the tension here. Ideally, terms of use agreements are short and easy to read. The more a company tries to explain, the longer and more complex (and consequently less read) the agreements become. To complicate matters further, there are strong incentives for both approaches: long and complex terms allow companies to fulfill and disclaim numerous responsibilities, while shorter and simpler agreements vest the company with greater power in policing its users.

The broad scope of terms of use is also problematic under one theory of the Computer Fraud and Abuse Act, under which violating a website’s terms of use can constitute unauthorized access and thus a federal crime. The issue will only become more important with the so-called “Internet of Things.” Google has already begun to flex its muscle, using terms of use to keep Glass users and developers honest and clean. There are increasing incentives to please advertisers by keeping the social web a place to play nicely, or at least safely.

So where does this leave broad behavior restrictions as a normative matter? Some rules of the road are helpful, if not necessary, to save the social web from anarchy and to protect users. Statutes, torts, and regulations are ineffective at limiting many kinds of repugnant, harmful, and highly undesirable user activity. Yet it is worth pondering both the desirability of the significant discretion intermediaries give themselves in terms of use agreements to regulate social interaction online and the cost of leaving users uncertain about what the rules of the road actually are.

Because discretion is important in all regulatory systems, it might be worth comparing a company’s thoroughness in enforcing social media user agreements with the rigor with which law enforcement officials pursue jaywalking, speeding, or violent crime. I’m curious to hear your thoughts on the desirability and enforceability of these terms. Just like in high school, having a chaperone might be a good idea, but we don’t always have to like it.


Cross-posted at Concurring Opinions
