How to Stop Facebook From Making Us Pawns in Its Corporate Agenda

Publication Type: Other Writing
Publication Date: July 1, 2014

Cross-posted from Wired.

You didn’t know it, but Facebook used some of you to manipulate your friends.

Even though you can’t anticipate how a company will integrate your data into its undisclosed activities, you’re still unintentionally providing grist for the manipulation mill. In the case of Facebook’s most recently published study, the company used the words of some of you—and we can’t know who—in ways you certainly did not intend, tweaking the News Feed based on emotional indicators to measure the effect on users’ moods. But this study is not unique. Social media regularly manipulates how user posts appear; the abuse of socially shared information has become a collective problem that requires a collective response.

This is a call to action. We should work together to demand that companies promise not to make us involuntary accomplices in corporate activities that compromise other people’s autonomy and trust.

Why Individual Responsibility Is Far From Enough

Many, though certainly not all, social media users are probably aware that their posts are curated for other people. Yet it’s still quite easy to fall into the trap of thinking that our mediated reality is the same as everyone else’s. In this mindset, the only way our words can prove harmful is when we make bad judgments about what we post. This perspective exerts a powerful hold on our imaginations because it suggests every time we log on, it’s up to us to do the right thing and make good judgments because others will be reading what we write. Unfortunately, this atomistic and choice-driven outlook ignores a deeper structural reality and erroneously frames the common good as protected by each user exhibiting sensitivity and self-control.

Through the lens of this overly reductive view of cyber-citizenship, each person does his or her part to promote the common good by accepting responsibility for three don’ts: Don’t deliberately say something that hurts another person’s feelings; Don’t disclose sensitive information that can harm your own reputation; Don’t leave your posts open to prying eyes when privacy settings can keep them out.

And yet, the case at issue demonstrates that individual discretion only goes so far when companies can take control of our information, re-purpose it to the potential detriment of others, and keep us in the dark about processes which are hard to keep in mind when looking at the friendly “what’s on your mind?” box—all the while avoiding liability by using lengthy and obtuse language in a Terms of Service agreement.

Even if the experiment resulted in a seemingly modest outcome and didn’t profoundly impact anyone’s life, a happy result couldn’t be presupposed at the outset. If it could, there wouldn’t be any need to run an experiment. Hypothetically, your sharing a problem to get it off your chest could have—combined with other attempts to do the same—been used in a way that made some of your friends (maybe ones with emotional disorders) sadder than they would otherwise have been.

This isn’t the first time information intermediaries have dubiously monkeyed with what privacy scholar Helen Nissenbaum calls the “contextual integrity” of what we share with others. Sadly, violations of contextual integrity happen often online. These violations are troubling because they minimize the control users have over their information and deliberately manipulate posts in ways that are often almost completely hidden—thanks to information asymmetries and corporate secrecy—from the posters themselves.

Beyond problems with contextual integrity, companies like Facebook also turn our online presence into other people’s liabilities by studying our behavior and helping data brokers determine how others tick by virtue of demographic similarities.

The Way to Fix It: Communal Action

So, what can we do? Although what we share online might be used maliciously against others, we cannot solve the problem of data manipulation solely by being discreet and prudent with what we post. The process is far too opaque and complex. Placing the burden on individuals to foresee such uses could dissuade use and nullify much of the utility of social media.

One option would be to pressure companies to provide “a consent process for willing study participants: a box to check somewhere saying you’re okay with being subjected to the occasional random psychological experiment that Facebook’s data team cooks up in the name of science.” But this solution is limited, too.

What if we come to change our minds and ultimately feel that furthering a social media company’s grand social experiment leaves our hands too dirty? In university research approved by an institutional review board (IRB), participants can drop out of a study whenever they want and request that their data be deleted. Barring technical problems—which researchers need to be up front about—the request should be granted. But could the corporate sense of consent—which differs from the academic one—ever be so accommodating, even if the day eventually comes when Consumer Subject Review Boards are voluntarily adopted? This is to say nothing of the overwhelming burden of routinely manifesting explicit preferences for study or falling back into the gentle lull of closing your eyes, clicking “I have read the Terms of Service,” and hoping things work out. The notice-and-choice regime that has been largely instituted in privacy law is, at best, flimsy and, as currently implemented, largely broken. It’s not wise to ask it to do more work than it’s capable of doing.

It’s possible that the Federal Trade Commission, the agency tasked with protecting consumers, could regulate this kind of activity as an unfair and deceptive trade practice. But these disputes are highly fact-dependent and enforced on a case-by-case basis. Because sweeping terms in boilerplate terms of use are generally valid, there aren’t many other options for legal recourse.

If we’re to learn something from the debate over the ethics of Facebook’s emotion study and move forward, the answer must lie in collective action. As when the furor erupted last year over Facebook rolling out Graph Search, we can act individually and register our personal disgust by quitting. But as we’ve previously pointed out, the social costs of going offline are entirely too high, and network effects make meaningful changes to the underlying structural problems difficult.

We Need a People’s Terms of Service Agreement

Along with Ari Melber, co-host of MSNBC’s “The Cycle,” we’ve previously proposed a “People’s Terms of Service Agreement—a common reference point and stamp of approval, like a Fair Trade label for the web, to govern the next photo-sharing app or responsible social network.” Together, we could pressure existing Internet companies to adopt our terms of service, to show us they take some of our basic rights into consideration in the way they operate and behave.

Included in the People’s Terms of Service—in addition to key issues involving transparency, intellectual property, confidentiality, and data security—we imagine adding something we didn’t initially include: a promise not to wrongfully manipulate users’ contributions to a medium. Yet this road, too, is steep. Few agreements are as non-negotiable as Terms of Use, so for such a strategy to work, we’d need more competition among the major tech players—and someone would have to step up to be the first to sign on.

While no single effort is a panacea, more pervasive attempts at reform, and reminders of why change is needed, can lead us to a true solution: a gradual but irrefutable shift in what we demand from information intermediaries. University of Maryland law professor James Grimmelmann—one of the more vocal critics in this debate—accurately stated, “The study itself is not the problem; the problem is our astonishingly low standards for Facebook and other digital manipulators.”

Realistically, our standards will not change overnight. But by changing the way we talk about who is harmed when we disclose, by reading stories about the problems with data manipulation, and by demanding more accountability from companies that use our online information, we become, piece by piece, more cognizant of our role in data manipulation, and we grow one step closer to a social accountability movement born of the realization that we’re all in this together.