The Facebook Experiment: Gambling? In This Casino?

Author(s): Jules Polonetsky, Omer Tene
Publication Type: Other Writing
Publication Date: July 2, 2014

Cross-posted from Recode. 

By Jules Polonetsky and Omer Tene

Critics have spent the last few days castigating Facebook for a large-scale experiment conducted by researchers who wanted to learn the effects of tweaking the dosage of positive or negative posts in a user’s News Feed. Would people exposed to more negative content than the Facebook algorithm would ordinarily deliver to them become more or less prone to positivity themselves?

Many scorned Facebook’s actions as an unethical experiment on human subjects, conducted without their knowledge or informed consent. Kashmir Hill lamented what she called “a new level of experimentation, turning Facebook from a fishbowl into a petri dish.” Arthur Caplan wrote that the experiment “should send a shiver down the spine of any Facebook user or anyone thinking about becoming one,” and that it should never have been performed.

Others were more sanguine, pointing out that in considering the use of algorithms to tailor content — on Facebook and elsewhere — one was reminded of Captain Renault’s protest as he walked into a casino in “Casablanca”: “I’m shocked, shocked to find that gambling is going on in here!” They claimed that, far from being an exception to conventional business practice, manipulation of user experience on a digital platform is the market norm. On the Web, on mobile and increasingly in our homes and on wearable devices, data is analyzed to increase user engagement, satisfaction, traction, or shopping appetite.

Indeed, Facebook itself has engaged in experimentation with much more ambitious aspirations than merely gauging user sentiment. Last year, working with researchers from Johns Hopkins University, Facebook adjusted its profile settings so users could announce their status as organ donors, or sign up if they weren’t already registered. Over a single day, the new feature prompted more than 13,000 individuals to sign up as organ donors — more than 21 times the daily average. Most observers would agree that increasing organ-donation rates is a laudable goal, but clearly, some kinds of social influence must be considered off-limits or subject to special disclosures.

Big-data analysis is already used in multiple contexts: to personalize the delivery of education in K-12 schools, reduce the time commuters spend on the road, curb greenhouse-gas emissions, detect harmful drug interactions, encourage weight loss, and much more. Such data uses promise tremendous societal benefits, but at the same time they create new risks of surveillance, discrimination, and opaque algorithmic decision-making. In this environment, who is best placed to distinguish right from wrong, to warn before corporate practices cross the “creepy” line?

Increasingly, corporate officers find themselves struggling to decipher subtle social norms and make ethical choices that are more befitting of philosophers than business managers or lawyers. Perhaps the most powerful example is the European Court of Justice’s decision to make Google the arbiter of thousands of individual contests between privacy rights and freedom of speech. Google reacted by setting up an advisory panel comprising senior officials as well as five external experts, including an Oxford philosopher, a civil-rights activist and a United Nations representative. The company will have to deal with a steady barrage of requests from individuals who want to wipe their data record clean.

Google’s model will soon have to be replicated by companies tackling a broad swath of policy dilemmas. Should a fitness app “manipulate” users to coax them to eat less and exercise more? Is an airline overstepping the bounds of social etiquette by Googling passengers’ names to personalize their experience? Should an app developer offer a student a level-two math app after she completes level one?

These decisions echo the mandates of institutional review boards (IRBs), which operate in research institutions under formulaic rules and strict protocols. Deploying traditional IRBs in the corporate domain may be a challenge, given constraints of confidentiality, patents and trade-secret law. But it would be unfortunate if the lesson industry takes from this episode is to keep algorithmic decisions confidential, or to wall off corporate data coffers from the academic-research community.

Going forward, companies will need to create new processes, deploying a toolbox of innovative solutions to engender trust and mitigate normative friction. Fortunately, many companies have already laid the groundwork for such delicate decision-making by appointing chief privacy officers. Others have budding internal ethical review programs.

But big-data analysis raises issues that transcend privacy and implicate broader policy concerns around discrimination, filter bubbles, access to data, and the ethics of scientific research. Accordingly, it requires active engagement by both internal and external stakeholders to increase transparency, accountability and trust.

As the companies that serve us play an increasingly intimate role in our lives, understanding how they shape their services to influence users has become a vexing policy issue. Data can be used for control and discrimination, or harnessed to support fairness and freedom. Establishing a process for ethical decision-making is key to ensuring that the benefits of data outweigh the costs.