Facebook and Falsehood

Publication Type: Other Writing
Publication Date: January 15, 2017

After the election, many people blamed Facebook for spreading partisan — and largely pro-Trump — "fake news," such as the false stories that Pope Francis had endorsed Trump or that Hillary Clinton was hiding a life-threatening illness. The company was assailed for prioritizing user "engagement," meaning that its algorithms probably favored juicy fake news over other kinds of stories. Those algorithms had taken on greater prominence since August, when Facebook fired the small team of human editors who curated its "trending" news section, following conservative complaints that the section was biased against the right.

Initially, Facebook denied that fake news could have seriously affected the election. But it recently announced that it was taking action. The social-media giant said it would work with fact-checking organizations such as Snopes and PolitiFact to identify problematic news stories and flag them as disputed, so that users know they are questionable. It would also penalize suspect stories so that they are less likely to appear in people’s news feeds.
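Facebook has not published how this penalty works, but the basic mechanism is easy to picture: a story flagged by fact-checkers gets its ranking score discounted before the feed is sorted. The Python sketch below is purely illustrative; the names (Story, DISPUTED_PENALTY, rank_feed) and the size of the penalty are assumptions, not Facebook’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement_score: float  # likes, shares, comments, and similar signals
    disputed: bool           # True if flagged by a fact-checking partner

# Assumed multiplier; the real down-weighting is not public.
DISPUTED_PENALTY = 0.2

def feed_score(story: Story) -> float:
    """Engagement-driven score, discounted when fact-checkers dispute the story."""
    score = story.engagement_score
    if story.disputed:
        score *= DISPUTED_PENALTY
    return score

def rank_feed(stories: list[Story]) -> list[Story]:
    """Order stories for a user's feed, highest adjusted score first."""
    return sorted(stories, key=feed_score, reverse=True)
```

Note that under a scheme like this, a disputed story is never removed; it simply loses the engagement advantage that made it spread in the first place.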

In each instance — the decision to remove human editors in August and the recent decision to use independent fact-checkers — Facebook has said that it cannot be an arbiter of truth. It wants to portray itself as a simple service that allows people and businesses to network and communicate, imposing only minimal controls over what they actually say to one another. This means that it has to outsource its judgments on truth — either by relying on "machine learning" or other technical approaches that might identify false information, or by turning to users and outside authorities.

Both approaches try to deal with fake news without addressing politics. Neither is likely to work.

The great strength and the great weakness of Silicon Valley is its propensity to redefine social questions as engineering problems. In a series of essays, Tim O’Reilly, the head of O’Reilly Media, argues that Facebook and similar organizations need to avoid individual judgments about the content of web pages and instead create algorithms that will not only select engaging material but also winnow the false information from the true. Google has created algorithms that comb through metadata for "signals" suggesting that pages are likely to have valuable content, without ever having to understand the content of the pages themselves. O’Reilly argues that one can do the same thing for truth: Facebook’s algorithms would identify websites that repeatedly spread fake news and penalize their stories. This would treat fake news as an engineering problem, in which one simply has to discover which signals are associated with true stories and give those stories priority.
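To make the idea concrete, here is a minimal sketch of that signal-based approach: track each domain’s record with fact-checkers, convert it into a smoothed trust score, and use that score to discount new stories from repeat offenders. Every name here, and the smoothing choice, is an assumption for illustration, not O’Reilly’s or Facebook’s actual method.

```python
from collections import defaultdict

class DomainReputation:
    """Track each domain's fact-check record (a hypothetical 'signal')."""

    def __init__(self):
        self.checked = defaultdict(int)  # stories fact-checked, per domain
        self.false = defaultdict(int)    # of those, how many were rated false

    def record(self, domain: str, was_false: bool) -> None:
        self.checked[domain] += 1
        if was_false:
            self.false[domain] += 1

    def trust(self, domain: str) -> float:
        """Laplace-smoothed share of true stories from this domain.

        Unseen domains start at a neutral 0.5 rather than 0 or 1, so a
        single fact-check cannot fully make or break a site's score.
        """
        n = self.checked[domain]
        true = n - self.false[domain]
        return (true + 1) / (n + 2)

def adjusted_score(engagement: float, rep: DomainReputation, domain: str) -> float:
    """Discount an engagement score by the publishing domain's trust prior."""
    return engagement * rep.trust(domain)
```

A domain that repeatedly publishes debunked stories drifts toward a trust score of zero, so every new story it posts is penalized no matter how engaging it is. That is the repeat-offender logic in miniature, and it shows the approach’s limit too: the algorithm never evaluates whether any particular story is true, only who published it.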

Read the full piece at The Chronicle of Higher Education