Facebook Takes a Step Forward on Deepfakes—And Stumbles

Publication Type: Other Writing
Publication Date: January 8, 2020

The good news is that Facebook is finally taking action against deepfakes. The bad news is that the platform’s new policy does not go far enough.

Journalists, technologists and academics have warned in recent years about the potential threat posed by these realistic-looking video or audio falsehoods, which show real people doing or saying things they never did or said. Generated through neural-network methods capable of achieving remarkably lifelike results, deepfakes present a challenge for both privacy (the vast majority of deepfake videos are nonconsensual pornography, showing people performing sex acts in which they never engaged) and security (consider, for example, the effects of a deepfake showing the president announcing a nuclear strike on North Korea). And as two of us (Citron and Chesney) have written, if deepfakes grow more common, they also “threaten to erode the trust necessary for democracy to function effectively[.]”

So it should have been a relief when, on Jan. 7, Facebook announced a new policy banning deepfakes from its platform. In a blog post on the company’s website, Vice President of Global Policy Management Monika Bickert wrote that Facebook will, going forward, remove video from its platform if it meets both of the following criteria:

  • It has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.

Yet, instead of cheers, the company faced widespread dismay—even anger. The campaign of Democratic presidential candidate Joe Biden, who was recently targeted by a misleadingly edited video in which he appeared to make a racist comment during a campaign speech, declared that Facebook’s announcement represented only the “illusion of progress.” Also angry was the team of Speaker of the House Nancy Pelosi, who was similarly targeted in May 2019 with a deceptively edited video altered to make her appear drunk or in poor health—which Facebook refused to take down at the time. “Pelosi’s people,” wrote Washington Post technology reporter Tony Romm, “are pissed[.]”

So what went wrong?

Read the full post at Lawfare Blog