Q&A: LAW’s Danielle Citron Warns That Deepfake Videos Could Undermine the 2020 Election

Danielle Citron is worried. Deepfake videos are here—you may have watched one already and not even realized it—and they could undermine the 2020 presidential election. 

Citron, a leading privacy scholar who joined the School of Law as a professor of law in July and is a member of the Hariri Institute’s Cyber Security, Law, and Society Alliance, coauthored an essay published this week on the Carnegie Endowment for International Peace website. The essay calls for every presidential candidate to take immediate steps to counter deepfakes and outlines an eight-point deepfakes emergency campaign plan.

Why the urgency? Only the future of our democracy is at stake.

Deepfakes are hard-to-detect, harder-to-debunk, highly realistic videos and audio clips that make people appear to be saying and doing things they never said or did. Enabled by rapidly advancing machine learning, they are distributed at lightning speed through social media. A recent example of the danger of manipulated media is a video of Speaker of the House Nancy Pelosi (D-Calif.) altered to make it appear as if she were drunk and slurring her words. It got more than 2.5 million views on Facebook, and while it was relatively easy to tell that the video had been doctored (Citron and other experts call it a cheap fake rather than a deepfake), it went viral anyway, with an assist from President Trump, who tweeted a clip that first aired on Fox News.

Citron testified on manipulated media, particularly deepfakes, in June before the House Permanent Select Committee on Intelligence. Warning that technology will soon enable the creation of deepfakes that will be impossible to distinguish from the real thing, she told lawmakers: “Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy.”

Read the full piece at BU Today