The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception

By Riana Pfefferkorn

Last week, television news host Chris Cuomo fell for a deepfake video of Representative Alexandria Ocasio-Cortez (D-NY) despite its being prominently labeled as AI-generated. Two weeks before Cuomo’s error, Senator Mike Lee (R-UT) posted to X a clearly fake resignation letter from embattled Federal Reserve chair Jerome Powell. Lee (who routinely amplifies misinformation) quickly deleted the tweet, claiming he didn’t know whether the letter was legitimate. Cuomo likewise deleted his post, though he bizarrely demanded that AOC disavow all the words the fake video had put in her mouth.

In an opinion column in the New York Times, Princeton professor Zeynep Tufekci references the Cuomo incident to highlight both the difficulty and necessity of verifying what’s real in the age of high-quality AI-generated images, audio, and video. As Tufekci notes, deepfakes are a demand-side issue, not just a supply-side issue: Many people (including, evidently, Cuomo and Lee) who consume and share fake content don’t care if it’s fake so long as it confirms their existing beliefs.

Nevertheless, as Tufekci points out, there are many contexts, from private interactions between individuals to the financial markets, where “truthiness” won’t do and we still need some way to establish content authenticity. Fortunately, that’s what many people in both policy and technical domains have been working on for years now.
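To make the technical side concrete, here is a minimal sketch of the core idea behind provenance efforts such as C2PA: content is cryptographically signed at the source, and anyone holding the corresponding public key can later confirm it hasn’t been altered. This is an illustrative Python example using the widely available `cryptography` package, not the actual C2PA manifest format; the key handling and payload here are assumptions for demonstration only.

```python
# Illustrative sketch of signature-based content authenticity.
# NOT the C2PA format; keys and payload are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

# A trusted source (e.g., a camera or publisher) holds a signing key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# At creation time, hash the media bytes and sign the digest.
media_bytes = b"...raw image or video bytes..."
digest = hashlib.sha256(media_bytes).digest()
signature = signing_key.sign(digest)

# Later, anyone with the public key can check the content is intact.
def is_authentic(content: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))        # True
print(is_authentic(b"tampered bytes", signature))  # False
```

Real provenance systems go further, binding the signer’s identity into the file’s metadata and chaining signatures through edits, but the trust model is the same: the signature, not the pixels, is what vouches for the content.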

On the policy side, lawyers, judges, lawmakers, and legal scholars have long been attuned to the deepfake threat. As Tufekci notes, in the deepfake era, a wrongdoer caught on camera could claim the video is a deepfake, or could manufacture fake evidence of their own to frame someone else: “Hey, it’s your word against theirs.” Ordinary people will therefore need ways to “disprove false claims and protect our reputations,” though Tufekci warns that this need creates incentives for increased surveillance.

Read the full post at Tech Policy Press.