The following was excerpted from an article that will appear in a future issue of NWLawyer. The author was also recently interviewed for the “What’s Next” newsletter on LAW.COM.
The client shows his lawyer a video he says he took on his cell phone. It shows the defendant saying things that, if seen by the jury, will be a slam-dunk for the client’s case. The attorney includes the video in her list of evidence for trial, but the defendant’s lawyers move to strike. They claim it’s a fake. What’s the plaintiff’s lawyer—and the judge—to do?
Welcome to trial practice in the new world of “deepfake” videos. A portmanteau of “deep learning” and “fake,” deepfake programs use artificial intelligence (AI) to produce forged videos of people that appear genuine. The technology lets anyone “map” their movements and words onto someone else’s face and voice, making that person appear to say things they never said. The more video and audio of the person that can be fed into the computer’s deep-learning algorithms, the more convincing the result. For example, last year University of Washington researchers used algorithms they’d created to make a realistic, but phony, video of former President Obama out of actual audio and video clips. Jennifer Langston, “Lip-Syncing Obama: New Tools Turn Audio Clips into Realistic Video,” UW News (July 11, 2017). But it doesn’t take a UW computer science degree to make a deepfake: the technology is freely available and fairly easy for anyone to use. Its usability, and the verisimilitude of its output, will only improve over time.
Read the full piece at the WA State Bar Association.