The client shows his lawyer a video he says he took on his cell phone. It shows the defendant saying things that, if seen by the jury, would make the client’s case a slam dunk. The attorney includes the video on her exhibit list for trial, but the defendant’s lawyers move to strike. They claim it’s a fake. What are the plaintiff’s lawyer and the judge to do?
Welcome to trial practice in the new world of "deepfake" videos.
A portmanteau of "deep learning" and "fake," so-called "deepfake" programs use artificial intelligence (AI) to produce forged videos of people that appear genuine. The technology lets anyone map their own or another person’s movements and words onto someone else’s face and voice, making that person appear to say or do things they never said or did. The more video and audio of the person that can be fed into the computer’s deep-learning algorithms, the more convincing the result. For example, two years ago University of Washington (UW) researchers used algorithms they’d created to make a realistic, but phony, video of former president Barack Obama, based on actual audio and video clips they fed the algorithm.1 But it doesn’t take a UW computer science degree to make a deepfake; the technology is freely available and fairly easy for anyone to use. Its usability and the verisimilitude of its output will only keep improving.
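For readers curious about the mechanics: the early consumer deepfake tools were built around a pair of autoencoders sharing one encoder. The shared encoder learns a generic representation of a face; each identity gets its own decoder that reconstructs that person’s face from the shared representation. The "swap" is simply encoding a frame of person A and decoding it with person B’s decoder. The sketch below is a minimal illustration of that architecture in PyTorch, not the code of any particular tool; the network sizes are arbitrary, and random tensors stand in for the aligned face crops a real pipeline would extract from video.

```python
import torch
import torch.nn as nn

# Shared encoder: learns a face representation common to both identities.
class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

# One decoder per identity: rebuilds that person's face from the shared code.
class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # identity A (source), B (target)
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in data: random tensors in place of aligned 64x64 face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode person A's expression, decode with person B's decoder,
# yielding B's face performing A's expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The shared encoder is also why the amount of source footage matters so much: each decoder becomes only as convincing as the video and audio of its subject allows.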
Read the full article at NWLawyer.
- Publication Type: Other Writing
- Publication Date: 09/10/2019