The verdict was in, and it was a comforting one: Deepfakes are the “dog that never barked.” So said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. Giles reasoned that the threat posed by deepfakes has become so entrenched in the public’s imagination that no one would be fooled should they appear. Simply put, deepfakes “no longer have the power to shock.” Tim Hwang agreed but for different reasons, some technical, some practical. Hwang asserted that the more deepfakes are made, the better machine learning becomes at detecting them. Better still, the major platforms are marshaling their efforts to remove deepfakes, leaving them “relegated to sites with too few users to have a major effect.”
We disagree with each of these claims. Deepfakes have indeed been "barking," though so far their bite has most often been felt in ways that many of us never see. Deepfakes have in fact taken a serious toll on people's lives, especially the lives of women. As is often the case with early uses of digital technologies, women are the canaries in the coal mine. According to Deeptrace Labs, of the approximately 15,000 deepfake videos appearing online, 96 percent are deepfake sex videos, and 99 percent of those insert women's faces into porn without their consent. Even for those who have heard a great deal about the potential harms of deepfakes, their capacity to shock remains strong.

Consider the fate that befell journalist and human rights activist Rana Ayyub. When a deepfake sex video appeared in April 2018 purporting to show Ayyub engaged in a sex act she never performed, the video spread like wildfire. Within 48 hours, it appeared on more than half of the cellphones in India. Ayyub's Facebook profile and Twitter account were overrun with death and rape threats. Posters disclosed her home address and claimed that she was available for anonymous sex. For weeks, Ayyub could hardly eat or speak. She was terrified to leave her house lest strangers make good on their threats. For months she stopped writing, her life's work. That is shocking by any measure.
Is this really any different from the threat posed by familiar, lower-tech forms of fraud? Yes. Human cognition predisposes us to be persuaded by visual and audio evidence, especially when the video or audio in question is of such quality that our eyes and ears cannot readily detect that something artificial is at work. Video and audio have a powerful impact on people: we credit them as true on the assumption that we can believe what our eyes and ears tell us. The more salacious and negative a deepfake, moreover, the more inclined we are to pass it on. Researchers have found that online hoaxes spread 10 times faster than accurate stories. And if a deepfake aligns with our existing viewpoints, we are still more likely to believe it. Making matters worse, the technologies for creating deepfakes are likely to diffuse rapidly in the years ahead, bringing the capability within realistic reach of an ever-widening circle of users—and abusers.
Read the full post at Lawfare Blog.
- Date Published: 05/11/2020
- Original Publication: Lawfare Blog