Disinformation on Steroids

October 16, 2018

Introduction

Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep-learning algorithms to synthesize video and audio content have made possible the production of “deep fakes”—highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did. As this technology spreads, the ability to produce bogus yet credible video and audio content will come within the reach of an ever-larger array of governments, nonstate actors, and individuals. As a result, the ability to advance lies using hyperrealistic, fake evidence is poised for a great leap forward.

The array of potential harms that deep fakes could entail is stunning. A well-timed and thoughtfully scripted deep fake or series of deep fakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society. The opportunities for the sabotage of rivals are legion—for example, sinking a trade deal by slipping to a foreign leader a deep fake purporting to reveal the insulting true beliefs or intentions of U.S. officials.

The prospects for a comprehensive technical solution are limited for the time being, as are the options for legal or regulatory responses to deep fakes. Still, a combination of technical, legislative, and personal measures could help stem the problem.

Background: What Makes Deep Fakes Different?

The creation of false video and audio content is not new. Those with resources—like Hollywood studios or government entities—have long been able to make reasonably convincing fakes. The “appearance” of 1970s-vintage Peter Cushing and Carrie Fisher in Rogue One: A Star Wars Story is a recent example.

The looming era of deep fakes will be different, however, because the capacity to create hyperrealistic, difficult-to-debunk fake video and audio content will spread far and wide. Advances in machine learning are driving this change. Most notably, academic researchers have developed “generative adversarial networks” (GANs), which pit two algorithms against each other: a generator produces synthetic data (i.e., the fake), while a discriminator tries to distinguish it from the training data (i.e., real audio or video), and each improves in response to the other until the synthetic output is nearly indistinguishable from the real thing. Similar work is likely taking place in classified settings, but the technology is also developing at least partly in public view, with the involvement of commercial providers. Some degree of credible fakery is already within the reach of leading intelligence agencies, but in the coming age of deep fakes, anyone will be able to play the game at a dangerously high level. In such an environment, it would take little sophistication or resources to wreak havoc. Not long from now, robust tools of this kind and for-hire services to implement them will be cheaply available to anyone.
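
As a concrete illustration of that adversarial setup, the toy sketch below trains a generator to mimic a simple one-dimensional Gaussian distribution standing in for real audio or video data, while a discriminator learns to tell real samples from generated ones. This is a minimal sketch only, assuming PyTorch; the network sizes, learning rates, target distribution, and step count are illustrative assumptions, not details of any actual deep-fake system.

    # Toy GAN sketch (assumes PyTorch). A generator maps random noise to
    # samples; a discriminator scores samples as real (1) or fake (0).
    # Each network improves by exploiting the other's weaknesses.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(
        nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
    )
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0  # stand-in "real" data: N(3, 0.5)
        fake = generator(torch.randn(64, 8))   # synthetic samples from noise

        # Discriminator step: push scores toward 1 for real, 0 for fake.
        d_opt.zero_grad()
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # Generator step: adjust the generator so its fakes score as real.
        g_opt.zero_grad()
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

    # If training succeeds, generated samples approximate the target mean.
    print(generator(torch.randn(1000, 8)).mean().item())  # should be near 3.0

Production deep-fake systems apply the same generator-versus-discriminator loop to images, video frames, or audio waveforms, with far larger networks and training sets.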

Read the full piece at the Council on Foreign Relations.