
Our Racist, Terrifying Deepfake Future Is Here


And there are other, perhaps less obvious ways in which the most vulnerable will be disadvantaged. Riana Pfefferkorn, a policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, recently conducted a study of the kinds of lawyers who had been caught submitting briefs containing “hallucinations,” the term for AI’s tendency to fabricate citations to nonexistent sources, like those that filled a recent “scientific” report from Health and Human Services head Robert F. Kennedy Jr. She found the most hallucinations in briefs from small firms or solo practices, meaning attorneys who are likely stretched thinner than those at white-shoe firms, which have staff to catch AI mistakes.

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” Pfefferkorn said to me. “Overworked public defenders and criminal defense attorneys, or indigent people representing themselves in civil court—they won’t have the resources to tell real from fake. Or to call on experts who can help determine what evidence is and isn’t authentic.”

Read full article at The Nation