Danielle Citron, a law professor at the University of Maryland who works with Facebook’s non-consensual intimate images advisory group in an unpaid capacity, says that the episode could point to the limits of algorithms in flagging hate speech. Hate speech is about context, she says, which algorithms struggle to detect.
“Algorithms may be helpful to flag patterns and groupings of words that in other cases have been appropriately found to constitute hate speech,” said Citron. “But it does a terrible job of adjudicating on its own. So the idea that we would rely on algorithms to flag and to filter and block without human moderation … is a terrible idea.” But even with human moderation, content removal decisions can be very difficult.
- Date Published: 07/05/2018
- Original Publication: Slate