What exactly was the extent of Russian meddling in the 2016 election campaign? How widespread was its infiltration of social media? And how much influence did its propaganda have on public opinion and voter behavior?
Scholars are only now starting to tackle those questions. But to answer them, academics need data — and getting that data has been a problem.
Take a recent example: Jonathan Albright, a researcher at Columbia University, looked into a number of Russian-bought pages that Facebook had taken down. He concluded that they had amassed potentially hundreds of millions of views. David Karpf, an associate professor of media and public affairs at George Washington University, wasn't convinced, arguing that most of the "people" who had liked these pages were very likely Russian bots. (Full disclosure: I commissioned and edited Karpf's post on The Washington Post's Monkey Cage blog.)
Usually, such disagreements are resolved by examining the data. The problem that scholars like Albright and Karpf face is that little data on Facebook is publicly available. For his study, Albright had to use an unconventional, Facebook-owned tool called CrowdTangle to find anything at all. After he published his initial findings, Facebook quickly announced that it had "fixed a bug" in the software Albright used, making it impossible for other researchers to replicate his work. Albright and Karpf are left in a very unhappy situation: the data they need to understand what happened is simply no longer available.
Read the full piece at The Chronicle of Higher Education.