On the heels of the Federal Trade Commission’s (“FTC”) third annual “PrivacyCon,” the Future of Privacy Forum hosted its eighth annual “Privacy Papers for Policymakers” event on Capitol Hill—a gathering in which academics present their original scholarly work on privacy-related topics to D.C. policy wonks who may have a hand in shaping laws and regulations at the local, federal, and international levels. The goal of the event is, in part, to foster academic-industry collaboration in addressing the world’s current and emerging privacy issues.
FTC Commissioner Terrell McSweeny kicked off the program with a reminder of the unique challenge that has always faced the world of tech policy: the rapid acceleration of the Digital Age and the need for consumer rights to catch up. Commissioner McSweeny opined that the challenge may require some solutions that go beyond privacy—such as individual control over personal data, data portability, and governance by design—and pointed out several ways in which the honored papers may help spur the evolution of existing privacy frameworks:
“Artificial Intelligence Policy: A Primer and Roadmap”—Former Covington attorney Ryan Calo’s paper aims to advance current debates surrounding artificial intelligence (“AI”) by pointing out something that many policymakers may not realize: concerns surrounding AI, the ethics of algorithms, and the possibility that robots could steal all of our jobs have been around for at least half a century (in fact, as his paper mentions, in 1960 John F. Kennedy was called upon to “regulate automation” through a conference on robots and labor). By putting the current debate into context, the paper provides a roadmap to the major policy questions that AI raises and the many complicating factors that should be considered as AI takes on an ever-increasing presence in consumers’ daily lives.
“The Public Information Fallacy”—Unlike the many scholarly works that have already debated the plethora of definitions of “sensitive” or “private” information, Woodrow Hartzog’s paper asks what should be considered “public.” In researching his paper, Hartzog discovered that there are surprisingly few definitions of the term “public information,” despite the fact that the term is often invoked as carte blanche for government surveillance and the personal data practices of private companies. Given the consequences that come with the label, the paper proposes that before deeming something “public” we should first consider the values we want to serve, the outcomes we want to achieve, and the unspoken expectations that people realistically have when they “release” their information.
“The Undue Influence of Surveillance Technology Companies on Policing”—Elizabeth Joh’s paper discusses the ways in which private companies that produce surveillance technologies (such as “StingRays,” body cameras, or big data software) exert an undue, secretive influence on the traditionally public policy decisions made by police departments. This influence may have enormous consequences for civil liberties, yet very little oversight and control is exercised over these companies, which typically act out of their own private self-interest. The paper argues that although some automation might produce fairer outcomes, the private companies procured to supply police technologies should be part of the calculus when exercising oversight of police practices.