Professor Hartzog is a Professor of Law and Computer Science at Northeastern University, where he teaches privacy and data protection law, policy, and ethics. He holds a joint appointment with the School of Law and the College of Computer and Information Science. His recent work focuses on the complex problems that arise when personal information is collected by powerful new technologies, stored, and disclosed online.
Professor Hartzog’s work has been published in numerous scholarly publications such as the Yale Law Journal, Columbia Law Review, California Law Review, and Michigan Law Review, and in popular national publications such as The Guardian, Wired, BBC, CNN, Bloomberg, New Scientist, Slate, The Atlantic, and The Nation. His book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, is under contract with Harvard University Press. He has testified twice before Congress on data protection issues.
Professor Hartzog has served as a Visiting Professor at Notre Dame Law School and the University of Maine School of Law. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center. He holds a PhD in mass communication from the University of North Carolina at Chapel Hill, an LLM in intellectual property from the George Washington University Law School, and a JD from Samford University.
I have just uploaded a new essay about online privacy to SSRN that will appear in Volume 46 of the Georgia Law Review. The essay, titled "Chain-Link Confidentiality," asserts that personal information that is shared online can be better protected if we require our confidants to make sure that their confidants are watching out for us. This strategy could help us retain control over our personal information as it moves downstream. Your comments are warmly welcome.
Last week, the Supreme Court issued its opinion in United States v. Jones, in which the Justices held that the government's installation of a GPS device on a target's vehicle, and its use of that device to monitor the vehicle's movements, constituted a Fourth Amendment search. The decision was surprisingly unanimous on this point, though concurring opinions by Justices Sotomayor and Alito potentially amplify the significance of the opinion by proposing alternate approaches to the larger problem of ubiquitous surveillance technologies and privacy in public. Given the majority opinion's narrow focus on the attachment of the device to the car, the larger issue of privacy in public remains unsettled.
Others have done an exemplary job of commenting on the decision. The dominant themes arising from the decision and analysis of the decision seem to be the (re?)injection of the concept of trespass into Fourth Amendment doctrine, signs of potential withering of the third party doctrine, and recognition that Fourth Amendment and privacy doctrine will soon enough be useless if they do not adequately protect against ever-evolving surveillance methods and technologies.
I'd like to focus on an aspect of the decision that has not shown up much in the analysis of the case, likely because it was never explicitly mentioned in the text. Although the word obscurity does not appear anywhere in United States v. Jones, I think the decision, particularly Justice Sotomayor's concurring opinion, supports the idea that the obscurity of our personal information is worth protecting.
Amazon, the company synonymous with online shopping, is supplying facial recognition technology to government and law enforcement agencies over its web services platform. Branded Rekognition, the technology is every bit as dystopian as it sounds.
Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects. That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.
The user agreement has become a potent symbol of our asymmetric relationship with technology firms. For most of us, it’s our first interaction with a given company. We sign up and are asked to read the dreaded user agreement — a process that we know signifies some complex and inconveniently detrimental implications of using the service, but one that we choose to ignore.
The revelation that Cambridge Analytica was involved in the extraction of data involving over 50 million Facebook users has raised more than a few questions about just what went wrong and who is to blame.
“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”
In the near future, robots might be able to manipulate our emotions, everyone will be rated on everything, and we won’t be able to trust our own eyes, Woodrow Hartzog said.
“I promise I will end on a positive note,” the Northeastern professor said Thursday, eliciting chuckles from a roomful of people gathered at the university’s Charlotte campus to hear him discuss online privacy.
“You’ve really got a rock-and-a-hard-place situation happening here,” said Woody Hartzog, a professor of law and computer science at Northeastern University. “Facial recognition can be incredibly harmful when it’s inaccurate and incredibly oppressive the more accurate it gets.”
At the Senate hearing, Northeastern University Professor of Law and Computer Science Woodrow Hartzog said he teaches his students to expect to deal with a patchwork of state laws on a variety of issues. The argument that state regulation stifles innovation may therefore not hold, he said, because many industries already treat such a patchwork as a legal reality.
“In the United States, we have a tradition of dealing with a patchwork of 50 state laws, [and] while there are virtues to consistency, it’s not the obstacle that would strike me as the first thing we have to surmount if we’re going to get privacy right,” said Woodrow Hartzog, the lone privacy advocate testifying before the committee, and a professor of law and computer science at Northeastern University.
Solutions to many pressing economic and societal challenges lie in better understanding data. New tools for analyzing disparate information sets, called Big Data, have revolutionized our ability to find signals amongst the noise. Big Data techniques hold promise for breakthroughs ranging from better health care and a cleaner environment to safer cities and more effective marketing. Yet privacy advocates are concerned that the same advances will upend the power relationships between government, business, and individuals, and lead to prosecutorial abuse, racial and other profiling, discrimination, redlining, overcriminalization, and other restrictions on freedom.
Listen to the full radio show (in German) at Deutschlandradio.
On the other hand, even algorithms can make mistakes. They are, after all, written by humans. And legal texts in particular can be difficult to translate into a formalized language. They are, says Woodrow Hartzog, simply not made to be automated. And they are not made to be enforced one hundred percent.