Professor Hartzog is a Professor of Law and Computer Science at Northeastern University, where he teaches privacy and data protection law, policy, and ethics. He holds a joint appointment with the School of Law and the College of Computer and Information Science. His recent work focuses on the complex problems that arise when personal information is collected by powerful new technologies, stored, and disclosed online.
Professor Hartzog’s work has been published in numerous scholarly publications such as the Yale Law Journal, Columbia Law Review, California Law Review, and Michigan Law Review, as well as in popular national publications such as The Guardian, Wired, BBC, CNN, Bloomberg, New Scientist, Slate, The Atlantic, and The Nation. His book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, is under contract with Harvard University Press. He has testified twice before Congress on data protection issues.
Professor Hartzog has served as a Visiting Professor at Notre Dame Law School and the University of Maine School of Law. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center. He holds a PhD in mass communication from the University of North Carolina at Chapel Hill, an LLM in intellectual property from the George Washington University Law School, and a JD from Samford University.
According to NPR, more than 300 teenagers broke into former NFL player Brian Holloway’s vacation home, causing massive damage and showcasing their exploits on social media. In response, Holloway created a website, helpmesave300.com, that collects the alleged culprits’ social media posts. He claims this repository has enabled teens to be identified, and that the growing list of names is “being turned over to the sheriffs (sic) department to assist them to verify and identify the facts.”
Online stalking, harassment, and invasions of privacy can be incredibly destructive. Yet very little empirical data exists regarding these incidents. This paucity of data hinders educational, support, research, and policy efforts. Without My Consent, a non-profit organization seeking to combat online invasions of privacy, is conducting research to better understand the experiences of online harassment. If you are 18 or older and have experienced harassment on the Internet, please consider taking their survey.
The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression.
Amazon, the company synonymous with online shopping, is supplying facial recognition technology to government and law enforcement agencies over its web services platform. Branded Rekognition, the technology is every bit as dystopian as it sounds.
Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects. That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.
The user agreement has become a potent symbol of our asymmetric relationship with technology firms. For most of us, it’s our first interaction with a given company. We sign up and are asked to read the dreaded user agreement — a process we know signals complex and often detrimental consequences of using the service, but one we choose to ignore.
The revelation that Cambridge Analytica was involved in the extraction of data involving over 50 million Facebook users has raised more than a few questions about just what went wrong and who is to blame.
Back in 2000, ING Direct Canada – the digital bank that became Tangerine Bank – piloted a “biometric” mouse that would scan users’ fingerprints to help bypass the need for passwords.
“Installing the mouse involved 16 different registry changes,” says Charaka Kithulegoda, Tangerine’s chief information officer, referring to changes to computer settings. “We said, ‘The tech works great, the concept works, but the experience is awful.’”
When you have a conversation with a chatbot, it’s clear that you’re talking to software, not a human. The conversation feels stiff. But some bots are adept at shooting the breeze, a skill that can make it hard to know you’re conversing with code. “Disclosure is going to be really important here,” says Woodrow Hartzog, a law professor at Samford University. “Problems can come up when people think they’re dealing with humans, but really they’re dealing with bots.”
Just because someone might be able to use their ear at checkout doesn’t mean it’s necessarily going to happen anytime soon, though. “Biometrics are tricky,” Woodrow Hartzog, an Associate Professor of Law at Samford University, told WIRED. “They can be great because they are really secure. It’s hard to fake someone’s ear, eye, gait, or other things that make an individual uniquely identifiable. But if a biometric is compromised, you’re done. You can’t get another ear.”
The central issue may come down to what Christine Rosen, senior editor of the New Atlantis, called “the Stepford Wife problem,” which she described as the probability that we’ll end up with emotional attachments to our robots. But Woodrow Hartzog, a law professor at Samford University and the owner of a Roomba nicknamed Rocko, argued that there’s nothing wrong with developing an emotional attachment to a robot.
Part of the Cyber Insecurity series.
Probe the difficult questions that we will need to address as human-robot relationships evolve in the coming decades. Explore the nuances of our future and prepare for the complex problems that will arise as our lives become more A.I. dependent.
Adults 18+ Only.
This program is free thanks to the generosity of the Lowell Institute.
Ranging across consumer protection, data aggregation, digital networks, high-tech devices, and surveillance, this panel brings together top privacy and surveillance experts to discuss how the Trump administration has shaped, and will continue to shape, our privacy in these and other areas.
- ELIZABETH JOH Professor of Law, UC Davis School of Law
- AHMED GHAPPOUR Associate Professor of Law, Boston University School of Law
- ANDREA MATWYSHYN Professor of Law, Northeastern University School of Law
The Tech/Law Colloquium speaker for September 19, 2017 will be Woodrow Hartzog, a professor of law and computer science at Northeastern University, where he teaches privacy and data protection law, policy, and ethics. His recent work focuses on the complex problems that arise when personal information is collected by powerful new technologies, stored, and disclosed online.
Talk: Privacy’s Blueprint: The Battle to Control the Design of New Technologies
Robots are starting to look suspiciously familiar. Increasingly sophisticated robots designed to resemble us are striking up more and more symbiotic relationships with humans, at home as our companions and at our workplaces as colleagues.
Human-robot interactions will continue to evolve as robotic technology transforms the way we see our creations and the way they react to us. But as machines cease acting like machines and become more integrated into our lives, how will we feel about them? And, dare we ask, how will they feel about us?
Recently, 50 million Facebook users had their personal information extracted and used for political and commercial purposes. In the wake of this scandal, we’ve all become much more aware of how our use of social media clashes with our desire for privacy. Are technical fixes and awareness enough, or is it time for Facebook and other online services to be regulated? Our guest Woodrow Hartzog is a professor of law and computer science at Northeastern University, and he discusses the battle over, and the future of, our personal information.
Woodrow Hartzog, a professor at Northeastern University Law School, discusses Facebook CEO Mark Zuckerberg’s agreement to appear before the House Energy and Commerce Committee about the company’s data usage policies. He speaks with Bloomberg’s June Grasso.
Sharing passwords with a partner can be tricky. NPR's Lulu Garcia-Navarro talks with tech experts Nancy Baym and Woodrow Hartzog while Becky McDougal from Malden, Mass. shares her experience.
Watch the full video at the Energy & Commerce Committee website.
Woodrow Hartzog, Associate Professor Cumberland School of Law
See more at: http://energycommerce.house.gov/hearing/what-are-elements-sound-data-bre...