Annemarie Bridy is a Professor of Law at the University of Idaho. She is also an Affiliated Fellow at the Yale Law School Information Society Project and a former Visiting Associate Research Scholar at the Princeton University Center for Information Technology Policy. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights. She has testified before Congress on the safe harbor provisions of the Digital Millennium Copyright Act and is widely published on the shifting landscape of intermediary copyright liability and online anti-piracy/anti-counterfeiting enforcement.
Professor Bridy holds a BA, summa cum laude and with distinction, from Boston University; an MA and a PhD from the University of California, Irvine; and a JD, magna cum laude, from the Temple University James E. Beasley School of Law. At the University of California, she was a Humanities Predoctoral Fellow and an Andrew W. Mellon Research Fellow in the Humanities.
In the name of “brand safety,” advertisers these days are working hard to better control where their ads appear online. Programmatic advertising with real-time bidding automates the process of online ad buying and ad placement to such an extent that the entire process takes place in the time it takes a web page to load. The process is highly efficient, but a significant downside is that ads sometimes appear alongside controversial content with which an advertiser would rather not be associated. Online pornography is the classic example, but other strains of extreme content—e.g., hate speech, conspiracism, and incitement-to-terrorism—have more recently come into focus for advertisers as threats to brand reputation.
In response to a global backlash in the wake of Brexit and the 2016 US presidential election, dominant tech companies are scrambling to stave off increased governmental regulation of their information handling practices. One attractive strategy is to cut deals with regulators, agreeing to follow privately negotiated rules in lieu of command-and-control regulation. With respect to content moderation, this form of hybrid public-private regulation could undermine First Amendment limits on state action that are designed to protect individual citizens from official censorship. This post explores the role of anti-piracy voluntary agreements in normalizing hybrid public-private speech regulation on the Internet.
Over the course of the last decade, in response to significant pressure from the US government and other governments, service providers have assumed private obligations to regulate online content that have no basis in public law. For US tech companies, a robust regime of "voluntary agreements" to resolve content-related disputes has grown up on the margins of the Digital Millennium Copyright Act (DMCA) and the Communications Decency Act (CDA). For the most part, this regime has been built for the benefit of intellectual property rightholders attempting to control online piracy and counterfeiting beyond the territorial limits of the United States and without recourse to judicial process.
The Fourth Circuit has issued its decision in BMG v. Cox. In case you haven’t been following the ins and outs of the suit, BMG sued Cox in 2014, alleging that the broadband provider was secondarily liable for its subscribers’ infringing file-sharing activity. In 2015, the trial court held that Cox was ineligible as a matter of law for the safe harbor in section 512(a) of the DMCA because it had failed to reasonably implement a policy for terminating the accounts of repeat infringers, as required by section 512(i). In 2016, a jury returned a $25M verdict for BMG, finding Cox liable for willful contributory infringement but not for vicarious infringement. Following the trial, Cox appealed both the safe harbor eligibility determination and the court’s jury instructions concerning the elements of contributory infringement. In a mixed result for Cox, the Fourth Circuit last week affirmed the district court’s holding that Cox was ineligible for the safe harbor, but remanded the case for retrial because the judge’s instructions to the jury understated the intent requirement for contributory infringement in a way that could have affected the jury’s verdict.
(NB: This headline does not obey Betteridge’s Law.)
Hollywood studios, led by Universal, have sued TickBox TV in federal district court in California, bringing their campaign against set-top box (STB) piracy stateside after a big win earlier this year in the EU. Last spring, the Dutch film and recording industry trade association BREIN prevailed in copyright litigation against the distributor of an STB called the Filmspeler. The CJEU held that the Filmspeler’s distributor, Wullems, directly infringed the plaintiffs’ copyrights—specifically, their right of communication to the public—by selling STBs loaded with software add-ons that provided easy access to infringing programming online. (I blogged about the Filmspeler case here.)
These comments were prepared and submitted in response to the U.S. Copyright Office's November 8, 2016 Notice of Inquiry requesting additional public comment on the impact and effectiveness of the DMCA safe harbor provisions in Section 512 of Title 17.
Making Sense of the Recent Upheaval at the U.S. Copyright Office
These comments were prepared and submitted in response to the U.S. Copyright Office's December 31, 2015 Notice and Request for Public Comment on the impact and effectiveness of the DMCA safe harbor provisions in Section 512 of Title 17.
"Calls for tighter content moderation policies have not come without concern. Some lawyers, including Annemarie Bridy, professor of law and affiliate scholar at Stanford University Center for Internet and Society, said tightly regulating speech on platforms can lead to over-censorship, or confusion about where to draw the line.
"But it’s far more than just instinct. Studies from around the world consistently come to the same conclusion, says Annemarie Bridy, a University of Idaho law professor specializing in copyright.
"“Article 13 creates more or less limitless liability with extraordinarily narrow exemptions,” says Annemarie Bridy, an academic intellectual property and technology lawyer at the University of Idaho. “The result will be that a few platforms will be positioned in terms of resources to operate with the related risk and expense. The rest will either stop hosting user-generated content, which would be a shame, or continue to do it until they get hit with an existentially threatening lawsuit, and fold.”"
"“But that’s not the world we live in in 2019,” says Annemarie Bridy, a University of Idaho law professor specializing in copyright. “It’s a statute from a more innocent, optimistic era in the history of the Internet.”
"This raises the question of whether Congress could draft a law narrow enough to help victims of deepfakes without such unintended consequences. As a cautionary tale, Annemarie Bridy, a law professor at the University of Idaho, points to the misuse of the copyright takedown system in which companies and individuals have acted in bad faith to remove legitimate criticism and other legal content.
Still, given what’s at stake with pornographic deep fake videos, Bridy says, it could be worth drafting a new law.
One of the most dangerous aspects of SOPA and other copyright proposals is the idea of moving enforcement and liability further down the stack of technology that powers the internet, even all the way to the Domain Name System (DNS). Although SOPA's DNS-blocking proposals were heavily criticized and the bill was ultimately defeated, the idea of deep-level copyright enforcement has lived on and been implemented without changes to the law.
We've been talking about internet platform regulation for a long time, but in the past year these issues have gotten a huge amount of increased focus — for a bunch of fairly obvious reasons.