Adapting Advertising Infrastructure for Content Regulation: WIPO’s BRIP Blacklist

In the name of “brand safety,” advertisers these days are working hard to better control where their ads appear online. Programmatic advertising with real-time bidding automates the process of online ad buying and ad placement to such an extent that the entire process takes place in the time it takes a web page to load. The process is highly efficient, but a significant downside is that ads sometimes appear alongside controversial content with which an advertiser would rather not be associated. Online pornography is the classic example, but other strains of extreme content—e.g., hate speech, conspiracism, and incitement-to-terrorism—have more recently come into focus for advertisers as threats to brand reputation.

How to beat a copycat.

If you run a billion-dollar (or even million-dollar) brand, does it make sense to spend a few thousand dollars to protect your mark from copycats? Or if you are a songwriter, does it make sense to spend thirty-five dollars to register a copyright in your song? The answer is yes. But the recent "SUPREME" trademark drama shows why this answer isn't so obvious to everyone.

Bringing the Fight for Court Transparency to the Ninth Circuit

You may recall that in February, a federal district court in Fresno denied a petition I filed with the American Civil Liberties Union, the ACLU of Northern California, and the Electronic Frontier Foundation. The petition sought to shed light on the Department of Justice's effort to force Facebook to break the encryption on Messenger's voice calls so that Facebook could carry out a wiretap order the DOJ had obtained.

Tool Without A Handle: Guerilla Information Warfare

“The enemies of liberal democracy hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy" - Yuval Noah Harari.

The security of our news and media information systems matters as much as the security of personal and commercial information systems. "Information warfare" shows that harms can arise even when there is no unauthorized access, when tools are used as intended, and when no user privacy settings are compromised. In both cybersecurity and news/media security, the threats are asymmetric, the tools are readily available and usable for many purposes, and attacks are easily disguised as benign activity. Whether it is a business conference used for economic espionage or a product-order email used to inject malware, the tools of everyday information exchange can be turned into weapons.

Robust resistance to phishing and social engineering, as well as to "fake news" and disinformation campaigns, is unlikely to be fully achieved through refinements to the tools (including information platforms). It will also require users, to the extent possible, to acquire a greater understanding of themselves as sorters and interpreters of information. If there is a territory to explore in addressing information warfare, let it be the territory of the mind.

Open Letter to GCHQ Regarding Threats Posed by their Ghost Proposal

Today I join several cybersecurity researchers, civil liberties advocates, and civil society organizations in responding to GCHQ's recent proposal to silently add "ghost" users from law enforcement or the security services to online chats and calls, including those conducted via encrypted messaging tools like WhatsApp, iMessage, or Signal.

What Online Content Are We Regulating? Illegal Speech, Offensive Speech, and Platform Value

This discussion, excerpted from my Who Do You Sue article, briefly reviews the implications of what I call "must-carry" arguments: claims that operators of major Internet platforms should be held to the same First Amendment standards as the government, and so prevented from using their Terms of Service or Community Guidelines to prohibit lawful speech.
