Daphne Keller is the Director of Intermediary Liability at Stanford's Center for Internet and Society. Her work focuses on platform regulation and Internet users' rights. She has published both academically and in the popular press; testified and participated in legislative processes; and taught and lectured extensively. Her recent work focuses on legal protections for users' free expression rights when state and private power intersect, particularly through platforms' enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2015 Daphne was Associate General Counsel for Google, where she had primary responsibility for the company's search products. She worked on groundbreaking Intermediary Liability litigation and legislation around the world, and advised on both overall product development and individual content takedown decisions.
Most people I talk to think that Facebook, Twitter, and other social media companies should take down ugly-but-legal user speech. Platforms are generally applauded for taking down racist posts from the White Nationalist demonstrators in Charlottesville, for example. I see plenty of disagreement about exactly what user-generated content should come down -- breastfeeding images? Passages from Lolita? Passages from Mein Kampf? But few really oppose the basic predicate of these removals: that private companies can and should be arbiters of permissible speech on their platforms.
Alarm bells are sounding around the Internet about proposed changes to one of the US’s core Intermediary Liability laws, Communications Decency Act Section 230 (CDA 230). CDA 230 broadly immunizes Internet platforms against legal claims based on speech posted by their users. It has been credited as a key protection for both online expression and Internet innovation in the US. CDA 230 immunities have limits, though. Platforms are not protected from intellectual property claims (mostly handled under the DMCA) or federal criminal claims.
In its Equustek ruling in June, the Canadian Supreme Court held that Google must delete search results for users everywhere in the world, based on Canadian law. Google has now filed suit in the US, asking the court to confirm that the order can’t be enforced here. Here’s my take on that claim.
The Canadian Supreme Court this morning issued its long-awaited ruling in Equustek. The court upheld an order compelling Google to remove search results for specified websites, not just in Canada, but everywhere in the world.
Stanford's Daphne Keller is a preeminent cyberlawyer and one of the world's leading experts on "intermediary liability" -- that is, when an online service should be held responsible for the actions of its users. She brings us a delightful tale of Facebook's inability to moderate content at scale, which is as much a tale of the impossibility (and foolishness) of trying to support 2.3 billion users (who will generate 2,300 one-in-a-million edge cases every day) as it is about a specific failure.
This past week, with some fanfare, Facebook announced its own version of the Supreme Court: a 40-member board that will make final decisions about user posts that Facebook has taken down. The announcement came after extended deliberations that have been described as Facebook’s “constitutional convention.”
The Program on Extremism Policy Paper series combines analysis on extremism-related issues by our researchers and guest contributors with tailored recommendations for policymakers.
Full paper available for download here.
However, a video showing ISIS recruitment, for example, can violate the law in one context but also be legal and important for purposes such as documenting crimes for future prosecution, says Daphne Keller, intermediary liability director at Stanford's Center for Internet and Society.
"The more we push companies to carry out fast, sloppy content removals, the more mistakes we will see," Keller says. She thinks lawmakers should "slow down, talk to experts including both security researchers and members of the affected communities, and build on that foundation."
Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had on the podcast a few times as well). She has just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all.
Indeed, as Europe pushes for more and more use of platforms as censors, it's important that someone gets them to understand how these plans almost inevitably backfire. Daphne Keller at Stanford recently submitted a comment to the EU about its plan, noting just how badly demands for censorship of "illegal content" can turn around and do serious harm.
Facebook has come under increased scrutiny in recent months, with the social media giant's efforts to protect its users' data called into question.
Vinton G. Cerf is one of the founding fathers of the internet, and on Wednesday, February 28th, he will be on Canada 2020’s stage for an exclusive event.
Tickets are free and open to the public, but available in limited quantities.
Best known as the co-designer of the TCP/IP protocols and the architecture of the modern Internet, Vint will join us in Ottawa to talk about online citizenship, the right to be forgotten, and the state of the modern internet.
Twenty years ago, the US Supreme Court’s decision in Reno v. ACLU established the framework for internet free speech and liability that remains in place today. This conference will consider the continuing viability of the Reno vision in the face of multiplying concerns about sex trafficking online, terrorist content, election interference, and other forms of contested content.
The Center for Internet and Society (CIS) is a public interest technology law and policy program at Stanford Law School, part of the school's Law, Science and Technology Program. CIS brings together scholars, academics, legislators, students, programmers, security researchers, and scientists to study the interaction of new technologies and the law and to examine how the synergy between the two can either promote or harm public goods like free speech, innovation, privacy, public commons, diversity, and scientific inquiry.
Over two years have passed since the Court of Justice of the European Union ruled, in the Google Spain case, that the search engine must “de-list” certain search results on request in order to honor the requesters’ data protection rights.
For many years after the European Data Protection Directive was implemented across Europe in 1998, data privacy was seen as an issue that mainly concerned what companies did with personal data behind the scenes.
In this episode of the Arbiters of Truth series—Lawfare's new podcast series on disinformation in the run-up to the 2020 election—Quinta Jurecic and Evelyn Douek spoke with Daphne Keller, the director of intermediary liability at Stanford's Center for Internet and Society, about the nuts and bolts of content moderation. People often have big ideas for how tech platforms should decide what content to take down and what to keep up, but what kind of moderation is actually possible at scale?
In this episode, Daphne Keller, Director of Intermediary Liability at the Center for Internet and Society at Stanford Law School and former Associate General Counsel for Google, discusses her essay "Who Do You Sue?: State and Platform Hybrid Power Over Online Speech," which is published by the Hoover Institution.
On this segment of “Quality Assurance,” I take a deep dive on platforms and regulating speech. I spoke with Daphne Keller, who is at Stanford Law School’s Center for Internet and Society. The following is an edited transcript of our conversation.
The question of what responsibility Internet platforms should bear for the user-posted content they host has been the subject of debate around the world, as politicians, regulators, and the broader public seek to navigate policy choices to combat harmful speech, choices with implications for freedom of expression, online harms, competition, and innovation.
Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states and terrorists and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?