Daphne Keller is the Director of Intermediary Liability at Stanford's Center for Internet and Society. Her work focuses on platform regulation and Internet users' rights. She has published both academically and in the popular press, testified and participated in legislative processes, and taught and lectured extensively. Her recent work focuses on legal protections for users' free expression rights when state and private power intersect, particularly through platforms' enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2015 Daphne was Associate General Counsel for Google, where she had primary responsibility for the company's search products. She worked on groundbreaking intermediary liability litigation and legislation around the world and advised on both overall product development and individual content takedown decisions.
This is one of a series of posts about the pending EU General Data Protection Regulation (GDPR), and its consequences for intermediaries and user speech online. In an earlier introduction and FAQ, I discuss the GDPR’s impact on both data protection law and Internet intermediary liability law.
Most intermediaries offer legal “Notice and Takedown” systems – tools for people to alert the company if user-generated content violates the law, and for the company to remove that content if necessary.
Most observers cheered when the neo-Nazi Daily Stormer was booted from YouTube, CloudFlare, and other platforms around the Internet. At the same time, the site’s disappearance stirred anxiety about Internet companies’ power over online speech. It starkly illustrated how online speech can live or die at the discretion of private companies. The modern public square is in private hands.
Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism.
These comments were prepared and submitted in response to the U.S. Copyright Office's November 8, 2016 Notice of Inquiry requesting additional public comment on the impact and effectiveness of the DMCA safe harbor provisions in Section 512 of Title 17.
Forthcoming in the Berkeley Technology Law Journal
“Users are calling on online platforms to provide a moral code,” says Daphne Keller, director of the intermediary liability project at Stanford’s Center for Internet and Society. “But we’ll never agree on what should come down. Whatever the rules, they’ll fail.” Humans and technical filters alike, according to Keller, will continue to make “grievous errors.”
Any effort to regulate social media companies and search engines would run up against a bevy of constitutional free speech questions. Legally, Trump doesn’t have any authority to change how Google displays search results, said Daphne Keller, director of intermediary liability at the Stanford Center for Internet and Society.
“By dictating what a private company does with search results, it legally would be like dictating what The Chronicle does with news reporting,” Keller said. “It would be a First Amendment violation.”
We don’t have nearly enough information to see the big picture and know what speech platforms are taking down. For the most part, we only find out when the speakers themselves learn that their posts or accounts have disappeared and choose to call public attention to it. But the idea that platforms’ rules are biased — and that this undermines democracy — isn’t new, and it isn’t unique to conservatives.
“There should be a whole gradation of how this [software] should work,” Daphne Keller, the director of the Stanford Center for Internet and Society (and mother of two), told Quartz. “We should be able to choose something in between, that is a good balance [between safety and surveillance], rather than forcing kids to divulge all their data without any control.”
Daphne Keller, of the Stanford Center for Internet and Society, said Section 230 was designed to allow platforms like Facebook to do some moderation and make editorial decisions without generally being liable for users’ posts: “They need to be able to make discretionary choices about content.”
The law seemed to be on Facebook’s side, she said, but added that it was an unusual case given the focus on app data access while previous cases have centered on more straightforward censorship claims.
Vinton G. Cerf is one of the founding fathers of the internet, and on Wednesday, February 28th, he will be on Canada 2020’s stage for an exclusive event.
Tickets are free and open to the public, but available in limited quantities. Click below to secure yours.
Best known as the co-designer of the TCP/IP protocols and the architecture of the modern Internet, Vint will join us in Ottawa to talk about online citizenship, the right to be forgotten, and the state of the modern internet.
Twenty years ago, the US Supreme Court’s decision in Reno v. ACLU established the framework for internet free speech and liability that remains in place today. This conference will consider the continuing viability of the Reno vision in the face of multiplying concerns about sex trafficking online, terrorist content, election interference, and other forms of contested content.
The Center for Internet and Society (CIS) is a public interest technology law and policy program at Stanford Law School and part of the Law, Science and Technology Program at Stanford Law School. CIS brings together scholars, academics, legislators, students, programmers, security researchers, and scientists to study the interaction of new technologies and the law and to examine how the synergy between the two can either promote or harm public goods like free speech, innovation, privacy, public commons, diversity, and scientific inquiry.
Over two years have passed since the Court of Justice of the European Union ruled, in the Google Spain case, that the search engine must “de-list” certain search results on request in order to honor the requesters’ data protection rights.
For many years after the European Data Protection Directive was implemented across Europe in 1998, data privacy was seen as an issue that mainly concerned what companies did with personal data behind the scenes.
In this episode of the Arbiters of Truth series—Lawfare's new podcast series on disinformation in the run-up to the 2020 election—Quinta Jurecic and Evelyn Douek spoke with Daphne Keller, the director of intermediary liability at Stanford's Center for Internet and Society, about the nuts and bolts of content moderation. People often have big ideas for how tech platforms should decide what content to take down and what to keep up, but what kind of moderation is actually possible at scale?
In this episode, Daphne Keller, Director of Intermediary Liability at the Center for Internet and Society at Stanford Law School and former Associate General Counsel for Google, discusses her essay "Who Do You Sue?: State and Platform Hybrid Power Over Online Speech," which is published by the Hoover Institution.
On this segment of “Quality Assurance,” I take a deep dive on platforms and regulating speech. I spoke with Daphne Keller, who is at Stanford Law School’s Center for Internet and Society. The following is an edited transcript of our conversation.
The question of what responsibility Internet platforms should bear for content posted by their users has been debated around the world, as politicians, regulators, and the broader public navigate policy choices to combat harmful speech — choices with implications for freedom of expression, online harms, competition, and innovation.
Cybersecurity is increasingly a major concern of modern life, coloring everything from the way we vote to the way we drive to the way our health care records are stored. Yet online security is beset by threats from nation-states, terrorists, and organized crime, and our favorite social media sites are drowning in conspiracy theories and disinformation. How do we reset the internet and reestablish control over our own information and digital society?