Stanford CIS

A Primer on Cross-Border Speech Regulation and the EU’s Digital Services Act

By Daphne Keller

Some U.S. politicians have recently characterized European platform and social media regulation laws as “censorship” of speech in the U.S. If this claim were true, it would be a very big deal. As someone with a twenty-year history of resisting cross-border Internet speech suppression demands, I would be up in arms. 

It isn’t true, though. This blog post explains why. It starts with a big picture overview of how Internet lawyers and online platforms deal with varying national speech laws. Then it reviews the EU law currently at the center of this controversy, the Digital Services Act (DSA). Finally, it goes over some actual examples of real or attempted cross-border speech restrictions. Super wonks might find nothing new in this summary, but newcomers to cross-border speech regulation will likely find it a helpful touchstone and guide. If you’re neither, though, all you really need to know is this: Nothing about the EU’s Digital Services Act (DSA) requires platforms to change the speech that American users can see and share online.

The Big Picture

The DSA is a major overhaul to EU platform regulation. It has some parts that I like, including clearly defined platform immunities and unprecedented rights for platforms’ users to understand and challenge content removal decisions. It has parts I worry about, including grants of regulatory authority that could have been better drafted to prevent abuse. Still, the EU equivalent of constitutional law should ensure that regulators can’t use the DSA to suppress lawful but state-disapproved speech. Recent political signs are also encouraging. When an EU Commissioner actually tried to abuse power – by falsely claiming that the DSA gave him authority to restrict legal speech on platforms like X – he swiftly found himself without a job.    

EU Executive Vice President Henna Virkkunen recently explained the DSA’s territorial reach in a letter to U.S. Representative Jim Jordan. The law, she said, “applies exclusively within the European Union to all services provided therein” and “has no extraterritorial jurisdiction in the U.S. or any other non-EU country.” (Bold-formatted text here and throughout this post is my addition.)  I’ll dig deeper into the DSA’s statutory language below, but this is the bottom line. 

Using national law to restrict what people see inside a country’s territory is normal. Every country does it, including the U.S. This territory-based restriction on speech is the basic framework that Internet law has settled on – sometimes through explicit legislation or court rulings, more often by unacknowledged convention. 

In practice, big platforms typically geo-block users within a country from seeing content that is known to be illegal in that country. This approach has been endorsed by the EU’s highest court, and is also a standard commercial practice for things like territorially limited copyright licenses. Users who truly want to see material that is blocked in their country can generally use VPNs to do that, for better or for worse. The same VPN tools used by people seeking seriously illegal content and by your friends and family members who want to see sports or TV dramas from other countries are also essential for researchers, business travelers concerned about security, and persecuted groups in authoritarian regimes. The law about VPNs and tools to bypass geo-blocks is complicated and evolving. But the European Court of Human Rights has said that Russia violated Internet users’ free expression rights by trying to prevent them from getting access to VPNs.

Every country, including the U.S., has its own laws about what speech is illegal. The U.S. bans some speech for being defamatory, fraudulent, or too likely to incite violence, for example. The details of these speech-restricting laws vary considerably across borders. In the U.S., different states have their own defamation laws. In the EU, Member State laws diverge when it comes to Holocaust denial, publication of Mein Kampf, the proper balance between news reporting and individual privacy rights, and more. 

The U.S. has historically been more permissive about speech than most countries, but that seems to be changing. Over the past century, U.S. courts interpreted the First Amendment to protect a wide array of expression – like hate speech, falsehoods, and invasion of privacy – that might be illegal in other countries. The U.S. isn’t always on the vanguard of speech rights, though. At times Sweden has allowed more pornography, for example. And Brazil can be more permissive of parodies. The U.S. in 2025 has also taken a sharp turn toward increasing state restrictions on speech. Major media organizations have settled frivolous defamation lawsuits brought by President Trump, paying out millions of dollars for speech and news reporting that was almost certainly legal. News media and advertisers have also given up on First Amendment rights in order to secure merger approval from the FTC and FCC.

International human rights law does a lot of hidden work to smooth over countries’ inevitable differences in speech law. We may not like each other’s choices, but international free expression lawyers generally don’t bother fighting much about them as long as a country’s rules are democratically enacted, reviewable by courts for any violations of rights, and fall in the range of rules that are permitted under international human rights law. Plenty of state censorship violates even the more flexible standards of human rights law. International free speech advocates often reserve their firepower for that.

People around the world complain that platforms censor their speech at the behest of the U.S. government. The U.S. mostly doesn’t have laws that explicitly require platforms to limit speech in other countries, though sanctions or export control laws like those administered by the Office of Foreign Assets Control are a possible example, and ICE engaged in some troubling domain name seizures in the 2010s. But U.S.-based platforms often globally enforce U.S. laws anyway. That practice is likely backed by assumptions that U.S. courts might, if asked, say that our laws governing things like child abuse content or copyright really do require global compliance by companies that are based in this country. Free speech activists in other countries often complain that their own lawful speech has been suppressed under U.S. laws, particularly the Digital Millennium Copyright Act. Experienced European lawyers also often have complaints about the U.S.’s historically aggressive extraterritorial enforcement of non-speech laws, like antitrust laws.

Internet jurisdiction law is full of interesting and unresolved questions about national courts’ and lawmakers’ authority to regulate speech outside their borders. I’ll say more about this in the final section of the post. Few or none of these interesting edge-case questions are about the DSA, though. 

The EU Digital Services Act

The EU’s Digital Services Act does not currently require platforms to suppress speech outside of EU borders. If hypothetical future regulators tried to interpret the law differently, European courts would have reason to stop them. This section will go into a potentially tedious degree of detail about what the DSA actually says.

To be clear, however, it is possible – and has always been possible – that individual EU countries might try to mandate global content removals under their own laws. Examples have to date been rare, but certainly not unknown. Two such cases have gone to the EU’s highest court, the Court of Justice of the EU (CJEU). It ruled that EU-level laws – including the DSA’s predecessor, the eCommerce Directive – did not require global removal of platform content, but also did not prevent individual EU countries from issuing global takedown orders to the extent consistent with national and international law. 

The DSA is relatively grabby about asserting jurisdiction and saying that the DSA applies to companies based in other countries. But its compliance obligations for those companies are only for their services in the EU. The DSA’s language is fairly dense EU legalese, but the point is communicated relatively clearly in Recital 7: “rules should apply to providers of intermediary services irrespective of their place of establishment or their location, in so far as they offer services in the Union, as evidenced by a substantial connection to the Union.”

In the context of varying laws between EU Member States, DSA Recital 36 says that “[t]he territorial scope of such orders … should not exceed what is strictly necessary[.]” The Commission has explained that this means “Where a content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.” (That hedging “as a general rule” language leaves room for the Member State authority that the CJEU said already existed.)

The Articles of the DSA say the same thing at greater length.

One of the main concerns identified by recent DSA critics is one I have myself raised in the past: the worry that platforms will voluntarily change their global speech rules to accommodate powerful regulators in particular countries. As a conspicuous recent example of such political appeasement, Meta loudly announced a shift to speech rules preferred by President Trump in January 2025. 

When a platform globally implements speech rules, though, that is typically for economic reasons – not political ones. Platform operations are simpler, cheaper, and face lower risk of errors and technical failures if the company can use the same content moderation systems everywhere in the world. That is why major platforms’ speech rules under Terms of Service or Community Guidelines are almost always global. It is also why platforms may have economic reason to expand their own speech prohibitions, rather than pay lawyers to identify which content violates national laws around the world.


Historically, U.S. platforms’ rules often exported American speech norms and legal standards, to the dismay of many users in other countries. Today, Europe may be, as a practical matter, a greater net exporter of speech rules. But that’s not because Europe is requiring anything, or trying to use its laws to shape speech in other places. It’s just the unsurprising result of market forces and technical and economic decisions made within platforms. 

Like any law, the DSA could be interpreted more aggressively by future enforcers – particularly if they believed that content like incitement to violence outside of Europe was leading to greater risks inside the region. Both the letter of the DSA and Virkkunen’s strong recent statements should make that harder, though. And, just as in the U.S., such an interpretation would be contestable in court based on both free expression rights and standards of international comity.

Interesting and Hard Edge Case Questions

Every now and then, a regulator or national court asserts that it does have authority to demand global takedowns of speech. This is relatively rare, mostly because courts and regulators don’t want to be on the other side of that issue – they don’t want to have another country try to regulate speech inside their borders. The jurisdictional doctrine of “comity” is the formal legal tool used by courts acknowledging the sovereignty and divergent laws of other countries. If courts decide that comity concerns don’t tie their hands, though, things can get very interesting.

Some of the hardest cross-border issues in this regard are not about censorship, but surveillance. Important litigation, legislation, and international negotiations have focused on whether law enforcement can demand user data from platforms in other countries. Those legal issues are dissimilar in subtle but important ways from the ones about speech and censorship. (If that interests you, check out pages 26-31 in the proceedings of my Stanford conference on these topics.) 

Conclusion

The law of Internet speech jurisdiction gets snarlier, and more fundamental to Internet users’ rights, the deeper you dig. But the EU’s Digital Services Act is not the problem. American politicians should stop pretending it is.

Published in: Blog, censorship, Platform Liability