Can a New Broadcasting Law in Europe Make Internet Hosts Monitor Their Users?

The European Commission is taking major steps forward in its new Digital Single Market strategy. One important part, the Platform Liability consultation, pointedly asked whether Internet intermediaries should “do more” to weed out illegal or harmful content on their platforms – in other words, to proactively police the information posted by users. Last week the Commission delivered part of its answer. It proposed that an important set of platforms – video hosts like DailyMotion or YouTube – should monitor their users and proactively remove content. That's an alarming development. As courts around the world have recognized, laws like this threaten ordinary Internet users' rights by giving private intermediaries incentives to delete perfectly lawful online expression.

The good news is that the Commission won’t revise the EU’s core intermediary liability law, the eCommerce Directive. That Directive explicitly prohibits lawmakers from requiring protected platforms to generally monitor their users. The bad news is that the newly unveiled proposals erode the no-general-monitoring rule anyway – or at least try to. They do so through a mix of changes to other laws; far-reaching interpretations of the eCommerce Directive; and bewildering “co-regulation” measures that, paradoxically, seem designed to legally compel voluntary action by platforms.

One important vehicle for this change is a draft revision to the Audiovisual Media Services (AVMS) Directive. The draft AVMS Directive expands broadcast-style regulations to video hosting platforms. It gives them major, but ill-defined, new duties to prevent users from sharing videos that promote hate, violence, or material inappropriate for minors. Here's my take on the draft Directive and the problems with its monitoring provisions.

Can the AVMS Directive Make Hosts Monitor Without Amending the eCommerce Directive?

Video platforms generally maintain that the “hosting” provisions at Article 14 of the eCommerce Directive protect them from liability for content uploaded by users. Plenty of courts have agreed with them. If they are right, then under another eCommerce provision – Article 15 – the law can’t compel the hosts to generally monitor information posted by their users.

But there is a lot of disagreement about which platforms qualify for the eCommerce immunities. Some people – and some courts – think hosts are not protected if they do things like make hosted content searchable, or organize it into categories, or run ads. The theory is that this makes the platforms too “active,” and that only more passive hosts qualify for eCommerce Directive protection.

The draft AVMS Directive adopts an extremely broad version of this “active hosting” analysis. It says that any host that “organizes” videos posted by users loses important parts of its eCommerce immunities. On that reasoning, these platforms can be held responsible for finding and removing content.

This interpretation would effectively change the rules for Internet users and intermediaries without formally reopening the eCommerce Directive. If it is right, courts or national lawmakers could apply a similar strained interpretation of the eCommerce Directive to other Internet hosts – blogging platforms like OverBlog or WordPress, news sites with user comment forums, and more – as long as those hosts somehow “organize” posted content. And nothing in this line of reasoning limits the new rules to hateful or violent content. On this theory, lawmakers could compel the same hosts to monitor for trademark infringement or defamation without running afoul of the eCommerce Directive. 

Stripping eCommerce immunities from hosts that “organize” user content would have significant implications for the EU tech economy. It would make investing in normal, contemporary online services – the ones with search functionality, channels, related-content links, and other “organizing” features – a risky proposition in the EU. Only obsolete, bare-bones hosts could count on legal protections. That’s an odd outcome from the Digital Single Market process, which was supposed to boost technical innovation and investment in the EU.

Such a change would also be deeply troubling for Internet users’ rights. As the European Court of Human Rights has noted, when platforms have to police content, users’ rights suffer. Risk-averse private companies will remove not only genuinely unlawful content, but also controversial expression and anything else with a whiff of legal risk. Laws that require monitoring give private platforms every reason to take down users’ content, even if it is perfectly legal. In a society increasingly dependent on Internet communications, deleting users’ online expression effectively curtails democratic participation and opportunities to seek and impart information online.

Why Extend the AVMS Directive?

The first reason lawmakers want this change, obviously, is to fight dangerous content online. Given current concerns about terrorism, hate speech, and incitement to violence, few people question this goal. But giving tech companies a vague set of obligations and incentives to over-enforce isn’t the answer. It’s a recipe for deletion of legal and valuable expression – including counter-speech by moderate voices in communities affected by extremism.

The second reason is economic. The idea is that online video hosts and traditional broadcasters compete for the same audience, but that there isn’t a level playing field because broadcasters have to comply with AVMS Directive rules. The part about competing economically is true – if it weren’t for the Internet, more of us would be watching TV. There may be some real imbalances there, and grounds for other legal reforms to fix financial inequities. But as economies of expression and participation, TV and the Internet are wildly different. Open video hosting platforms on the Internet let anyone in the world “broadcast” to anyone else in the world. Can the AVMS Directive “level the playing field” in the other direction, by making traditional broadcasters give Chewbacca-Mask Mom instant access to millions of viewers, as video hosting platforms have done? Can it give any viewer – like Mom-Mask Chewbacca – an instant means of reply? Of course not. That’s not how broadcasting works. But just because lawmakers can’t make TV more democratic, that doesn’t mean they should make the Internet less so. Imposing new rules that will effectively harm Internet users’ ability to participate online is not a solution.

Who Gets Regulated under the AVMS Directive?

As the draft Directive explains,

An important share of the content stored on video-sharing platforms is not under the editorial responsibility of the video-sharing platform provider. However, those providers typically determine the organisation of . . . user-generated videos, including by automatic means or algorithms. Therefore, those providers should be required to take appropriate measures to protect minors . . . and protect all citizens from incitement to violence or hatred[.]

In light of the nature of the providers' involvement with the content stored on video-sharing platforms, those appropriate measures should relate to the organisation of the content and not to the content as such. The requirements in this regard as set out in this Directive should therefore apply without prejudice to Article 14 of [the eCommerce Directive.] (R. 28-29)

Building on this aggressive eCommerce Directive work-around, the draft expands broadcast regulations to any “video-sharing platform service” in which “the organisation of the stored content is determined by the provider of the service including by automatic means or algorithms, in particular by hosting, displaying, tagging and sequencing.” (Art. 1aa) EU countries are required to maintain lists of covered AVMS services. (Art. 2.5a) Since this new regulation seems to reach any host that lets users find videos using search algorithms or tagging, it should cover pretty much any video platform you’ve heard of or used.

What Do Video Hosts Have to Do?

The AVMS Directive says unequivocally, “Member States shall ensure by appropriate means that audiovisual media services provided by media service providers under their jurisdiction do not contain any incitement to violence or hatred.” (Art. 6, emphasis added.) That’s a tall order. Later on, Article 28 spells out “what constitutes an appropriate measure” to achieve this end. Depending on how it’s interpreted, the list could be relatively modest – or really alarming. Assuming that the “appropriate means” in Art. 6 and the “appropriate measures” in Art. 28 are the same thing, here is what video-sharing platforms must do:


·       Prohibit or restrict hateful, violent, and harmful-to-minors content in their Terms of Service (Art. 28a.2(a)). Adding this point to a TOS seems superfluous for content that is already illegal, since companies operating under the eCommerce Directive must take down illegal content when they find out about it anyway. However, if this requirement means hosts have to define and delete new kinds of “harmful” speech that are not already illegal, that is downright scary. (If that is what they mean, the drafters might want to check out India’s Shreya Singhal case, which struck down a similar “it’s not illegal, but intermediaries must prohibit it by TOS anyway” law.)

·       Let users “flag” and “rate” that content, and respond to user flagging. (Art. 28a.2(b), (d) and (f)). To the extent the AVMS Directive is talking about illegal content, this looks a lot like what platforms should do under the eCommerce Directive anyway.

·       Create “age verification systems” and “parental control systems” for content harmful to minors. (Art. 28a.2(c) and (e)). That one’s more troubling. If it means hosts must proactively identify content that can harm minors, then it really is a monitoring requirement, with all the problems discussed above. And age-verification systems can be a privacy nightmare, since they mean video hosts check users’ IDs at the door. For example, Korea has an age verification requirement, and as Professor KS Park points out, in practice it’s been a slippery slope toward requiring mobile users to register their SIM cards.

Other parts of the draft Directive, and many of the Commission’s public statements about the Platform Liability inquiry, emphasize “voluntary” measures. For example, Article 4.7 says that “Member States shall encourage co-regulation and self-regulation through codes of conduct. . . They shall provide for effective enforcement, including when appropriate effective and proportionate sanctions.” It is not clear exactly what these “voluntary” measures under threat of State sanction would look like, or how they would relate to the Directive’s other enumerated obligations.

Now What?

Importantly, this is a first draft. It’s from just one branch of government. What happens next is a lot of politics, arguments, amendments, and so forth. The future is not written, thank goodness.

But what would happen in practice if the AVMS Directive went into effect as drafted? Member States would pass their own implementing laws, presumably with a lot of variation. Foreign companies without ties to one EU country in particular could forum-shop and establish themselves in a country with more favorable laws, much as they have done under the Data Protection Directive. (Art. 28b)

At any step in the process of interpreting the Directive – in national legislation, or in arguments accepted by courts in litigation – ambiguities about whether video platforms have to monitor user content could get resolved. Authorities may well say that platforms have to monitor users, and that they are in trouble if they aren’t doing it already. Or they could decide that actually, this is a dangerous direction for the Internet. They might conclude that compelling private companies to police online speech is inconsistent with the State's obligation to protect Internet users' fundamental rights to seek and impart information online. 

Resolution of that sort is a long way off, though. In the meantime, fear of being held liable for failure to monitor will drive behavior by private actors, in ways that threaten both innovation and Internet users’ rights. Technical innovators and investors will factor in the risk of expensive future monitoring obligations as they decide where, and whether, to start new companies. Existing platforms may decide to play it safe and set up filtering or monitoring efforts, or turn open platforms into walled gardens. Those outcomes would be bad news for everyone.
