Prevention of terrorism is undeniably an important and legitimate aim in many countries of the world. In recent years, the European Union (EU) institutions, and the European Commission (EC) in particular, have shown growing concern about the potential use of online intermediary platforms for the dissemination of illegal content, particularly content of a terrorist nature, based on the assumption that such content can substantially increase the risk of new terrorist attacks being committed on European soil.
Platforms have already assumed what can be judged a relevant degree of responsibility in detecting and removing content that can be labeled as “terrorist” according to their own terms and conditions. However, policymakers and legislators in Europe seem eager to take a further step by transforming these practices into actual legal obligations.
There is no common understanding across EU Member States about what online expression actually violates the law, and what expression is protected as a matter of fundamental and human rights, particularly when it comes to drawing boundaries on grounds of national security. Within this complex reality, both European and national authorities have urged swift and almost automatic detection and removal of content related to the commission of acts of terrorism.
The political positions and non-binding documents produced so far have progressively incorporated the notion of “responsibility” for intermediaries, but have not imposed conventional legal liability or obligations. A new proposal for a Regulation on preventing the dissemination of terrorist content online changes that. The document is still at a very early stage, and many changes will probably be introduced. However, the current version shows the willingness of EU institutions to establish obligations that go beyond the promotion of voluntary conduct and cooperation. The proposal also signals the intention to derogate from the regime established in the e-Commerce Directive, which prohibits imposing general monitoring obligations on hosting platforms. If finally adopted, this would be a major alteration of a legal principle that had remained accepted and untouched for almost two decades, and that formed part of a series of rules publicly presented and regarded as the frontispiece of EU regulation of online platforms vis-à-vis liability issues.
The proposed Regulation introduces a combination of obligations, including the amendment of platform terms and conditions and the removal of content within one hour of an order from competent State authorities. The Regulation also calls on platforms to adopt what are described as proactive measures to eliminate such content. This puts intermediary platforms in a new position, where content monitoring becomes a de facto part of their role. National “competent authorities” (including law enforcement) “or the relevant Union body” (which basically refers to Europol) may also send referrals to hosting service providers for their expeditious “voluntary consideration” under their terms and conditions. These referral powers may become a mechanism to delegate to private entities the responsibility to decide and enforce measures that would otherwise need to be adopted by public bodies, with proper opportunity for judicial review.
European legislators seem to be moving towards a progressive delegation of true law enforcement powers to private companies, depriving Internet users, and hosting service providers themselves, of the legal and procedural safeguards that have until now applied to this kind of decision. Moreover, intermediary platforms may be progressively pushed into making cautious, overbroad removal decisions, as this will be the only way to avoid the high and somewhat vaguely defined penalties that may be imposed on them. Obviously, this is not good news for freedom of expression and due process in Europe.