
Algorithmic Enclosure? Reclaiming a Human-Centred Governance Model for Online Creativity

By Giancarlo Frosio

This chapter argues that the real risk of 'algorithmic enclosure' arises not from using automation, but from using it without rights-driven, human-centred governance. Surveying the shift from notice-and-takedown to always-on filtering, it maps how copyright enforcement at scale increasingly relies on hash-matching and AI classifiers, while fundamental-rights safeguards lag. It then frames the core policy trade-off among rightsholders, platforms, and users—protecting IP at scale without sacrificing freedom of expression, privacy, or due process—and interrogates two design failures: over-blocking of context-dependent lawful uses, and opacity with weak remedies.
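
To make the over-blocking problem concrete, the sketch below shows threshold-based fingerprint matching, the basic logic underlying hash-matching enforcement. The 64-bit fingerprints, the `hamming_distance` helper, and the threshold value are illustrative assumptions rather than any platform's actual system; the point is that the threshold itself encodes a trade-off between missing re-encoded copies and sweeping in transformative, context-dependent uses.

```python
# Minimal sketch of threshold-based fingerprint matching (hypothetical
# values throughout; not a description of any deployed filter).

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def is_match(upload_fp: int, reference_fp: int, threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint is within `threshold` bits of a
    registered reference work. A lower threshold misses re-encoded copies
    (false negatives); a higher one sweeps in quotation, parody, or remix
    (false positives, i.e. over-blocking)."""
    return hamming_distance(upload_fp, reference_fp) <= threshold

# Example: a near-duplicate (2 bits differ) matches; a remix (20 bits) does not.
reference = 0xDEADBEEFCAFEF00D
near_copy = reference ^ 0b11             # flips 2 bits
remix     = reference ^ ((1 << 20) - 1)  # flips 20 bits
assert is_match(near_copy, reference)
assert not is_match(remix, reference)
```

No single threshold can separate infringement from lawful reuse, which is why the chapter treats over-blocking as a design failure rather than a tuning problem.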

Empirical evidence on chilling effects, creator adaptation, platform incentives and competition, and cultural diversity grounds the analysis. Drawing on Article 17 of the EU Directive on Copyright in the Digital Single Market (DSM Directive), the Digital Services Act (DSA), and the UK Online Safety Act (OSA), the chapter argues for a practical social covenant: limit preventive automation to manifest infringements; embed human oversight and swift, meaningful appeals; require explanations and public accuracy metrics; audit systems and deter rights-holder abuse; empower users through literacy and clear guidance; and coordinate internationally to harmonise transparency and oversight.

Economically, the chapter couples shared responsibility with a licensing-first fallback in brittle domains—pre-clear or monetise close calls, reserve blocking for manifest infringements. In sum, govern automated tools so they scale routine tasks while preserving human judgment and the cultural 'breathing space' creativity needs. 
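
As a sketch only, the licensing-first fallback can be read as a small decision rule: reserve blocking for manifest matches, monetise or license close calls, and route context-dependent uses to human review. The `Action` names, `MatchSignal` fields, and confidence cut-offs below are hypothetical illustrations, assumed here to make the triage legible; they are not drawn from the chapter itself.

```python
# Hedged sketch of a "licensing-first" triage: block only manifest
# infringements, monetise close calls, and preserve human judgment for
# context-dependent uses. All names and thresholds are illustrative.

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()         # no credible match
    MONETISE = auto()      # close call: license or share revenue, keep content up
    BLOCK = auto()         # manifest infringement only
    HUMAN_REVIEW = auto()  # candidate lawful use (quotation, parody, remix)

@dataclass
class MatchSignal:
    confidence: float        # matcher/classifier score in [0, 1]
    context_dependent: bool  # e.g. short excerpt, commentary, remix cues

def triage(signal: MatchSignal) -> Action:
    if signal.context_dependent:
        return Action.HUMAN_REVIEW   # never auto-block potential lawful uses
    if signal.confidence >= 0.98:
        return Action.BLOCK          # preventive blocking for manifest cases only
    if signal.confidence >= 0.70:
        return Action.MONETISE       # licensing-first fallback for close calls
    return Action.ALLOW

print(triage(MatchSignal(0.99, False)))  # Action.BLOCK
print(triage(MatchSignal(0.85, False)))  # Action.MONETISE
print(triage(MatchSignal(0.85, True)))   # Action.HUMAN_REVIEW
```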

Published in: Publication, Digital Platforms