Submission Regarding the Draft IT Intermediaries Guidelines Proposed in India

This submission provides comments and recommendations on a specific provision of the draft document mentioned in the title, namely the proposed duty for intermediaries to:

“deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.”

This provision appears under paragraph 3, on “Due diligence to be observed by intermediary”.

This submission is based on the most relevant international standards currently in place regarding the role of private intermediaries in content moderation, particularly when automated tools are used (or their use is imposed by competent authorities).

In his Report to the Human Rights Council of 11 May 2016[1], the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression stated that, when regulating content in the digital environment (including through provisions that may affect the role and responsibility of intermediaries), States must avoid taking steps that “unnecessarily or disproportionately interfere with freedom of expression, whether through laws, policies, or extralegal [means]”. The Report also stresses that States must regulate the development and use of technical measures, products and services by private entities with the aim of advancing freedom of expression. Such regulation should also “provide the private sector, civil society, the technical community and academia meaningful opportunities for input and participation.” Finally, the Report emphasizes the need to avoid State pressure on private actors that may lead to restrictions on the right to freedom of expression.

In his Report to the Human Rights Council of 6 April 2018[2], the Special Rapporteur includes several references and recommendations related to the use of automated mechanisms for content moderation. The Report notes in particular that “(a)utomated content moderation, a function of the massive scale and scope of user-generated content, poses distinct risks of content actions that are inconsistent with human rights law” and that it is therefore particularly important to consider “the significant limitations of automation, such as difficulties with addressing context, widespread variation of language cues and meaning and linguistic and cultural particularities”. The Report also reiterates the well-established international legal principle that “States and intergovernmental organizations should refrain from establishing laws or arrangements that would require the ‘proactive’ monitoring or filtering of content, which is both inconsistent with the right to privacy and likely to amount to pre-publication censorship.”

In his most recent Report to the General Assembly, which focuses on the intersection between artificial intelligence (AI) and human rights, the Special Rapporteur acknowledges the role and presence of AI in the digital communications environment and highlights its potentially problematic nature. In particular, the Report underscores that “States (…) are pressing for efficient, speedy automated moderation across a range of separate challenges, (including) child sexual abuse and terrorist content” and warns that “(e)fforts to automate content moderation may come at a cost to human rights”. In line with previous Reports (as shown above), the Special Rapporteur insists that:

“Artificial intelligence-driven content moderation has several limitations, including the challenge of assessing context and taking into account widespread variation of language cues, meaning and linguistic and cultural particularities. Because artificial intelligence applications are often grounded in datasets that incorporate discriminatory assumptions, and under circumstances in which the cost of over-moderation is low, there is a high risk that such systems will default to the removal of online content or suspension of accounts that are not problematic and that content will be removed in accordance with biased or discriminatory concepts.”

The Report also issues an important warning in this regard:

“Artificial intelligence makes it difficult to scrutinize the logic behind content actions. Even when algorithmic content moderation is complemented by human review — an arrangement that large social media platforms argue is increasingly infeasible on the scale at which they operate — a tendency to defer to machine-made decisions (on the assumptions of objectivity noted above) impedes interrogation of content moderation outcomes, especially when the system’s technical design occludes that kind of transparency.”   

Two important recommendations derived from the Report are: 1) “Artificial intelligence-related regulation should also be developed through extensive public consultation involving engagement with civil society, human rights groups and representatives of marginalized or underrepresented end users”, and 2) “Individual users must have access to remedies for the adverse human rights impacts of artificial intelligence systems. Companies should put in place systems of human review and remedy to respond to the complaints of all users and appeals levied at artificial intelligence-driven systems in a timely manner.”

Beyond international standards, it should also be noted that well-founded studies systematically highlight the problems associated with the use of automated tools to moderate content online. Such problems relate to two main areas: the negative impact of these tools on freedom of expression and non-discrimination rights, and their lack of effectiveness in properly tackling undesired and/or illegal content. See, for example, the study by Natasha Duarte, Emma Llanso and Anna Loup at the Center for Democracy & Technology, “Mixed Messages? The Limits of Automated Social Media Content Analysis”[3].

On the basis of the standards and findings outlined above, we offer the following comments and recommendations:

a) Considering the impact on human rights (particularly the right to freedom of expression) and the problems of adequacy and effectiveness (particularly when applied to content moderation), the law shall not mandate the use of automated tools or similar mechanisms for proactively identifying and removing or disabling public access to unlawful information or content. The provision included in the draft is also problematic vis-à-vis international standards inasmuch as it establishes a general content-monitoring obligation for platforms.

b) It is recommended that the law include safeguards regarding the voluntary use of automated tools by intermediaries when enforcing their own terms of service and community guidelines, in line with international standards. Such safeguards may include proper human review and remedy mechanisms.

c) Efforts to tackle unlawful content must be consistent with international standards. In particular, provisions must be clearly established by law, pursue a legitimate aim and avoid any excessive or disproportionate restriction on the fundamental right to freedom of expression. Take-down mechanisms must incorporate adequate review mechanisms for intermediaries and content providers, and must avoid liability regimes that may lead to over-removal of legitimate speech.
