By Sachin Dhawan
The government has held two public consultations on the Digital India Act (DIA) this year, most recently in May. In these consultations, it considered doing away with safe harbour protection, which shields online platforms from liability for the third-party, user-generated content they host.
The government argued that these platforms have evolved considerably since the early days of the internet. They are no longer passive conduits or ‘dumb pipes’ that simply transmit content from senders to receivers. On the contrary, they are actively involved in curating the content that users see. So, the government asserted, they probably no longer need immunity. The government also said that removing safe harbour protection was in the interest of user safety. In addition, it highlighted the dangers of anonymous speech hosted by platforms with safe harbour protection.
Concerns about the safety of users online are valid. However, as this article will show, doing away with safe harbour protection will be detrimental. It will not advance user safety; moreover, it will have a negative impact on free speech. Therefore, instead of doing away with safe harbour protection, I recommend retaining it in the Digital India Act.
The Problem: Doing Away with Safe Harbour Will Do More Harm Than Good
Safe harbour protection gives online platforms breathing space. If they are hosting unlawful content, they are not automatically on the hook for it. They get a chance to remove such content upon a government/court notification, and only if they fail to do so are they subject to liability.
Removal of this protection will have the following deleterious effects:
1. Harm to Free Speech
Without safe harbour protection, online platforms will over remove legitimate content, thereby silencing the voices of innocent users. This is because they lack the ability to assess the legality of speech – as private entities, online platforms are simply not qualified to do the job of a court or government agency. This is especially so when they have to do so at scale. That’s why the Supreme Court in the Shreya Singhal case clearly said that platforms like Facebook and Google should not be in the position of having to judge the legality of millions of takedown requests (TDRs) they will receive in the absence of safe harbour protection.
Moreover, platforms simply don’t have the time or the inclination to engage in a detailed legal examination of a plethora of takedown requests. A landmark study conducted by the Centre for Internet and Society (CIS) confirms this lack of bandwidth. It clearly showed that platforms will mechanically comply with user takedown requests without evaluating the validity of the claims contained therein, so as to avoid the possibility of going to court. It makes business sense for platforms to simply take down content rather than incur the various costs associated with litigating matters in court.
During the consultations, the government stated that if platforms don’t want to be the arbiter of truth, they can go to court to resolve disputes with users. But the above discussion shows that platforms – being profit-minded businesses – will jettison nuanced evaluation of takedown requests, simply remove content en masse and avoid the rigors of litigation. This will harm not only the speech rights of content originators (those who upload content) but also the rights of millions of others who will be denied access to the removed pieces of content. This is because their right to receive speech, which is a core component of the right to free speech, will be affected.
This is especially so because originators, who have an incentive to fight back against takedown requests, will likely be kept in the dark when platforms receive such requests. They are currently not notified when platforms receive government/court TDRs. So if safe harbour is removed, they will presumably not be informed when platforms receive takedown requests from individual complainants. Kept out of the loop, they won’t be able to fight back.
Moreover, there is no penalty on complainants for sending frivolous takedown requests to platforms. So the incentives all point towards complainants raising takedown requests that are indiscriminately acquiesced to by platforms while the originator and the community of users denied access to that speech are harmed.
Furthermore, there is considerable evidence to suggest that trolls will – as they have in the past – take advantage of the private takedown option to send a barrage of frivolous TDRs to platforms concerning content uploaded by vulnerable groups and communities online that they dislike. Thus, not only will frivolous requests be sent to platforms, but there will likely be a spike in frivolous requests targeting vulnerable groups, most of which platforms will comply with. This means that the very people for whom the internet and online platforms are a lifeline, because they offer access to communication channels denied by mainstream media, will be disproportionately affected and denied access.
This abuse of the takedown process has already occurred in the field of copyright law, wherein private takedown notices can be sent to platforms under the existing safe harbour regime. Many such notices are sent, and a great number of them completely ignore important fair dealing considerations that make the targeted content legal. For example, in the landmark MySpace v Super Cassettes case, the music company T-Series sent a barrage of takedown notices to MySpace, covering its entire music catalogue. In its judgment, the Delhi High Court noted that such a brazen attempt by T-Series to enforce its copyright completely ignored and trampled over the rights of content originators who may well have valid rights in the content uploaded.
2. Harm to User Safety
Removal of safe harbour protection will undermine user safety online. This is because content that seeks to uphold user safety – such as anti-hate speech – will often be removed as platforms adopt an overcautious approach to content upon loss of safe harbour protection. This occurred in the United States when safe harbour protection was diluted in 2018 through the FOSTA-SESTA legislation, which made platforms liable for sex trafficking-related content. Platforms overreacted and removed not only legal content that had nothing to do with sex trafficking but also content that would help the victims of sex trafficking.
Also, content that is valid and that seeks to fight hate speech etc. will be disproportionately removed by algorithms. As platforms lose safe harbour protection they will rely increasingly on filtering tools to remove illegal content. And a growing body of evidence shows that these filtering tools are biased against minority and marginalized communities when they, for example, raise their voices against hate speech and discrimination.
The Solution: Retain Safe Harbour Protection in the DIA
Safe harbour protection has worked so far to protect free speech online and even enable it to thrive. Because platforms are not directly liable for third party content, they can empower millions to speak without the fear of being drowned in endless rounds of litigation. Safe harbour has thus enabled the democratization of speech.
Moreover, anonymity online has also democratized speech by enabling the vulnerable and marginalized to speak truth to power without fear of persecution. The #MeToo movement is a good example of this. This is not to say that anonymity is an unalloyed good, but that it has positive features which should be preserved.
For instance, in the Subodh Gupta case, the Delhi High Court allowed the alleged victims of sexual harassment to maintain their anonymity during the course of proceedings. The Court recognized that anonymity is often a bulwark of support for the oppressed. Moreover, without safe harbour protection, Instagram, the platform which hosted the anonymous speech of the individuals claiming to have been sexually harassed by Gupta, would likely have taken the content down prematurely to escape litigation hassles.
In cases of crime, anonymity can be unmasked, as happened with the individual who threatened cricketer Virat Kohli’s daughter. Thus, a positive balance can be struck when it comes to anonymity online, without compromising safe harbour protection.
Conclusion
The government raised important issues at the DIA consultations. For instance, it discussed the need to rein in the dominance of Big Tech platforms. It also spoke of the need to address issues such as blockchain, which could not have been foreseen in 2000 when the IT Act came into being.
However, as demonstrated above, the criticism of safe harbour protection for online platforms is unfounded. Doing away with such protection will harm the very digital nagriks the government seeks to protect. It will silence the voices of innocent speakers while amplifying the voices of disingenuous trolls who want to silence the vulnerable. Consequently, instead of doing away with it, the DIA should emphatically retain safe harbour protection.
Sachin Dhawan is a Programme Manager at the Centre for Communication Governance, National Law University Delhi.