Bombay High Court’s split verdict on Govt’s Fact-Check Amendment: A Brief Analysis of the Divide

Can content flagging by the government be seen as an indirect takedown request? “In a country like India with broad laws criminalising large swathes of speech…[the expensive and time-consuming litigation process] creates strong incentives for intermediaries to remove the content rather than risk losing safe harbour,” the author argues.

By Vasudev Devadasan

On 31 January 2024, a Division Bench (two Judges) of the Bombay High Court delivered a split verdict (here and here) on the constitutionality of the 2023 amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). Rule 3(1)(b)(v) of the IT Rules required intermediaries (such as online platforms) to make reasonable efforts not to host content that is patently false, or that concerns “any business of the Central Government” and has been flagged by the Union Government’s ‘Fact Checking Unit’ (“FCU”). If an intermediary violated the Rule, it would lose the statutory immunity from liability (or ‘safe harbour’) provided to it under Section 79 of the Information Technology Act, 2000 (“IT Act”). Justice Neela Gokhale upheld Rule 3(1)(b)(v) while Justice Gautam Patel struck it down.

This two-part series critically analyses the areas of disagreement between the two judgements. This first post examines two foundational differences about the scope and content of Rule 3(1)(b)(v). First, the judges disagreed over the consequence of violating the Rule, and thus over whether the Rule even restricted users’ speech. Second, the judges disagreed over whether Rule 3(1)(b)(v) prohibited only content concerning the Union Government that was shared with the knowledge that it was false, or any content about the Government. These disagreements ultimately shape the free speech analysis of each judge, which will be discussed in the second post.

Safe harbour and free speech

Justice Gokhale found that Rule 3(1)(b)(v) did not actually require an intermediary to remove content flagged by the Union Government’s FCU. According to her, once the FCU flags content, the intermediary acts according to its “existing policy” and can either remove the content or display a disclaimer or warning to users that the content is false (p. 16-17 Gokhale J.). Crucially, she rejected the Petitioners’ concern, ruling that the mere fact that an intermediary loses safe harbour if it fails to remove FCU flagged content would not automatically cause the intermediary to remove the content (p. 19 Gokhale J.). She noted that the loss of safe harbour would only expose an intermediary to liability for hosting unlawful speech, and the intermediary could always defend itself against such liability in court (p. 29 Gokhale J.). This understanding of the consequences of losing safe harbour is central to Justice Gokhale’s opinion, as it allows her to characterise Rule 3(1)(b)(v) as a relatively innocuous measure that does not ultimately result in the removal of content or have a significant bearing on free speech.

With respect, Justice Gokhale’s approach fails to grasp the incentives of intermediaries or the role of safe harbour in protecting free speech. Intermediaries host millions of pieces of content every day. This content is not their own; it belongs to their users. Further, intermediaries make negligible amounts of money from any single piece of content. The result of these realities is that if an intermediary risks being held liable for a piece of content, its easiest course is to remove it and avoid the time and money spent litigating the legality of such content. This is not conjecture: it has been empirically demonstrated by studies that sent legal notices to intermediaries and recorded whether they removed the content or contested the notices. Intermediaries’ unwillingness to defend their users’ speech in court against the government is perhaps best demonstrated by the fact that not a single intermediary challenged Rule 3(1)(b)(v).

Precisely to prevent intermediaries from removing content at the drop of a hat, even when it may be entirely lawful, Parliament through Section 79 of the IT Act granted them conditional immunity for hosting user-generated content. The Supreme Court in Shreya Singhal v. Union of India (“Shreya Singhal”) also recognised that safe harbour was essential to protect free speech on the internet and ruled that an intermediary will only lose safe harbour if a court or government agency requires it to remove content. Perhaps most crucially, safe harbour immunity protects the intermediary from liability even if the speech is illegal. In a country like India with broad laws criminalising large swathes of speech, losing safe harbour can not only embroil an intermediary in expensive and time-consuming litigation, but a single adverse verdict or conviction can also cripple it. This creates strong incentives for intermediaries to remove the content rather than risk losing safe harbour. Thus, requiring intermediaries to remove content under threat of losing safe harbour is in reality no different from asking them to take it down.

Justice Patel’s opinion expressly acknowledges this. At paragraph 81 he notes:

Between safe harbour and user’s rights regarding content, the intermediary faces a Hobson’s choice; and no intermediary is quixotic enough to take up cudgels for free speech. Compromising one particular chunk of content is a small price to pay; better the user content is thrown under the bus than having the bus run over the entire business. The safe harbour provision is therefore not just intermediary-level insulation from liability. It is an explicit recognition of a free speech right. (emphasis supplied)”

Justice Patel’s opinion recognises that once content is flagged by the Government’s FCU, the intermediary’s most obvious and indeed rational course of action is to remove that content. Unlike Justice Gokhale, Justice Patel also notes that once content is flagged by the FCU, there is no room for the intermediary to apply its own mind or its policies; it is the FCU that is the arbiter of the falsehood of the content, and the intermediary is merely required to remove it under threat of losing safe harbour (p. 73 Patel J.). Recognising this key aspect allows Justice Patel’s opinion to accurately capture the threat to free speech posed by Rule 3(1)(b)(v): namely, that requiring an intermediary to remove content flagged by the government, under threat of stripping the intermediary of its safe harbour, amounts to an indirect takedown request by the government. Viewed in this manner, Rule 3(1)(b)(v) poses a direct risk to free speech, which leads Justice Patel to engage in detail with the doctrines of overbreadth, vagueness, proportionality, and the permissible grounds for restricting speech (as discussed in Part II of this blog series).

The knowledge requirement

Rule 3(1)(b)(v) required intermediaries to make reasonable efforts not to host content that:

deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government.”

The judges disagreed over whether this text covered one or two classes of content. In Justice Patel’s view, the text outlined two different sets of content: (i) where the sender knowingly and intentionally shared information which was false or misleading; and (ii) content concerning the Union Government flagged by the FCU. Crucially, Justice Patel held that because the two sets of content were separated by the word “or”, the requirement that the content was shared with the knowledge that it was false did not apply to FCU flagged content (p. 59 Patel J.). In other words, the intermediary had to remove content that was flagged by the FCU even if it was not shared with an intention to mislead. The FCU was the sole arbiter of whether the content ought to stay up or not irrespective of user intention (p. 69 Patel J.).

Justice Gokhale, however, ruled that the requirement that the content be shared with knowledge and intent to mislead applied even to FCU flagged content (p. 40 Gokhale J.). Justice Gokhale offers two justifications for this position, both of which, with respect, are deeply flawed. First, Justice Gokhale simply reproduces the Rule as interpreted by her (i.e., “knowingly and intentionally communicates information in respect of any business of the Central Government, is identified as fake or false…”). It is trite law that a judge cannot re-write a statute in the guise of interpreting it. Here, however, there is no guise: the judge has simply re-written the Rule. This cannot be considered reasoning, let alone sound reasoning.

Second, Justice Gokhale states that an intermediary is granted safe harbour because of its passive role, but once an intermediary has knowledge and intent, it loses safe harbour. Thus, a knowledge and intention requirement must be read into the Rule (p. 40 Gokhale J.). This conflates two different knowledge requirements: the knowledge of the intermediary and the knowledge of the user. A perusal of Rule 3(1)(b)(v) demonstrates that it is concerned with the sender’s knowledge. The Rule restricts content that is knowingly and intentionally communicated. It is users who communicate information, and therefore Rule 3(1)(b)(v) targets situations where a user knowingly shares misinformation. Completely independent and unrelated to this is the question of whether the intermediary has knowledge of unlawful content on its network. Prior to the Supreme Court’s decision in Shreya Singhal, an intermediary would lose safe harbour if it had knowledge of unlawful content on its network but failed to remove it (post Shreya Singhal, the intermediary does not lose safe harbour until it receives a court order requiring takedown). It is submitted that Justice Gokhale’s reasoning, that because an intermediary (used to) lose safe harbour upon gaining knowledge of unlawful content on its network, Rule 3(1)(b)(v) should be interpreted to require that even FCU flagged content be shared with the user’s knowledge of its falsity, conflates these two entirely unconnected knowledge requirements and is incorrect.

One final observation: reading in a knowledge requirement may seem to diminish the risk to free speech because it raises the threshold for content that can be flagged by the FCU. However, three points need to be noted here. First, because this interpretation is at odds with the text of the Rule itself, there is no guarantee that the government officials at the FCU will interpret the Rule as Justice Gokhale does. Second, verifying the intention of internet users in a time-bound manner is nigh impossible and opens the door to error and abuse. Third, there are no procedural safeguards to ensure that the FCU in fact adopts this interpretation.

Conclusion

These foundational differences between the two Judges cause them to characterise the disputed Rule in diametrically opposed ways. In Justice Gokhale’s opinion, Rule 3(1)(b)(v) does not require or cause the removal of content. Further, by confining the Rule to cases where users intentionally share false information, she gives it a narrow field of operation. In Justice Patel’s opinion, by contrast, Rule 3(1)(b)(v) represents an indirect restriction on speech concerning the Union Government, enforced by threatening to strip intermediaries of safe harbour. Further, because Justice Patel finds that the Rule could be applied to remove any information concerning the Union Government regardless of why it was shared, it raises the spectre of indirect censorship. Understandably, these conclusions on the scope and effect of Rule 3(1)(b)(v) substantially influence each judge’s analysis of the free speech risks the Rule raises. Both Judges’ discussion of overbreadth, vagueness, and the permissibility of restricting false speech under Article 19(2) will be taken up in the subsequent blog post.

*Disclaimer: One of this Blog’s editors (Gautam Bhatia) was a counsel in this case. He was not involved with the publication of this post.

Note: This article has been cross-posted with permission from the author. The post, originally published on Indian Constitutional Law and Philosophy blog, can be found here.

