The latest of India’s platform governance reforms includes a proposal to outlaw hosting information online that’s been ‘fact-checked’ to be ‘fake’ by Indian government agencies. Experts we spoke to raised concerns that this attempt to combat fake information online rides roughshod over free speech rights, and basic principles of transparent and accountable governance.
“To empower any one authority in a democracy to decide what is ‘fake’ is hugely problematic,” warned Namrata Maheshwari, Asia Pacific Policy Counsel at Access Now, speaking to MediaNama.
“The added implication is that the deciding authority is, under law, being made the singular arbiter of truth on the Internet. There is no way to ensure objectivity or accountability in such a system, which is an absolute must. Because, at the other end of this decision-making, there is somebody’s freedom of expression at stake. At an individual and societal level, this is about our ability to freely express ourselves, exchange information, and have a healthy debate about disagreements and preserve democracy.”
A refresher: Released last Tuesday, the proposed amendment to the 2021 IT Rules outlaws hosting content flagged by the Press Information Bureau (PIB)—or any other authorised Indian government body—as “fake” or “false”. What that will likely mean in practice: once the bell is sounded, all platforms in India will have to scramble to take down that ‘misleading’ information.
What’s the PIB, again?: It’s the Indian government’s press arm. In 2019, the PIB set up a fact-checking unit to counter ‘misinformation’ circulating online about the government.
Why does this matter?: The PIB has at times fact-checked news articles critical of the government as ‘fake’. What’s more, the proposed amendment is only a paragraph long. It contains no provisions on fact-checking best practices, no definition of ‘fake’ information, and no mechanism to ensure unbiased fact-checking. This trio of omissions has led to unsurprising concerns over the amendment’s alarming capacity to stifle free speech and expression online and muzzle digital media. These changes might also be brought through rules that are already facing 17 constitutional challenges. All in all, not very democratic.
Racing against time: The government also introduced the amendment rather suddenly on January 17th—bunching it together with a separate set of proposed amendments to the IT Rules to regulate online gaming. January 25th was initially the last day to submit public feedback on where the amendment works, and where it must be reformed.
“I’m unaware of any public consultation that happened before this amendment was proposed [last week],” added Maheshwari. “To do this in a last-minute way and to provide just a week for consultation is far from ideal. To enable meaningful feedback, if that is the intention, more time has to be given to engage.”
The government seems to have responded to this popular concern—it recently announced that it will hold a separate consultation on the fact-checking proposal in February before it is implemented. A government official speaking to The Hindu added that the PIB will continue to only fact-check information related to the Indian government if the proposal goes through.
The deadline to submit feedback on the PIB proposal has been extended to February 20th. Submit yours here.
Despite these clarifications, many concerns around the proposal remain—such as, what does the government believe is ‘true’ in the first place?
Whose “truth” is it anyway? ‘Fake information’ has no benchmarks
To demarcate something as fake indicates that there’s a benchmark for what truth is. Sometimes, that benchmark is arrived at through hours of court proceedings or investigations. In this case, it may be unilaterally decided by the government. As Maheshwari already noted, this can seriously throttle free speech. One reason why: “fake” content is an omnibus term that applies to all sorts of information.
“Fake news could constitute dissemination of the wrong information without realising, like forwarding a WhatsApp message,” said Vasudev Devadasan, Project Officer at the Centre for Communication Governance, speaking to MediaNama. “It could be disinformation, where there is an active intent to deceive. It could be satire or parody too, where the person knows they’re saying something false, but is using that speech to make a point about public interest. So ‘fake’ or ‘false’ information really encompasses both legal and illegal speech. And, unlike other situations where law determines what’s true or false pursuant to a fairly rigorous procedure, here, there’s a unilateral declaration [of ‘fakeness’] made by the government [according to an unknown procedure].”
These concerns about paternalistic content takedowns aren’t unfounded. Way back in 2018, Tanul Thakur’s satirical website ‘www.dowrycalculator.com’ was blocked by the Indian government. Thakur said he was trying to shed light on the evils of India’s dowry system. The Indian government still refused to unblock it.
“If the PIB flags satire as ‘fake’ it would be within the ambit of a strict reading of the proposed amendment, but it would be a restriction on constitutionally protected speech too,” added Devadasan.
Many platforms where ‘misinformation’ circulates are also end-to-end encrypted, said a source speaking to MediaNama. Beyond its free speech threats, the amendment may also be used to enable access to encrypted content, compromising privacy too. As an aside, this happened in the case of copyrighted content allegedly circulating illegally on the encrypted messaging app Telegram. Recent orders from the Delhi High Court not only asked for the infringing material to be taken down—they also directed Telegram to disclose the user identities of the people allegedly circulating it.
For Tejasi Panjiar, Associate Policy Counsel at the Internet Freedom Foundation, even trying to define a benchmark for fake or false news is premature.
“What we should be trying to understand is what are the related harms of these larger issues,” said Panjiar, in conversation with MediaNama. “There needs to be more research on the larger social context of misinformation and on different ways of tackling the situation [aside from blocking misinformation]. If we go with the current lens [of simply framing a misinformation policy based on definitions of truth and fakeness] then ultimately, we end up criminalising information. That’s the wrong approach to begin with.”
Ultimately, it’s in the government’s court to ensure that its fact-checking processes incorporate rigorous safeguards and don’t step on the toes of free speech rights. MediaNama has reached out to and filed an RTI with PIB, to learn more about how it fact-checks news and insulates itself from bias. This piece will be updated if and when it responds.
An all too familiar refrain: Introducing backdoor provisions through the IT Rules
The PIB’s newfound powers have been introduced through the 2021 IT Rules meant to govern platforms. Those rules were framed under India’s Information Technology Act, 2000 (IT Act).
Rules are intended to simply help the government implement a statute. They’re not supposed to introduce new provisions that alter the parent law’s original purpose. However, some argue that that’s what the IT Rules 2021 have done from the get-go.
For example, the IT Rules 2021 introduced a traceability mandate that could compromise end-to-end encryption and privacy. They also proposed a tiered grievance redressal mechanism to resolve content moderation-related complaints. “These are all provisions that belong in a law that has undergone public and parliamentary scrutiny,” argued Maheshwari. “Executive rulemaking doesn’t have the same safeguards and checks and balances that laws do. In terms of overstepping the ambit of the government’s rule-making power, this amendment is consistent with many other provisions that exist or are proposed to be included in the IT Rules.”
These aren’t new concerns. The same arguments of the government introducing backdoor provisions through the IT Rules 2021 have been flagged by experts ever since amendments to them were first proposed last July. Even the online gaming amendments to the rules—under which the PIB amendment was snuck in—were criticised on the same grounds. “Backhand regulation of online gaming platforms via an amendment to existing rules (part of which have already been stayed by courts) instead of a separate legislation is inappropriate and may be liable to be struck down by courts,” warned gaming and technology lawyer Jay Sayta earlier this month.
Devadasan disagreed slightly. “There are certainly provisions in the IT Rules 2021 that are beyond the rulemaking powers of the government,” he says. “In this case, the rules are empowering the PIB to notify platforms of fake news, which could reasonably be within the ambit of the kinds of due diligence platforms need to undertake to retain safe harbour. Under Rule 3(1)(d), the Rules ask platforms to take down content if ordered by a court or the government. This is somewhat analogous to that—but nevertheless, it still raises significant free speech concerns.”
Remember: safe harbour protects platforms from being held liable for hosting unlawful third-party content, provided they follow certain due diligence requirements. As Vox notes, safe harbour laws “fundamentally shaped the internet’s development […] It’s unlikely that social media sites would be financially viable, for example, if their owners could be sued every time a user posts a defamatory claim.” Combating misinformation could simply be slotted in as one of the many ‘due diligence’ requirements listed in the Rules too.
Are multiple blocking laws a crowd? Section 69A, the IT Rules, and the PIB
If blocking powers already exist within India’s IT laws, what purpose does granting the PIB additional (effectively censorial) powers serve? For example, the Indian government has given itself powers to block public access to content online on various grounds, like national security and public order, under Section 69A of the IT Act. Content can also be blocked under the 2021 IT Rules.
“Section 69A and the IT blocking rules are by no means perfect,” said Devadasan. “But even the limited substantive and procedural safeguards provided in them are being effectively circumvented by the amendment.”
For example, under a Section 69A blocking order, the government is supposed to notify the user who uploaded the content about the blocking. Only a designated officer can order content to be taken down. There’s also a committee of public servants who are supposed to evaluate each order and decide whether it falls under Section 69A’s ambit or not.
Now, both the safeguards and the larger Section 69A are internally flawed, as MediaNama has previously reported. RTIs and court cases also reveal that the government may not always enforce these safeguards the way it’s supposed to. But, at the very least, they exist. They offer a promise of informed governance—a position corroborated by the Supreme Court when it upheld the provision’s constitutionality in 2015 in Shreya Singhal.
But the proposed amendment oversimplifies the inherently complex content-blocking process, and describes no procedural safeguards at all.
“Here, the notification is directly issued by the PIB or the authorised government department. The intermediary then has to remove the content. I would say this is a significant expansion of Section 69A with fewer substantive and procedural safeguards,” warned Devadasan.
The Internet Freedom Foundation adds that the proposal might fall foul of the Shreya Singhal verdict. “The Proposed Amendment will bypass all these [procedural Section 69A requirements], and introduce a new third route for passing blocking orders,” said the digital rights advocacy group.
“Ultimately, the amendment tightens the government’s grip over online content,” said Maheshwari. “It also has no safeguards. It’s not clear as to what additional purposes the amendment is meant to serve, but it will amplify the existing issues with content governance. It will even create a few more at a foundational level, as far as individual freedoms and democracy are concerned.”
Will platforms push back?
There’s a clear possibility that some of these ‘fact-checking’ orders could be dodgy. But, what if a platform doesn’t think the fake news order is kosher—can and will they challenge it?
“This is one of the procedural safeguards [mentioned earlier] that’s being circumvented,” noted Devadasan. “Based on what we know from news reporting, there are meetings on Section 69A orders where government officials meet platform representatives. There’s some discussion and potential pushback from platforms. It’s not ideal because no one on the outside has any visibility on these meetings—but at least there’s some contact with the government. Even that doesn’t seem to be provided for here. I don’t see how platforms could legally contest this, short of trying the writ petition route.”
Twitter is a good example here—the microblogging platform sued the Indian government over 39 allegedly overreaching Section 69A orders last July. The challenge was somewhat of a lone wolf, though—few companies typically challenge government diktat in Indian courts. “Perhaps they feel they have never been asked to remove content that was lawful,” Centre for Internet and Society Co-Founder Pranesh Prakash drily suggested to MediaNama back in July. “Perhaps they feel they have never been asked to unlawfully reveal user data”.
Devadasan spelt out why this might be the case. “If a platform ignores a PIB notification, then the consequence is that you lose safe harbour,” he explains. “Once that happens, the risk is that there are many statutes that you could potentially be prosecuted under for that speech. It could be obscenity, decency, hate speech, or many others. If a platform is willing to take that risk—that the government won’t file prosecution against them, that they won’t be sued, or that they’ll eventually win a suit ten years down the line—then they may be willing to roll the dice and ignore the PIB order.”
What’s the way forward?
Privacy was deliberated for decades before 2017’s Puttaswamy judgment fully realised it as a fundamental right in India. When it comes to misinformation, Maheshwari doesn’t think India’s at a similar stage of legal development.
“What we can talk about is the processes we need to even attempt to devise such laws,” said Maheshwari. “At the first step, there has to be a multi-faceted, multi-stakeholder consultation. These conversations cannot be happening only between select parties behind closed doors. Second, there’s very clear guidance on how a law should and should not impact free speech and human rights. Whether the law is necessary, proportionate, and predictable are basic requirements for any policy.”
Panjiar noted that leaning on sectoral experts is a key principle underpinning any lawmaking process. “There can’t just be one entity deciding that something is definitely ‘false’ in a country as diverse as India,” she notes. “You need to understand the social context to frame a policy like this.”
Devadasan added to this, noting that carrying out empirical research is necessary to regulate misinformation. “We don’t know exactly what kinds of harm misinformation is causing, which leads to policy-making based on anecdotal evidence and incidents, which isn’t the best way of going about it,” he says. “The first step [for research] is better access to platform data, which requires greater transparency from social media companies and better researcher access to data too. It’s crucial that the [regulatory] response is proportionate to the harms that the government is trying to protect against.”
Taking all these steps could point to a different, and certainly less heavy-handed, approach to misinformation regulation.
Platforms could simply label content flagged by independent fact-checkers as “fake” while keeping it online, similar to how Twitter currently does. Digital and media literacy campaigns could significantly improve India’s information ecosystem too. These less restrictive measures need not thwart national security either. “Section 69A continues to exist,” added Devadasan. “The government anyway has the power to block content if it poses an imminent risk to public order.”
“A UN Human Rights Council report from 2021 noted the relationship between the spread of disinformation [intentionally false information] and human rights,” added Maheshwari. “So, where there is a poor human rights record, disinformation spreads faster. So, there is value in maintaining an open exchange for ideas and free speech [for the information ecosystem]. I don’t think any single authority can be the arbiter of ‘false’ information, because the ‘truth’ is not a legal standard.”
The bottom line seems to be that there needs to be more transparency about what the government wants for tech policy and how it wants to go about it.
“Since 2021, we’ve [the Internet Freedom Foundation] been asking for a white paper on what the government’s outlook for digital rights is,” noted Panjiar. “For example, recently the government has talked about ‘user harm’ in the online gaming amendments. What its understanding of harms is is something I’d be very interested in knowing about, because that will ultimately influence how they regulate. Until we have a white paper detailing the clear articulation of the issues considered by the government, the question of how to frame a better policy becomes redundant.”
Note: This piece was updated on 27/01/2023 at 6:23 pm to add the extended consultation date.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
- How Tattle And Hatebase Tackle Online Misinformation And Hate Speech In Non-English Languages
- Why Section 69A Of The IT Act Should Have Been Changed By The Supreme Court
- #NAMA: The Traceability Mandate And What It Means For End-To-End Encryption
- “Indian Laws Should Be Dictated By The Constitution”: Pranesh Prakash On Twitter’s Challenge To 39 Blocking Orders
- How Does A Section 69A Blocking Order Come Into Existence?