By Mitaksh Jain, Sarvesh Mathi, Nishant Kauntia, and Anushka Jain
India’s Ministry of Electronics and Information Technology (MeitY) attempted to clear the air on its controversial Information Technology (IT) Rules, 2021, by releasing a set of frequently asked questions (FAQs) on November 1. MeitY is expected to release another document on the Standard Operating Procedures for the Rules, Minister of State (MoS) for IT Rajeev Chandrasekhar said at a press conference on the same day.
Since they took effect on May 26, the IT Rules have faced pushback and court challenges, as they stand to fundamentally alter the digital regulation landscape in India.
While the FAQ document goes through various questions on the applicability, due diligence provisions, penalties, and other aspects of the IT Rules related to social media intermediaries, here’s a roundup of what MeitY did not clarify.
What the IT Rules FAQs failed to clarify
Inclusion of juridical persons: The IT Rules define a user as “any person who accesses or avails computer resource of an intermediary or a publisher for the purpose of hosting, publishing, sharing, transacting, viewing, displaying, downloading or uploading information and includes other persons jointly participating in using such computer resource and addressee and originator.”
- What hasn’t been clarified: The FAQs do not go into the definition of a user and do not distinguish between natural and juridical persons. A juridical person is a non-human entity recognized by law as a legal person, such as a corporation, government agency, or NGO.
Interaction undertaken by entities: The Rules do not establish what constitutes an online interaction beyond requiring that two users be involved, and they do not examine the nature of those users.
- What hasn’t been clarified: The FAQs fail to address whether online interaction includes entity-to-entity interaction or entity-to-natural-person interaction.
Timeline for preserving information and records: The IT Rules mandate that the intermediary must preserve information and associated records for 180 days for investigation purposes, or for a longer period if required by the court or “by government agencies who are lawfully authorised” without tampering with the evidence.
- What has been clarified: The information collected after registration and before withdrawal will vary from platform to platform. How much information should a platform store otherwise would be addressed through the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 and notifications under Section 67C of the IT Act 2000.
- What hasn’t been clarified: The grounds under which intermediaries may be requested to preserve information and records for longer than 180 days, and the maximum timeline for doing so, were not addressed despite MediaNama asking MeitY to clarify this in its representation.
Grounds for nature of impersonation: The Rules dictate that an intermediary must “take all reasonable and practicable measures to remove or disable access to content” which exposes the private area of an individual, shows the concerned individual in full or partial nudity or in any sexual act or conduct, or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual, hosted, stored, published or transmitted by the intermediary within 24 hours from the receipt of a complaint made by an individual or any person on his behalf.
- What hasn’t been clarified: The FAQs do not provide any additional information beyond summarising what is mentioned in the Rules. They do not clarify whether the “nature of impersonation” must relate to nudity or a sexual act or conduct, or whether it covers any form of impersonation in an electronic form, when an intermediary is to disable content. The query was raised by MediaNama in its submission to MeitY.
Scope of artificially morphed images: As mentioned above, the Rules direct intermediaries to remove content which includes artificially morphed images of an individual who has raised a grievance with the intermediary within 24 hours of the receipt of the complaint under Rule 3(2)(b).
- What hasn’t been clarified: The FAQs do not give any information in addition to what is mentioned in the Rules, and they do not shed light on whether artificially morphed images include fake news and deepfakes.
Alignment with the Supreme Court’s judgement in Shreya Singhal case: The apex court directed that intermediaries are required to disable access to content only upon receiving a court order or a notification from an appropriate government agency. Intermediaries were not to act upon complaints by users to avoid adjudication upon such complaints involving questions of freedom of expression.
But the IT Rules require intermediaries to deploy tools to proactively identify and remove certain types of content that depict rape, child sexual abuse, or any information which is exactly identical in content to information that has previously been removed by the intermediary.
- What hasn’t been clarified: The FAQs do not elaborate on how the IT Rules will be reconciled with the Supreme Court’s judgment in Shreya Singhal, which prevents intermediaries from acting on their own. The IT Rules only advise the intermediary to ensure that the measures it takes are proportionate to the interests of free speech and expression, and the privacy of users.
Determining intentionality of users: As per the IT Rules, a social media intermediary should inform its users “not to host, display, upload, modify, publish, transmit, store, update or share any information that deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any information which is patently false or misleading in nature but may reasonably be perceived as a fact.”
- What hasn’t been clarified: Is there any mechanism suggested to determine intentionality, and thus differentiate disinformation (intention is to mislead) from misinformation?
Safeguards around government take-down requests: As per the IT Rules, social media intermediaries, upon receiving a court order or notification from the Appropriate Government or its agency, must remove content that is deemed unlawful within 36 hours.
- What hasn’t been clarified:
- What safeguards are there to ensure that blocking orders are not sent to intermediaries through multiple agencies at the Central and State level?
- Will agencies request the taking down or blocking of any content by coordinating with the authorised agency at the Central Government level?
- What are the constitutional safeguards to be met for information to be taken down?
- In what way can an intermediary seek clarification on, or appeal, a notification it receives?
Stop-the-clock provisions for complaints and take-down requests: As per the IT Rules, there are various time frames prescribed for action to be taken by an intermediary in the event of receiving a complaint from a user or a takedown request by a government agency.
- What hasn’t been clarified: Is there a stop-the-clock provision (a maximum waiting time before the validity of a request expires) for intermediaries responding to notifications and complaints received from authorised agencies, the Appropriate Government, or users, for instance when the intermediary has requested information in order to be able to act on the notification and that request is pending?
Which government agency can demand information? As per the IT Rules, the intermediary is required to “provide information under its control or possession, or assistance to the Government agency which is lawfully authorised for investigative or protective or cyber security activities, for the purposes of verification of identity, or for the prevention, detection, investigation, or prosecution, of offences under any law for the time being in force, or for cyber security incidents.”
- What hasn’t been clarified: Which government agencies can issue orders for information and assistance to intermediaries?
Personal liability of Chief Compliance Officer: As per the IT Rules, intermediaries must “appoint a Chief Compliance Officer who shall be responsible for ensuring compliance with the Act and rules made thereunder and shall be liable in any proceedings relating to any relevant third-party information, data or communication link made available or hosted by that intermediary where he fails to ensure that such intermediary observes due diligence while discharging its duties under the Act and rules made thereunder.”
- What hasn’t been clarified:
- Is there personal liability of the Chief Compliance Officer, and how will this provision be interpreted in light of Section 85 of the IT Act, which deals with the liability of employees in the event of an offence committed by a company?
- What is the nature, scope and extent of liability that the Chief Compliance Officer of a significant social media intermediary may face in the event of non-compliance?
- Why isn’t having “taken all necessary measures to comply” an exemption from the Chief Compliance Officer’s liability?
- What are the bare minimum steps that the Chief Compliance Officer may take to ensure that they do not face criminal liability?
Can independent contractors fill required positions? As per the IT Rules, intermediaries are required to appoint a Chief Compliance Officer, a nodal contact person, and a resident grievance officer.
- What hasn’t been clarified: Should these positions be filled by employees of the company or can they be independent contractors?
Determining authenticity of a government request: As per the IT Rules, authorised government agencies can issue notifications for content takedown, or demand information and assistance from an intermediary.
- What hasn’t been clarified: Social media organisations often receive email requests from imposters posing as government officers. Will the government share standard identifiers to enable organisations to verify whether a request was issued by the appropriate authority and whether the proper process for requesting a takedown was followed?
Reporting offences detected by automated removal: As per the IT Rules, intermediaries are required to “endeavour to deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit, or any information which is exactly identical in content to information that has previously been removed” and “display a notice to any user attempting to access such information.”
- What hasn’t been clarified: Does such voluntary removal result in a presumption that the platform has knowledge of the specific content and trigger obligation to report offences under the Criminal Procedure Code or the Protection of Children from Sexual Offences Act, 2012?
Holding automated systems accountable: As per the IT Rules, any automated systems or tools used by an intermediary for removing offensive content such as rape or child sexual abuse material must have appropriate human oversight and undergo a periodic review. Furthermore, this review must “evaluate the automated tools having regard to the accuracy and fairness of such tools, the propensity of bias and discrimination in such tools and the impact on privacy and security of such tools.”
- What hasn’t been clarified:
- While the provisions outline criteria that the review must keep in mind, there is no requirement to make the results of this review public. Will any mechanism be put in place for external evaluation of, and oversight over, such automated systems?
- What are the proposals for ensuring transparency and accountability in algorithmic systems for proactive filtering?
User verification mechanisms: As per the IT Rules, social media intermediaries must “enable users who register for their services from India, or use their services in India, to voluntarily verify their accounts by using any appropriate mechanism, including the active Indian mobile number of such users, and where any user voluntarily verifies their account, such user shall be provided with a demonstrable and visible mark of verification, which shall be visible to all users of the service.”
- What hasn’t been clarified:
- What is the appropriate method for verification of accounts by significant social media intermediaries, and should mobile numbers be used for this purpose?
- Do the verification requirements apply only to social media platforms, or would they extend to intermediaries such as YouTube and Spotify (which also hosts podcasts)?
Tracing the first originator of a message: As per the IT Rules 2021, significant social media intermediaries (SSMI) providing “services primarily in the nature of messaging” must be able to identify the ‘first originator’ of a specific message on their platform.
- What hasn’t been clarified:
- Will the rule apply to personal messaging features of social media platforms? How will the requirements apply to audio-visual content, especially services such as Skype, Microsoft Teams and Zoom?
- The same content may be conveyed by several users in different communication chains. Is the first originator the person who first conveyed the content on the platform, or the persons who first posted it on each chain of communication?
- Small changes to a forwarded message could result in a different originator. How will the government account for these discrepancies?
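The “different originator” discrepancy above can be illustrated with content hashing, one commonly discussed mechanism for matching “exactly identical” content (the Rules themselves do not prescribe any specific technique, so this is purely an illustrative sketch):

```python
import hashlib

def message_fingerprint(text: str) -> str:
    """Return a SHA-256 hex digest identifying a message's exact content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Meet at the town hall at 5 pm."
forwarded_unchanged = "Meet at the town hall at 5 pm."
forwarded_edited = "Meet at the town hall at 5 pm!"  # a single character changed

# Identical content produces identical fingerprints, so exact copies
# can be traced back to one originator...
print(message_fingerprint(original) == message_fingerprint(forwarded_unchanged))  # True

# ...but even a trivial edit yields an entirely different fingerprint,
# so the edited forward would register as a "new" message with its own originator.
print(message_fingerprint(original) == message_fingerprint(forwarded_edited))  # False
```

Under any such exact-match scheme, every slightly edited forward starts a fresh chain, which is precisely the discrepancy the Rules leave unaddressed.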
Determining the first originator without breaking encryption: The rules say that no SSMI will need to disclose message content, any other information related to the first originator or other users. In the FAQ, MeitY has also said that “the intent of this rule is not to break or weaken the encryption in any way.”
- What hasn’t been clarified:
- Does this mean intermediaries themselves will not be required to access message content or any other information related to the first originator and other users? If first originators cannot be traced without accessing message content, will the intermediary be excused from having to comply?
- It is possible to fake originator data by using a custom application (and not the official messenger app). Is it enough if an app assumes everyone is using the official app? How would any case then hold up in court, given that any defendant may plead that they could have been framed?
Less intrusive means: Such traceability will not be required if less intrusive means are effective in identifying the originator, the IT Rules say.
- What hasn’t been clarified: What are some examples of “less intrusive means”? Would the public authority be required to consult with technical experts to determine whether less intrusive means are possible before passing an order?
Issuing a traceability order: In order to request information regarding the first originator under the rules, a court or competent authority will need to send an official order to the platform with a copy of the information to be traced.
- What hasn’t been clarified: Will the ministry display the orders on its website in the interest of transparency? Will the directive of the government agency be a public document accessible under the RTI Act?
The location of the first originator: As per the IT Rules, the first originator for the purpose of the traceability clause will be “the first originator of that information within the territory of India.”
- What hasn’t been clarified: How will intermediaries with end-to-end encryption determine the first originator in India without tracing the entire chain of communication on their platforms? How can an intermediary determine the location of the first originator, given the prevalence of VPNs and location spoofing?
- Guide: All You Need To Know About The New IT Rules, 2021
- Summary: Information Technology Rules 2021, And Intermediaries And Social Media Platforms
- Summary: Information Technology Rules 2021 And Digital News Publishing
- Summary: Information Technology Rules 2021 And OTT Streaming Services