WhatsApp addressed grievances related to abusive or harmful behaviour while Facebook and Instagram looked into content related to a range of topics like impersonation, bullying, nudity, and other issues.
The instant messaging app WhatsApp released its first compliance report on July 15 along with Facebook and Instagram, which had released a partial report on July 2. Most notably, the report reveals that WhatsApp banned over 2 million Indian accounts between May and June 2021, which were reported either through its in-app feature or otherwise.
Why it matters: WhatsApp has, for quite some time, been a medium for misinformation campaigns in India, including some that have led to mass violence. Relatedly, it is currently engaged in a court battle with the Indian government, which wants WhatsApp’s end-to-end encryption weakened so that the first originator of a message can be traced, as mandated by the IT Rules.
The grievance-redressal mechanisms for Facebook, Instagram, and WhatsApp involve:
- Online contact form for Instagram and Facebook
- Email address of the grievance officer for WhatsApp
- Physical mail addresses for each grievance officer appointed by WhatsApp and Facebook
Details of WhatsApp’s compliance report
According to WhatsApp’s compliance report, the messaging app:
- Blocked 20,11,000 Indian accounts between May 15 and June 15, 2021, which were reported through its grievance redressal mechanism as well as through a ‘report’ feature on its app. According to the report, Indian accounts are identified as those associated with mobile numbers carrying the country code ‘+91’.
- Acted on 63 ban appeals out of the 204 it received. This, the report says, could include appeals for restoration of a banned account or requests for banning an account received through its grievance redressal mechanism.
- Overall, it received 345 grievances under topics such as Account Support, Ban Appeal, Product Support, Safety Issues, and Other Support. Safety Issues are defined as grievances related to abusive or harmful behaviour exhibited on the platform. As per the report, users are asked to raise ‘Safety Issues’ complaints through its in-app reporting mechanism, and so actions taken on such reports ‘won’t be recorded as an action taken against the grievance report’.
According to WhatsApp, ‘N/A’ denotes grievance topics where it is not applicable to take action against an account. A report may not have been actioned if it relates to:
- Assistance in accessing a user’s account or some features
- Feedback on features
- An account that does not violate Indian law or WhatsApp’s Terms of Service
- A request for restoration of an account that was denied
Details of Facebook and Instagram’s compliance report
Facebook and Instagram’s report covers the grievances they received between May 20 and June 15; their earlier report had only detailed the grievances received from their ‘community’, the content they had flagged proactively, and the actions taken thereon.
Under Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, Significant Social Media Intermediaries (SSMIs) have to publish monthly compliance reports detailing the complaints received and the action taken thereon, as well as the number of specific communication links or parts of information that the intermediary has removed or disabled access to through proactive monitoring using automated tools, or any other relevant information as may be specified.
As per the IT Rules, Facebook elaborated on the actions taken against flagged content – specifically, whether pre-established channels were used or specialised reviews had to be undertaken. The report doesn’t spell out what these pre-established channels are but says they were used ‘to report content for specific violations, self-remediation flows where they can download their data, avenues to address account hacked issues, etc.’
According to the report, data from before May 20 isn’t available because the platforms did not categorise grievances in this manner until then.
- Facebook received 646 grievances, and the social media platform responded to 100% of them. These grievances were related to a range of topics like impersonation, bullying, nudity, access to accounts, requests for access to personal data, abusive content, issues with how Facebook processes personal data, and other issues.
- Facebook provided tools in 363 cases or provided users with pre-established channels to resolve the issue.
- The remaining 297 reports were subjected to special reviews; 14 of these, it says, were miscategorised in an earlier report.
- Action was taken against over 47 pieces of content following special reviews.
- Instagram received 36 grievances related to impersonation, bullying, nudity, abusive content, and accounts being hacked, and it responded to 100% of them.
- In about 10 of these grievances, users were provided with tools to resolve the issue; one report was miscategorised and later subjected to a special review.
- Action was taken against 20 of the 27 pieces of flagged content after special reviews.
Actions taken by both Facebook and Instagram against reported content include removing the post, covering a photo or video with a warning, or taking down the account itself. The report also says that when content is reported as violating Indian law but not Facebook’s Community Standards, the platform may restrict the post’s availability in India. Facebook and Instagram also said that they do not take action on certain reports if:
- It doesn’t violate any policies
- There is not enough information to locate the content
- Policies do not permit Facebook to take action
- The report is regarding feedback
- The report is regarding a dispute between the reporter and a third party where Facebook cannot mediate
- A link is provided to a page or an account instead of the specific content
- Content reported is not hosted on Facebook
- The report is about assistance in accessing an account
Non-compliance with the IT Rules can cost an intermediary the safe harbour protection provided under Section 79 of the Information Technology Act, making it liable for any unlawful content posted by a user on its platform.