Both social media platforms use machine-learning technologies to proactively detect content that violates their community guidelines.
Facebook and Instagram released their interim compliance report on Friday in partial adherence with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Covering the period from May 15 to June 15, the report details the action taken against violating content on Facebook and Instagram, and the percentage of that content the platforms detected proactively.
According to the new IT rules, which came into effect on May 26, Significant Social Media Intermediaries (SSMIs) have to release monthly compliance reports detailing the complaints received, the action taken on them, and the number of links or pieces of information removed. SSMIs are defined as social media intermediaries with more than 50 lakh (5 million) registered users in India, such as Facebook, Google, Twitter, and Koo.
What does the report say?
According to the report, Facebook proactively detected 99.9 percent of the 25 million pieces of ‘Spam’ content and 2.5 million pieces of ‘Violent and Graphic’ content that it took action against. Of the 1.8 million pieces actioned under ‘Adult Nudity and Sexual Activity’ and 589,000 under ‘Suicide and Self-Injury’, 99.6 percent and 99.7 percent respectively were detected proactively.
The lowest proactive detection rates were for ‘Firearms’ (2,000 pieces, 89.4 percent detected) and ‘Bullying and Harassment’ (118,000 pieces, 36.7 percent detected).
On Instagram, actioned content involving suicide and self-injury was higher than on Facebook at 699,000 content pieces, 99.8 percent of which were detected by the photo-sharing platform. Under ‘Bullying and Harassment’, Facebook took action against 118,000 content pieces while Instagram took action against 108,000, of which it had proactively detected 43.1 percent. Instagram proactively detected 99.7 percent of 490,000 violent and graphic content pieces, and 99.6 percent of posts related to adult nudity and sexual activity. The 200 actioned posts on firearms, 1,100 on drugs, and 6,200 on organised hate saw detection rates between 87 and 88 percent.
While Facebook shared metrics on spam content, Instagram said that the same data wasn’t yet available for its platform and that it was working on it.
How the two metrics are measured
Content, which includes comments, posts, photos, and videos, is removed when it doesn’t follow Facebook’s community guidelines. According to Facebook’s policy page on the content-actioned metric, where a post contains multiple photos or videos, each photo or video is counted as one piece of content. On Instagram, by contrast, the whole post is counted as one piece of content if any part of it is found to be violating.
For both platforms, actions taken include removing the problematic content or issuing a content warning over it, and proactive detection is a result of machine learning technologies flagging content. This content is later looked at by trained human reviewers.
On its website, Facebook says that the proactive detection percentage is calculated as ‘the number of pieces of content acted on that we found and flagged before people using Facebook or Instagram reported them, divided by the total number of pieces of content we took action on’.
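That definition can be expressed as a simple ratio. The sketch below, with illustrative numbers chosen to match the report’s 99.9 percent figure for the 25 million actioned spam pieces (the report does not publish the raw proactively-flagged count), shows how the percentage is derived:

```python
def proactive_rate(proactively_flagged: int, total_actioned: int) -> float:
    """Proactive detection percentage: content found and flagged before
    any user report, divided by all content the platform took action on."""
    if total_actioned <= 0:
        raise ValueError("total actioned content must be positive")
    return 100 * proactively_flagged / total_actioned

# Illustrative example (not figures from the report): if 24,975,000 of
# 25,000,000 actioned spam pieces were flagged before a user report,
# the proactive detection rate is 99.9 percent.
print(round(proactive_rate(24_975_000, 25_000_000), 1))  # 99.9
```

Note that, per the methodology, the denominator counts only actioned content, so a high proactive rate says nothing about how much violating content was missed altogether.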
However, the measure on actioned content does not include any accounts, pages, or groups that were disabled or fake accounts that were prevented from being created. The report adds that the metrics also don’t take into account violating content that may have been posted by users masking the country that they are posting from (for example, through VPNs).
Full report yet to come
Facebook is expected to release its full compliance report on the number of user complaints received and action taken on July 15, which will include data related to the instant messaging app WhatsApp.
Notably, in its report, Facebook mentioned that it expects to publish subsequent editions with a 30-45 day lag after the reporting period. Google, in its report released on June 30, had also said that it will have a two-month lag in reporting.
Yesterday, Union Information Technology and Law Minister Ravi Shankar Prasad tweeted in appreciation of compliance reports released by Google and Facebook.
Nice to see significant social media platforms like Google, Facebook and Instagram following the new IT Rules. First compliance report on voluntary removal of offensive posts published by them as per IT Rules is a big step towards transparency. pic.twitter.com/FhzUv4pHUp
— Ravi Shankar Prasad (@rsprasad) July 3, 2021
Meanwhile, Twitter has not yet indicated when it would release its compliance report under the IT rules.