Three days before the Wall Street Journal published an article alleging that, owing to business interests, Facebook India’s public policy team refused to take down content by BJP leaders that had been flagged internally as hate speech, Facebook had announced that it was looking for external auditors to assess the accuracy and usefulness of the metrics in its community standards enforcement report (CSER). This report tracks the enforcement of Facebook’s community guidelines across 12 standards, including hate speech, spam, child nudity and child sexual abuse material, fake accounts, terrorism and organised hate, and violent and graphic content. The platform releases similar information for Instagram as well, but that report does not include numbers on hate speech and fake accounts.

Facebook started publishing the CSER in May 2018 and has released six such reports so far. The report initially tracked enforcement of Facebook’s standards across six types of content, but now covers 12 types on Facebook and 10 on Instagram.

The CSER is one of the five transparency reports that the platform releases; the frequency of each varies. Unlike the CSER, which gives data on how Facebook enforces its own community guidelines, the content restrictions report gives data on how the platform complied with local legal requests to take down content that does not violate Facebook’s community standards. The CSER does not give country-specific data, so information about where most infractions of Facebook’s own community standards occur is not available.

What information does the report give? For each of the 12 standards, Facebook gives information about the content’s prevalence, how much content it took action on, how much content it acted on before users reported it, how much removed content people appealed against, and how much removed content was later restored (both without an appeal and after one).

Assessing Facebook’s claims of transparency: MediaNama’s take

Facebook also releases monthly reports about coordinated inauthentic behaviour on its platforms, which include details about fake accounts, spam accounts, and networks that spread misinformation. The WSJ reported that while Facebook announced that it had taken down inauthentic pages tied to Pakistan’s military and the Indian National Congress in April 2019, days before the Lok Sabha elections, it did not disclose that “it also removed pages with false news tied to the BJP, because Ms. [Ankhi] Das [public policy head] intervened”.

When MediaNama reported on that coordinated inauthentic behaviour report in April 2019, we too had focussed on the pages linked to the Congress, because the original Facebook post did not link the removal of content to false news by the BJP. It only linked it to an Indian IT firm, Silver Touch. As per the Ahmedabad-based company’s website, its services include bot development, and its clients include the Ministry of External Affairs, the Department of Biotechnology, and a battery of other government entities, including numerous Gujarat government bodies. It had also built the web portal for the Pravasi Bhartiya Divas, apps for the Gujarat government (Digital Gujarat, Startup Gujarat), and an app for the Make in India event.

Going by the WSJ’s reporting, it is clear that even though pages and groups linked to the ruling party were spreading misinformation, Facebook actively withheld that information, defeating the entire purpose of releasing such reports in the first place. What is to say that Facebook hasn’t similarly withheld information from other transparency reports to protect its business interests? This revelation erodes the already tenuous trust that the world at large has in the largest social media platform.

Read more: Facebook accused of selective enforcement of hate speech rules, Opposition calls for creation of Joint Parliamentary committee