Facebook’s internal research found that inflammatory content on the platform spiked 300% in the months leading up to the Delhi riots, and that calls for violence were spread using WhatsApp, the Wall Street Journal has reported.
For the internal report, titled ‘Communal Conflict in India Part 1’, researchers interviewed dozens of Indians who said that they found “a large amount of content that encourages conflict, hatred and violence on Facebook and WhatsApp.”
Facebook has routinely failed to control hate speech and misinformation in India, at times bowing to political pressure from the Indian government. The leaked documents reveal that Facebook is aware of the extent of hate Indians are exposed to on the platform, and that recommendations made by its own team on curbing hate speech went unheard.
Hindu nationalist groups posted anti-Muslim content
According to the WSJ report, Facebook researchers identified two Hindu nationalist groups, the Bajrang Dal and the Rashtriya Swayamsevak Sangh, that spread “inflammatory anti-Muslim content” on the platform:
- Bajrang Dal: Researchers at Facebook found in a report that the Bajrang Dal “used WhatsApp to organize and incite violence,” and recommended that the group be taken down, the Wall Street Journal has reported.
- Rashtriya Swayamsevak Sangh: In a separate report reviewed by the Journal, researchers found that the RSS was responsible for posting anti-Muslim content including posts that:
- compared Muslims to pigs and dogs
- spread misinformation claiming the Quran calls for men to rape female family members
- claimed Muslim clerics spit on food to make it halal or spread COVID-19
The researchers recommended, according to WSJ, that pages linked to the Bajrang Dal be taken down, but the right-wing group remains active on the platform. The report also found that the RSS was not designated as harmful due to ‘political sensitivities’.
In response to queries sent by MediaNama, a Facebook spokesperson told us that they “ban individuals or entities after following a careful, rigorous, and multi-disciplinary process” and enforce their “Dangerous Organizations and Individuals policy globally.”
What the Facebook whistleblower revealed about hate speech
Recently, former Facebook employee turned whistleblower Frances Haugen revealed critical details about Facebook’s content moderation in a series of interviews. Here is what Haugen had to say about Facebook’s efforts to remove hate speech:
How little hate speech Facebook acts on: Haugen revealed that Facebook misrepresents its progress in dealing with misinformation, violence, and hate speech, citing the company’s own internal research. She said that an internal study conducted this year estimated that the platform takes action against only 3-5% of hate speech and six-tenths of 1% of V & I [violence and incitement] on Facebook.
When we live in an information environment that is full of angry, hateful, polarizing content it erodes our civic trust, it erodes our faith in each other, it erodes our ability to want to care for each other. The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world — Frances Haugen (emphasis added)
Facebook investing little outside English-speaking countries: In her testimony to the subcommittee, Haugen said that Facebook allocates 87% of its ‘integrity spending’ to English-speaking countries, which account for only nine percent of its users.
“It seems that Facebook invests more in users who make more money, even though the danger may not be evenly distributed in terms of profitability,” she said.
In the same vein, Haugen later said that Facebook misleads users across the world by telling them that its safety systems apply in their languages, when they are actually getting the ‘original, dangerous’ version of Facebook.
A Facebook spokesperson told us that Facebook has made efforts to reduce hate speech on the platform:
We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali. As a result, we’ve reduced the amount of hate speech that people see by half this year. – Facebook spokesperson
Facebook’s history of inaction on hate speech in India
Facebook’s failure or unwillingness to effectively curb hate speech in India has been under the spotlight at least since last year. Here’s a timeline of developments:
- BJP leaders off the hook: In August last year, reports revealed that Facebook refused to take down hateful content posted by leaders of the ruling Bharatiya Janata Party (BJP) in order to avoid damage to its business prospects in the country:
- Telangana: Inflammatory posts by Raja Singh, a BJP MLA from Telangana, were left on the platform despite being marked as hate speech, WSJ has reported. In his posts, Singh had said that Rohingya Muslim immigrants should be shot, called Muslims traitors, and threatened to raze mosques.
- Assam: Facebook did not remove a hateful post by Shiladitya Dev, a BJP MLA from Assam, for nearly a year, TIME reported. Dev had shared a news report about a girl allegedly being drugged and raped by a Muslim man, saying this was how Bangladeshi Muslims target the “native people.”
- No reason to remove Bajrang Dal: In December last year, Facebook was questioned by the Parliamentary Standing Committee on IT regarding the allegations. Ajit Mohan, head of Facebook India, told the panel that the company has no reason to act against or take down content from Bajrang Dal.
Update (26 October, 09:50 am): Responses from Facebook spokesperson were added.