Facebook is going after coordinated groups of authentic accounts engaging in harmful activities on its platform, Reuters reported on Friday. This differs from the coordinated operations by fake accounts, such as those run by Russian troll farms, that the company has gone after so far.
But given the platform’s history in India and the recent experiences of Twitter, this kind of moderation is easier said than done.
What is Facebook targeting?
Following the Reuters report, Facebook published its own blog post that sheds more light on what the company is targeting: coordinated social harm.
Coordinated social harm: “Coordinated social harm campaigns typically involve networks of primarily authentic users who organize to systematically violate our policies to cause harm on or off our platform,” Facebook said. While Facebook already removes content and accounts violating its Community Standards, sometimes organised groups work together to amplify their members’ harmful behavior and repeatedly violate the platform’s content policies, the company said. “In these cases, the potential for harm caused by the totality of the network’s activity far exceeds the impact of each individual post or account,” Facebook highlighted. Examples of coordinated social harm include:
- Mass reporting and brigading: “Mass reporting, where many users falsely report a target’s content or account to get it shut down, or brigading, a type of online harassment where users might coordinate to target an individual through mass posts or comments,” the Reuters report stated as examples.
- US Capitol riot: In the aftermath of the January 6 riot at the US Capitol, BuzzFeed News published an internal Facebook report analysing the company’s role in the riot. One of its findings was that the company had little policy around coordinated authentic harm, such as that perpetrated in the riot. “The harm existed at the network level: an individual’s speech is protected, but as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy,” the report read.
- Querdenken movement in Germany: In its blog, Facebook announced that it has removed a network of Facebook and Instagram accounts, Pages, and Groups associated with the Querdenken movement in Germany for engaging in coordinated efforts to post harmful misinformation about COVID-19 by “using authentic and duplicate accounts to post and amplify violating content,” the company said.
How will Facebook go after coordinated social harm?
The company hasn’t shared specific details on how it will tackle coordinated social harm, but said: “To address these organized efforts more effectively, we’ve built enforcement protocols that enable us to take action against the core network of accounts, Pages and Groups engaged in this behavior. As part of this framework, we may take a range of actions, including reducing content reach and disabling accounts, Pages and Groups.” According to the Reuters report, Facebook is looking to use the same strategy it employs against campaigns using fake accounts.
Will it affect legitimate social movements?
Farmers’ protest: In February this year, the IT Ministry asked Twitter India to block more than 1,100 accounts related to the farmers’ protest, citing that they were disruptive to public order. Many of these accounts were engaged in coordinating and spreading awareness of the farmers’ protest, a legitimate campaign against the government’s new agricultural laws. But such activity could also be seen as coordinated social harm by Facebook if it sides with the government. “A lot of the time problematic behavior will look very close to social movements,” Evelyn Douek, a Harvard Law lecturer, told Reuters. “It’s going to hinge on this definition of harm … but obviously people’s definitions of harm can be quite subjective and nebulous.”
Takedown of tweets critical of government pandemic efforts: In April, MediaNama reported that Twitter had complied with government requests to censor 52 tweets that mostly criticised India’s handling of the second surge of the COVID-19 pandemic. The government could similarly compel Facebook to use the tools it develops against coordinated social harm to take down content that the government deems illegal.
Will Facebook go after accounts of influential people?
#CongressToolkit incident: In May, an image alleged to be Congress’s toolkit to misrepresent the government’s pandemic efforts was doing the rounds on Twitter, with many prominent politicians of the ruling party tweeting about it. Twitter labelled these tweets as “manipulated media,” much to the government’s displeasure. Research into what triggered the #CongressToolkit hashtag and who gave it momentum found that influential accounts connected to the ruling BJP played an important role. This is not the only instance in which authentic accounts have echoed the views of the ruling party. There have been many instances where influential actors and public personalities have copy-pasted messages allegedly supplied by the ruling party to change public perception of a given social issue. This is the kind of activity that Facebook wants to take on, but will it act against accounts of influential people? If we go by a recent damning report on how high-profile users are exempt from some or all of the social media giant’s rules, the answer appears to be no. More importantly, what will the government do when Facebook takes on such activity? In the Twitter incident, the government asked Twitter to remove the “manipulated media” tag and the Delhi Police sent a notice to the platform.
Facebook’s alleged favouritism towards the ruling party: What Facebook will do becomes all the more complicated to answer considering its relationship with India’s ruling party, which has been under scrutiny since last year’s Wall Street Journal exposé. The report accused Ankhi Das, formerly Facebook India’s top public policy executive, of showing favouritism towards the ruling party. One major revelation was that Facebook refused to take down hateful content posted by BJP politicians despite many within the company finding the content in violation of the platform’s rules. Das reportedly advised against the removal to avoid damaging the company’s business interests in India.