
Facebook says it inadvertently restricted a hashtag. Now it needs to tell us exactly how and why

By Prateek Waghre

On 28th April, reports emerged that Facebook was ‘temporarily hiding’ posts using the hashtag ‘ResignModi’ because ‘some content’ in those posts went against their Community Standards.

This seemed like a rather fitting development in a week in which the future of free speech and dissent in India had already captured international attention. MEITY had ordered Twitter, Facebook, Instagram and YouTube to remove/withhold 100-odd posts critical of how the Union government is handling COVID-19’s second wave. While Twitter did disclose via the Lumen Database (as reported by MediaNama) that it complied, the other platforms have confirmed neither whether they received any such requests nor whether they complied.

Facebook did clarify that restricting the ‘ResignModi’ hashtag was a mistake and restored full visibility to it after a few hours. Restricting in this context meant that users couldn’t search for posts with the hashtag but could post using it or view content that used it on their newsfeed.

A ‘mistake’

Understandably, there is resistance to taking this clarification at face value. Trust in the company has been on a sharp downward trajectory since the Wall Street Journal’s reporting on its alleged closeness with the BJP. Nor has it acknowledged any actions it may or may not have taken in response to government requests this past week.

It also didn’t help that the message users saw was the same one Facebook uses for hashtags that are deliberately blocked, e.g. ‘sharpiegate’ or ‘stopthesteal’.

#ResignModi on Facebook (Source)

#stopthesteal and #sharpiegate | Credit: Screenshot by author

In my opinion, the restriction was likely ‘inadvertent’ and not a political act. We’ve seen it happen before, many times. The potential PR fallout from such a blatant act would also probably disincentivise it.

A precedent

In early June 2020, reports suggested that Facebook and Instagram were restricting the hashtags ‘sikh’ and ‘sikhism’. As per a Twitter thread by the Instagram Comms handle on 4th June, the restrictions had been in place since early March, were attributed to a mistake ‘following a report that was inaccurately reviewed’ (note the singular form), and were reversed (though some errors persisted until 14th June).

Then, in late November, amid the farmer protests in India, the hashtag ‘sikh’ was restricted on Instagram again. On this occasion, a Facebook spokesperson said it was ‘temporarily blocked because of multiple reports’.

Mistakes aren’t limited to India. In October 2020, Facebook and Instagram categorised content with the ‘EndSARS’ hashtag as false information. Once again, the InstagramComms Twitter account put out an apology. This mistake adversely affected a movement against police brutality.

It isn’t limited to the Global South either. In May 2020, some Instagram users in the U.S. were blocked from sharing content using the ‘blacklivesmatter’ hashtag. This time, InstagramComms attributed it to a ‘technology’ used to detect spam. Another notable instance involving an ‘anti-spam system’ occurred in March 2020, when it cast too wide a net and restricted the sharing of many legitimate news articles. In August 2020, a ‘technical error’ with Instagram’s related hashtags feature meant that searches for some pro-Biden hashtags surfaced counter-messaging while Trump-related searches did not (according to an investigation by the Tech Transparency Project). Instagram said the error affected both campaigns as well as non-political hashtags, and disabled the feature.

This is a limited set of examples. A quick (advanced) Google search points to many other cases of ‘mistaken’, ‘inadvertent’ actions taken in ‘error’.

A pattern

While the specific issues vary, some patterns emerge when we look at them in aggregate. Users discover and report these issues, often (though not always) in a political context. This then becomes the subject of broader media coverage and of concerns or accusations of censorship, especially if the discovery was made in a political context. Facebook then says it was ‘inadvertent’, ‘a mistake’ or due to ‘an error’. To be clear, this pattern applies to most social networking platforms.

Let’s accept, for now, that these were mistakes. What happens next? Silence, until the next time. Then, rinse and repeat. Technological systems are complex, and people make mistakes; let’s accept that too. But when those mistakes can have political consequences (even without political intent), the responsibility cannot end with answers that offer no further understanding of what happened, especially if it is a pattern that keeps repeating.

Returning to the ‘ResignModi’ case, there’s a significant chance that this too was the result of ‘community reporting’, just like the ‘sikh’ and ‘sikhism’ hashtags. But reading between the lines of Facebook’s responses in those instances suggests that anywhere between one and an unknown number of reports was sufficient to trigger whatever combination of manual and/or automated processes leads to a hashtag being temporarily hidden. Note the number of unknowns there.
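To make those unknowns concrete, here is a minimal, purely hypothetical sketch (in Python) of how a naive report-count threshold could automatically restrict a hashtag. Nothing in it comes from Facebook: the names (report_hashtag, is_restricted) and the threshold value are assumptions made only to illustrate the argument, and the company’s real pipeline is far more complex and opaque. The point is simply that if some opaque threshold exists, a small coordinated group can trip it.

```python
# Purely illustrative sketch, NOT Facebook's actual system. All names, numbers
# and thresholds here are assumptions made only to illustrate the argument.
from collections import defaultdict

REPORT_THRESHOLD = 5          # assumed; in reality "between one and an unknown number"
reports = defaultdict(list)   # hashtag -> list of (user_id, reason) tuples

def report_hashtag(hashtag: str, user_id: str, reason: str) -> None:
    """Record a user report against a hashtag."""
    reports[hashtag].append((user_id, reason))

def is_restricted(hashtag: str) -> bool:
    """Auto-restrict once the opaque threshold is crossed, pending human review."""
    return len(reports[hashtag]) >= REPORT_THRESHOLD

# A small coordinated group is enough to hide the hashtag for everyone,
# until a later review reverses the 'mistake'.
for i in range(REPORT_THRESHOLD):
    report_hashtag("ResignModi", f"coordinated_user_{i}", "spam")

print(is_restricted("ResignModi"))  # True
```

Even in this toy version, the questions remain the same: what is the threshold, who sets it, and what review happens before or after the restriction takes effect?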

An explanation

The political context surrounding these cases also raises the question of how Facebook responds to the possible weaponisation of its community reporting. We know from Facebook’s August 2020 CIB report that it took action against a network engaged in mass reporting. What principles does it use to define thresholds for action? How is coordinated activity that falls below its self-defined threshold of Coordinated Inauthentic Behaviour handled? Knowledge about the specifics of these thresholds becomes essential when they make the difference between publicly disclosed and internal actions, as the Sophie Zhang – Guardian series demonstrated in the Indian context.

Facebook — this applies to other networks too, but Facebook is by far the largest in India — needs to put forward more meaningful explanations in such cases. Ones that amount to more than ‘Oops!’ or ‘Look! We fixed it!’. There are, after all, no secret blocking rules stopping it from explaining its own mistakes. These explanations don’t have to be immediate. Issues can be complex, requiring detailed analysis. Set a definite timeline, and deliver. No doubt, this already happens for internal purposes. And then, actually show progress. Reduce the trust deficit, don’t feed it.

This does raise concerns about being drawn into narrow, content-specific conversations or being distracted by ‘transparency theater’, thereby missing the forest for the trees. These are legitimate risks and need to be navigated carefully. The micro-level focus can be on specific types of content or actions on a particular platform. At the macro level, it is about the impact on public discourse and society. The two don’t have to be mutually exclusive, and what we learn at one level should inform the other, in pursuit of greater accountability.

(Prateek Waghre is a research analyst with the High Tech Geopolitics Programme at The Takshashila Institution. He writes MisDisMal-Information, a newsletter on India’s information ecosystem.)
