
Reliance on automated content takedowns needs to be reconsidered: MediaNama’s take

Screenshot of the Facebook prompt Megha Bahree got.

A day after Facebook finally announced that it would send home all the contract workers who moderate its content and rely on automated content removals instead, people around the world, including in India, reported that Facebook was marking legitimate news articles, including those about COVID-19, as spam. One such article that Facebook marked as spam was Huffington Post India’s report on the National Social Registry.

Facebook clarified that this is a case of correlation, not causation. Guy Rosen, Vice President (Integrity) at Facebook, tweeted that this was “a bug in an anti-spam system, unrelated to any changes in our content moderator workforce”. He tweeted again earlier this morning that the issue had been resolved and that the company had since restored all posts, on all topics, that were “incorrectly removed”.

Megha Bahree, an Indian journalist whose Facebook post about the National Social Registry was marked as spam on March 17, told MediaNama that her post has now been restored. We were also able to share the article on Facebook without any problem today.


How does Facebook’s content moderation work? On a normal, non-COVID-19 day, automated systems take down content that is definitively violative of community standards, such as child pornography. Content in a grey area, such as hate speech, is sent to human content moderators, who either take the content down or let it be. The machine learning algorithms recognise patterns from these takedowns and restorations to improve themselves. Since this happens at a very large scale, things fall through the cracks, which is why users are allowed to appeal decisions and report content.
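The tiered flow described above can be sketched in a few lines of code. This is a minimal illustrative model only; the thresholds, category labels, and function names are assumptions for the sake of the example, not Facebook's actual system:

```python
# Hypothetical sketch of a tiered content moderation pipeline.
# Thresholds and category names are illustrative, not Facebook's.

def route_content(violation_score: float, category: str) -> str:
    """Decide what happens to a piece of content.

    violation_score: a classifier's confidence (0.0-1.0) that the
    content violates community standards.
    category: the policy area the classifier predicts.
    """
    # Definitive violations (e.g. child sexual abuse material) are
    # removed automatically, with no human in the loop.
    if category == "csam" or violation_score >= 0.98:
        return "auto_remove"
    # Grey-area content (e.g. suspected hate speech) is queued for a
    # human moderator; the human's decision also becomes a training
    # label the models learn from.
    if violation_score >= 0.50:
        return "human_review"
    # Everything else stays up, but users can still report or appeal.
    return "allow"

print(route_content(0.99, "spam"))         # auto_remove
print(route_content(0.60, "hate_speech"))  # human_review
print(route_content(0.10, "news"))         # allow
```

The point of the middle tier is exactly what broke down in March 2020: when the `human_review` queue is switched off, everything above the grey-area threshold effectively collapses into automated removal.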

What is happening now? Now that human oversight has been removed, the algorithms are presumably relying only on what they have learnt in the last three years, that is, since Facebook’s current content moderation systems were put in place.

Why is this a problem? Restricting information about COVID-19 to governments, international organisations and verified publications is an exceptional but understandable step. The case of the HuffPost report on the National Social Registry is far more disturbing. This is a report from a reputed digital news publication that is critical of the incumbent Indian government. The roughly 3,500-word piece had only 11 links within the story itself, not counting links on the HuffPost website that are standard across all its pages.

The marking of this article as spam suggests that articles about Aadhaar, surveillance in India, etc. had been taken down by human content moderators in the past, perhaps even from verified news publications, which is why the HuffPost article got flagged. Taking at face value Facebook’s statement that this was a bug in the anti-spam filter, why did the bug hit an article that is critical of the government and deals with surveillance? That is not to suggest that the Indian government ordered it, but that there are human biases at play throughout this chain of moderation, and they have unintended consequences.

Over-reliance on algorithms that are trained on biased data sets and governed by human biases (this is unavoidable) needs to be reconsidered. Biases, both in the algorithms and of humans, can at best be mitigated. Should algorithms, then, be allowed to censor? The Intermediary Guidelines (Amendment) Rules 2018, which are currently under deliberation, insist that intermediaries must use automated mechanisms to “proactively” identify and remove unlawful content. The Rules make no provision for review of such takedowns, or for restoring content.

Twitter and YouTube are also relying on automated content removal

Facebook is not the only one relying on automated content takedowns. Twitter is also increasing its “use of machine learning and automation”, but clarified that it would not “permanently suspend any accounts based solely on our automated enforcement systems”. YouTube announced that it would “temporarily” rely on automated content takedowns without human review, but strikes will usually not be issued in these cases (YouTube follows a three-strike system).

All three platforms warned that “mistakes” might happen, as this is the first time fully automated content takedown systems are being used. YouTube clarified that appeals against its decisions might take longer because of the circumstances, while Facebook said that fewer people would be available, which is why the company would prioritise “imminent harm”.


Written By

Send me tips at aditi@medianama.com. Email for Signal/WhatsApp.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
