A report on content moderation by New York University recommends that content moderation, which is central to the work of all social media platforms, should no longer be outsourced but instead be brought within the main fold of the platforms. In terms of numbers, 15,000 workers, a majority of them employed by third-party vendors, moderate Facebook and Instagram; 10,000 moderators handle YouTube and other Google products; and Twitter has 1,500 moderators. As per the report, authored by Professor Paul M. Barrett of NYU’s Center for Business and Human Rights, “given the daily volume of what is disseminated on these sites, they’re grossly inadequate”. The report primarily focussed on Facebook as a case study.

As the COVID-19 pandemic grew, Facebook, YouTube and Twitter sent their contracted content moderators home and relied more heavily on AI-based moderation. All three companies conceded that this would lead to more errors and a loss of context. In a later call with American journalists, Facebook CEO Mark Zuckerberg said that some of the company’s full-time employees would review “the most sensitive types of content”. The report wants Facebook to use its response to the pandemic as a “pilot project to assess the feasibility of making all content moderators Facebook employees”.

Read: Reliance on automated content takedowns needs to be reconsidered: MediaNama’s take


Problems with outsourcing content moderation

Content moderation has been outsourced to third-party vendors such as Cognizant, Genpact, Accenture, Majorel and Competence Call Center in countries including the Philippines, India, Ireland, Portugal, Spain, Germany, Latvia and Kenya. These content moderators are not even considered full-time employees and constitute what is called precarious labour. Outsourcing is “a way to achieve plausible deniability”, as per UCLA’s Professor Sarah Roberts, author of Behind the Screen: Content Moderation in the Shadows of Social Media, who is quoted in the report. As per the report, there are three problems with outsourcing content moderation:

  1. Outside of Silicon Valley, social media companies have not paid attention to how their platforms have been used to stoke ethnic and religious violence such as in Myanmar and Ethiopia. Facebook, for instance, has not hired enough content moderators who understand local languages and cultures.
  2. Content moderators don’t receive adequate counselling and medical care despite being exposed to toxic content online.
  3. The working atmosphere is not conducive to content moderators making the best decisions.

Facebook’s content moderation in India: Affiliations of online ‘antagonists’ with the ruling party

The report acknowledges that Facebook’s failure “to ensure adequate moderation for non-Western countries” has resulted in its platforms, including WhatsApp, becoming “vehicles to incite hatred and in some instances, violence”. India is one such case, while the persecution of Rohingya Muslims in Myanmar is the most widely known. Instead of hiring adequate numbers of moderators, Facebook has at times outsourced platform monitoring to local users and civil society organisations as “a substitute for paid, full-time content reviewers and on-the-ground staff members”.

Targeting Rohingya Muslims: The report states that Rohingya Muslims who fled Myanmar and ended up in India were targeted by Hindu nationalists, “some of them affiliated with Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP)”, who “have exploited Facebook in one component of a broader anti-Muslim movement in India”. It cited a video in which people affiliated with “the militant wing of BJP” wielded knives and burnt the effigy of a child while screaming, “Rohingyas, go back!”, while other posts falsely claimed that Rohingya are cannibals, set against gruesome images of human body parts; these posts were often removed but kept reappearing. Facebook did not remove the video as it was posted by groups claiming to be news organisations and was not directly linked to violence. “The link may not have been direct, but in June 2019, dozens of Rohingya homes were burned in Jammu, where the video and others like it were shot,” the report states.

Targeting Bengali Muslims: Avaaz, an advocacy group, flagged 213 instances of hate speech to Facebook in which Bengali Muslims were called “parasites”, “rats” and “rapists”, but Facebook removed only 96 of them. These posts were easily found by native Assamese speakers, but Facebook itself had not detected any of them before being alerted by Avaaz.

Targeting caste, religious and sexual minorities: Equality Labs had published a report about hundreds of memes and posts that targeted women as well as caste, religious (Muslim and Christian) and sexual minorities in India. Even after these were brought to Facebook’s attention, the platform failed to remove them.

Recommendations from the report

  1. Stop outsourcing content moderation. Instead, improve salaries and benefits of content moderators, and onboard existing content moderators as full-time employees of the social media companies.
  2. Hire more moderators in “at-risk” countries in Asia, Africa and elsewhere who understand local languages and cultures. These moderators should be full-time employees of social media companies. Currently, when Facebook’s moderators are asked to review content in languages they don’t know, they rely on the company’s proprietary translation software, the report noted.
  3. Expand fact-checking to debunk mis- and disinformation, as the current scale of fact-checking is too small.
  4. Double the number of moderators to improve content review, so that they have more time to consider difficult content decisions. The report cites a November 2018 white paper in which Zuckerberg conceded that moderators “make the wrong call in more than one out of every 10 cases”.
  5. Hire a content overseer, that is, a senior official to supervise the policies and execution of content moderation.
  6. Provide all moderators with top-quality, on-site medical care, including evaluations of whether an employee should continue to moderate the most disturbing content.
  7. Fund research into health risks of content moderation.
  8. Consider narrowly tailored government regulation so that there is government oversight of the “prevalence” of harmful content, that is, the frequency with which such content is viewed even after moderators have tried to weed it out. This suggestion came from Facebook itself.

Read: Facebook settles class-action suit by content moderators for $52 million