The pandemic led to YouTube removing the highest number of videos in a single quarter in its history. The company said it removed twice as many videos in April-June 2020 as in the first three months of the year. As the COVID-19 pandemic accelerated in March, YouTube, like other platforms, modified its content moderation policies to reduce reliance on human reviewers and lean more on automated systems.

Human reviewers would have maintained higher accuracy, but with fewer of them available, a lot of harmful content would have been left online, YouTube said. So it chose to rely on automated systems, while acknowledging that this could over-enforce its community guidelines. “Because of choices we made to prioritize the safety of the community, we removed the most videos we’ve ever removed in a single quarter from YouTube,” it said in its latest report on enforcement of community guidelines.

Altogether, 11 million videos were removed (by humans and automated systems) between April and June, compared to 6 million removed in January-March.

Second highest removals from India: Of the 11.8 million videos removed, 1.4 million were removed from India, making it the country with the second highest number of videos taken down, after the USA. A little over 820,000 videos had been removed from India in the previous quarter.

3x increase in removal of most sensitive content: For the most sensitive content, such as violent extremist content and content that jeopardises child safety, there was a three-fold increase in removals. YouTube said it erred on the side of caution and “accepted a lower level of accuracy” to ensure that it was “removing as many pieces of violative content as possible”. Not surprisingly, a higher number of videos that do not actually violate policy were also removed in these two categories. YouTube’s policy around online harm to children includes dares, challenges, and “other innocently posted content that might endanger minors”.

A high reliance on automated systems: Of the 11 million videos removed, 95% (10.4 million) were first flagged by an automated system; only around 380,000 of the removed videos were first flagged by users. Only around 550,000 pieces of content were removed by human reviewers. YouTube claims that less than a quarter (24%) of the removed videos were viewed by more than 10 people; over 70% were removed before a single view or after between 1 and 10 views.

Facebook and Twitter have also had to rely more on automated content takedowns as the pandemic accelerated. Facebook’s policy change led to posts about a legitimate news article on surveillance by the Indian government being marked as spam. These posts were eventually restored, but the episode raised concerns about the consequences of relying on automated takedowns. Twitter is also increasing its “use of machine learning and automation”, but clarified that it would not “permanently suspend any accounts based solely on our automated enforcement systems”.

Why videos were removed: A third (33%) of all the removed videos violated child safety guidelines, and another 28% were spam, scams, or misleading claims. 14.6% of the removed content was nudity or sexual content. A little over 10% of the removed videos were violent or graphic, and 8% promoted violence and violent extremism.

YouTube’s automated systems typically flag harmful content to human reviewers, who take the final call on removing a video. Human reviewers are better placed to judge the context and nuance of whether a video that appears harmful to an algorithm actually is harmful.
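For readers curious what this flag-then-review flow looks like in practice, the minimal sketch below illustrates a generic human-in-the-loop moderation pipeline. It is an assumption-laden illustration, not YouTube’s actual system: the `harm_score`, thresholds, and function names are all hypothetical.

```python
# Illustrative sketch of a human-in-the-loop moderation flow.
# All names, scores, and thresholds are hypothetical; this is not YouTube's system.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    harm_score: float  # output of an automated classifier, between 0.0 and 1.0


def moderate(video: Video, reviewers_available: bool,
             flag_threshold: float = 0.5, auto_remove_threshold: float = 0.95) -> str:
    """Return what happens to a video after automated flagging.

    When reviewers are available, the automated system only flags; a human
    takes the final call. With reduced human capacity (as during the pandemic),
    high-scoring videos are removed directly, trading accuracy for coverage.
    """
    if video.harm_score < flag_threshold:
        return "keep"
    if reviewers_available:
        return "queue_for_human_review"  # a human judges context and nuance
    # Err on the side of caution: accept over-enforcement to remove more violative content
    return "remove" if video.harm_score >= auto_remove_threshold else "queue_for_human_review"


# Example: a borderline video during reduced reviewer capacity still goes to a human queue
print(moderate(Video("abc123", 0.7), reviewers_available=False))  # queue_for_human_review
```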

Higher removals meant higher appeals: YouTube said it was aware it was over-enforcing its guidelines and hence expected a spike in appeals from users and creators. Both appeals and reinstatement rates doubled from the previous quarter: videos reinstated on appeal increased from 25% of appealed videos last quarter to 50% this quarter.

87% of suspended channels posted spam and scams: YouTube removes channels – which leads to all of their videos being removed – after they receive three “strikes”, commit a single instance of severe abuse, or are “wholly dedicated” to violating YouTube’s guidelines, as with spam accounts. Around 87% of all removed channels were removed for posting spam and scams, unchanged from the previous quarter.

Read more:

  • Reliance on automated content takedowns needs to be reconsidered: MediaNama’s take [read]
  • ‘Social media sites need to stop outsourcing online content moderation,’ says an NYU report [read]