
The Lesser Known World of Harmful Content on YouTube

An NYU Stern report explores harmful speech spread on YouTube, and the gaps in public information on the platform’s content moderation policies

A new report by the NYU Stern Center for Business and Human Rights outlines four recommendations for YouTube, and its parent company Google, to monitor and curb harmful content amplified on the platform.

Co-authored by Paul M. Barrett and Justin Hendrix, ‘A Platform Weaponized’ explores how YouTube ends up spreading harmful content, and how this understudied phenomenon can be stemmed. While the report focuses largely on the United States, its recommendations have special salience for the Indian government, research community, and citizens. India is home to the world’s largest YouTube user base, with 450 million users, and unchecked polarising content and misinformation on the platform have recently fuelled communal violence in the country.

While the authors do not offer an explicit definition of what constitutes ‘harmful content’, the term broadly covers ‘political disinformation, public health myths, and incitement of violence.’ The report largely discusses examples of such speech propagated by the political right across countries. It also offers two specific recommendations to the United States government to enhance federal oversight of harmful content on social media.



How Much Do We Know About How YouTube Functions? 

Users can access a wide range of harmful speech on the platform despite YouTube’s existing content moderation policies, such as its ‘Community Guidelines’, which prohibit ‘pornography, incitement to violence, harassment, or hate speech’. However, it is difficult to gauge the efficacy of these guidelines given the limited public information available about their enforcement.

  • Some scholars suggest that YouTube’s algorithms often recommend videos that lead users down a ‘rabbit hole of extremism’. Other studies suggest that the algorithms ‘mildly’ nudge viewers towards conservative content, and that this bias increases as recommendations accumulate. This is significant, as recommendations account for about 70% of the total time users spend on the platform.
  • Additionally, a number of YouTube accounts spreading harmful speech continue to operate with impunity; in India, the many accounts promoting Hindutva-backed anti-Muslim rhetoric are a key example. Complicating the issue further, the IT Rules, 2021, authorise the government to moderate content on social media, leaving some amount of regulation in the hands of the State.
  • In December 2019, YouTube took steps to reduce the discovery of ‘borderline’ content on the platform: content that is potentially harmful or spreads misinformation but does not explicitly violate platform guidelines. However, while YouTube reported a drop in views of such content, it did not offer concrete examples of what borderline content actually is, or say whether this drop extended to countries beyond the U.S., which together constitute 80% of its user base.
  • Google regularly discloses how many channels it takes down for promoting harmful content: between October and December 2021, it took down 3.9 million channels for ‘spam, misleading content, or scams’. However, Google does not disclose how many channels break the rules in total, or what proportion of that figure 3.9 million represents. And while Google provides a breakdown of how many of these accounts were removed by its automated screening system, it offers no information on the nature of these takedowns or on erroneous suspensions.
  • In April 2021, Google adopted a new metric, the Violative View Rate (VVR), to measure how often a YouTube user encounters harmful videos. Human reviewers manually analyse YouTube views to estimate how often violative content slips through moderation systems. By the end of 2021, the figure stood at 0.12-0.14%. But in the absence of information on the number of views accounted for, what these percentages represent for a platform of YouTube’s scale remains unclear. Together, these incomplete data points make it difficult to estimate the success of YouTube’s efforts to combat harmful speech.
  • Recommendation: to build public trust in the platform, and to better inform political debates about its influence, the authors recommend that YouTube disclose more about how it ranks and recommends content. More clarity on why content is removed, and on the weight given to the metrics used to describe these practices, would also improve transparency and the public’s understanding of how certain types of content go viral.
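To put the VVR figures in perspective, here is a rough back-of-the-envelope calculation. The one-billion daily view total used below is a hypothetical assumption for illustration only; YouTube does not disclose the denominator behind its VVR.

```python
# What a Violative View Rate (VVR) of 0.12-0.14% implies in absolute terms.
# NOTE: the 1-billion daily view figure is a hypothetical assumption, not a
# number from the report or from YouTube's disclosures.

def violative_views(total_views: int, vvr_percent: float) -> int:
    """Estimated violative views for a given total and VVR percentage."""
    return round(total_views * vvr_percent / 100)

# Out of every 10,000 views, a VVR of 0.12-0.14% means 12-14 violative views.
print(violative_views(10_000, 0.12), violative_views(10_000, 0.14))  # 12 14

# Against a hypothetical 1 billion views per day, the same rate would imply
# over a million violative views slipping past moderation daily.
print(violative_views(1_000_000_000, 0.12))  # 1200000
```

This is why the missing denominator matters: a sub-0.2% rate can still represent an enormous absolute volume of harmful viewing on a platform of YouTube’s scale.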

Why Are YouTube’s Content and Moderation Policies Difficult to Study?

The authors contend that compared to other social media platforms like Twitter and Facebook, less scholarly research is available on YouTube’s content and moderation policies. 

  • This is partly because large-scale video analyses are resource-intensive and expensive for social science researchers studying the platform; analyses of more text-based platforms like Facebook or Twitter are comparatively inexpensive.
  • However, some scholars contend that the platform makes itself ‘almost inscrutable’ by design, stymieing research into its moderation policies. For example, YouTube provides only a few application programming interfaces (APIs) through which the public can collect platform data. The APIs themselves offer limited data: they share access to content and metadata only as they exist at the time of the query, and provide no historical data. Such data is essential for understanding how videos attain virality over time, a particularly pertinent aspect of how harmful content proliferates and becomes popular on the platform.
  • The lack of data narrows the scope of information available to social scientists to work with. With only ‘glimpses’ of the platform’s functioning available, the impact of one of the world’s most influential social media platforms on the proliferation of certain kinds of speech remains vastly understudied.
  • Recommendation: in the interest of expanding public awareness of how the platform operates, the authors urge YouTube to widen the types of data available to researchers studying it. Additionally, some researchers have suggested that they be allowed to retrieve randomised samples of videos on the platform. This is especially useful for search queries with a vast number of results, as it lets researchers derive findings on important issues from a smaller, randomised sample set.
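The API limitation described above can be seen in practice. The sketch below assumes the standard YouTube Data API v3 `videos.list` endpoint (the video ID and key are placeholders): it retrieves a video’s public statistics, but the response is only a snapshot of counts at query time. A researcher who wants a view-count trajectory must poll repeatedly and build their own longitudinal dataset.

```python
# Sketch: querying the YouTube Data API v3 for a video's public statistics.
# The API returns only a snapshot of counts as of the moment of the request;
# there is no endpoint for a video's historical view-count time series.
import json
import urllib.request
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3/videos"

def build_stats_request(video_id: str, api_key: str) -> str:
    """Build a videos.list request asking for current snippet + statistics."""
    params = {"part": "snippet,statistics", "id": video_id, "key": api_key}
    return f"{API_BASE}?{urlencode(params)}"

def fetch_stats(video_id: str, api_key: str) -> dict:
    """Fetch the current (snapshot-only) statistics for one video."""
    with urllib.request.urlopen(build_stats_request(video_id, api_key)) as resp:
        data = json.load(resp)
    # 'statistics' holds viewCount, likeCount, etc. as of *now* -- no history.
    return data["items"][0]["statistics"]

# Usage (requires a real API key and a real video ID):
# print(fetch_stats("VIDEO_ID", "YOUR_API_KEY"))
```

Reconstructing virality curves this way is exactly the kind of resource-intensive workaround the authors say narrows the field of researchers who can study the platform.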

Who Reviews Harmful Content on YouTube?

The report cites social media monitors who argue that harmful content on YouTube has a more damaging impact in countries outside the U.S. The company rejects this charge, arguing that while its automated review system flags content at scale, its ‘20,000 people involved in content moderation’ support regional moderation interests too. Nonetheless, the authors suggest that the platform remains a key tool for exacerbating ethnic and regional conflicts around the world.

  • In India, YouTube has exacerbated the spread of pre-existing Islamophobic discourse: many channels espousing such views have over a million subscribers, while others have been felicitated with a ‘Silver Play Button’ award for crossing one lakh (100,000) subscribers. In Brazil, the platform has been used to mainstream the far-right politics of President Jair Bolsonaro. In Myanmar, from 2020 onwards, multiple pro-army channels undermined the country’s government and elections; months later, a military coup swept the country. In Russia, YouTube has consistently been used to spread state-sponsored propaganda, and it is one of the few foreign social media platforms not banned by the government during the ongoing war in Ukraine.
  • State-sanctioned usage aside, commentators from India suggest that for user-created content, part of the issue may be language: monitoring content and taking it down requires an understanding of local dialects. In India, where people regularly switch between multiple languages in conversation, automated systems can struggle to pick up on harmful content.
  • Additionally, automated systems may confuse harmful speech with informative speech, failing to distinguish, for example, between a video glorifying Nazism and a video providing a sober historical account of it.
  • Recommendation: given the proliferation of harmful speech on YouTube outside America, the authors recommend that the platform expand its cadre of human content moderators. This requires significant investment in its operations in the Global South: beyond the moderators themselves, a host of technologists and policy professionals is needed to devise contextualised moderation policies that serve local interests. Psychological counselling should also be offered to reviewers, given the nature of the content they work with.

Does YouTube Engage With Civil Society and the Media Sufficiently?

The authors contend that YouTube has had a profoundly destabilising effect on civil society and the media. This demands more engagement with both to signal real, sustained commitment to addressing some of the evils it has facilitated. 

  • On YouTube, a user’s engagement with a video contributes to its virality, which grows the company’s ad revenue. Algorithms are thus designed to recommend the content that is engaged with most, which often turns out to be content that provokes fear, anger, or resentment. So while YouTube may not intend to promote harmful speech, its systems paradoxically end up doing exactly that. This contributes to the proliferation of misinformation on the platform, ranging from COVID-19 conspiracy theories to outright calls for violence, with deeply destabilising effects on civil society.
  • While Google has contributed $1 million to the International Fact-Checking Network, this is a tiny proportion of its multi-billion-dollar revenue in 2021. The authors argue that this is an insufficient commitment to stemming its subsidiary’s larger role in spreading harmful content. 
  • The digital advertising methods pioneered by Google and YouTube have additionally crippled the media sector by centralising advertising revenue in their hands, limiting the resources available for news-gathering.
  • Recommendation: while Google has contributed over $300 million over the last four years to support digital media houses, the authors recommend that it invest more heavily in local media houses to ensure their viability in a capricious digital news ecosystem. More engagement with the media and civil society organisations would not only deepen trust but also facilitate platform accountability on questions of content moderation and the effects of harmful content.

Specific Recommendations to the U.S. Government

YouTube, and other social media platforms, have reportedly been used in the United States to fuel misinformation about the 2020 Presidential election and COVID-19, while also spreading far-right discourse. 

  • President Joe Biden has made several remarks on the ill effects of social media platforms. However, little legislative progress has been made to address these issues materially.
  • Recommendation: the authors urge President Biden to push social media companies to self-regulate better. However, because the First Amendment to the U.S. Constitution protects freedom of speech, ‘the U.S. government cannot seek to cleanse online platforms of the many forms of expression that, while not illegal, are still harmful.’ To that end, the President can work toward shaping public debate on a future bill that provides ‘effective’ federal oversight of the industry.

YouTube, along with other social media platforms, can be better regulated if more information on its operations is available. 

  • As described earlier, opaque self-regulated content moderation systems fuel the spread of harmful content on the platform. 
  • Recommendation: the authors suggest that the Federal Trade Commission (FTC) be authorised to oversee social media companies. With expanded consumer protection authority, the FTC could order companies to disclose information about their moderation systems and evaluate whether these adequately protect consumer interests. This would ensure consistent compliance by companies while offering consumers some transparency. Again, such oversight should not conflict with First Amendment freedoms.

This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.

Read More

Written By

I'm interested in stories that explore how countries use the law to govern technology—and what this tells us about how they perceive tech and its impacts on society. To chat, for feedback, or to leave a tip: aarathi@medianama.com




MediaNama is the premier source of information and analysis on Technology Policy in India.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
