YouTube has updated its advertising policy for content creators to prevent content containing hate messages or discrimination of any type from featuring ads and being monetized. This is an update to the initial set of guidelines YouTube announced in March this year.
Besides hateful and discriminatory content, YouTube will also bar content featuring inappropriate use of family entertainment characters, as well as incendiary and demeaning content, from carrying advertisements. Note that this update applies specifically to YouTube’s advertiser-friendly content guidelines, which determine eligibility for advertising. If a particular piece of content is deemed ‘hateful’ but still satisfies YouTube’s Terms of Service and Community Guidelines, it can remain on the platform, even though it won’t be eligible for advertising.
Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual’s or group’s race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.
Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.
Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.
In a way, YouTube was forced into taking this decision because of repeated complaints from brands about their names being associated with what they considered inappropriate content. The first step in this direction was taken in September last year, when YouTube shed light on its advertiser-friendly content guidelines and clearly spelled out the types of content that won’t be eligible for monetization. But the matter really came to a head in February this year, when Disney cut ties with PewDiePie, one of YouTube’s most popular stars with over 53 million subscribers, over anti-Semitic posts he had made.
Steps taken to curb stealing and reuse of content
In April of this year, the Google-owned platform also stopped serving ads on YouTube Partner Program videos until a channel reaches 10,000 lifetime views, to curb stealing and reuse of content. The YouTube Partner Program, which began in 2007, enables content creators to monetize their content on the platform through advertisements, subscriptions and merchandise sales. Currently, YouTube pays creators a 55% share of the ad revenue from pre-roll ads that appear before their videos.
This is a step in the right direction. All major social media and content platforms, such as Facebook, Twitter and Google, have already clearly defined their content policies. However, as with most guidelines, certain grey areas persist. If content satisfies the Terms of Service and Community Guidelines provided by YouTube, how can it still be hateful or inappropriate? And conversely, if content is hateful and inappropriate enough to be ineligible for advertising, how can it still satisfy YouTube’s Terms of Service and Community Guidelines?