In its ongoing efforts to make content advertiser-friendly and crack down on hateful content, YouTube has been flagging harmless videos that contain sensitive keywords. Earlier this week, YouTube demonetised AskDrBrown, a Christian ministry’s channel, flagging its content as unsuitable for advertisers as well as users. In a statement issued on the ministry’s website, Brown said that they are “rejoicing in the opposition we have received”, and that “we will not be silenced”. The ministry said that several of its videos — even innocuous ones — had been flagged simply because a few contained keywords like ‘KKK’ in their titles or descriptions.

Why YouTube is demonetising videos

Earlier this year, YouTube announced a broader policy of demonetising accounts with “content that is harassing or attacking people based on their race, religion, gender or similar categories”. The policy came after reports that advertisers’ money was funding extremist groups, with some channels earning as much as £250,000 in ad revenue. This led to an international advertisers’ boycott of YouTube, which reportedly cost Google $750 million in revenue.

A similar incident saw pro wrestling channels categorised as ‘inappropriate’, with most wrestling-related videos demonetised as a result. Following backlash from the pro wrestling community, which relied heavily on revenue from YouTube, the videos were unflagged. However, wrestling still falls under the “restricted category”, which means these videos are not discoverable without turning SafeSearch off.

Recovering from the boycott

Last week, Verizon, which advertised heavily on YouTube before the boycott, announced that it would resume purchasing video ads after staying away from the service for five months.

More recently, YouTube has started notifying channels when they are demonetised: channel owners now receive a statement explaining why their video or channel has been demonetised, along with instructions on how to appeal the decision. Beyond the appeal process, though, YouTube’s precision in identifying offensive content in the first place is clearly still a work in progress.

On platforms and censorship (Nikhil adds)

While this particular instance involves demonetisation of content, it is similar to how censorship is dealt with on platforms.

These problems occur when platforms play the role of regulators: it is impossible for them to police billions of pieces of content or hours of video. We’ve seen this historically in the case of copyright claims, where even content that qualified as fair use used to be taken off YouTube, or incorrect claims were accepted. These practices often lean towards censorship, because platforms would rather be safe than sorry.

Algorithms that seek out specific keywords or phrases tend to be inaccurate because they don’t understand the context around the content. Facebook faces similar problems around fake news, and occasionally around iconic photographs.
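As a rough illustration of why that is: a filter that only matches strings treats a video criticising the KKK exactly the same as one promoting it. The snippet below is a deliberately simplistic sketch — the keyword list and function are hypothetical, not YouTube’s actual system:

    # Hypothetical keyword list, purely for illustration.
    SENSITIVE_KEYWORDS = {"kkk", "nazi", "isis"}

    def is_flagged(title: str, description: str) -> bool:
        # Bare substring matching: no notion of who is speaking, or why.
        text = (title + " " + description).lower()
        return any(keyword in text for keyword in SENSITIVE_KEYWORDS)

    # A sermon criticising the KKK is flagged exactly like a video promoting it.
    print(is_flagged("Why the KKK contradicts the Gospel", "A pastor responds to racism"))  # True
    print(is_flagged("Weekly Q&A with Dr. Brown", "Viewer questions answered"))             # False

The only way to tell the two apart is to understand what is being said about the keyword, which is precisely what this kind of matching cannot do.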

One approach platforms have typically taken is to allow people to report content. This somewhat reduces the burden of monitoring content, but even this system can be gamed if people, as part of a coordinated campaign, start reporting a video, account or comment. Sometimes the people reporting don’t have context either. There are no easy answers here, but at least for copyright claims, YouTube evolved an audio tracking system that could identify copyright violations and allow content owners to claim them. The DMCA also allows for a claim and counter-claim process, following which the issue has to go to court.
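To see how easily reporting can be gamed, consider a naive rule that pulls a video once its report count crosses a fixed threshold. The numbers and function below are hypothetical — no platform publishes how it actually weighs reports — but the weakness is the same:

    from collections import Counter

    REPORT_THRESHOLD = 100  # hypothetical cut-off, not any real platform's number

    reports = Counter()

    def report(video_id: str) -> bool:
        """Record one report; return True once the video is pulled for review."""
        reports[video_id] += 1
        return reports[video_id] >= REPORT_THRESHOLD

    # 100 accounts acting in concert look identical to 100 genuinely offended viewers.
    for _ in range(REPORT_THRESHOLD):
        pulled = report("harmless-sermon")
    print(pulled)  # True

A counter cannot distinguish an organised brigade from genuine complaints, which is why report volume alone is a poor signal.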

Issues such as hate speech, though, are much tougher to deal with.