Google has announced four new steps it will take to tackle extremist content across all of its services, especially YouTube. The company said it has also been working with governments, law enforcement and civil society groups for this purpose.
Using technology to identify extremist content
Google said it has “used video analysis models to find and assess more than 50% of the terrorism-related content” it has removed over the past six months. The challenge, however, is to differentiate between a news report of a terror attack published by a legitimate news organization and the same video “uploaded in a different context by a different user.” The company will now develop new “content classifiers” that can identify and remove such content more quickly.
More independent experts in YouTube’s Trusted Flagger programme
Google claimed that Trusted Flagger reports are accurate over 90% of the time, three times the accuracy rate of the average flagger. It will therefore expand the programme, adding 50 expert NGOs to the 63 organizations currently taking part and providing them with operational grants. The new experts will come from specialized organizations with working knowledge of “issues like hate speech, self-harm, and terrorism,” and from counter-extremist groups with the expertise to identify content that might be used to radicalize and recruit extremists. The Trusted Flagger programme, started in 2012, gives participants access to a tool for reporting multiple videos that violate YouTube’s Community Guidelines at the same time. Last year this programme was renamed YouTube Heroes.
Warnings for videos that do not clearly violate content policies
Even if a video doesn’t clearly violate Google’s content policies, if it contains “inflammatory religious or supremacist content” it will henceforth “appear behind an interstitial warning.” Such content will also be ineligible for monetization, recommendations, comments and user endorsements. This will limit engagement with such videos and make them that much harder to find. The company said it thinks “this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
Note that earlier this month, YouTube updated its advertising policy for content creators to prevent content with hate messages or discrimination of any type from featuring ads and being monetized. Besides hateful and discriminatory content, YouTube will also prevent content with inappropriate use of family entertainment characters, as well as incendiary and demeaning content, from carrying advertisements. This update applied specifically to YouTube’s advertiser-friendly content guidelines, which determine eligibility for advertising. If a particular piece of content is deemed ‘hateful’ but complies with YouTube’s Terms of Service and Community Guidelines, it can still remain on the platform, even though it won’t be eligible for advertising.
YouTube’s Creators for Change programme
This programme works with YouTube content creators to promote voices against hate and discrimination. The company said it is working with the Jigsaw incubator to implement the Redirect Method more comprehensively. The method uses targeted online advertising to first identify “potential Isis recruits” and then redirect them to anti-terror videos.
Combating online trolls and hateful comments: In March this year, user engagement platform Vuukle tied up with Google to help develop a method for publishers to better combat online trolls and derogatory, hateful comments in online comment sections. Publishers can integrate Vuukle’s comment box plugin into their comments section without having to edit the backend or make changes to the code.