
Facebook, Microsoft, Twitter and YouTube are teaming up to curb the spread of terrorist content online. The companies will create a shared industry database of ‘hashes’ (digital fingerprints) of violent terrorist imagery and terrorist recruitment videos or images that have previously been removed from their services. By sharing this data, potential terrorism-related content can be identified and removed from their respective platforms.

These companies will start the initiative by sharing the most extreme and egregious content – content that is most likely to violate all of the companies’ content policies. Once a participating company marks certain content as objectionable and shares its hash, the other companies can use that hash to find matching content and subsequently remove it from their own services if appropriate. Each company will continue to independently determine which hashes to share and what content to remove from its own platform.
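To make the workflow concrete, here is a minimal sketch, in Python, of how hash sharing and matching could work. All names here (`shared_hash_db`, `fingerprint`) are hypothetical, and SHA-256 is used purely for simplicity; the companies have not published their implementation, and systems of this kind typically rely on perceptual hashes (along the lines of Microsoft’s PhotoDNA) that still match media after resizing or re-encoding.

```python
import hashlib

# Hypothetical shared industry database of content hashes.
# In practice this would be a jointly operated service, not an in-memory set.
shared_hash_db: set[str] = set()

def fingerprint(media: bytes) -> str:
    """Return a fingerprint for a piece of media.

    SHA-256 only matches byte-identical files; a production system would
    likely use a perceptual hash that tolerates re-encoding.
    """
    return hashlib.sha256(media).hexdigest()

def share_hash(media: bytes) -> None:
    """A participating company flags content and contributes its hash."""
    shared_hash_db.add(fingerprint(media))

def matches_shared_db(media: bytes) -> bool:
    """Check an upload against the shared database.

    A match flags the content for review under the receiving company's
    own policies; it does not trigger automatic removal.
    """
    return fingerprint(media) in shared_hash_db
```

The key design point mirrored here is that only hashes move between companies, never the content itself, and each company still applies its own policies before removing a match.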

The collaboration will look to involve other companies in the future as well. This is important: a shared database would also help smaller websites automatically filter out unwanted content, something they might not otherwise have the resources to do. Note that each participating company has its own definition of terrorist content, so the content removed and shared is bound to differ. Each company will also continue to apply its own transparency and review practices for any government requests.

Interestingly, in June the European Commission, along with Facebook, Twitter, YouTube and Microsoft, released a code of conduct which stated that, alongside criminal-law sanctions, online hate speech needs to be reviewed and removed (or blocked) by online intermediaries within 24 hours. This applies even to other social media companies that have not partnered with the Commission.

Curbing the spread of terrorism online:

This year, Microsoft updated its content policies to remove content that promotes terrorist violence or recruitment for terrorist groups, while Google said that it would show anti-terror links and ads to users who type in words related to extremism. Facebook too updated its community standards to curb terrorism-related material. Twitter also reported in August that it had suspended 235,000 accounts in the preceding six months for promoting violent extremism.

Note that at the time, Twitter added that it had collaborated with other social platforms, sharing information and best practices for identifying extremist content. The current collaboration takes this a step further, making it easier for the major social media companies to share hashes and curb unwanted content. Eventually letting other websites use this database to remove content from their own platforms could have a huge impact on the visibility of terrorism-related promotion online.