The European Commission has proposed rules giving online platforms one hour to take down terrorism-promoting content after they have been notified of it. Firms that do not comply can be fined up to 4% of their annual global turnover, the rules say. This is the same maximum penalty that the General Data Protection Regulation (GDPR) levies on firms that violate multiple provisions of data protection law. The provision was first recommended in March.

“The industry should, through voluntary arrangements, cooperate and share experiences, best practices and technological solutions, including tools allowing for automatic detection,” the Commission said in its March release, essentially nudging companies to create industry-wide filters. This, it said, would help small companies take down content under similar time restrictions.

“You wouldn’t get away with handing out fliers inciting terrorism on the streets of our cities – and it shouldn’t be possible to do it on the internet, either,” Julian King, Commissioner for the EU’s Security Union, said.

“Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too,” the EFF said in a blog post criticising the move, referring to the EU’s just-approved Copyright Directive that requires social media companies to effectively filter all posts and prevent the publication of copyrighted works.

Technology companies and terrorist content

  • In April this year, Twitter reported removing nearly 2.7 lakh accounts between July 1, 2017 and December 31, 2017, for violations related to the promotion of terrorism.
  • In December 2016, Facebook, Microsoft, Twitter and YouTube teamed up to curb the spread of terrorist content online with a shared industry database of ‘hashes’ of violent terrorist imagery and terrorist recruitment videos or images that had previously been removed.
  • In the same year, Microsoft updated its content policies to remove content that promoted terrorist violence or recruitment for terrorist groups.
  • Google said that it would show anti-terror links and ads to users who type in words related to extremism.
  • Facebook, too, had updated its community standards to curb terrorism-related material.
  • Twitter reported that it had suspended 235,000 accounts in the preceding six months for promoting violent extremism.
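The shared hash database mentioned above works by letting one platform record a fingerprint of removed content so that other platforms can block re-uploads of the same file. A minimal sketch of that lookup idea follows; note that the real industry database uses robust perceptual hashes that survive re-encoding, whereas this illustration uses a plain SHA-256 digest, and all function and variable names here are hypothetical:

```python
import hashlib

# Hypothetical shared database of hashes of previously removed content.
# The actual industry system uses perceptual hashes; SHA-256 is used
# here only to sketch the register-then-match workflow.
shared_hash_db = set()

def register_removed_content(data: bytes) -> str:
    """Hash removed content and add its digest to the shared database."""
    digest = hashlib.sha256(data).hexdigest()
    shared_hash_db.add(digest)
    return digest

def matches_known_content(data: bytes) -> bool:
    """Check an incoming upload against the shared hash database."""
    return hashlib.sha256(data).hexdigest() in shared_hash_db

# One platform registers content it removed; another checks an upload.
register_removed_content(b"bytes of a removed video")
print(matches_known_content(b"bytes of a removed video"))  # True
print(matches_known_content(b"some novel upload"))         # False
```

Sharing only hashes, rather than the content itself, lets companies cooperate without redistributing the illegal material; the trade-off with exact digests is that any byte-level change defeats the match, which is why perceptual hashing is used in practice.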