
Twitter has taken additional steps to curb users who promote violent extremism, suspending 235,000 accounts in the past six months, the company said in a blog post. It added that it has made significant progress in preventing these users from returning to the platform.

Twitter has also expanded its review teams and upgraded its tools and language capabilities. It has collaborated with other social platforms as well, sharing information and best practices for identifying extremist content. In February, Twitter said it had suspended 125,000 accounts since mid-2015, which brings the total to 360,000.

The company added that daily suspensions are up over 80% since last year, with spikes immediately after terrorist attacks. Twitter will report on these efforts through regular updates to its transparency report. In addition to suspending accounts, the social network has expanded partnerships with organizations that counter violent extremism, such as Parle-moi d’Islam (France), Imams Online (UK), Wahid Foundation (Indonesia), The Sawab Center (UAE) and True Islam.

Rising abuse on the platform

Twitter has acknowledged that rising abuse is a concern for the platform. “Abuse is not part of civil discourse. It shuts down conversation. It prevents us from understanding each other. Freedom of expression means little if we allow voices to be silenced because of fear of harassment if they speak up. No one deserves to be the target of abuse online, and it has no place on Twitter. We haven’t been good enough at ensuring that’s the case, and we must do better,” CEO Jack Dorsey said in a call with analysts.

Last year, Twitter improved its harassment-reporting process to cover issues such as “impersonation, self-harm and the sharing of private and confidential information”. With the reporting process now streamlined, the changes are expected to be rolled out globally in the coming weeks.

EU Commission tie-up with tech companies

In June, the European Commission, along with Facebook, Twitter, YouTube and Microsoft, released a code of conduct which states that, in addition to criminal law sanctions, online hate speech needs to be reviewed and removed (or blocked) by online intermediaries within 24 hours. This applies even to social media companies that have not partnered with the Commission.

Other tech companies blocking terror-related posts

– In May, Microsoft updated its content policies to remove content that promotes terrorist violence or recruitment for terrorist groups. Users can report terrorism-related content to Microsoft via a form, and the company will then remove it.

– In February, Google said it would show anti-terror links and ads to users who search for terms related to extremism, in a bid to reduce radicalization.

– In March last year, Facebook updated its community standards to ban direct threats, support for dangerous organizations, bullying and harassment, sexual violence and exploitation, and hate speech. Such content may still be allowed if it is shared as social commentary on terrorist activity or organized criminal activity.