
Twitter has added new measures to help users deal with abuse, bullying and harassment on the platform, including an updated ‘mute’ function, direct abuse reporting, and retrained support teams that can handle reports faster:

  • Making the “Mute” feature better: Users can now mute specific words or emoji from their notifications, so that messages containing them no longer display. Muting does not take the content down; it only hides it from the user who muted it. Previously, users had to block or mute entire accounts rather than specific keywords. Twitter says the feature will roll out to everyone over the coming days (a minimal sketch of this kind of per-user muting follows this list).
  • Direct abuse reporting will let users report content that violates Twitter’s hateful conduct policy, which covers targeting people on the basis of race, religion, disability, gender or disease, even if the hateful content is not directed at the reporting user. However, the company is not specific about what action it will take against reported accounts, or the threshold at which accounts get banned. Additionally, banned users can simply create new accounts, since only an email address is required to sign up.
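To make the muting mechanics concrete, here is a minimal sketch of per-user keyword muting of the kind described above. Everything in it (the MuteSettings class, the substring matching) is an illustrative assumption, not Twitter’s actual implementation or API:

```python
from dataclasses import dataclass, field

@dataclass
class MuteSettings:
    """Per-user muted words/emoji; matching is case-insensitive."""
    muted_terms: set = field(default_factory=set)

    def add(self, term: str) -> None:
        self.muted_terms.add(term.lower())

    def should_hide(self, text: str) -> bool:
        # Hide notifications containing a muted term for THIS user only;
        # the underlying tweet stays up for everyone else, which is the
        # behaviour the article describes.
        lowered = text.lower()
        return any(term in lowered for term in self.muted_terms)

settings = MuteSettings()
settings.add("spoiler")
settings.add("🙄")

notifications = ["Huge spoiler ahead!", "Nice photo", "ok 🙄 sure"]
visible = [n for n in notifications if not settings.should_hide(n)]
print(visible)  # ['Nice photo']
```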

Finally, the company says it has retrained its support teams, including special sessions on the ‘cultural and historical contextualization of hateful conduct’, to raise awareness and improve its response to abuse reports.

Not good enough

Twitter has drawn flak for being slow to take down hateful content, and the company admits as much: “because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct. We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve.”

However, Twitter’s current approach shies away from censoring hateful content itself, instead leaving it to the user to first see the content and then decide whether to mute it. While this makes sense from the point of view of remaining neutral, a more proactive policy, such as automatically filtering out hateful content (especially the really bigoted stuff) before it reaches users, would be more useful. The sketch below illustrates the difference between the two models.
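As a rough illustration of that distinction, this sketch contrasts per-user muting, where content stays up and each user hides it, with platform-level filtering, where content above some toxicity threshold never displays for anyone. The score_toxicity function and the threshold are assumed placeholders for whatever classifier and policy a platform might actually use:

```python
TOXICITY_THRESHOLD = 0.9  # assumed cut-off for clearly hateful content

def score_toxicity(text: str) -> float:
    """Placeholder scorer; a real system would use a trained classifier."""
    blocklist = {"<slur>"}  # stand-in terms for the sketch
    return 1.0 if any(term in text.lower() for term in blocklist) else 0.0

def visible_to_user(text: str, user_muted_terms: set) -> bool:
    """Twitter's current model: content stays up platform-wide;
    each user individually hides what they have muted."""
    lowered = text.lower()
    return not any(term in lowered for term in user_muted_terms)

def allowed_on_platform(text: str) -> bool:
    """The more proactive model the article argues for: content scoring
    above the threshold is filtered out before anyone sees it."""
    return score_toxicity(text) < TOXICITY_THRESHOLD
```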

Previous measures

This is not the first time Twitter has taken steps against hateful content. In August, the company said it had suspended 235,000 accounts for promoting terrorism, having done the same with 125,000 accounts in February. Last year, it improved its harassment-reporting process to cover issues like “impersonation, self-harm and the sharing of private and confidential information”.