Update: After announcing that it will make significant revisions to its policy against online harassment, Twitter has now released its “calendar of the upcoming changes we plan to make to the Twitter Rules, how we communicate with people who violate them, and how our enforcement processes work.”

By the end of this week (October 27), the microblogging platform expects to finalise its policy against non-consensual nudity and define how suspension appeals are processed.

Here is the timeline for November, December and January.

Twitter also explained in detail how it goes about making a policy change:

Making a policy change requires in-depth research around trends in online behavior, developing language that sets expectations around what’s allowed, and reviewer guidelines that can be enforced across millions of Tweets. Once drafted, we gather feedback from our teams and Trust & Safety Council. We gather input from around the world so that we can consider diverse, global perspectives around the changing nature of online speech, including how our rules are applied and interpreted in different cultural and social contexts. We then test the proposed rule with samples of potentially abusive Tweets to measure the policy effectiveness and once we determine it meets our expectations, build and operationalize product changes to support the update. Finally, we train our global review teams, update the Twitter Rules, and start enforcing it.

Earlier (October 18): In a series of tweets, Twitter co-founder and CEO Jack Dorsey announced that the microblogging platform will henceforth “take a more aggressive stance” against online harassment. Twitter is in the process of revising its policies related to online harassment – which are expected to be rolled out over the next few weeks – especially those covering “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorify violence.”

In a letter sent to its Trust & Safety Council, which reviews policy changes and provides feedback on them, Twitter elaborates on these changes. The Council is currently reviewing Twitter’s latest proposals. Here’s a brief look at what Twitter wants to incorporate:

Non-consensual nudity

We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity and/or if a user makes it clear they are intentionally posting said content to harass their target. We will do a full account review whenever we receive a Tweet-level report about non-consensual nudity. If the account appears to be dedicated to posting non-consensual nudity then we will suspend the entire account immediately.

Unwanted sexual advances

We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation. Once our improvements to bystander reporting go live, we will also leverage past interaction signals (eg things like block, mute, etc) to help determine whether something may be unwanted and action the content accordingly.
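Twitter has not said how these interaction signals will be combined. Purely as an illustration, the sketch below (all names, weights and the threshold are hypothetical assumptions, not Twitter’s actual system) shows one way past interactions such as blocks, mutes and earlier reports could feed a rough “unwanted” score for a bystander-reported reply.

```python
# Hypothetical illustration only: Twitter has not published how these signals
# are combined. This sketch assumes a simple weighted score over past interactions.
from dataclasses import dataclass


@dataclass
class InteractionHistory:
    """Past interactions between the reported account and the recipient."""
    blocked_by_recipient: bool = False
    muted_by_recipient: bool = False
    prior_reports_by_recipient: int = 0
    mutual_follow: bool = False


def unwanted_score(history: InteractionHistory) -> float:
    """Combine past interaction signals into a rough 'unwanted' score in [0, 1]."""
    score = 0.0
    if history.blocked_by_recipient:
        score += 0.5   # a block is a strong signal that contact is unwanted
    if history.muted_by_recipient:
        score += 0.3
    score += min(history.prior_reports_by_recipient * 0.1, 0.3)
    if history.mutual_follow:
        score -= 0.2   # an existing mutual relationship weakens the signal
    return max(0.0, min(score, 1.0))


# Example: the recipient has blocked the sender and reported them twice before.
history = InteractionHistory(blocked_by_recipient=True, prior_reports_by_recipient=2)
if unwanted_score(history) >= 0.6:   # the threshold here is an assumption
    print("Route the bystander report for enforcement review")
```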

Hate symbols and imagery

Twitter is in the process of defining the extent of what will be covered as part of this policy. For starters, hateful imagery and hate symbols will be treated on par with adult content and graphic violence.

Violent groups

Groups or organisations with a history of inciting violence, or of using violence as part of their activities, will come under the ambit of this policy. Twitter will provide more details, including the factors it will take into account when identifying these groups, in due course.

Tweets that glorify violence

Twitter’s policy already covers tweets that threaten violence in any form; once the revised policies take effect, it will also take action against tweets that glorify or condone acts of violence.

Read the entire copy of the letter here.

Dorsey also addressed the inconsistent enforcement of Twitter’s current policies, under which abusive behaviour reported by a user is dealt with effectively one day, while a similar report goes unaddressed on another.

It should be pointed out that Twitter was, in a way, forced to make these changes: following the temporary suspension of American actress and director Rose McGowan’s account, a large number of Twitter users decided to boycott the platform in protest.

Twitter’s recent measures to combat online harassment

  • In February this year, Twitter improved its mute and block features and changed its policy to stop repeat offenders from creating new accounts.
  • In December 2016, Twitter teamed up with Facebook, Microsoft, and YouTube to curb the spread of terrorist content online. The companies created a shared industry database of ‘hashes’ (digital fingerprints) of violent terrorist imagery and terrorist recruitment videos or images that had previously been removed. By sharing this data, the companies can identify potential terrorism-related content and remove it from their respective platforms (see the sketch after this list).
  • Prior to this, in November last year, Twitter introduced new features to help users deal with abuse, bullying and harassment on the platform, including an update to the ‘mute’ function, direct reporting of abusive tweets, and retraining of the Twitter support team to handle reports faster.
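The companies have described the shared hash database only at a high level. As a rough sketch of the general idea, the snippet below uses a plain SHA-256 digest as the content fingerprint; the real system is not public and very likely relies on perceptual hashing, which can also match re-encoded or slightly altered media. All function and variable names here are hypothetical.

```python
# Minimal sketch of hash-based matching against a shared database.
# Assumption: a plain SHA-256 digest stands in for the content fingerprint.
import hashlib

# Hashes contributed by participating companies for previously removed content.
shared_hash_database: set[str] = set()


def content_hash(data: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()


def contribute_removed_content(data: bytes) -> None:
    """A platform adds the hash of content it has removed to the shared database."""
    shared_hash_database.add(content_hash(data))


def is_known_terrorist_content(data: bytes) -> bool:
    """Another platform checks a new upload against the shared hashes."""
    return content_hash(data) in shared_hash_database


# Example: one platform removes a video and shares its hash; another platform
# can then flag an identical upload without reviewing it from scratch.
removed_video = b"...bytes of a removed recruitment video..."
contribute_removed_content(removed_video)
print(is_known_terrorist_content(removed_video))  # True
```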

Twitter had earlier said that “because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct. We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve.”