On July 9, Twitter announced that its rules against hateful conduct will now cover language that dehumanizes others on the basis of religion. Twitter defines dehumanization as:

Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to a tool for some other purpose (mechanistic).

Henceforth, tweets like the following will be removed when they are reported:

[Image: Example tweets that dehumanise others on the basis of religion will be deleted. Source: Twitter]

Such tweets sent before July 9 will only be deleted; they will not lead to account suspensions, since they predate the rule. The rule change took into account more than 8,000 responses from people located in more than 30 countries.

The Sisyphean task of tackling hate speech

Removing hateful content from social media has become a sore point for both governments and social media companies as they struggle to strike a balance between protecting freedom of speech and curbing hate speech.

  • France’s National Assembly approved a draft bill on July 5 that directs social media companies such as Facebook and Twitter to take down any hateful content from their platforms within 24 hours. Failure to do so would result in imprisonment and a fine of up to €1.25 million.
  • On June 5, the Sri Lankan Cabinet approved a proposal to amend the country’s Penal Code and Criminal Procedure Code to take action against people spreading fake news, including statements that affect national security and incite violence between communities. Under the proposal, those caught spreading fake news and hate speech on social media could face a five-year jail term and a fine of up to Sri Lankan Rs 10 lakh (about 4 lakh Indian rupees).
  • On June 27, Twitter announced that it would begin labeling tweets from political figures that it could otherwise have taken down for breaking its rules. The new policy applies to government officials/appointees and political candidates who have verified accounts with more than 100,000 followers, and will be used rarely. Before users can view tweets that the company has flagged for violating its guidelines, they will need to click on a notice that reads: “The Twitter Rules about abusive behaviour apply to this Tweet. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain available.”
  • On June 20, Facebook announced that, in Myanmar, it had started to reduce the distribution of content from people who have consistently violated its community standards in the past. It said it will use “learnings” to explore expanding this approach to other markets. In cases where it identifies individuals or organisations that “more directly promote or engage violence”, the company said it would ban those accounts. Facebook said it has also extended its use of AI to recognise posts that may contain graphic violence and comments that are “potentially violent or dehumanising”.
  • YouTube updated its policies on June 5, saying it would remove and prohibit hateful and supremacist content, including content that glorifies Nazis, denies the Holocaust, or promotes one group’s superiority over another. Until then, YouTube had taken a “tough” stance towards videos with supremacist content but did not explicitly prohibit them. The move came in response to the backlash YouTube received on Twitter after it tweeted that Steven Crowder, a conservative US commentator with nearly 4 million YouTube subscribers, had not violated YouTube’s policies despite having used racist language and homophobic slurs to harass Carlos Maza, a Vox journalist, for nearly two years.

Efficacy of such moves: MediaNama’s take

While cracking down on hate speech online is necessary, it is crucial to understand that such measures alone are not enough. Technological solutions to social problems are not sustainable and, in most cases, are inadvisable.

When the president of the most powerful country actively denigrates groups of people on the basis of their religion and ethnicity, online and offline, and the prime minister of the world’s largest democracy stays mum as minorities in his country are lynched, it galvanises fringe elements (which have now become mainstream) in society. Taking legal action against such elements, and systematically and systemically changing social mindsets through education, would have more far-reaching consequences.

Governments and social media companies also need to bolster their media literacy and digital literacy initiatives, so that people, especially those who got access to social media before they had access to books, understand the tangible consequences that their 280 characters have on people and society.