
Facebook, Google, TikTok, and Twitter make commitments to tackle the abuse of women on their platforms

While the move is a key first step, it also raises questions about addressing the root problem, curbing free speech, and establishing a time frame.

Facebook, Google, TikTok, and Twitter made commitments at the UN Generation Equality Forum in Paris on July 1 to tackle online abuse of women on their platforms. This is the outcome of a 14-month initiative led by the Web Foundation, involving people from tech companies, governments, digital rights organisations, civil society, and women affected by online abuse.

38% of women globally have directly experienced abuse online, and this figure rises to 45% among Gen Z and Millennial women, the Web Foundation stated in its press release. The abuse is far worse for women of color, women from the LGBTQ+ community, and other marginalised groups. “The consequences can be devastating, causing mental and physical harm, silencing women’s voices, and delivering an economic blow to those who rely on tech platforms for their livelihoods,” the release added.

“With their resources and reach, these four companies have the power to curb this abuse and improve online experiences for hundreds of millions of women and girls,” Web Foundation Senior Policy Manager Azmina Dhrodia said.

What is the focus of these commitments?

The commitments focus on two major themes:


Curation: Focused on giving women more control and choice over what they see online, when they see it, and how they see it. This includes:

  • Offering more granular settings such as who can see, share, comment, or reply to posts
  • Providing easy access to safety tools
  • Proactively reducing the amount of abuse women see
  • Using simple and accessible language throughout the user experience

Reporting: Focused on improving the processes through which women report abuse.

  • Giving more guidance when reporting abuse
  • Providing additional ways for women to access help and support during the reporting process
  • Offering users the ability to track and manage their reports
  • Enabling greater capacity to address context and language

Does it address the root problem?

The two themes appear to revolve around what can be done about abusive content once it surfaces on a platform, but they don’t appear to address the root cause. What mechanisms will be put in place to prevent abuse in the first place, rather than reacting to it and preventing it from being shown to women?

“We have to take a really holistic approach to this issue because we know that abuse is multifaceted. Certainly in an ideal world, we would eliminate abuse. But we also know that misogyny, sexism, and hate speech happen in the outside world and the online world. So we can’t eliminate it entirely, but we can take measures to make the experience safer for women,” Emily Sharpe, Director of Policy, Web Foundation, told MediaNama in an interview.

Will platforms change algorithms if it’s hurting their business model?

Social media platforms run on algorithms that maximise the revenue they earn from ads. Sometimes, these algorithms favour abusive content and reward bad behaviour. Will the companies be willing to change these algorithms even if doing so affects their business model?

“One thing that’s really important to note around online gender-based violence is that 20 percent of women who have experienced abuse leave the platform. If a significant proportion of the people who were on your platform decide to leave because they’re experiencing abuse on your platform, that’s not good for your business. Companies are certainly interested in ensuring that women decide to stay on their platform,” Emily Sharpe said.

Will proactively filtering content curb free speech in any way?

One of the solutions appears to suggest proactively filtering abusive content, but can this result in the creation of echo chambers or the curbing of free speech, where viewpoints different from the user’s are considered abusive and filtered out?

“Filtering content is never going to be perfect. But abusive content is not valuable content. That’s not about getting a different viewpoint. That’s about shooting women’s free speech down. So we all talk about how we want to make sure that the free speech of those who are posting potentially abusive content isn’t limited. But what about the free speech of women who want to be able to be active on these platforms, if they’re seeing and receiving so much abuse that they shut themselves down?” Emily Sharpe said.


When asked if the Web Foundation has created any definition for what constitutes abusive content, Sharpe said that “creating a more standard definition of what constitutes abuse is the next big challenge.”

What can we expect from the four social media companies?

The four platforms, which account for the majority of social media users, have committed to building solutions around the two themes (curation and reporting) highlighted above. They are expected to explore and test the prototypes and solutions developed in the workshops over the last year and implement the solutions in a timebound and transparent manner. This includes regularly publishing and sharing meaningful data and insights on their progress. However, we do not yet have a set time frame from the platforms.

The Web Foundation has assumed the responsibility of reporting annually on how the tech companies have progressed. “Creating the template and ensuring that companies are reporting accurately on the progress being made is our next project,” Sharpe said.

“Over the coming months, we’ll begin to develop and test a number of potential product changes to our platform that address these priorities and help make TikTok an ever safer place for women,” TikTok said in a blog post. The short-form video platform already gives users control over who can comment and prompts users to reconsider the impact of their words before posting a comment that may be inappropriate or unkind.

“Abuse and harassment disproportionately affect women and underrepresented communities online,” Twitter said. While the platform already has “conversation controls, prompts to reconsider abusive Tweets, and upcoming safety mode for screening Tweets, there is still much work to be done,” the micro-blogging platform added.

Open letter from prominent women around the world

More than 200 prominent women from around the world, including former heads of state, journalists, activists, and artists, signed an open letter sent to the CEOs of the four social media companies asking them to turn their promises into action.


“The internet is the town square of the 21st century. It is where debate takes place, communities are built, products are sold and reputations are made. But the scale of online abuse means that, for too many women, these digital town squares are unsafe. This is a threat to progress on gender equality,” the letter stated.

“Your decisions shape the way billions of people experience life online. With your incredible financial resources and engineering might, you have the unique capability and responsibility to ensure your platforms prevent, rather than fuel, this abuse,” the letter added.

“Imagine what you can achieve if you follow through on commitments to build safer platforms: an online world where a journalist can engage with feedback on her reporting, not assassinations of her character. Where a politician may read complaints about her policies, but not threats of rape and murder. Where a young woman can share what she wants to on her terms, knowing there are systems to keep her safe and hold harassers accountable.”

“If you build this better internet for women, you will build a better internet for everyone,” the letter concluded.

MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
