Jack Dorsey, CEO of Twitter, announced in a series of tweets a new effort to improve the health of conversations on the platform.

We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.

Why? We love instant, public, global messaging and conversation. It’s what Twitter is and it’s why we’re here. But we didn’t fully predict or understand the real-world negative consequences. We acknowledge that now, and are determined to find holistic and fair solutions.

We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers. We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough. While working to fix it, we’ve been accused of apathy, censorship, political bias, and optimizing for our business and share price instead of the concerns of society. This is not who we are, or who we ever want to be.

We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we now need.

Recently we were asked a simple question: could we measure the “health” of conversation on Twitter? This felt immediately tangible as it spoke to understanding a holistic system rather than just the problematic parts. If you want to improve something, you have to be able to measure it. The human body has a number of indicators of overall health, some very simple, like internal temperature. We know how to measure it, and we know some methods to bring it back in balance.

Our friends introduced us to the concept of measuring conversational health. They came up with four indicators: shared attention, shared reality, variety of opinion, and receptivity. Read about their work here: Measuring the health of our public conversations

We don’t yet know if those are the right indicators of conversation health for Twitter. And we don’t yet know how best to measure them, or the best ways to help people increase individual, community, and ultimately, global public health. What we know is we must commit to a rigorous and independently vetted set of metrics to measure the health of public conversation on Twitter. And we must commit to sharing our results publicly to benefit all who serve the public conversation.

We simply can’t and don’t want to do this alone. So we’re seeking help by opening up an RFP process to cast the widest net possible for great ideas and implementations. This will take time, and we’re committed to providing all the necessary resources. RFP: Twitter health metrics proposal submission

We’re going to get a lot of feedback on this thread and these ideas, and we intend to work fast to learn from and share the ongoing conversations. I will do a Periscope next week to share more details and answer questions.

Thanks for taking the time to read and consider, and also, come help us: Twitter Careers.

Twitter has been criticized over the years for being indifferent to the use of its platform for harassment, bullying, abuse, automated bot spam and more. Recent efforts to address this have produced mixed results and have, at times, drawn criticism of their own.

Efforts to reduce the visibility of content that seemed spammy or abusive using automated algorithms have also led to accusations of “shadow banning” on the basis of political views. (Shadow banning reduces the visibility of an account without blocking it: the account’s messages no longer appear in other users’ activity streams or search results, while the person posting them simply believes that no one is interacting with their messages.)
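The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea only, not Twitter’s actual implementation; the account names, data structures, and the flagging set are all invented for the example.

```python
# Hypothetical sketch of shadow banning: a flagged author's posts are
# filtered out of everyone else's streams, but the author still sees
# their own posts, so nothing looks wrong from their side.

# Accounts flagged by some (assumed) abuse/spam heuristic.
SHADOW_BANNED = {"spam_account"}

posts = [
    {"author": "alice", "text": "hello world"},
    {"author": "spam_account", "text": "buy followers now"},
]

def visible_posts(viewer, posts):
    """Return the posts a given viewer would see in streams or search."""
    return [
        p for p in posts
        if p["author"] not in SHADOW_BANNED or p["author"] == viewer
    ]

# Other users never see the flagged account's posts, while the flagged
# account's own timeline looks completely normal to them.
print([p["author"] for p in visible_posts("alice", posts)])
print([p["author"] for p in visible_posts("spam_account", posts)])
```

The key design point is the `p["author"] == viewer` escape hatch: it is what makes the ban invisible to the banned party, which is precisely why the practice draws accusations of opacity and bias.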

Twitter is indeed making the reporting of abusive content easier and more intuitive. However, the action taken on such reports can vary wildly depending on how influential the reported account is.

Many people running awareness campaigns say that the top trends on Twitter are now almost impossible to break into without an organized campaign of volunteers or recruits flooding the tag, because social media “services” now sell organized campaigns for visibility; “tweetdecking” has become a business. These trends can be nonsensical and meaningless, populated almost entirely by bots or by people earning an income from posting content on Twitter. When users can sell services that game the algorithm, they effectively turn an entire section of the service into an income stream and reduce its utility for regular readers. Twitter needs a greater effort to crack down on such accounts to allow a more organic representation of the content on the platform.

Verified accounts engaging in incitement to violence or communal hatred usually suffer no consequences on Twitter. At most, the account may be required to delete specific tweets calling for mass murder or issuing a rape threat, but it continues to operate and remains verified (implying a degree of credibility).

It remains to be seen what results this new initiative will bring; the Periscope session planned for this week may make some of it clearer.

MediaNama’s take

Reliably filtering content is complex and challenging on a platform where messages move as fast as they do on Twitter: a message often has its greatest impact within minutes of being posted, and a report-and-review process may remove the tweet or account long after the damage is done. Improving the speed of reporting and enforcement was a step in the right direction; proactively identifying problematic content may also help.

The real concern here is one of transparency and trust. If the consequences of violations are not applied evenly across users, accusations of bias and censorship will persist and keep undermining Twitter’s credibility. If a verified account with thousands of followers can issue a relentless stream of hate propaganda and suffer no consequence beyond being forced to delete the occasional worst tweet, it will not matter how many small accounts are caught and blocked, because the larger accounts have many thousands of times the reach of random, anonymous, disposable accounts. If Jack is indeed serious about improving the quality of conversations, Twitter will need a spine: it must uphold its policies and stand up to influential people as well.

It is not uncommon to see prominent people, noted for doing good work in the public interest, delete their Twitter accounts over the toxic messages they receive. Unless Twitter gets control over its content, it will remain a ripe target for those manipulating narratives, and a hostile place for the soft-spoken and more reclusive people who do not wish to court constant unpleasantness every time they express a view.