In March, Twitter CEO Jack Dorsey pointed to four main indicators of ‘conversational health’ on any public platform. The indicators, developed by the analytics firm Cortico and MIT’s Laboratory for Social Machines, are shared attention, shared reality, variety of opinion, and receptivity. They reflect some of the biggest criticisms levelled at public platforms in the aftermath of the polarizing 2016 US presidential election. On Tuesday, Twitter announced in a blog post that it would filter certain tweets and accounts out of conversations and search results. The tweets that will be filtered, the company says, “don’t [violate our policies], but are behaving in ways that distort the conversation.”
That essentially refers to trolls. Just 1% of accounts make up the majority of those that users report for abuse. While these users aren’t typically bots and aren’t violating the terms of service in any specific way, Twitter says they harm the health of public conversations, as measured by Cortico’s indicators. By filtering these tweets out by default (they remain visible under “See more tweets”, and are not deleted), Twitter reported a 4–8% drop in abuse reports from users.
How ‘trolls’ are flagged
Twitter said that it would use automation to flag accounts that fit certain patterns and filter those accounts’ tweets out of view by default. The signals include not having a verified email address, frequently tweeting at accounts that don’t follow the user back, and creating multiple accounts simultaneously. “Because this content doesn’t violate our policies, it will remain on Twitter,” the company said.
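To make the approach concrete, here is a toy sketch of how signal-based filtering of this kind might work. Twitter has not published its actual model; the signal names, weights, and threshold below are invented for illustration, and only the three signals named in the announcement are used.

```python
# Illustrative sketch only: Twitter has not disclosed its real model.
# Signal names, weights, and the threshold are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    has_verified_email: bool
    non_follower_reply_ratio: float  # share of tweets aimed at accounts that don't follow back
    simultaneous_signups: int        # accounts created at the same time from the same source

def troll_score(s: AccountSignals) -> float:
    """Combine behavioural signals into a single score in [0, 1]."""
    score = 0.0
    if not s.has_verified_email:
        score += 0.3
    score += 0.4 * min(s.non_follower_reply_ratio, 1.0)
    if s.simultaneous_signups > 1:
        score += 0.3
    return score

def demote_behind_see_more(s: AccountSignals, threshold: float = 0.5) -> bool:
    # Tweets from flagged accounts are demoted, not deleted:
    # they stay visible under "See more tweets".
    return troll_score(s) >= threshold
```

The key design point, matching Twitter’s description, is that flagged content is down-ranked rather than removed: a score over the threshold only changes where a tweet appears, not whether it exists.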
The company clarified that this was just one in a series of steps to tackle abuse and polarization on Twitter, and said it would be doing more to improve conversational health on the platform. The company added, “There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making.”
Many of Twitter’s recent moves have focused on cleaning up after scrutiny of its handling (or lack thereof) of abuse and bots. On the bot front, the company has removed hundreds of thousands of accounts and algorithmically catches millions more every week. It is also trying to reduce hateful and violent content on the platform; to that end, it rolled out policy changes targeting such content last December.
The company has also been battling harassment, which tends to be personally targeted and has been a long-running problem. The platform has only now begun to scale its responses, and its proactive filtering, to the size those problems have already reached.