Twitter will share raw data on its content moderation decisions related to misinformation and coordinated harmful activity with experts ‘studying platform governance issues,’ the company announced in a recent blog post. The global group of experts will include academics, civil society members, journalists and NGOs.
The move from Twitter comes after several internal studies conducted by Facebook employees on similar themes were recently leaked. Twitter said its decision was partly motivated by concerns regarding ‘the physical safety of our employees around the world tied to potential disclosures.’
The Facebook leaks seem to have triggered a new set of norms around who has access to data generated by social media users. Increased transparency can enhance our understanding of the challenges that platforms face and identify issues for regulatory intervention.
What data will Twitter share and with whom?
What data will Twitter share? Starting in early 2022, Twitter will share raw data about ‘attributed platform manipulation campaigns’. The company defines such behavior as:
- coordinated activity to artificially influence conversations with fake accounts or automation
- inauthentic engagement to make accounts or posts seem popular
- commercial spam
Who will have access? A global group of experts from academia, civil society, journalism and NGOs will be given access to this data. Such groups or individuals must have “a proven track record of research on content moderation and integrity topics,” the company said.
Is there a catch? While members of the consortium will receive more comprehensive access to Twitter data, the company will discontinue its fully public dataset releases. Public access to the Twitter API, including the full archive of Tweets, will continue to be available.
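For readers unfamiliar with that access route: the full Tweet archive is reachable through the Twitter API v2 full-archive search endpoint (available to approved academic researchers). As a rough illustration, the sketch below builds such a request; the endpoint and parameter names come from the public v2 API documentation, while the bearer token is a placeholder you would supply from your own developer account.

```python
import urllib.parse

# Twitter API v2 full-archive search endpoint (Academic Research access).
BASE_URL = "https://api.twitter.com/2/tweets/search/all"


def build_search_request(query: str, max_results: int = 10) -> tuple[str, dict]:
    """Return the URL and headers for a full-archive Tweet search.

    `query` uses the v2 search syntax (e.g. keywords plus operators
    like `lang:en`); `max_results` caps Tweets returned per page.
    """
    params = urllib.parse.urlencode({
        "query": query,
        "max_results": max_results,
        # Request extra Tweet fields alongside the default id/text.
        "tweet.fields": "created_at,public_metrics",
    })
    # Placeholder credential -- replace with a real bearer token.
    headers = {"Authorization": "Bearer <YOUR_BEARER_TOKEN>"}
    return f"{BASE_URL}?{params}", headers


url, headers = build_search_request("platform manipulation lang:en")
```

The returned URL and headers could then be passed to any HTTP client; actually fetching results requires an approved developer account and a valid token.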
What’s next: Later next year, the company will also share data about other areas including “misinformation, coordinated harmful activity, and safety,” the blog post said.
Twitter asks regulators to allow flexibility for disclosures
In a white paper earlier this year, Twitter suggested a few regulatory principles to help policymakers ‘protect the open internet.’ One of those principles was transparency:
“Transparency enables accountability for companies and Governments… One of the critical areas where policymakers and regulators can enhance transparency is ensuring that laws governing information provide suitable flexibility for valuable disclosures, for example, the provision of data to academics and researchers,” – Twitter Position Paper (emphasis ours)
Aside from transparency, Twitter also emphasized themes like human control over algorithms and promoting competition. You can read our summary of the position paper here.
Facebook whistleblowers argue for increased transparency
Facebook whistleblowers Frances Haugen and Sophie Zhang have repeatedly emphasized the need for greater transparency from platforms. In a recent Reddit AMA, Zhang said:
To solve a problem, you need to understand and know that it exists. But the information asymmetry means that this question can only be fully addressed from within FB – which has no incentive to solve the issue. Imagine a world in which the Bhopal disaster occurred, but only Union Carbide knew who was responsible, and only Union Carbide had any chance of knowing who was responsible. – Sophie Zhang
Last month, Frances Haugen advised European lawmakers not to allow a ‘trade secrets exception’ for platforms in the Digital Services Act, arguing that it would allow them to avoid sharing crucial data, Euronews reported. “Only Facebook gets to look under the hood. Facebook can no longer be the judge, the jury, the prosecutor and the witness,” Haugen said.