Facebook banned 1.7 billion fake accounts in Q1 2020, the company revealed on its transparency dashboard. That's the second-largest number of fake account deletions in a single quarter since Facebook started reporting these figures. The company said that 99.7% of these accounts were flagged automatically before users ever needed to report them manually. Since late 2017, when Facebook began publishing this data, that figure has never dropped below 98%. In 2019, the company attributed its 2.2 billion peak largely to simplistic attacks by hackers who created millions of fake accounts at once, which the company immediately shut down. It's likely that similar dynamics are at play here.

Facebook's fake account takedowns (Source)

Hate speech hits new records

The company removed 9.6 million “pieces of content” that violated hate speech rules, a huge leap from the previous quarter, when it took similar action against 5.7 million pieces of content. This number generally trends upwards, and so does the percentage of the content that Facebook detects automatically — in Q1, only 11.3% of the removed hate speech content was reported by users, with the rest detected by the company's own systems. The share of user-reported hate speech is shrinking: in the previous quarter, 19.8% of removed content was flagged manually.

Facebook’s Hate Speech takedowns (Source)

Other community standards enforcement stats

For other types of content and takedowns, Facebook only has data through February, so the information isn't representative of the whole quarter.

  • 0.05–0.06% of content viewed on Facebook violated rules on adult/explicit content in Jan–Feb 2020, a 0.01 percentage point increase from the previous quarter.
  • 8.6 million pieces of content breaking rules on child nudity and sexual exploitation of children were removed, compared to 13.3 million the previous quarter.
  • 2.3 million posts breaking rules on bullying and harassment were removed in the same period, 500,000 fewer than in the previous quarter. 84.4% of these posts were reported by Facebook users rather than detected automatically.
  • 4.7 million instances of organized hate were acted on, and 6.3 million pieces of terrorist content were removed. 96.7% and 99.3% of these instances, respectively, were detected automatically.
  • 7.9 million instances of content promoting drugs were removed, while 1.4 million posts on firearms were taken down.
  • 1.7 million instances of content on suicide and self-harm were removed in the period, compared to 5 million in the preceding quarter. As in the previous quarter, over 97% of this content was flagged automatically.
  • 25.5 million instances of content depicting graphic violence were removed, and up to 0.08% of content viewed in this period depicted graphic violence.

See the dashboard here