Facebook has disabled over 1.2 billion “fake” accounts in the past two quarters, many of them created using “scripts or bots, with the intent of spreading spam or conducting illicit activities such as scams,” the company revealed in its first-ever Community Standards Enforcement report. Of these, 583 million accounts were taken down in the first quarter of 2018, and 694 million in the previous quarter. The company attributes the decrease in takedowns in the most recent quarter to the “variability of our detection technology’s ability to find and flag them.”

Facebook said that most of these accounts were “disabled within minutes of registration,” but a report on Recode noted that Facebook doesn’t catch all fake accounts. The company estimates that 3% to 4% of its monthly active users are “fake,” up from 2% to 3% in Q3 2017, according to a filing with the US Securities and Exchange Commission.

Other highlights from the report, on enforcement actions taken in Q1 2018:

Terrorist propaganda: The company took enforcement action on 1.9 million posts related to terrorism by Al Qaeda, ISIS and their affiliates in the first quarter of this year, up from 1.1 million posts in the last quarter of 2017. Facebook said these numbers are affected by “internal factors, including the effectiveness of our detection technology and review processes. They’re also affected by external factors such as real-world events that increase terrorist propaganda content on Facebook.” The company also noted that while this standard is “enforced for terrorist activities and terrorist groups both regionally and globally, this report only measures the actions we take on terrorist propaganda related to ISIS, al-Qaeda and their affiliate groups.”

Graphic violence: Posts containing graphic violence accounted for an estimated 0.22% to 0.27% of views, up from 0.16% to 0.19% in the previous quarter. The company took action on 3.4 million posts, up from 1.2 million in the previous quarter. Facebook said these numbers change based on how its “detection technology and reporting tools help us find potentially violating content, review it and take action on it.” In Q1 2018, the company found and flagged around 86% of the content it subsequently took action on before users reported it.

Nudity and sex: Posts with nudity or sexual activity represented 0.07% to 0.09% of views, up from 0.06% to 0.08% in the previous quarter. The company took action on 21 million posts, the same as in the previous quarter. Facebook said that spam attacks using pornographic content to attract clicks play a role in increasing the amount of adult nudity and sexual activity posted to Facebook.

Hate speech: Facebook took action on 2.5 million posts for violating hate speech rules, up 56% from the previous quarter. Users reported 62% of these posts before Facebook took action on them. On identifying hate speech, Facebook said it defines it as a direct attack on people based on protected characteristics: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. The company explained, “We define an attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”

Spam: Facebook took action on 837 million spam posts, up from 727 million in the previous quarter. The company said it detected “nearly 100%” of spam posts before users could report them. Facebook said that external factors such as cyber attacks increase spam content on the platform: an attack can cause a sharp spike in spam that is taken down immediately, followed by a lull, and the company attributes fluctuations in these numbers to such incidents.

This enforcement report arrives a month after Facebook made its community guidelines public for the first time and introduced an appeals process. The standards document, which spells out what kind of content isn’t allowed on the platform, also serves as a guide for Facebook’s global team of content moderators.

Transparency report

The company also released its latest transparency report, covering the second half of 2017, which details government requests for user data. Facebook noted that these requests had increased globally by around 4% compared to the first half of 2017.

In India, the company saw total requests rise to 12,171, up from 9,853 in the first six months of 2017. Of all the requests made by the Indian government, 97.6% were made through the proper legal process and 2.4% were emergency requests. Facebook said, “Each and every request we receive is carefully reviewed for legal sufficiency and we may reject or require greater specificity on requests that appear overly broad or vague.” The report shows that Facebook produced some data in response to 53% of the total requests. It also revealed that the Indian government sought details on 17,262 users/accounts.

Content restrictions

Facebook’s content restrictions report covers content to which the company has restricted access in regions where it allegedly violates local law, even though it does not go against the platform’s own community standards. In India, 1,914 pieces of content were restricted in the second half of 2017, up from 1,228 in the preceding six months. Even so, these figures were significantly lower than in 2015, when 15,155 pieces of content were restricted in the first half of the year and 14,971 in the second.

In its changelog for the second half of 2017, the company noted, “We restricted access to content in India in response to legal requests from law enforcement agencies and the India Computer Emergency Response Team within the Ministry of Electronics and Information Technology. The majority of the content restricted was alleged to violate Indian laws relating to the defamation of religion, hate speech, and defamation of the state.”

Internet disruptions

Facebook also released details about internet disruptions that specifically affected access to Facebook and Facebook-owned services. In the second half of 2017, India saw 14 such disruptions, with the cumulative downtime adding up to 5 weeks, 9 days and 6 hours (which is Facebook’s bizarre way of saying 6 weeks, 2 days and 6 hours).
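The conversion is simple carry arithmetic: 9 days is one week and 2 days, so the total normalizes to 6 weeks, 2 days and 6 hours. For the curious, here is a minimal sketch of that normalization (the figures are from the report; the normalize function is ours, for illustration only):

```python
# Normalize a weeks/days/hours duration so hours < 24 and days < 7,
# carrying any overflow upward (hours -> days -> weeks).
def normalize(weeks: int, days: int, hours: int) -> tuple[int, int, int]:
    days += hours // 24   # carry whole days out of the hours field
    hours %= 24
    weeks += days // 7    # carry whole weeks out of the days field (9 days -> 1 week, 2 days)
    days %= 7
    return weeks, days, hours

print(normalize(5, 9, 6))  # -> (6, 2, 6): 6 weeks, 2 days and 6 hours
```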