Leaked documents have revealed exact guidelines for how Facebook removes content related to sex, terrorism, death threats, self-harm, suicide, and more, the Guardian reported. A series of presentation slides details how Facebook deals with reports on posts, comments and videos that violate its sitewide rules. The company gives significant leeway to certain types of violent content, such as self-harm and threats, while relying on ‘newsworthiness’ to decide whether videos and livestreams of suicide and terrorism should be removed. “Not all disagreeable or disturbing content violates our community standards,” the Guardian quoted Facebook as saying (this statement is actually part of Facebook’s community standards page).

Facebook’s approach to reported content

Here’s how the leaked slides describe Facebook’s policy for different types of ‘disagreeable or disturbing’ content.

1) Child and animal abuse: Content depicting non-sexual child abuse is allowed, as long as it doesn’t have a ‘celebratory’ overtone that glorifies the abuse. Animal abuse is allowed for the most part, but especially gory or disturbing visuals need to be marked as ‘disturbing’; content marked ‘disturbing’ can only be viewed by users (who must be over 18) who specifically choose to view it. As with child abuse, animal abuse shared with celebratory or sadistic intent will be removed.

2) Suicide and self-harm: Livestreams of suicide and self-harm are allowed. In one of the slides, Facebook said that users livestreaming or posting videos of self-harm are “crying out” for help online, and therefore shouldn’t be censored. One of the documents says Facebook adopted this position on the advice of the Samaritans and Lifeline, suicide-prevention nonprofits that operate helplines in the UK and the US. As for suicides, “Experts have told us what’s best for these people’s safety is to let them livestream as long as they are engaging with viewers,” one of the documents said. However, this content would be deleted once there was “no longer an opportunity to help the person.”

“We occasionally see particular moments or public events that are part of a broader public conversation that warrant leaving this content on our platform,” Facebook’s global public policy director Monika Bickert told the Guardian. She cited the example of a video of an Egyptian taxi driver who set himself on fire in protest against the government and ‘high prices’, which Facebook decided not to remove.

3) Violence and death: The leaked slides describing how to deal with graphic violence and death also distinguish between removing content and marking it as ‘disturbing’. For instance, videos of mutilations are always removed, whereas photos are marked as ‘disturbing’. There are exceptions for content that ‘documents atrocities’, though this too must be marked as disturbing.

4) Threats: In a slide titled “Credible Violence”, Facebook listed examples of ‘credible’ threats that warranted removal, as well as ‘generic’ threats that didn’t. For example, “I hope someone kills you” would not be removed by Facebook, since “people use violent language to express frustration online”. However, statements like “someone shoot Trump” would be removed, since he is a head of state and therefore in a ‘protected category’. Another example of a ‘generic’ threat that Facebook would not remove: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”.

5) Sex and nudity: Facebook’s community standards generally prohibit most nudity. A leaked slide says that hand-drawn depictions of sex and nudity are allowed, but digitally rendered artwork is not. The documents also describe standards to identify and remove revenge porn, which is intimate content shared without the featured person’s consent (usually by the person who shot it). Videos and photos of abortions are allowed, as long as they don’t contain nudity.

Facebook’s trouble with moderation

Following a spate of livestreamed suicides and violent incidents, Facebook has come under fire for how it moderates, or fails to moderate, its content. In response, it announced it would hire 3,000 additional moderators to review reports from users about disturbing content. These leaked documents show the standards those moderators will probably be using to make decisions on reported posts and videos.

As we pointed out earlier, it’s going to be difficult for Facebook to keep disavowing responsibility for disturbing content on the grounds that it is an intermediary, not a publisher. Regulatory intervention may push the company to take a more proactive role in policing content, forcing it to devote more resources and manpower to identifying and removing content in real time, rather than only when users flag it.

The company is also working to improve the quality of content in its news feed, including efforts to weed out misleading and exaggerated links, as well as a continuing multi-pronged offensive against fake news.