
Leaked Facebook documents reveal problematic content removal standards: report

Leaked documents have revealed the exact guidelines Facebook uses to remove content related to sex, terrorism, death threats, self-harm, suicide, and more, the Guardian reported. A series of presentation slides details how Facebook deals with reports on posts, comments, and videos that violate its sitewide rules. The company gives significant leeway to certain types of violent content, such as self-harm and threats, while relying on ‘newsworthiness’ to decide whether videos and livestreams of suicide and terrorism should be removed. “Not all disagreeable or disturbing content violates our community standards,” the Guardian quoted Facebook as saying (a statement that is, in fact, part of Facebook’s community standards page).

Facebook’s approach to reported content

Here’s how the leaked slides describe Facebook’s policy for different types of ‘disagreeable or disturbing’ content.

1) Child and animal abuse: Non-sexual child abuse is allowed, as long as it doesn’t have a ‘celebratory’ overtone that glorifies the abuse. Animal abuse is allowed for the most part, but especially gory or disturbing visuals must be marked as ‘disturbing’; content marked ‘disturbing’ can only be viewed by users over 18 who specifically choose to view it. As with child abuse, animal abuse shared with celebratory or sadistic intent will be removed.

2) Suicide and self-harm: Livestreams of suicides and self-harm are allowed. In one of the slides, Facebook said that users livestreaming or posting videos of self-harm are “crying out” for help online, and therefore shouldn’t be censored. One of the documents says Facebook adopted this position on the advice of the Samaritans and Lifeline, suicide-prevention nonprofits that operate helplines in the UK and US respectively. As for suicides, “Experts have told us what’s best for these people’s safety is to let them livestream as long as they are engaging with viewers,” one of the documents said. However, such content would be deleted once there was “no longer an opportunity to help the person.”

“We occasionally see particular moments or public events that are part of a broader public conversation that warrant leaving this content on our platform,” Facebook’s global public policy director Monika Bickert told the Guardian. She cited the example of a video of an Egyptian taxi driver who self-immolated in protest against the government and ‘high prices’, which Facebook decided not to remove.


3) Violence and death: The leaked slides on graphic violence and death also distinguish between removing content and marking it as ‘disturbing’. For instance, videos of mutilations are removed no matter what, whereas photos are marked as ‘disturbing’. There are exceptions for content that ‘documents atrocities’, though this too must be marked as disturbing.

4) Threats: In a slide titled “Credible Violence”, Facebook listed examples of ‘credible’ threats that warranted removal, as well as ‘generic’ threats that didn’t. For example, “I hope someone kills you” would not be removed, since, per the slides, “people use violent language to express frustration online”. However, a statement like “someone shoot Trump” would be removed, since a head of state falls into a ‘protected category’. Another example of a ‘generic’ threat Facebook would not remove was: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”.

5) Sex and nudity: Facebook’s community standards generally prohibit most nudity. A leaked slide says that hand-drawn depictions of sex and nudity are allowed, but digitally rendered artwork is not. The documents also describe standards to identify and remove revenge porn: intimate content shared without the featured person’s consent, usually by the person who shot it. Videos and photos of abortions are allowed, as long as they don’t contain nudity.

Facebook’s trouble with moderation

Facebook has come under attack over how it moderates — or doesn’t moderate — its content, following a spate of livestreamed suicides and violence. In response, it hired 3,000 moderators to review user reports of disturbing content. These leaked documents show the standards those moderators will likely use to make decisions on reported posts and videos.

As we pointed out earlier, it will be difficult for Facebook to keep disavowing responsibility for disturbing content on the grounds that it is an intermediary, not a publisher. Regulatory intervention may force the company to take a more proactive role in policing content, compelling it to devote more resources and manpower to identifying and removing content in real time, rather than only when users flag it.

The company is also working to improve the quality of content in its news feed, including efforts to weed out misleading and exaggerated links, as well as a continuing multi-pronged offensive against fake news.


Written By

I cover the digital content ecosystem and telecom for MediaNama.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
