
On Facebook adding 3000 people to monitor flagged content

The global country that is Facebook will hire 3000 people by the end of this year to “review the millions of reports we get every week, and improve the process for doing it quickly”, its founder (and head of state) Mark Zuckerberg said in a post today.

Facebook reviews content according to its community guidelines, which disallow hate speech and child exploitation. Zuckerberg wrote: “And we’ll keep working with local community groups and law enforcement who are in the best position to help someone if they need it – either because they’re about to harm themselves, or because they’re in danger from someone else.”

Policing content via community is going to be tough

Facebook doesn’t actively police content, but relies on its community to flag content, which is then reviewed. This is, in a manner of speaking, reactive policing. This essentially absolves Facebook of prior knowledge before content is posted, and allows it to play the role of an intermediary and a tech platform merely allowing others to publish content. If it were to screen content, it would, like media publications, have liability over that content.

Facebook’s challenge with live streaming is far greater than policing hate speech: it includes live-streamed suicide and murder, and people considering suicide. Zuckerberg, in his post, says: “Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren’t so fortunate.”

Last month, Justin Osofsky, Facebook’s VP of global operations, wrote in a blog post that Facebook is reviewing its reporting flows and processes. He also shared a timeline of how the Facebook community reacted to an incident in Cleveland:

Timeline of Events
11:09AM PDT — First video, of intent to murder, uploaded. Not reported to Facebook.
11:11AM PDT — Second video, of shooting, uploaded.
11:22AM PDT — Suspect confesses to murder while using Live, is live for 5 minutes.
11:27AM PDT — Live ends, and Live video is first reported shortly after.
12:59PM PDT — Video of shooting is first reported.
1:22PM PDT — Suspect’s account disabled; all videos no longer visible to public.

As the timeline shows, depending on people to report content is not very effective.


Even with the addition of 3000 people by the end of the year, Facebook is clearly understaffed: it will have around 7500 people to monitor flagged content from a daily active user (citizen) base of 1.3 billion (and growing), with millions of pieces of content flagged every week. Facebook will thus also build “better tools to keep our community safe.”


These tools, according to Zuckerberg, will involve:
– Making it simpler for users to report problems (which could lead to an increase in reports)
– Making it faster for reviewers to determine which posts violate Facebook’s standards (which could, if it works well, help address the increase in reports)
– Making it easier for reviewers to contact law enforcement.

I don’t quite remember which company it was, but at the World Wide Web conference in Hyderabad in 2011, I remember a developer talking about how a Russian video streaming site (which allowed strangers to interact via video) dealt with nudity on its platform: its image recognition algorithms were trained to spot instances of nudity, and the stream would shut down. Of course, as I mentioned earlier, the challenges that Facebook faces are far greater, owing to its scale.

The problem is that with time, and with an increase in incidents, regulators will step in: the more the instances of shocking and explicit video content being streamed live, the greater the pressure from courts and governments on Facebook to prevent live-streaming. Beyond a point, Facebook may not be able to retain its safe harbour from intermediary liability: it’s only a matter of time.






MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
