Concerned by reports that WhatsApp is being used to spread images and videos of child sexual abuse, the IT Ministry last Thursday ordered WhatsApp to reveal how it plans to stop the spread of such content, the Times of India reported. Separately, the National Commission for Protection of Child Rights (NCPCR) said it would seek a police probe and take other steps to counter the spread of such videos, the Economic Times reported. A WhatsApp spokesperson pointed MediaNama to a previous statement on the issue and said the company had nothing new to share. The statement read: “We have carefully reviewed this report to ensure such accounts have been banned from our platform. We are constantly stepping up our capabilities to keep WhatsApp safe, including working with other technology platforms, and we’ll continue to prioritise requests from Indian law enforcement that can help confront this challenge.”

A two-week investigation by the Cyber Peace Foundation (CPF) in March found dozens of public WhatsApp groups, each with hundreds of members, sharing videos of children being abused, according to an ET report last week. These groups – to which participants add themselves via an invite link – were identified through a WhatsApp public group discovery app that was banned from the Google Play Store in December but can still be found through a Google search, the report said. It was the CPF’s second investigation into child abuse content on WhatsApp in four months. The transmission of material depicting children in sexually explicit conduct in electronic form is prohibited under Section 67B of the IT Act, 2000.

WhatsApp says it banned 7.5 lakh accounts in 3 months

WhatsApp said in February that it had banned about 7.5 lakh accounts suspected of sharing images and videos of child abuse since December. The company said it has a zero-tolerance policy on child sexual abuse and bans users it finds sharing content that exploits or endangers children. Because WhatsApp messages are end-to-end encrypted, meaning only the sender and the recipient can read them, the company said it relies on “all available unencrypted information including user reports” to detect and prevent this kind of abuse.

Apart from acting on user reports, WhatsApp says it uses Microsoft’s PhotoDNA photo-matching technology to proactively scan profile photos for images of child abuse. When its systems detect such an image, the company bans the user and all other accounts in the group. The company added, “We rely on advanced machine learning technology to evaluate group information to identify and ban members of groups suspected of sharing child exploitative imagery (CEI).”
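PhotoDNA’s algorithm is proprietary, but the general idea behind such photo-matching is to reduce each image to a compact perceptual fingerprint and compare it against a database of fingerprints of known abuse imagery. The Python sketch below illustrates that idea with a simple “average hash”; it is only an illustration of the technique, not WhatsApp’s implementation, and the KNOWN_HASHES set and distance threshold are placeholder assumptions (real deployments match against vetted hash lists, such as those maintained by child-protection organisations, using far more tamper-resistant fingerprints).

    # Illustrative only: a simple "average hash" stands in for PhotoDNA,
    # whose actual fingerprinting algorithm is proprietary and far more
    # robust to resizing, cropping and recompression.
    from PIL import Image  # pip install Pillow

    HASH_SIZE = 8  # an 8x8 grid yields a 64-bit fingerprint

    def average_hash(path):
        """Reduce an image to a 64-bit perceptual fingerprint."""
        img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for pixel in pixels:  # each bit: is this pixel brighter than the mean?
            bits = (bits << 1) | (1 if pixel > mean else 0)
        return bits

    def hamming_distance(a, b):
        """Count the bits on which two fingerprints differ."""
        return bin(a ^ b).count("1")

    # Placeholder: in practice this would be a vetted database of
    # fingerprints of known abuse imagery, not a value made up here.
    KNOWN_HASHES = {0x0F0F0F0F0F0F0F0F}

    def is_flagged(path, threshold=10):
        """Flag an image whose fingerprint is near any known fingerprint."""
        h = average_hash(path)
        return any(hamming_distance(h, known) <= threshold
                   for known in KNOWN_HASHES)

Matching on a small Hamming distance rather than exact equality is what lets systems of this kind catch slightly altered copies of a known image, which an exact cryptographic hash such as SHA-256 would miss.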