Social media companies with more than 5 million users may have to proactively monitor live or real-time content, such as YouTube or Facebook live streams, under India’s new social media rules. Legal and technical experts MediaNama spoke to concurred that this could lead platforms to over-censor content, would be difficult to get right given the limitations of current automated detection tools, and would impose major technical costs on smaller companies.

“The rule is very badly drafted,” said Divij Joshi, an independent lawyer, researcher, and tech policy fellow at Mozilla. “The rules don't make a distinction between 'live' and non-live media. It will apply to real-time uploads on a textual reading, but in practice it can be more difficult to monitor and takedown infringing/illegal content as it is happening.”

Why this matters: Content that the rules treat as problematic could be streamed live on platforms like YouTube and Facebook. For instance, in 2019, shootings at mosques in Christchurch, New Zealand, were streamed live on Facebook, and the disturbing visuals remained on the platform for around an hour before being taken down.

A bit of context: The rules, among other things, require that social media companies with more than 5 million registered users “shall endeavour” to deploy technology-based measures, such as automated tools, to proactively identify information that depicts rape or child sexual abuse material (CSAM), or any information that is “exactly identical” to information that was previously removed or access to which was disabled. Content taken down through such tools…
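To see why the “exactly identical” standard is narrower than it sounds, consider how such matching is typically implemented: platforms keep a blocklist of cryptographic digests of removed files and compare each new upload against it. A minimal sketch (all names are illustrative, not any platform’s actual system):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest used for exact-match comparison."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical blocklist: digests of content already taken down.
removed_hashes = {fingerprint(b"previously removed clip")}

def is_reupload(content: bytes) -> bool:
    """Flag an upload only if its bytes exactly match removed content."""
    return fingerprint(content) in removed_hashes
```

Note that any re-encoding, cropping, or compression changes the bytes and defeats an exact hash, which is why production systems rely on perceptual-hashing tools (such as Microsoft’s PhotoDNA) instead, and why matching content in a live stream as it happens is harder still.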
