“If all intermediaries proactively monitor content, there will be mass-scale private surveillance, given the entire Intermediary Rules amendment,” said Sarvjeet Singh from the Centre for Communications Governance, NLU Delhi. “This also totally violates the Puttaswamy judgement,” he added. “Mandating the use of automated tools to carry out proactive monitoring of content can result in a prior-restraint regime,” said Prasanna S., an advocate. He was speaking at MediaNama’s discussion on Intermediary Liability in Delhi on October 23, conducted in partnership with CCG-NLU Delhi, with support from Google, Facebook, and the Friedrich Naumann Foundation.
Proactive takedowns may blur the line between a passive and an active intermediary, and go against the very nature of what an intermediary is, according to Tanya Sadana of Ikigai Law. An intermediary is merely a passive transmitter of information and has no active knowledge; proactive monitoring is “antithetical to its very being”.
“You cannot have Safe Harbour provision based on [platforms] having no knowledge, and then impose a liability of removing the Safe Harbour provision because you are not proactively monitoring content. You’re damned if you do, and damned if you don’t.” — Tanya Sadana, Ikigai Law
Effectiveness of AI in tackling problematic content
The draft Intermediary Rules mandate proactive monitoring and removal of unlawful content using automated tools or appropriate mechanisms. But the effectiveness of AI, itself an automated tool, depends on the quantity and quality of the data fed into the algorithm. This can be a crucial differentiator between a startup and a tech giant, said Deepit Purkayastha, co-founder of Inshorts. It is why only the largest companies are equipped to deploy AI effectively. Mark Zuckerberg has even acknowledged that Facebook spends more money on content moderation than Twitter earns in total revenue, said Adnan Alam from Nutanix.
Although automated tools are effective at identifying child porn and nudity, they fall short when it comes to hate speech and content relating to national integrity, said Singh. Automated tools also struggle with non-English and non-Hindi content in India, he added.
While AI has not evolved enough to monitor problematic content effectively, manual intervention also comes with its fair share of problems, such as subjectivity and inefficiency, according to Sadana. This can make the current provisions ineffective in implementation, she said.
Even so, YouTube’s transparency report showed that 70% of the content ultimately taken down from the platform was first detected by automated systems and then subjected to human review, said Mozilla’s Udbhav Tiwari, highlighting certain benefits of AI. “It’s important for us to realise that and then figure out how we can fit that into this judicial scheme,” he added.
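To make the two-stage flow Tiwari describes concrete, here is a minimal, hypothetical Python sketch: an automated classifier surfaces candidate posts, and a human reviewer makes the final takedown call. The classifier, threshold, and review logic are all illustrative stand-ins, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: int
    text: str

def auto_flag(post: Post, score: Callable[[str], float],
              threshold: float = 0.8) -> bool:
    """Stage 1: an automated model scores content; high scores go to reviewers."""
    return score(post.text) >= threshold

def human_review(post: Post) -> bool:
    """Stage 2: a person confirms or overturns the automated flag.
    Stubbed to always confirm; in reality this is a manual decision."""
    return True

# A toy keyword-based scorer stands in for a trained classifier.
def toy_score(text: str) -> float:
    return 1.0 if "banned-term" in text else 0.0

posts: List[Post] = [Post(1, "harmless update"), Post(2, "contains banned-term")]
review_queue = [p for p in posts if auto_flag(p, toy_score)]
taken_down = [p for p in review_queue if human_review(p)]
print([p.post_id for p in taken_down])  # [2]
```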
Who decides what ‘unlawful’ content is?
Only the judiciary should decide what content is unlawful, and what is not, said Shehla Rashid Shora. After a teenager in Kashmir allegedly died of pellet injuries, a graphic image and an X-ray of his skull showing the injuries were doing the rounds on platforms like Instagram, Facebook and Twitter, but all these platforms kept flagging the pictures as sensitive, she said. “You had to uncover that photo and in some cases it was taken down, so how do you decide in this situation whether the content is unlawful or not?” she asked.
Srinivas Kodali, an independent researcher, agreed that deciding what content should be taken down must be left to the courts. “I think we shouldn’t decide that, or a machine or the intermediaries shouldn’t decide that,” he said. Content involving child porn, child self-harm and suicide, and games like the Blue Whale Challenge could be flagged, according to Suvarna Mandal of Sai Krishna and Associates.
For content that is defamatory in nature, “it’s really clear that there are some things that courts are exclusively empowered to decide upon; private platforms should not decide that,” Amitabh Singh, an independent public policy professional, said.
There should not be similar obligations for all types of illegal content, because that is not just unreasonable but might also result in genuinely illegal content, like child pornography and terrorist content, staying online for much longer than it should, said Tiwari.
When can an intermediary lose Safe Harbour?
“Does an intermediary become liable right when a certain type of content is posted on its platform, or when an automated tool flags it to humans?” asked Nikhil Pahwa, founder and editor of MediaNama. Liability to take down content will kick in “from the time the actual knowledge came to the platform,” Sadana said.
“The moment there is a human interface that is deciding what content should continue to be on the platform, you are somehow modifying and selecting the receiver of the transmission. In that sense you lose your intermediary Safe Harbour protection.” — Tanya Sadana
Referring to Section 230 of the US’ Communications Decency Act, Facebook’s Sachin Dhawan said that an intermediary platform is deemed not to have knowledge because of the free-speech protection it gives. “This actually encourages platforms to take down content such as child porn, without having to assume liability for not taking down other content,” he added.
But what happens if an intermediary outsources content monitoring to a third party, and that third party fails to remove problematic content and also doesn’t make the intermediary aware of it, Purkayastha asked. “Even when you outsource your responsibility to some other vendor, ultimately you will still remain liable for the acts of that vendor,” Sadana said.
“For the longest time, even in India, some of these big companies followed the First Amendment standard, but now they agree that the standards to follow are the ICCPR [International Covenant on Civil and Political Rights] human rights standard and the UDHR [Universal Declaration of Human Rights] standard. We should push them to follow international human rights standards irrespective of their geographical location,” Singh said.
How can intermediaries collaborate?
Mozilla has been working on DNS over HTTPS (DoH), which can be used to block certain kinds of content. The company is working with certain ISPs that provide DoH to help them block such content while preserving encryption, according to Tiwari. He also pointed to the GIFCT (Global Internet Forum to Counter Terrorism), through which some of the biggest companies in the world maintain a common shared database of images and videos of terrorist and extremist content, which they block on all of their platforms.
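To illustrate how a shared database like GIFCT’s works in principle, here is a minimal, hypothetical Python sketch: one member platform contributes a fingerprint of known violating content, and every member checks new uploads against the shared set. A real deployment would use perceptual hashes (such as PDQ for images) so that re-encoded or slightly altered copies still match; the plain SHA-256 used here only catches byte-identical files.

```python
import hashlib

# Shared set of fingerprints contributed by all member platforms (illustrative).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Hash the raw bytes of a file. Real systems use perceptual hashing
    so that edited or re-encoded copies still match."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member platform adds confirmed violating content to the shared DB."""
    shared_hash_db.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Every member checks new uploads against the shared database."""
    return fingerprint(upload) in shared_hash_db

# Platform A flags a video; other members can now block identical re-uploads.
contribute(b"bytes of a known extremist video")
print(should_block(b"bytes of a known extremist video"))  # True
print(should_block(b"bytes of an unrelated video"))       # False
```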