Days after reports emerged that government officials are set to meet social media platform executives to discuss regulating deepfakes, the Minister of State for IT Rajeev Chandrasekhar said in a recent interview that platforms that don’t take such content down will lose safe harbour protections under Indian law.
“The way we will now regulate deepfakes is very simple,” Chandrasekhar said in a recent interview with India Today’s Raj Chengappa. “It is in the rules we enacted in February 2023. If any platform has a deep fake, whether it’s a messaging platform or a social media platform, and they have it on the platform despite their obligation to not put these contents, and when it is reported as a deep fake and they have 36 hours to remove it [under the IT Rules, 2021] but do not, then the safe harbour falls away. Whoever’s aggrieved, whether it’s Raj Chengappa or Rajeev Chandrasekhar, can take them to court, and criminally prosecute them under the IPC [Indian Penal Code] and IT Act.” Union Minister for IT Ashwini Vaishnaw had also recently said that platforms failing to remove deepfakes will lose safe harbour.
This firming up of India’s regulatory stance on deep fakes follows the Indian Prime Minister’s recent comments on the risks these technologies pose.
Safe harbour, provided under Section 79 of the Information Technology Act, 2000 (IT Act), protects platforms from being held liable for the third-party content they host. Under India’s platform regulation rules, the IT Rules, 2021, platforms have to take down flagged prohibited content or risk losing their safe harbour protections. Prohibited content includes material that invades bodily privacy, intentionally communicates misinformation, or impersonates another person.
However, it is unclear which February 2023 ‘rules’ Chandrasekhar is referring to—none were enacted that month to specifically address deepfakes. Reports from the time claimed that the government asked social media companies to take down deep fakes under the IT Rules within 24 hours. The reports added that the advisory sent to companies came after a warning on deep fakes from the Ministry of Home Affairs. The communication reportedly said that “…significant social media intermediaries are advised to ensure that their rules and regulations and the user agreement contain appropriate provisions for the users not to host, display, upload, modify, publish, transmit, store, update or share any information that impersonates another person and that the users are duly informed of the same.”
Top Prime Ministerial advisors chip in on deep fake regulation: A recent Times of India op-ed saw Bibek Debroy (Chairman, Economic Advisory Council to the Prime Minister) and Aditya Sinha (Assistant consultant, Economic Advisory Council to the Prime Minister) bat for a multi-pronged approach to regulating deep fakes. First, India needs a “combination of advanced detection algorithms” to detect deep fakes and prevent their spread, as well as data sets to train them with. Technology companies and government agencies should collaborate to improve deep fake detection too. The government also needs to devise a regulatory framework on the misuse of deepfakes—it can collaborate with academia to explore different deep-fake detection methods (like “blockchain for digital content provenance”). Public education campaigns on the dangers of deep fakes are needed too.
Debroy and Sinha also highlighted various consequences of deep fakes, including some that assume significance in the run-up to the general elections next year—deep fakes created to spread false narratives about candidates or to influence voter perceptions and potentially sway elections. Deep fakes could also be used to propagate foreign propaganda and “incite turmoil”. They noted that currently, India’s penal laws could be applied to deep fakes if they are defamatory or incite violence.
Debroy and Sinha further recommended a policy regulating Generative Adversarial Networks (GANs) and “other deep fake applications”. This could entail requiring companies to register and disclose GAN applications, in the process providing details to an oversight body on an algorithm’s design, purpose, training data, and potential applications. The body could review the ethical use of these applications—such as whether they contravene data privacy, or contribute to “misinformation campaigns”. It could also enforce algorithmic transparency standards for GAN developers to follow, which would allow third parties to understand and verify system outputs.
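To make the proposal concrete, the disclosure the op-ed envisions could be modelled as a simple structured record. This is purely an illustrative sketch—the field names and values are assumptions, not anything specified by the op-ed or any regulator:

```python
# Hypothetical shape of a GAN disclosure record submitted to an oversight
# body, covering the details the op-ed lists: design, purpose, training
# data, and potential applications. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class GANDisclosure:
    developer: str
    purpose: str                      # stated use of the model
    architecture: str                 # summary of the algorithm's design
    training_data: str                # provenance of training datasets
    potential_applications: list[str] = field(default_factory=list)


record = GANDisclosure(
    developer="ExampleLab",           # hypothetical company
    purpose="film post-production face replacement",
    architecture="generator/discriminator pair, StyleGAN-like",
    training_data="licensed actor footage",
    potential_applications=["VFX", "dubbing localisation"],
)
print(record.developer)
```

In practice, an oversight body would define the actual schema and review process; the point is only that each registered application carries enough metadata for ethical review and third-party verification.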
GAN deployment can also be monitored by “content-verification tools” that scan platforms for GAN-generated content and tag it “based on their likelihood of being synthetic”. Digital watermarking can also be adopted to signal content’s authenticity—for example, content on a platform without this watermark could trigger an alert for the monitoring technology.
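The monitoring logic described above—tag content by its estimated likelihood of being synthetic, and alert on anything lacking a provenance watermark—can be sketched as follows. This is a toy illustration under assumed names; real watermarking schemes embed signals imperceptibly in pixel or audio data, not as a byte prefix, and no specific tool or API is implied here:

```python
# Illustrative sketch of the watermark-plus-likelihood check described in
# the op-ed. WATERMARK_TAG and the threshold are assumptions for the demo.

WATERMARK_TAG = b"PROV:"  # hypothetical provenance marker


def has_provenance_watermark(content: bytes) -> bool:
    """Toy check: real schemes hide watermarks in the media itself."""
    return content.startswith(WATERMARK_TAG)


def review_upload(content: bytes, synthetic_likelihood: float) -> str:
    """Tag content by likelihood of being synthetic; a missing
    watermark triggers an alert for the monitoring system."""
    if not has_provenance_watermark(content):
        return "alert: no provenance watermark"
    if synthetic_likelihood > 0.8:  # assumed threshold
        return "tag: likely synthetic"
    return "ok"


print(review_upload(b"PROV:frame-data", 0.1))
print(review_upload(b"frame-data", 0.1))
```

The design choice the op-ed implies is a two-signal system: a positive authenticity signal (the watermark) plus a statistical detector, so that content failing either check is surfaced for review rather than silently published.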
Chandrasekhar’s historical stance on safe harbour clauses: In the India Today interview, Chandrasekhar also recalled his opposition to the introduction of safe harbour protections in India:
“The act, the law that governs these platforms, is the IT Act,” Chandrasekhar recalled. “It is a 22-year-old act. It was done in the time of Atal Bihari Vajpayee ji [the former Prime Minister of India] in 2000, and then subsequently amended by the UPA [government] in 2008. One of the amendments that was put in the IT Act in 2008, in a blind following of the US model, was Section 79, which gives essentially, immunity from any kind of prosecution to any internet platform. The narrative at that time is (sic) that platforms are not responsible for the content on them because some user does it, so if you have to prosecute somebody, prosecute the user. It’s a clever way they managed that. I was on the IT Committee in those days, and I was the only one who opposed it, for whatever it matters, now it’s academic. That safe harbour has caused platforms to not have good behaviour, or an obligation of good conduct.”
Also Read:

- Indian Government To Discuss Deepfake Regulations With Social Media Giants
- YouTube Announces New Measures For AI-Generated Content, Users Can Ask For Takedown Of Deepfakes
- Take Down Deepfakes Within 24 Hours, IT Ministry Tells Social Media Platforms: Report
- Microsoft To Help Combat Deepfakes In The Run Up To 2024 Elections