
IT Ministry’s Advisory Mandates AI Models to Be Government-Approved Prior to Use by Indians

Deep fakes in India must now carry unique IDs, says new MeitY advisory.

Updated on March 4, 2024 at 5:10 PM: IT Minister Rajeev Chandrasekhar tweeted a clarification stating that the advisory was meant to make AI companies deploying lab-level/under-tested AI platforms aware that platforms have clear existing obligations under the Information Technology (IT) and criminal laws in India. As such, the best way for the platforms to protect themselves was to use labeling and explicit consent.

Any under-tested or unreliable artificial intelligence (AI) model can only be made available to Indian users after receiving explicit permission from the government, the Ministry of Electronics and Information Technology (MeitY) said in an advisory to AI companies sent out on March 1 (reviewed by MediaNama). Such models must be “appropriately labeled” to reflect that the output they generate is unreliable. Further, the AI models must have consent pop-ups in place that inform users that the output they receive is unreliable.

If an AI model facilitates the creation of deep fakes, the advisory states that such deep fake content should be labeled or embedded with permanent unique metadata or an identifier. This should allow the deep fake to be identified as generated by an AI model, and make it possible to identify the user of the model, the model itself, and the creator/first originator of the deep fake. Intermediaries are required to submit a status report about the action taken in response to the advisory within 15 days. As per a tweet by IT Minister Rajeev Chandrasekhar, the provisions of the advisory would act as an insurance policy for AI companies, protecting them from being sued by customers.
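The advisory does not prescribe any technical mechanism for the “permanent unique metadata or identifier.” Purely as an illustration of what such a provenance record could look like, the sketch below builds a tamper-evident identifier from the content and the parties the advisory wants identifiable (the model, the user, and the first originator). All function and field names here are hypothetical, not drawn from the advisory or any standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, model_id: str,
                           user_id: str, originator_id: str) -> dict:
    """Build an illustrative provenance record for AI-generated content.

    The record captures who/what the advisory wants identifiable; the
    unique ID is a hash over the whole record, so editing any field
    (or the content hash) yields a different ID.
    """
    record = {
        "model_id": model_id,            # the AI model that generated the content
        "user_id": user_id,              # the user who prompted the generation
        "originator_id": originator_id,  # the creator/first originator
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Canonicalize before hashing so field order cannot change the ID.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["unique_id"] = hashlib.sha256(canonical).hexdigest()
    return record

record = make_provenance_record(b"<synthetic image bytes>",
                                "model-x", "user-42", "origin-7")
print(record["unique_id"])
```

In practice such a record would have to be embedded in the file itself (for example in image metadata) and signed to be meaningful; a bare hash as shown here only demonstrates the linkage between content, model, and originator that the advisory asks for.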

Why it matters:

With this advisory in place, companies would no longer be able to run beta tests of their AI models without seeking permission from the government. As such, only those companies that have the finances to run rigorous tests would be able to release AI models. This could in turn negatively affect competition in the AI space.

Chandrasekhar has said this advisory is aimed at significant platforms, pointing out that only larger platforms will be required to seek permission from the government before releasing an under-tested/unreliable AI model and that, as such, startups would not be affected by it. Note that the advisory itself makes no such classification based on platform size.

Some context:

This advisory comes soon after Chandrasekhar accused Google’s AI chatbot Gemini of violating Rule 3(1)(b) of the Intermediary Rules (IT Rules), 2021 for its answer to the question “Is Modi a fascist?”. This rule focuses on the due diligence a social media intermediary must comply with. It essentially requires intermediaries to inform their users not to host, and to make “reasonable efforts” to avoid hosting, certain kinds of content, including content that impersonates a person, or that is patently false and untrue and published with the intent to mislead or harass a person.

The IT Ministry previously relied on this clause to regulate deep fakes. In November, the ministry sent out an advisory to social media platforms to take down deep fake content within 24 hours or risk losing the protection available to them under Section 79(1) of the Information Technology Act, 2000. This section protects platforms from being held liable for any third party information, data, or communication link made available or hosted by them.

The ministry also sent out a second advisory in December, which stated that all intermediaries/platforms were required to clearly inform their users of the content not permitted under the IT Rules [especially the kinds of content listed under Rule 3(1)(b)]. Users must be informed of this in the platform’s terms and conditions, when they register on the platform, and through regular reminders, in particular at every login and while uploading/sharing information on the platform.

Other key points mentioned in the advisory:

  • Platforms must ensure that the use of AI models on their services does not permit users to “host, display, upload, modify, publish, transmit, store, update or share” content outlined under Rule 3(1)(b).
  • Platforms must ensure that they don’t allow for any bias or discrimination and do not “threaten the integrity of the electoral process” including through the use of AI models.
  • All users must be informed, including through the terms of service and user agreements, “about the consequence of dealing with the unlawful information on its platform,” the advisory notes. These consequences include disabling access to content that does not comply with the IT Rules, 2021; removal of such content; and suspension or termination of the user’s access or usage rights to their account. The user could also be held liable for punishment under applicable law.

Our video analyzing the advisory:


Updated on March 7, 2024, at 12:33 PM to add our video analyzing the advisory to the story.



MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
