Update (February 26): This summary has been updated to represent the final Rules that were notified on Thursday evening.
All OTT streaming services will now be required by law to accept complaints from viewers, and on top of the self-regulation system they created to avoid government regulation, they will now be subject to two layers of oversight. The first layer (after individual streaming services’ own grievance officers) will be a self-regulatory organisation (likely the Internet and Mobile Association of India’s Digital Entertainment Committee, or the recently announced IAMAI Secretariat). This layer is required to be headed by a retired high court or Supreme Court justice.
This effectively brings back the Digital Content Complaints Committee that most streaming services roundly rejected in favour of a less onerous code where each streaming service would have been allowed to police itself with some participation from one or more external members. Streaming regulation will now be a quasi-judicial process.
Note: This summary was initially based on a copy of the Rules that was released by the Internet Freedom Foundation, and had been circulating in journalist groups. The final Rules have differences that we have outlined below as and when relevant. (The previous draft, for instance, explicitly impacted partly curated platforms like YouTube Originals and Facebook Watch, on top of curated-only streaming platforms like Netflix and Hotstar; the notified Rules don’t explicitly draw this distinction.)
Below is a full summary of the new regulations governing streaming services in India.
Summary of the rules
Streaming services (online curated content providers) shall adhere to a Code of Ethics (summarized below). Streaming services that don’t adhere to the code shall be liable for “consequential action as provided in any law which has so been contravened”.
There are three levels of regulation.
- The streaming service itself (Level I): Streaming services shall accept complaints through a designated grievance redressal officer, who is required to process the complaint and issue a decision within fifteen days. (Note: The previous draft provided for a central government-owned portal to forward complaints from users to streaming services. This has been removed in the final notified Rules.)
- “Self-regulation” by a self-regulatory organization set up by the streaming services (Level II): This SRO shall be headed by a retired Supreme Court justice, a retired High Court justice, or “an independent eminent person” from the entertainment industry. It will also have up to six other members, who are “experts from the field of media, broadcasting, entertainment, child rights, human rights or such other relevant field”. How deliberations and decisions will happen (e.g., by unanimity, by majority vote, or by the judgement of the retired justice) is not mentioned in the Rules, and will likely be notified in a Code of Practice published by the I&B Ministry. (Note: The previous leaked version of the Rules only had a provision for a retired justice to head the SRO, and that too one appointed by the Ministry. The notified version does not include this requirement, but does require I&B Ministry approval for the committee’s registration.)
- Oversight mechanism by the Government (Level III): The government will publish a charter for SROs. It shall create an inter-departmental committee to hear grievances that have not been resolved at Levels I and II. This committee can require content providers to reclassify their content’s age rating, edit the synopsis, or apologize. Level III can also refer complaints for blocking or censoring under Section 69A of the IT Act.
Every streaming service shall be required to be a member of the Level II SRO (probably the IAMAI Secretariat/IAMAI Digital Entertainment Committee for streaming services). The SRO has to be registered by the last week of March 2021.
Age classification: All streaming services will be required to classify their content by age, “having regard to the context, theme, tone, impact and target audience” of the content. The code says that it is the streaming service (“publisher”) that has to classify content, so content already rated by the Central Board of Film Certification may need to be certified again by the streaming service.
Government committee can hear direct complaints: Level III (the government committee) can hear complaints if Levels I and II fail, but also if the committee itself feels that a hearing is necessary. It can also hear complaints referred by the Ministry directly.
Blocking/censorship orders: The I&B Ministry will appoint an Authorised Officer of Joint Secretary rank or above, who will head the Level III committee. If the Level II or Level III committee feels that content has to be censored or removed because it is illegal or harms public order, this recommendation gets passed up to the Authorised Officer, who can direct either the government or the publisher to censor or delete the content. Orders can only be passed after approval from the I&B Ministry’s Secretary. (Note: The previous draft did not mention censorship, or ‘editing’, as the Rules put it. The previous draft also did not provide for ordering streaming services directly to censor content; only the government was to be approached with blocking orders.)
Emergency blocking: The I&B Ministry can at any time issue an emergency blocking order for content under Section 69A(1) of the IT Act. Within 48 hours of this, the Level III committee must be notified so that it can come up with recommendations. After the committee files its recommendations, the Secretary will either maintain or set aside the blocking order. (Note: This entire section was not present in the previous draft.)
Recordkeeping and review: The Authorised Officer shall keep a record of Level III proceedings. The Level III committee will meet once every two months to decide whether the blocking orders issued were legal; if they were not, the orders shall be set aside.
Disclosure: Streaming services and their SRO shall make “true and full disclosure” of all grievances they receive. They shall also disclose how they dispose of grievances and the action they take on them.
Code of ethics
Applicability: These rules apply to curated content providers like Netflix, Amazon Prime Video and Hotstar, and also to curated sections of otherwise user-generated platforms like YouTube Originals (available only to YouTube Premium users) and Facebook Watch.
- No streaming service shall put out content that violates the law as it stands at any given point of time.
- Streaming services must exercise “due caution and discretion” with respect to content that “affects the sovereignty and integrity of India”, “threatens, endangers or jeopardizes the security of the State”, and “is detrimental to India’s friendly relations with foreign countries”. Content which “is likely to incite violence or disturb the maintenance of public order” also falls under this category. (Note: The mention of content that incites violence or disturbs public order was added in the final notified Rules.)
- Streaming services “shall take into consideration India’s multi-racial and multi-religious context and exercise due caution and discretion when featuring the activities, beliefs, practices, or views of any racial or religious group.”
Content should be classified into the following ratings: U (suitable for everyone), U/A 7+ (suitable for seven year olds and older with parental guidance), U/A 13+, U/A 16+, and A (restricted to adults). Content shall be classified on the basis of “(i) Themes and messages; (ii) Violence; (iii) Nudity; (iv) Sex; (v) Language; (vi) Drug and substance abuse; and (vii) Horror”. These bases can be modified or amended by the Ministry at any given point of time. (Note: The mention of the Ministry having the power to modify these bases at any point was not present in the previous draft.)
Classification should be presented to viewers at the start of content.
Age restrictions and accessibility
Streaming services shall “take all efforts” to restrict access to all content with an “A” rating through “appropriate access control measures”.
“Every applicable entity shall, to the extent feasible, take reasonable efforts to improve the accessibility of online curated content transmitted by it to persons with disabilities through the implementation of appropriate access services,” the code says. There is no specific requirement of closed captioning or audio description.
These are the guideline classifications:
- Context: The context in which a work is depicted can be considered — for instance, whether it is fantastical or historical in nature.
- Theme: Theme can be considered but this is dependent on the treatment of the theme. Themes like drug misuse, pedophilia, and racial and communal hatred are “unlikely to be appropriate” for younger audiences.
- Tone and impact: Content whose tone features a “stronger depiction of violence” shall be rated for older audiences. (Note: The previous draft used the phrase “dark and unsettling tone” instead of “stronger depiction of violence”.)
- Target audience: Who a work is intended for is important in classifying it.
Issue-related guidelines
These are the issue-specific guidelines that will determine classification.
- Discrimination: Portrayal of discrimination in a piece of content should be evaluated on the “strength and impact” of such portrayal. (Note: The earlier draft of the code said that criticizing discrimination would qualify content for a lower age rating. It also said that the context would have a bearing on the classification. This clarification is not present in the notified Rules.)
- Psychotropic substances, liquor, smoking and tobacco: Content that “portray[s] and promote[s] misuse” of these things is likely to be approved for older audiences. (Note: As with discrimination, there was a clarification that criticizing substance abuse could qualify content featuring it for a lower age rating. This is no longer present in the notified Rules.)
- Imitable behaviour: Behaviour that children may try to imitate, especially criminal or violent behaviour such as bullying, violence, and eve teasing, is likely to be rated for older audiences. This applies to content with innuendo-ridden “song and dance scenes”. (Note: This section has been shortened in the final Rules; mentions of unnecessary violence have been removed, along with “cruelty and horror”.)
- Language: Bad language increases the age rating, with consideration given to region, race, background, beliefs, gender, and so on. (“It is impossible to set out a comprehensive list of words, expressions or gestures that are acceptable at each category in every Indian language,” the rules say.) (Note: This section in the final Rules has been shortened, but the meaning has overall stayed the same.)
- Nudity: Adult nudity with sexual content can only be portrayed at the A rating. “No content prohibited by law at the time being in force shall be” streamed, the Rules say. (Note: The previous draft allowed adult nudity without sexual content to be depicted up to the U/A 16+ rating. This has been removed in the final notified Rules. The clause making content prohibited by law impermissible was also not present in the previous draft.)
- Sex: The code implies that implicit and explicit sexual content can only be included in U/A 16+ and A categories. The Rules repeat the disclaimer in the Nudity guideline that content prohibited by law is impermissible. (Note: The previous draft explicitly said that pornographic content would not be acceptable.)
- Violence: The extent and nature of violence shall determine the maturity of classification. (Note: The previous draft expanded on this issue, but this has been reduced to a single line in the final Rules. The previous draft’s explanation of the violence guideline is summarized below.)
“Works that feature the following are likely to receive higher classifications
- Portrayal of violence as a normal solution to problems
- Heroes who inflict pain and injury
- Callousness towards victims
- The encouragement of aggressive attitudes
- Characters taking pleasure in pain or humiliation
- The glorification or glamorization of violence” — the Rules (previous draft)
“Sadistic or sexual violence, or other conduct that is demeaning or degrading to human dignity is likely to receive a higher classification,” the Rules’ previous draft said.
Update (12:37pm, February 26): Further changes from the previous draft have been updated in this summary.