YouTube on October 25 announced that monetisation of content targeting kids will depend on the quality of the content. “If a channel is found to have a strong focus on low-quality ‘Made for Kids’ content, it may be suspended from the YouTube Partner Programme. If an individual video is found to violate these quality principles, it may see limited or no ads,” the company said. These new guidelines are set to take effect in November.
“Made for Kids” content is not only content on the YouTube Kids app, but also content on the main app that is targeted towards children. While YouTube has long maintained standards on what constitutes high- and low-quality kids’ content and used this to determine recommendations and eligibility for YouTube Kids, this is the first time the platform will determine monetisation based on these standards.
What is “low-quality” content?
According to YouTube, low-quality content includes content that is:
- Heavily commercial or promotional: Content that is primarily focused on purchasing products or promoting brands and logos (e.g., toys and food). It also includes content that is focused on excessive consumerism.
- Encouraging negative behaviors or attitudes: Content that encourages dangerous activities, wastefulness, bullying, dishonesty, or a lack of respect for others (e.g., dangerous/unsafe pranks, unhealthy eating habits).
- Deceptively educational: Content that claims to have educational value in its title or thumbnail, but actually lacks guidance or explanation, or is not relevant to children (e.g., titles or thumbnails that promise to help viewers “learn colors” or “learn numbers,” but instead feature mindless repetition or inaccurate information).
- Hindering comprehension: Content that is thoughtless, lacks a cohesive narrative, or is incomprehensible (e.g., shaky visuals or inaudible audio), as is often the case with mass-produced or auto-generated content.
- Sensational or misleading: Content that is untrue, exaggerated, bizarre, or opinion-based, and may confuse a young audience. It might also include “keyword stuffing”, or the practice of using popular keywords of interest to children in a repetitive, altered, exaggerated, or nonsensical way.
- Strange use of children’s characters: Content that puts popular children’s characters (animated or live action) in objectionable situations.
One of YouTube’s biggest kids’ channels might be affected
As noted by The Verge, these new guidelines will probably impact Ryan’s World, one of the biggest kids’ channels on YouTube. The channel, which has 30.8 million subscribers, mostly features 10-year-old Ryan Kaji unboxing toys. “It’s definitely what one would describe as ‘consumeristic’ — which is what YouTube says it’s trying to cut down on,” The Verge wrote.
YouTube said that it has “reached out to potentially impacted creators in order to support them before these changes take effect next month.”
What else has YouTube done to protect kids on the platform?
Ever since YouTube’s 2019 settlement with the US Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act (COPPA), YouTube has been working on making the platform more appropriate for kids. As part of this settlement, all content creators have been mandated to disclose whether or not their videos target kids, and YouTube banned targeted advertising, comments, and some community features for kids’ content.
In February this year, Google introduced supervised accounts, which make it possible for tweens and teens to explore the main YouTube app under parental supervision. Then in August, Google announced another set of changes targeted at protecting kids across its various platforms, including removing overly commercial content from YouTube Kids and providing “take a break” and bedtime reminders by default for all users aged 13-17.
What have other major social media platforms recently done on this front?
TikTok: In January, TikTok announced changes to its app to better protect underage users, by limiting their public visibility and giving users more control over who can see and comment on their videos. The short-video app said it will set the accounts of users aged between 13 and 15 years to “private” by default. Users in this age group will also have to choose either to disallow comments on their videos or to let only their Friends comment; the “everyone” option will be removed, the company said.
Instagram: On July 27, Instagram announced three changes it is making to improve the safety of young users on its platform:
- Making accounts of users under the age of 16 private by default
- Making it harder for suspicious accounts to find young users
- Limiting advertisers’ ability to target young users (coming to both Instagram and Facebook)
Notwithstanding these changes, Facebook and Instagram have come under heavy criticism after the recent Wall Street Journal revelations on how the photo-sharing platform harms teen mental health. Bowing to this pressure, Facebook in September paused its work on Instagram Kids, a version of its app for children under the age of 13, but said that it will continue to make the main app safer for teens and tweens.
Apple: Apple in August announced three new measures that aim to limit the spread of Child Sexual Abuse Material (CSAM) and protect children from predators:
- CSAM detection in iCloud Photos: Using advanced cryptographic methods, Apple will detect if a user’s iCloud Photos library contains high levels of CSAM content and pass on this information to law enforcement agencies.
- Safety measures in Messages: The Messages app will warn children about sensitive content and allow parents to receive alerts if such content is sent or received.
- Safety measures in Siri and Search: Siri and Search will intervene when users try to search for CSAM-related topics and will also provide parents and children expanded information if they encounter unsafe situations.
However, after facing pressure from privacy advocates over these features, Apple on September 3 said that it will delay their implementation.
What does India’s data protection bill say about children’s safety online?
The draft Personal Data Protection (PDP) Bill, 2019 defines guardian data fiduciaries (GDFs) as entities that:
- Operate commercial websites or online services directed at children or
- Process large volumes of personal data of children.
What are the responsibilities of GDFs?
- GDFs are prohibited from “profiling, tracking or behaviourally monitoring of, or targeted advertising directed at, children”. Essentially, they cannot process children’s data in any way that can cause “significant harm” to the child.
- GDFs are supposed to verify the age of their users, and obtain consent from their guardian or parents if the user is a “child” — anyone under 18.
- Failure to adhere to these provisions can attract a fine of ₹15 crore, or 4% of the company’s global turnover, whichever is higher.
In a MediaNama discussion on this topic, we examined how these fiduciaries will comply with this complex mandate. In another discussion, we debated whether there should be a blanket age of consent for using online services.
Also read:
- Google Announces A Slew Of New Measures To Protect The Safety Of Children On Its Platforms
- Teenagers On Instagram May Soon Be Nudged To Look The Other Way On Harmful Content
- Exclusive: India’s Child Rights Body Sent 7 Content Takedown Requests Since IT Rules Kicked In
- YouTube Announces Ban On Anti-Vaccine Content For All Approved Vaccines, Not Just COVID Vaccines