Major social media platforms like TikTok and Instagram have also taken steps to ensure the safety of children online while India’s draft PDP Bill moves to address this issue as well.
“As kids and teens spend more time online, parents, educators, child safety and privacy experts, and policymakers are rightly concerned about how to keep them safe,” Google said while announcing new measures geared at protecting the safety of children across its platforms.
“Some countries are implementing regulations in this area, and as we comply with these regulations, we’re looking at ways to develop consistent product experiences and user controls for kids and teens globally,” the company added.
Why it matters: Platforms like YouTube, Facebook, and TikTok have previously been accused of not doing enough to protect children, and Google and TikTok have even been fined over their handling of kids’ data, though these companies have stepped up their efforts recently. Google’s announcements on this front are a major step forward because Google Search is essentially a gateway to the internet, and its YouTube app is one of the most popular platforms among children and teenagers.
What are the new measures announced by Google?
- Restricting ad targeting: In addition to preventing age-sensitive ad categories from being shown to teens, Google will block ad targeting based on the age, gender, or interests of people under 18. These changes will roll out globally across Google’s products in the coming months.
- Option to remove image from Google results: Google will allow anyone under the age of 18, or their parent or guardian, to request the removal of their images from Google Image results. This is in addition to all the existing removal options the company has including the removal of non-consensual explicit or intimate images, financial and health information, and doxxing content. “Of course, removing an image from Search doesn’t remove it from the web, but we believe this change will help give young people more control of their images online,” Google cautioned.
- Changes to YouTube: Google announced the following changes coming to YouTube and YouTube Kids in a separate blog post:
- Private option by default: Videos uploaded by users aged 13-17 will have the most private option as the default upload setting, which means the content can only be seen by the user and whomever they choose. Users can, however, change this setting if they wish.
- Take a break and bedtime reminders on, autoplay off by default: YouTube will turn on “take a break” and bedtime reminders by default for all users aged 13-17 and turn autoplay off by default for these users.
- Autoplay for YouTube Kids: An autoplay option will also be added to YouTube Kids and turned off by default. Parents will also be given the ability to lock this setting.
- Removing overly commercial content from YouTube Kids: YouTube already does not allow paid product placements on YouTube Kids, but now it will begin removing overly commercial content as well, “such as a video that only focuses on product packaging or directly encourages children to spend money,” the platform said.
- SafeSearch on by default: Google’s SafeSearch feature, which helps filter out explicit results when enabled, will be turned on for existing users under 18 and made the default setting for teens setting up new accounts. This feature is already on by default for all users under 13.
- Location history permanently off: Google’s location history feature is currently turned off by default for all accounts, but children with supervised accounts don’t have the option of turning it back on. The same will now extend to all users under the age of 18.
- Protections for Google Assistant: In addition to bringing the SafeSearch feature to smart displays, Google announced that it is working on new default protections that will prevent mature content from surfacing during a child’s experience with Google Assistant on shared devices. However, the announcement did not mention what these protections will be or how they will work.
- New safety section in Play Store: Google is adding a new safety section to Google Play that will give developers a way to showcase their app’s overall safety, including letting parents know whether the app follows Google’s Families policies. Apps will also be required to disclose how they use the data they collect, giving parents more information to decide if an app is right for their child.
- SafeSearch on and Incognito off in Google Workspace for Education: In addition to already being able to tailor the experience for their users based on age, K-12 institutions that use Google Workspace will now have SafeSearch turned on by default, and the ability to switch to Guest/Incognito mode will be turned off by default.
- New digital wellbeing tools in Family Link: Google’s Family Link, which allows parents to monitor and control their children’s accounts by managing their apps and setting screen time limits, will now give parents the ability to filter content such as news, podcasts, and webpages on Assistant-enabled smart devices.
- Improving communication of data practices to children: “It’s our job to make it easy for kids and teens to understand what data is being collected, why, and how it is used,” Google stated while announcing that it is developing “engaging, easy-to-understand materials for young people and their parents to help them better understand our data practices.”
What have other major social media platforms recently done on this front?
TikTok: In January, TikTok announced changes to its app to better protect underage users by limiting their public visibility and giving them more control over who can see and comment on their videos. The short-video app said it will set the accounts of users aged between 13 and 15 years to “private” by default. Users in this age group will also have to choose either to disallow comments on their videos or to allow only their Friends to comment; the “everyone” option will be removed, the company said.
Instagram: On July 27, Instagram announced three changes it is making to improve the safety of young users on its platform:
- Making accounts of users under the age of 16 private by default
- Making it harder for suspicious accounts to find young users
- Limiting advertisers’ ability to target young users (coming to both Instagram and Facebook)
Apple: Apple last Thursday announced three new measures coming to its operating systems this fall that aim to limit the spread of Child Sexual Abuse Material (CSAM) and protect children from predators:
- CSAM detection in iCloud Photos: Using advanced cryptographic methods, Apple will detect if a user’s iCloud Photos library contains high levels of CSAM content and pass this information on to law enforcement agencies.
- Safety measures in Messages: The Messages app will warn children about sensitive content and allow parents to receive alerts if such content is sent or received.
- Safety measures in Siri and Search: Siri and Search will intervene when users try to search for CSAM-related topics and will also provide parents and children with expanded information if they encounter unsafe situations.
What does India’s data protection bill say about children’s safety online?
The draft Personal Data Protection (PDP) Bill, 2019 has defined guardian data fiduciaries (GDF) as entities that
- Operate commercial websites or online services directed at children, or
- Process large volumes of personal data of children.
What are the responsibilities of GDFs?
- GDFs are prohibited from “profiling, tracking or behavioural monitoring of, or targeted advertising directed at, children”. Essentially, they cannot process children’s data in a way that can cause “significant harm” to the child.
- GDFs are supposed to verify the age of their users, and obtain consent from their guardian or parents if the user is a “child” — anyone under 18.
- Failure to adhere to these provisions can attract a fine of ₹15 crore or 4% of the company’s global turnover, whichever is higher.
In a MediaNama discussion on this topic, we examined how these fiduciaries will comply with this complex mandate. In another discussion, we debated whether there should be a blanket age of consent for using online services.
- Children And The Internet At UN Internet Governance Forum: Issues Around Gaming, Online Harms, Education
- Roundup: Industry Leaders, Civil Society, And Technical Experts React To Apple’s CSAM Filter Plans
- How WhatsApp Deals With Child Sexual Abuse Material Without Breaking End To End Encryption