According to a report by the New York Times, Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019, yet it disabled only a fraction of those accounts. Instead, it collected these users’ data, such as their locations and email addresses, without parental permission, a violation of a federal children’s privacy law in the US.
These charges are part of an unsealed complaint from the lawsuit filed by 33 US states against the social media conglomerate Meta last month. In this lawsuit, the states have alleged that Meta knowingly designed and deployed harmful features on Instagram and its other social media platforms that make children and teens addicted to them. The states also pulled up Meta for non-compliance with the Children’s Online Privacy Protection Act of 1998 (COPPA), alleging that its platforms Facebook and Instagram process children’s data without parental consent.
Soon after the complaint became public, the Wall Street Journal released an exclusive story describing an experiment it had run on Instagram’s algorithm by setting up accounts that followed only young gymnasts, cheerleaders, and other teen and preteen influencers. The experiment found that Instagram’s algorithm delivered risqué footage of children as well as overtly sexual adult videos to those accounts. The algorithm spliced this content with ads for major brands like Bumble and Match Group (Tinder’s parent company). Both companies have reportedly paused advertising on Instagram.
The challenges of obtaining parental consent:
Just like the US, India also has regulations in place that prevent tech companies from processing the personal data of a child. Under India’s Digital Personal Data Protection Act (DPDP Act), 2023, platforms are required to obtain “verifiable” parental consent before they process the data of anyone under the age of 18.
Speaking at MediaNama’s annual conference PrivacyNama, Uthara Ganesh, head of public policy at the social media platform Snap, said that each age verification method comes with a trade-off. “There’s self-disclosure, which is easily circumventable. The second is, of course, ID-based verification, which we know, of course, has data privacy risks, but then also the trade-off there is that some people may not even have IDs. There’s an access versus accuracy trade-off there. The third is, of course, using biometrics of some sort, which some experts think actually might be quite fine from an accuracy perspective, but then there are variances because of things like skin color and your physical features, etc,” she said at the conference. These trade-offs would apply not just in India but anywhere else in the world that seeks to bring in parental consent as a means of protecting children.
Another challenge with obtaining parental consent is determining who is a child in the first place. That would require everyone to verify their age, which in turn would require data collection and processing. So how would a company like Meta ever be able to comply with the DPDP Act, or even with COPPA?
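To illustrate the first trade-off Ganesh describes, consider how little a self-disclosure age gate actually checks. The sketch below is purely hypothetical, not any platform’s actual implementation; the 18-year threshold follows the DPDP Act described above, and the gate trusts whatever date of birth the user types in, which is exactly why it is “easily circumventable.”

```python
from datetime import date

# Threshold under India's DPDP Act (COPPA's equivalent threshold is 13).
ADULT_AGE = 18

def age_from_dob(dob: date, today: date) -> int:
    """Age in whole years, subtracting one if this year's birthday
    hasn't occurred yet."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def needs_parental_consent(declared_dob: date, today: date) -> bool:
    """Self-disclosure gate: relies entirely on the user's own claim.
    A child can bypass it by simply typing an earlier birth year --
    nothing here verifies the declared date against any record."""
    return age_from_dob(declared_dob, today) < ADULT_AGE
```

The other two methods Ganesh lists (ID checks and biometric estimation) close this loophole only by collecting more sensitive data, which is the access-versus-accuracy trade-off in a nutshell.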
Why not create a different application for kids?
Other social media platforms, like YouTube, have created platforms specifically targeted at children. YouTube’s kids’ platform comes with built-in parental controls, such as letting parents set the age level of the videos the child can watch – preschool, school age, or all kids – and a timer feature that can limit the time a child spends on the platform. It makes one wonder why Meta couldn’t go the same route. The fact is that it tried. Here is an overview of its attempts and the issues they presented:
Instagram for Kids: Meta did indeed have plans to create a platform called ‘Instagram Kids’ for those under the age of 13 back in March 2021. This platform was meant for 10-12-year-olds and would require parental permission to join, wouldn’t have ads, and would come with age-appropriate content and features. According to Meta (called Facebook at the time), the idea behind the platform was to give parents the option to “give their children access to a version of Instagram that is designed for them.” However, those opposed to Instagram Kids argued that instead of responding to a need, Meta was creating a new one, “as this platform appeals primarily to children who otherwise do not or would not have an Instagram account.”
The same year that Meta announced its plans for Instagram Kids, the Wall Street Journal published an investigation into Meta revealing the negative impact that Instagram had on teenagers’ mental health. With that, Instagram Kids was paused, and in the two years since, there has been no talk of a revival.
Messenger Kids: This is a video chat and messaging service for children under the age of 13 that Meta launched in 2017 and that remains active today. In 2018, a Wired report found that the majority of experts who vetted Messenger Kids before its launch had received money from Facebook. Then, in 2019, the service was reported to have a critical design flaw in its group chat system that allowed children to enter group chats with unapproved strangers. (For context, the app is supposed to notify parents when a child adds someone as a friend, and parents can override any new connections made through the app.)
Even this year, the US Federal Trade Commission (FTC) has said that Meta misled parents and failed to protect the privacy of children using its Messenger Kids app, allowing kids to communicate with contacts that were not approved by their parents.
Who is responsible for ensuring that children are safe online?
A participant at MediaNama’s PrivacyNama conference, intellectual property lawyer Rahul Ajatshatru, highlighted that age verification measures protect the state and platforms from liability in cases where a child is harmed by a company’s data processing activity. “When you become an adult, you’re responsible for the consequences of your action from the state and citizen point of view. So, when you do something, you are by law responsible for all consequences thereof which you can foresee. Now, in terms of a parent and a child, it is not that the parent or the child is able to appreciate the harm, but the state is fixing responsibility of the harm to be caused to the child on the parents saying, listen, it was done under your supervision care and you’re responsible for the harm,” he argued.
Meta wants to be even further removed from liability: it doesn’t want to be the party verifying users’ ages on its services at all. In a recent blog post, Meta’s Global Head of Safety, Antigone Davis, suggested that app stores (like Google Play and the iOS App Store) should take charge of age verification, arguing that it can be impossible for parents to keep up with all the apps teens use today.