Key Takeaways:
- The upper age limit for defining a “child” should be lowered
- Age verification and parental consent are not possible at scale
- Need to consult children while framing the Bill
- Children should be allowed internet access by default
- We need a risk-based approach to age verification like in the United Kingdom
- Businesses may be disincentivised from creating products for children
The four clauses on how companies should process children’s data in the draft Digital Personal Data Protection Bill, 2022, had policy experts and lawyers immersed in intriguing open discussions at MediaNama’s events – “Reworking the Data Protection Bill” – held in Delhi (December 8) and Bangalore (December 14). They pointed out that the Bill could lead to a system of widespread age verification for everyone on the internet, restricted internet access for children, and obstacles for businesses offering services to children.
Below is a summary of the top discussion points from both events. If you wish to read what the 2022 Bill says about protecting children’s data, click here.
You can watch a summary of our Delhi discussion here. The link to watch the Bangalore discussion will be added soon.
The Ministry of Electronics and Information Technology (MeitY) is seeking chapter-wise public feedback on the draft law until January 2, 2023. The submissions will be held in a “fiduciary capacity” and will not be publicly disclosed. Click here for more of MediaNama’s journalism on the DPDP Bill and India’s data protection laws.
Change the definition of “child”
- Upper age limit for defining “child” should be lowered: The Bill’s definition of “child” covers everyone below 18 years, which means even those between 13 and 18 are treated as children, emphasised Aparajita Bharti, co-founder of Young Leaders for Active Citizenship and The Quantum Hub. As a result, she said, their parents can control whether they have access to some parts of the internet. Across the world, the age of consent is as low as 13 years; under the GDPR, it is 16 years, and member states have the option to reduce it if they want to, she added. She also said that several member states have lowered it to 13 or 14 years.
- Look at the Guardians and Wards Act, 1890: Referring to the Act, one speaker said, “if the child is old enough to form an intelligent preference, the court may consider that preference (…) when appointing the guardian of the child.” This gives the child (below 18 years of age) the autonomy to make her decisions in certain circumstances.
- Look at the Juvenile Justice Act, 2015: “I think even in the Juvenile Justice Act, the court has said that when it comes to 16 to 18, they will determine whether the child is able to take decisions like an adult. I mean, they did bring in that distinction,” another speaker added, highlighting how in certain cases those below 18 years are treated as adults.
- Look at the POCSO Act, 2012: “When we look at the POCSO Act, there are so many cases relating to young adults in the age bracket of 16 to 18 years of age, whose consent really stands on a different threshold than younger children,” a speaker said.
Problems with parental consent and age verification
- Age verification could become mandatory for everyone: “We forget that for companies to make a decision whether parental consent is required, they will have to verify everybody who comes and makes an account on their platform. So it’s not just going to apply to children, it will apply to everyone, whether it’s me, whether it’s you. When we make an account, then we will have to verify our age,” Bharti said. Such verification could even lead to excess data collection, like collection of facial data or identification documents, MediaNama has previously reported. Bharti added that although “data minimisation” is mentioned in the explanatory note issued alongside the Bill, it is not reflected in the Bill itself.
- Age verification can lead to excess data collection: Vaneesha Jain said that for age verification, “you’d have to collect even more data, you might have to implement facial recognition or other forms of (verification) (…), to actually identify both the individual who’s the child and the parent as well as the relationship between them. So (…) that actually increases the amount of data being collected as opposed to minimising data.”
- Teenagers should get autonomy: Jain said, “you don’t want teens to have to necessarily take consent for every kind of activity that they may be indulging in such as, you know, accessing different streams of educational content, or getting into dating profiles or watching certain games or, you know, there’s a certain level of autonomy (that teenagers need).”
- Age gating is not possible at scale: “I think, from the Digital Economy Act in the UK, and the conversation around that, (…) a big learning that came out (was) that you didn’t have any clear method to do this (age verification) at scale, and so you would want to do it on a case to case basis and properly run trials to figure out where it’s absolutely necessary. So for example, I mean, they couldn’t even create this just for pornographic websites, because you couldn’t define what pornographic websites were, and then you couldn’t figure out how (…) age gate mechanism would actually work.”
- Remove the provision on parental consent: “At one level, you’re saying that you need parental consent to allow the child to access stuff. Why don’t you (…) just leave it at the prevention of processing (of data) that may harm a child?”, a speaker questioned. This comes from the point of view that if platforms are not allowed to cause harm to a child, then parental consent becomes less important.
- This bill puts all the responsibility on parents: Bharti said, “With the current phrasing (…) we are putting all the responsibility on parents – that parents, you take a call whether your child is safe or not on this platform – and you can consent on (children’s) behalf. I think we’ve had multiple rounds of discussions knowing that consent itself is broken in that sense. You actually had an opportunity to actually nudge platforms to do better in design itself, which you have actually taken away completely because:
a. privacy by design was deleted in the entire bill itself
b. there is no obligation on the platform to help children make better choices”
Also, not every parent will always be someone who’s acting in the “best interest” of the child, Jain said. She added that this phrase, which was there in the earlier version, is absent from this version of the bill.
- No clarity on what happens when a child turns 18: Jain asked, “is there fresh consent being taken from that child when they turn 18 years old? So there’s been a lot of data that’s been collected and processed?” There’s complete silence on questions like whether a child can withdraw consent to the processing of her data once she turns 18, she added.
- Does verification mean KYC for all users?: A speaker said, “What you’re looking for is the digital version of a signature, and then expecting people to do digital signatures or any other form (…) of verification, you’re probably going to end up doing KYC, or something of that sort, you’re going to do either a video verification or you’re going to do an ID based verification.” He added, platforms that want to avoid liability will look for the most foolproof mechanisms of verification.
- Is the bill in line with the Puttaswamy judgement?: One speaker asked, “if you have to verify then you need to verify everyone who’s using the internet, and then to that extent, does that pass the Puttaswamy test?”. Another speaker responded, “It will abjectly have failed all tests. So that effectively means there is no right to anonymous speech on the internet, and that’s a bridge too far for any court to take really”.
- Need to consult children while framing the Bill: Bharti said she has been to many consultations, “but where are people who work on child rights, and children themselves?” She added, “(…) you can get some 17-year-olds, 18-year-olds, we work with a few, they actually are very smart and articulate about what they think should be done”.
- Three ways of parental verification: A speaker who has worked on implementing parental verification systems mentioned three ways in which it can be done:
- Companies can do facial recognition where the parent and child are both present in front of the camera and the video recorded for verification can be immediately deleted.
- Unless valid card information is entered, it can be assumed the person is not an adult, and all payment options can be disabled.
- Companies can offer different versions of the same app for children and adults.
- Age verification without collecting personal data: A speaker suggested a less invasive age verification system, like making internet users solve simple problems (such as Math problems) that children below a certain age cannot solve.
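As a rough illustration of how such a knowledge-based gate could work, here is a minimal sketch in Python. It is hypothetical: the arithmetic question bank, the number of rounds, and the pass threshold are assumptions made for illustration, not anything specified in the Bill or prescribed at the events.

```python
import random

# Hypothetical question bank: arithmetic that users above a cut-off age
# can typically solve, but younger children usually cannot.
def make_question():
    a, b = random.randint(12, 99), random.randint(12, 99)
    return f"What is {a} x {b}?", a * b

def knowledge_based_age_gate(answer_fn, rounds=3, required=3):
    """Ask `rounds` questions and treat the user as above the cut-off
    age only if at least `required` answers are correct. No personal
    data (IDs, faces, payment details) is collected at any point."""
    correct = 0
    for _ in range(rounds):
        question, expected = make_question()
        try:
            if int(answer_fn(question)) == expected:
                correct += 1
        except ValueError:
            pass  # a non-numeric answer counts as incorrect
    return correct >= required

if __name__ == "__main__":
    # Example: gate console access on the quiz before proceeding.
    if knowledge_based_age_gate(lambda q: input(q + " ")):
        print("Treated as above the age threshold; no personal data collected.")
    else:
        print("Treated as a child: parental consent flow would be required.")
```

Of course, such a gate is trivial to defeat (an older sibling or a calculator will do), which is consistent with the broader point speakers made above: no single age verification method works reliably at scale.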
Don’t restrict children’s internet access
- Allow internet access to children by default: Nikhil Pahwa, founder of MediaNama, while quoting from a previous MediaNama discussion, said, “(…) the (way) UK approaches this particular part of the regulation is that they come from a point of access to the internet – saying that access to the internet is important for children.” He added, “(in the UK) you start with the point (…) (where) everything is whitelisted, and then let’s put criteria (restrictions) for other (harmful) aspects. Here (in India), what we’re doing and what we’ve done since the beginning is, first, let’s block everything, and then let’s give parents the key to that door and let them choose which door they will open. So in that sense, if you’re below the age of 18, you don’t have access to start off with.” He suggested, “I think we need to reverse that approach.”
- Whitelist approach for allowing tracking is not a good idea: The Bill bans tracking and targeted advertising directed at children unless the government provides exemptions, Bharti said. She added that the government is essentially looking at a whitelist approach, where tracking and targeted advertisements will be banned for all services and a positive list will be made, which could allow tracking for services such as ed-tech. “Now the question is what is EdTech? Isn’t the whole of internet EdTech, in some sense? Isn’t YouTube EdTech, I study from YouTube. If you’re an artist, you look at Instagram, you follow artists of your choice to actually understand (…) their artwork etc. So what is EdTech? We cannot slice and dice (the) internet for children with these definitions, (…) anyway services have multiple features and may be serving multiple purposes for somebody.”
- Wide-ranging social impacts of mandating verifiable consent: Bharti said, for example, that parents are likely to react very differently when a boy asks about making a social media account than when a girl asks for the same thing. We know that girls face more resistance – what we see in the physical world will also translate into the virtual world if you put internet access-related decisions in the hands of parents. Another example: as a country we talk about improving digital literacy, yet parents will have the option to restrict children’s access to the internet simply by not giving consent, Bharti added.
- We need a risk-based approach: “I do think what we need is a risk-based approach that has safeguards proportionate to the risks on the set platforms. So we should not do one size fits all, in the sense ban everything, ban tracking on all platforms, ban targeted advertising from all platforms, we should look for a core regulatory model that presses platforms to be better for children, similar to the age-appropriate design code in the UK,” said Bharti. Later she also said, “if you look at the age-appropriate design code, you have to do it (risk-assessment) before launching a product in all cases, if it’s likely to be accessed by children.” She also said that India’s bill doesn’t ask companies to do such assessments “because we’ve anyway, just banned everything”.
- Can children withdraw the consent given by parents?: A speaker questioned where the law stands on a child’s right to withdraw consent that the parent has already provided. No one in the room was able to provide a clear answer.
How the Bill protects children from harms
- This bill has gone backwards: The United Nations Convention on the Rights of the Child (UNCRC) says laws should keep in mind the best interests of the child, Bharti noted. She added that we had similar phrasing in the 2019 Bill, but it has now been removed from the 2022 Bill. “So in that sense, we’ve actually gone backward from where we were in 2019 and that is really, really concerning.”
- The Bill may not adequately protect children from harms: MediaNama has previously reported how the definition of “harms” has been trimmed down compared to the 2021 Bill. Speaking on this subject, a speaker said, “children face some very specific harms (…), like mental health – we work with a lot of children who face mental health issues, they have esteem issues, they compare their life a lot more with their peers, etc. Now, because you have this one size fits all policy, you actually had to reduce the “harms” because you’re not taking the risk-based approach in that sense.”
- Need better product designs for children, but the bill doesn’t talk about it: One speaker said, platforms often deploy nudge techniques to make children use their product more. The speaker said, “the way the architecture of any software product is designed, is essentially a hook or a nudge technique. And I believe that it would really do bundles of advantage to the child and (…) the parent if something could be done about those nudge techniques”. He suggested that companies should conduct ethical impact assessments and benchmarks for products should be set up to deal with this issue. Another speaker said, “in the age-appropriate design code, they do have specific recommendations around what kind of nudges you can’t have on platforms. And therefore, I think the onus (…) actually shifts (…) to platforms.” She added, “so I think the platform design is a very important piece, as you’re rightly saying, but we have not made any space for it because we have banned everything. When it got banned then you don’t need any platform design intervention, in that sense, you can’t do much. So that is the problem, you are actually curtailing innovation in that sense.”
Impact of banning tracking
- Businesses will be disincentivised to create products for children: Bharti said, “you are essentially stifling innovation for a child. And why should children not benefit from innovation on the internet? You know, if I’m an entrepreneur, and if I know that if I’m going to make a product for children, I’m going to get into these hassles, then I would rather put my money and effort into something else, where regulation is a lot more clear, where I’m not seen as somebody who’s out to harm children”.
- Impact of restricted tracking on education, suicide prevention services: One speaker said, “a lot of social media platforms have been saying that they use tracking to also have some kind of nudges (…) that help the child or whoever the user (is), seek any kind of help. So it needs to be thought whether we are leaving room for such interventions.” Clause 4 of Section 10 provides exemptions, and in the earlier version of the Bill, exemptions were “based on offering counselling services or child protection services. They’ve removed that, but I’m just hoping that whatever rules come up will include more detailed provisions so that whatever behavioural monitoring happens, happens for the welfare of the children.”
- Profiling of children can be useful: One speaker said, profiling can help differentiate children from groomers. He said, “profiling could be for the purpose of identifying groomers, right? Because someone can create an account as a child, and then based on behavioural patterns there are instances where platforms determine whether that person is actually a child or a groomer, and then they can block that account.” He also said targeted advertising can come in handy when “you are determining the weaknesses of a child when they’re doing courses, even on Khan Academy, etc. And then they will pop up courses that they can do. Now you can say that’s not advertising, but even ed-tech companies build profiles and on that basis, they recommend certain paid courses. So that would be targeted advertising, which is off (in the bill). So perhaps the inclusion of a clause in the best interests of the child would be the right way to go over here, because that creates an exception.”
The story and headline were updated on 17 December 2022 (5:35 PM) to include comments from our second event on the DPDP Bill, 2022, held in Bangalore.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Also read:
- Seven Issues With How The Data Protection Bill Safeguards Children’s Data
- DPDP Bill, 2022: Fewer Protections For Children In India’s Latest Data Protection Bill
- Businesses To Brace Themselves For California’s Age-Appropriate Design Code
- Facing The Consequences Of The Data Protection Bill On Children’s Digital Privacy
I cover privacy, surveillance and tech policy. In my reporting, I try my best to present the most relevant facts, and sometimes add in a pinch of my thoughts.
