MediaNama held a roundtable discussion on March 23, 2023, to discuss a possible framework for integrating proportionality in anonymity and verification – exploring a spectrum for how verification can be rolled out. Various government departments have called for verification – be it to access the internet, make calls or play online games – in the interest of preventing online harms to children, scams and spam.
Our objective was to identify:
- The room for anonymity (and understanding what role anonymity plays in privacy).
- The correlation (or lack thereof) between verification, identification and the prevention of harms, including spam and fraud.
- The effectiveness (or lack thereof) of existing verification norms such as telecom KYC and CNAP.
- A framework for the gradation of identification, from anonymity to pseudonymity, verification and identification.
- Technical challenges with verification, traceability and identification.
- Privacy risks of verification, traceability and identification.
Download a copy of the event report
Executive Summary
Despite the 2017 Puttaswamy judgment laying down the doctrine of proportionality, with its tests of legitimate purpose, suitability and necessity, Indian courts have failed to apply the proportionality test. While the test went on to include two more prongs over the years, the Indian government has enforced and/or proposed policies mandating verification and identification of citizens without taking proportionality and the potential for harm into consideration. It is thus important to explore whether there can be a mechanism that provides a gradation of proportionality to verification and makes room for anonymity online.
A scale from anonymity to identification is important because a blanket mandate for verification can lead to discrimination. For example, a landlord may choose to deny lodging to a person based on their sexual orientation. However, should that information have been shared with the landlord in the first place? A proper scaling system would answer this question and also specify cases where verification cannot be considered mandatory.
However, this is a slippery slope, considering there is no consensus as to what is “private.” Can mobile numbers be considered private information? Previously, there was a phone directory detailing every landline number, and there was no hubbub about this data collection. Yet, as technology progresses, the manner in which information can be used changes, and so people’s rights change and evolve as well.
Moreover, there is no consensus on “anonymity online.” David Kaye defined offline anonymity as “the condition of avoiding identification.” If the same is applied in an online context, then the focus shifts to “identification.” From whom do we want to avoid identification? How do pseudonyms figure into this? Is this infringement of privacy legal? Even the Puttaswamy judgment does not view anonymity as absolute.
User verification on social media can also be seen as an infringement of a person’s right to free speech. At the same time, caller identification on mobile networks is seen as a means of preventing fraud and scams, but in reality there is no correlation between the two. Without a proper framework in place, verification can normalise surveillance online, violate privacy and leave no room for anonymity.
There are other issues to consider, such as how verification mechanisms can impact specific groups like children, Dalits and LGBTQIA+ persons. Age verification can hinder a child’s privacy, and identification online can endanger non-binary people or people who face caste-based discrimination. Real-world use of these verification mechanisms has also led to practical issues. For example, some speakers discussed how caller identification fails to address the issue of spamming and scamming, how the CERT-In cybersecurity directions led to massive collection of user data over a long period of time, and how a two-person gang in Delhi used e-KYC to fake DSE verification for certain service providers.
Industry stakeholders said verification puts undue pressure on intermediaries to follow vague directions that compel them to collect a wide data-set on users. Market regulations coupled with KYC norms push entities like financial firms to collect as much data as possible on every customer. Often, platform measures are incongruent with the government’s vision of verification as a ‘trade-off’ for security, and some verification measures can be spoofed. The Puttaswamy judgment states that the government has to justify verification measures that infringe on privacy, a requirement that highlights the idea that one does not have to choose between privacy and security.
An alternative is to let parties decide the verification process between themselves, or to establish a well-defined set of expectations around e-governance, public services and digital services. Such expectations can help reduce the amount of verification and enable data minimisation, which is a key requirement of privacy laws.
Video & Coverage
MediaNama’s coverage of the discussion can be found here.
About the discussion
Speakers:
- Amol Kulkarni, Research Director, CUTS International;
- Anand Venkatnarayan, Cybersecurity Researcher and Co-Founder, DeepStrat;
- Beni Chugh, Research Manager, Dvara Research;
- Jhalak Kakkar, Executive Director, Centre for Communication Governance, NLU Delhi;
- Lalit Panda, Senior Resident Fellow, Vidhi Centre for Legal Policy;
- Pallavi Bedi, Senior Researcher, Centre for Internet and Society;
- Pranesh Prakash, Co-Founder, Centre for Internet and Society;
- Prasanna S., Advocate;
- Renuka Sane, Research Director, TrustBridge;
- Varun Sen Bahl, Public Policy Professional;
- Vijayant Singh, Principal Associate, Ikigai Law;
- Vrinda Bhandari, Advocate.
Participation:
We saw participation from organisations such as CCAOI, PwC, Thompson Cooper LLP, NASSCOM, CSLG-JNU, Disney, Bumble, Microsoft, Ikigai Law, DSCI, Simplilearn, Shardul Amarchand Mangaldas, GGSIPU, Polygon Technology, KNS Digiprotect, Grip, Deloitte, The Quantum Hub, SFLC.in, The Hindu, The Dialogue, Times Internet, Mimir Technologies, Internet Governance Project, Gestalt Strategy Consulting, Dvara Research, NITI Aayog, DigitalTrends, Inshorts, CRED, EY, Oijo, ICAMPS, Saarlegal, Sdela Telecom, Delhi Assembly Research Center, Article 21 Trust, IAMAI, Koan Advisory and more.
Support and partners:
The discussion was hosted with support from Meta and Truecaller. The Internet Freedom Foundation, CUTS International, Centre for Internet and Society, and the Centre for Communication Governance at the NLU, Delhi, were our community partners.
