“Why is it that there is this uniformity in what [detailed information] we are asking for [from users] when we are verifying [them]?” asked Renuka Sane, Associate Professor, National Institute of Public Finance and Policy, at MediaNama’s roundtable on “Exploring User Verification” last Thursday. “To the best of my knowledge, it all started with the 9/11 attacks. The Financial Action Task Force [FATF, which leads global efforts against ‘money laundering, terrorist and proliferation financing’] said we want to do something about weapons proliferation and terrorism finance. We need to identify financial flows between persons. Therefore, the financial sector now needs to verify who the person is, who is sending the money, and who’s receiving it.”
The roundtable saw experts pore over the future of anonymity online, amidst broadening government mandates to “verify” the identity of Indian netizens. These mandates, arguably introduced to ‘protect’ users from online harm, are often difficult for platforms and companies to comply with. Experts debated how verification impacts industries, when it should be used, and whether platform-led alternatives to this top-heavy and privacy-infringing form of regulation exist.
Read: Can We Map A Framework For Verification? NASSCOM’s Varun Bahl On A Model For Proportionality #NAMA
In parallel, India had the Prevention of Money Laundering Act (PMLA) guidelines, Sane added. The PMLA and FATF provisions came together to produce the Indian KYC (Know Your Customer) form, which is used to collect detailed information on customers and verify them as they sign up for services.
“The FATF didn’t put [down] stringent [KYC] requirements,” Sane argued. “They said you can do a risk-based KYC, you can figure out that for small value transactions, you don’t need to know who that person is. For large value transactions, for politically sensitive people, maybe you need to do a larger KYC.”
“But, that implementation has just not happened in India,” Sane asserted. “It doesn’t matter if you are a 20-year-old customer. They know everything about you, well, everything financially about you. That’s because there is that implementation [of stringent KYC norms] and then there is the enforcement process by the [market] regulator, who is checking what these regulated entities are doing. There are a lot of mistakes in the enforcement process as well, which makes everyone super risk averse. They say, ‘I’m going to collect as many Xerox copies as I can, because I want to play it safe when the regulator knocks at my door’. [In verification debates] There are questions of privacy and you know, what should be anonymous, what should not be. But then there are also deeper questions around just basic implementation and choice given to financial firms.”
MediaNama hosted this discussion with support from Meta and Truecaller. The Internet Freedom Foundation, CUTS International, Centre for Internet and Society, and the Centre for Communication Governance at the National Law University, Delhi, were MediaNama’s community partners for this event.
Verification is seen as a trade-off to provide “security” online: “The government’s argument [for verification] in a lot of ways comes from the concept of security and threat to life and liberty,” said an audience member. “There is an expectation in many contexts that you’re willing to exchange privacy for security. For example, [a parallel could be] when you go to the airport and you get yourself scanned, and your baggage gets rifled through.”
The government’s penchant for verification mandates online stems from a desire to curb bad practices, they added. “There have been many instances where anonymity [online], especially in democratic countries, has been twisted, and has been misused in a big way to cause real harm to people,” the audience member added. “The government’s taken action in the form of user verification—in the hope that all bad elements in society will think that if they are easily identified, they won’t act in a bad way on the Internet.”
These verification solutions aren’t necessarily foolproof, however—and may not always solve the problems the government is trying to address. “But, the flip side is that it’s so easy to spoof the system,” the audience member noted. “You can get authenticated identities on the Internet, on the dark web, very easily. You can get authenticated credit cards, and live credit cards, and you won’t even know that these are being used to authenticate ‘yourself’ [in scam scenarios] on various systems.”
Trade-offs don’t exist; the burden is on the state to justify why privacy infringements are necessary: Disagreeing with the trade-off argument, Advocate Prasanna S. referred to the Supreme Court’s 2017 Puttaswamy verdict, which recognised privacy as a fundamental right.
“The foreseeability of harms was central in the Supreme Court’s reasoning on why privacy was a fundamental right,” Prasanna explained. “With the 2017 moment, the burden has shifted so that [in] any public policy that seeks to engage the right to privacy, the burden [of justification] certainly shifts onto any person who wishes to deny that [right]. This shift in burden is also important for us to disabuse the notion of a trade-off [between privacy and security].”
“There is no trade-off, there are no two equal entities here,” Prasanna argued. “With the burden shifting, the state now has to show that if this [verification] is required, then what is the…security they’re going to achieve for forcing this particular policy measure…I know airport security was given as an example, and we’ve kind of accepted it [this trade-off here]. But there has been, at least to the limited knowledge that I have, very little theorisation and study on what differential security it has achieved. When that is the case, we cannot use that as a norm to say that this is a valid trade-off, that we have to therefore part with our privacy in exchange for something else.”
Under-justified government verification mandates impact Indian industries, and lack a clear aim: Varun Sen Bahl, Public Policy Manager at NASSCOM, observed that “current verification requirements are not asking questions of feasibility and scalability. That has both knock-on effects in terms of what service providers have to do, and also in terms of overall harm to privacy. The best example of this is the CERT-In cybersecurity directions.”
Released last April, these government rules mandated that companies report cybersecurity incidents within six hours. Companies also had to maintain system logs for 180 days, maintain “Know Your Customer” (KYC) and transaction information of customers if they were crypto exchanges, and maintain customer information for VPN, cloud service, and data centre providers. The government recently stated that it has no information on how many companies complied with the directives.
“We spent a lot of time analysing the directions, and trying to unpack that one requirement which said that cloud service and VPN providers are to essentially collect a range of information,” Bahl continued. “The first thing we noticed is that there was a disconnect between the problem that you [the government] wanted to solve and the information you were trying to collect. The problem to solve was that people were using public cloud services to host, let’s say, malicious domains, and they were using free accounts to do that. So, you wanted to curb that practice. But, you started collecting information on ownership patterns [of the companies], of logs from the time of [user] registration and generally. I don’t think that connection [between the problem] and the information sought to be collected was ever established.”
“That’s something we’re seeing with a lot of regulations,” added MediaNama’s Editor Nikhil Pahwa. “There seems to be a gap of proportionality [between the solution and the problem]. Second, there’s no correlation with the harm that can be prevented just because someone’s identified.”
Pahwa used the example of the telecom regulator’s recent proposal of a Truecaller-like caller ID system, where persons receiving a call automatically know the “verified” name of the caller. The system is reportedly being introduced to protect consumers from spam and unknown callers—but it makes anonymous calls impossible, raising privacy concerns too.
“If my name is being shown to someone I’m calling, they may not know me, but might still take the call,” Pahwa argued. “No one’s not going to pick up your call just because they can see your name, right?”
State’s verification approach crowds out other pre-emptive actions that platforms can take: “Identity as deterrence [for online harms] is a tricky, slippery slope,” added Beni Chugh, Research Manager at Dvara Research. “We’ve seen that all of us are identifiable in the income tax regime and so many other regimes, [but] I don’t know how much of a deterrence effect this has had on so many of us. The other part is it’s so focused on ex-post [action on user harms], [in] that should something go wrong, we will be able to catch hold of someone. That completely vacates the room for ex-ante [or pre-emptive regulation].”
Chugh referred to PornHub’s program to only allow verified users to upload content, in an effort to combat revenge porn on the platform, among other issues. “One thing a technology-intensive company like that could have done is already [put in] place Artificial Intelligence to detect what’s happening [in the content]…[That is, it could have used pre-emptive] content moderation as opposed to ex-post going after the person. Identifying a person and putting them behind bars is really the extreme step, and by then, the Internet has already become unsafe for so many of us. Can we not think of what are the ex-ante measures that can be used to mitigate or prevent the[se] harms?”
Verification measures should be proportionate to their outcomes: “When we speak about specific [verification] requirements, it helps to realise who the anonymity is against,” observed Bahl. “Is it against the service provider? Is it against the regulator? Is it against third parties like auditors? Is it against the public? So the issue changes depending on where exactly the intervention lies. That changes the way you calculate or make the calculus around the impact [verification has] on privacy.”
Verification mandates may normalise surveillance online: “Anonymity should be the default setting [for users online],” added Chugh, on a separate note. “If I have to make a comparison to my real life, then every time I’m going to my pharmacy, unless I’m asking for a restricted drug, I’m not even asked to show a prescription. Why am I then being asked to identify myself over and over again on Practo [the online platform where you can book appointments with medical professionals]? So, why is it that the Internet needs to have so much more surveillance?”
Allow different parties to decide their verification optimum: Pushing against the commonly held position at the roundtable that user anonymity should be the default online, Sane said, “I feel that too often, we are presuming that there is some sort of a universal [standard online] where this is anonymity, and that’s the default. Anything beyond that is where we start talking. But, I want to push back against that. There are transactions in which there is a buyer and a seller—let them figure out what makes sense for them [when it comes to verification]. I think we need to not have a standard universal approach and allow for different parties to choose what their optimum is.”
Responding to Sane, CUTS-CCIER’s Research Director Amol Kulkarni added that while two parties may decide the level of privacy and the terms and conditions of verification between themselves, some parties are more powerful than others. “In the financial sector, and other sectors as well, we have seen that service providers are disproportionately powerful,” Kulkarni argued. “Consumers and service recipients do not necessarily have that choice [to negotiate, presumably]. As the laws of the financial sector have evolved, we have necessarily provided an extra layer of protection for the vulnerable. Therefore, there is a minimum [standard of] security, anonymity, or privacy settings which applies to everyone so that the vulnerable can be protected.”
Sane noted that there are differences between the information asked for when providing a service, and how that information is used or stored once the service is provided. “The questions surrounding both of these are different, and in many of these conversations today, I feel like we are mixing them up. So, I may show my Aadhaar card to get a bank account, that’s the way they verify who I am. But, after that, what the bank does with that Aadhaar card, or with the data on my transactions, [those] are very different questions.”
“But isn’t this a question of trust and efficacy within the system as well?” Pahwa asked. “You’re putting the entire onus now on the [verified] user having to part with more of their privacy because your [the private and public sector’s] technical implementation [of security measures] is not strong enough.”
Sane agreed, adding that few are asking financial firms about what needs to be verified for a user to be provided with a service. “I may not need to know too much about you, but may be happy to provide you a service as long as you are paying me,” Sane mused. “Today, financial firms don’t have this choice. [Different] Rules mandate them to ask for information from you.”
Part 1 of this series explores how notions of private information and privacy rights have shifted over the years. Part 2 discusses how user verification impacts fundamental rights.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Read more
- Can We Map A Framework For Verification? NASSCOM’s Varun Bahl On A Model For Proportionality #NAMA
- When Does Information Become “Private” And How Do “Privacy Concerns” Arise? #NAMA
- Why Is Online Anonymity Important—And Does Taking It Away Hurt Fundamental Rights? #NAMA
- Do Marginalized Groups Support Online Anonymity? #NAMA
I'm interested in stories that explore how countries use the law to govern technology—and what this tells us about how they perceive tech and its impacts on society. To chat, for feedback, or to leave a tip: aarathi@medianama.com
