In our other reports on the Consent panel at the #NAMAprivacy conference in Delhi, we look at whether we should do away with consent altogether, and at ways of fixing consent. The alternative solution posited was a privacy and data regulator, and a third option was a combination of better consent norms and a data regulator. But isn’t regulation broken too?

Chinmayi Arun, Executive Director of the Centre for Communication Governance at the National Law University, Delhi, made a case for paternalistic regulation, and not leaving everything to consent: “There is an information asymmetry problem, and sometimes users don’t have all the information. There’s a bounded rationality problem, where even if users are given the information, they’re not able to process what they’re told. There’s also a predictability problem: a lot of the time technology companies are in a place where they have collected data but they also want flexibility to play with that data. They say that if we can’t experiment, we can’t give you new services. The trouble is that nobody is able to foresee what is coming, and any harms that might emerge are often invisible, and you’re not able to create a link between the harmful impact and what you consented to. This is why regulations are not completely consent based all the time, and that is why there’s a degree of paternalism, in which the state makes some decisions for you.”

In the Right to Privacy judgment, she continued, “Justice Chandrachud acknowledges in the judgment that there are elements of privacy which are fundamental rights, and there are elements which are not. Especially within the data protection debate, one question to ask is what parts of our data would fall within the realm of the fundamental right, and what parts will be property related rights, which is to say you’re treating it like property and you can negotiate it. If it’s a basic right, you can’t give it away, and the state is supposed to enable you to protect your right to privacy even if a private actor violates privacy.”

“Even if you’re talking about consent,” she added, “what are the different devices that you can use? In some cases transparency works, but not if the user can’t process the data. Can we think in terms of audits, which can allow users to understand what they are consenting to? There are ways that users can be empowered without needing to process the data themselves.”

But regulation is broken too

Renuka Sané, Associate Professor at NIPFP, pointed towards the failure of competition as leading to the need for regulation: “In the financial sector,” she said, “you rely on competitive forces to solve your problems, but when there is information asymmetry and there aren’t enough users who can discern the difference between a good privacy policy and a bad privacy policy, firms will have no incentive to put out a good privacy policy. That makes the case for paternalistic regulation.”

Sané suggested that there can be a regulator who does “third party audits of your algorithms and data, and comes in ex-ante to see if there is a harm that is going to be caused. If it finds that a harm is caused or going to be caused, it can come in and intervene.”

But who will share their algorithms? Algorithms are like a business’s secret sauce, and giving access to them is the last thing that any Internet company would want to do.

Sané pointed out that Matthan’s paper outlines Learned Intermediaries, “that you don’t have to show your entire algorithm to, but it can come and understand what it is that you’re up to. My worry with such a model is that in India we have a big challenge of regulatory governance. While consent is broken, setting up good regulation is equally broken. We don’t know how to design good agencies, how to do checks and balances within agencies, and how to design accountability frameworks. We work in finance, and we’ve seen innumerable instances with financial regulators where orders are given without any reasoning. We know the problems of rent seeking in many other sectors as well. When you give power to the state to preemptively take action so harm may not be caused, you’re opening up another box of problems. Yes, consent is broken, but regulatory governance is equally broken. Between two broken problems, what would you rather solve?”

But we still need a regulator…

Jochai Ben-Avie, Senior Global Policy Manager at Mozilla, pointed out that “a strong data protection authority has been critical around the world in helping ensure the realisation of the rights to privacy and data protection. Having a strong data protection and privacy authority is helpful, and can help to cover some of the gaps that might emerge, as this first exercise happens. It’s better than having a self regulatory approach. Clearly we need strong enforcement if privacy is to be real.”

“We need strong ex-ante regulation to compel businesses to protect privacy, and set rules of the road. We need to empower regulators. We have to put limits on the government: it has access to tremendous amounts of information, and disproportionate power vis-a-vis the citizen, and we need to be mindful of that. We need to deal with the education side of this too. You need to be clear to the user about what you’re doing. It’s not that hard. If you have a data collection and processing choice, you need to be clear about what the user benefit is. If you don’t have a user benefit, you probably shouldn’t be doing it.”

Chinmayi Arun pointed towards the need for creating accountability: an ombudsman system was removed from the Aadhaar regulation before it was passed. “One of the things to watch out for in the new data protection law is this similar tendency to use data protection and privacy almost like a PR strategy in the legislation, but not to create actual accountability. The question we should keep asking ourselves is: who do we go to if something should happen? A single company does not hold its own data. It subcontracts it to one person, who subcontracts it to someone else. There needs to be clear accountability that is accessible to the average citizen. Litigation is expensive in India, and that’s something to bear in mind when we’re building a consumer oriented framework.”

Active investigations and monitoring

Malavika Raghavan, Project Head – Future of Finance Initiative at IFMR Finance Foundation, pointed towards the US FTC model, where the regulator actively launches investigations. “That’s a capacity point, and maybe we don’t have that much capacity, but if you can set up a stakeholder in government or outside government whose job it is to look at this practice, with a little bit of money and a couple of committed people there, it could go a long way.”

She also pointed towards reg-tech (regulation-tech), where “if you have a way that people are holding databases, a reg-tech solution can come in. The regulator could potentially observe the manner in which firms are holding databases, and utilise a reg-tech solution that would enable them to directly talk to databases. The Reserve Bank [of India] already does this to some extent with banks. They have a supervision framework, and look at data, and that’s how they figure out that this bank is in trouble. It’s all sensitive and confidential because we don’t want to have a run on anybody, but a similar thing could happen for systemically important databases.”
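To make the idea concrete, here is a minimal sketch of what such a supervisory reg-tech channel could look like, loosely following Raghavan's description rather than any actual RBI framework: the firm exposes only aggregate metrics, never raw records, and the regulator runs its own checks on that report. All names and thresholds here (SupervisoryReport, the 95% encryption check) are hypothetical.

```python
# Hypothetical sketch of regulator-to-database supervision: the firm reports
# aggregate metrics only, and the regulator flags reports that need review.

from dataclasses import dataclass

@dataclass
class SupervisoryReport:
    firm: str
    records_held: int
    records_encrypted: int
    breach_incidents_last_quarter: int

def flag_for_review(report: SupervisoryReport) -> list[str]:
    """Regulator-side checks; raw personal data never leaves the firm."""
    flags = []
    if report.records_held and report.records_encrypted / report.records_held < 0.95:
        flags.append("less than 95% of records encrypted at rest")
    if report.breach_incidents_last_quarter > 0:
        flags.append("breach incidents reported this quarter")
    return flags

# Example: a firm's quarterly filing to the (hypothetical) supervision system.
report = SupervisoryReport("ExampleCo", records_held=1_000_000,
                           records_encrypted=900_000,
                           breach_incidents_last_quarter=1)
print(flag_for_review(report))
```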

Better funding makes better regulators?

Suhaan Mukherjee of PLR Chambers asked whether there is the possibility of self regulation (like with the self-governance model in broadcasting), and whether the private sector would be interested in funding regulators who get paid more than they do today. “Have we incentivised the best minds in society and industry to join these regulators? Is it a career of choice? Are we willing to go out and say that our regulators need to be paid more, so that we can create more professions out of this?” He suggested a cess to fund better regulators.

Murari Sridharan, CTO of Bankbazaar, pointed out that learned intermediaries can fail too: during the financial crisis in 2008, “You had two rating agencies that were supposed to rate these complex tranches and they failed miserably.”

At the same time, lack of funds is a concern: “On one hand you have one [regulator] who’s trying to figure out what the hell is going on, and on the other side you have these highly paid PhDs trying to point out that what he’s saying doesn’t make sense. You have to assume that the companies will have more money to throw at this problem than a regulator, in which case it is set up to fail, and it won’t perform the function intended,” Sridharan said.

“If everybody is using machine learning and big data to analyse,” he added, “there are equally competent ways you can take a black-box approach to machine learning, and figure out whether the algorithms are biased. One potential way is to tackle this also from an algorithm point of view, where you have AIs measuring a particular company’s AI.”
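As a rough illustration of the black-box auditing idea Sridharan raises, an auditor who can only query a model can still compare its outcomes across a protected attribute. The sketch below treats score_applicant as a hypothetical stand-in for a company's opaque model and uses the illustrative "four-fifths" disparate impact convention; both are assumptions, not anything described on the panel.

```python
# Minimal black-box bias probe: query the model on matched applicants that
# differ only in a protected attribute, then compare acceptance rates.

import random

def score_applicant(features: dict) -> bool:
    """Hypothetical opaque model; a real audit would call the firm's API."""
    return features["income"] > 40_000 and random.random() > 0.1

def disparate_impact(applicants: list[dict], protected_key: str) -> float:
    """Ratio of acceptance rates between protected and reference groups."""
    outcomes = {True: [], False: []}
    for a in applicants:
        outcomes[a[protected_key]].append(score_applicant(a))
    rate = lambda xs: sum(xs) / max(len(xs), 1)
    return rate(outcomes[True]) / max(rate(outcomes[False]), 1e-9)

# Matched synthetic applicants, identical except for the protected attribute.
applicants = [{"income": 30_000 + 5_000 * i, "protected": flag}
              for i in range(20) for flag in (True, False)]
ratio = disparate_impact(applicants, "protected")
print(f"Disparate impact ratio: {ratio:.2f} "
      "(a ratio below 0.8 would flag the model for closer review)")
```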

Metrics of accountability for regulators

Sané said she was worried that, with Aadhaar, some metrics of accountability were missing when the UIDAI assumed such a large function, and that these metrics are needed for a Privacy and Data Protection regulator as well: “One is to have performance metrics around the regulator itself. So whenever we have a data protection regulator, how do we think about evaluating what this data protection regulator has done, and what would be a good performance metric? Secondly, some accountability provisions: when can they intervene, when can they not intervene, and what kind of cost-benefit analysis should they do? In this whole landscape of a data privacy regulator, we need to push for these kinds of measures.”

*

Updates: Malavika Raghavan sent in slight modifications to what she said, to provide greater specificity, and this post has been updated to reflect that.

The #NAMAprivacy conference was supported by Google, Facebook and Microsoft. To support/sponsor #NAMAprivacy discussions, contact harneet@medianama.com