On 5th October, MediaNama held a #NAMAprivacy conference in Bangalore focused on Privacy in the context of Artificial Intelligence, Internet of Things and the issue of consent, supported by Google, Amazon, Mozilla, ISOC, E2E Networks and Info Edge, with community partners HasGeek and Takshashila Institution. Part 1 of the notes from the discussion on AI and Privacy: 

What happens when Artificial Intelligence (AI) and Machine Learning start learning more about people than previously possible with paper records? Does the mobility of data mean that privacy is indeed dead?

“Privacy is dead”

Akash Mahajan, co-founder of Appsecco, believes that privacy as we know it is dead. Why?

“So in terms of what will AI and ML mean for privacy, as a pessimistic view, I would say that it’s all gone.”

He described how the very nature of data collection and storage discourages informational privacy: “There are some simple terms we go into of talking about what can happen to data when it is at rest, or in motion, and what can someone do with data. As soon as you start talking about privacy, you are going into the realm of ‘I am able to identify that this data should be acted upon, or not acted upon’, depending on the context. The healthcare data of an individual is required. Whenever it is required for the doctor or the diagnosis, it will be there. But what happens to it, and how does it reach the place where the diagnosis can happen? As and when we talk about machine learning, Artificial Intelligence, what is enabling all this is a lot of aggregated computing power. Right? Where does that computing power come from?

“Through a nebulous term, cloud computing. Typically, it is someone’s server. At this point, in most cases, it is a server which is outside the bounds of this nation. Right? And any discussion about ‘where does the data end up’ cannot ignore the fact that where did the software get written. Where did that process for whatever software get initiated? And the local laws of that particular nation may just kick in.”

“Privacy is not dead”

Kiran Jonnalagadda, co-founder of HasGeek and a founding trustee of the Internet Freedom Foundation, disagreed. “I’m going to insist that privacy is not dead and never will be.”

“If you define privacy in the fine terms of somebody having your data, sure, somebody has your data all the time. But, so what? The point that matters is that somebody has your data and is in a position to do something with it that’ll hurt you, or will affect your life in a way that you don’t want. That is when you have a privacy problem. For the most part, that is not what we’re dealing with most of the time. As long as you’re dealing with institutional apathy, where someone has no reason to target you specifically, it doesn’t matter that they have your data. And in this case, then where is the privacy problem?”

He illustrated this with an example: “Let’s take a situation where you really want your privacy to do something and someone doesn’t want you to have your privacy. Let’s say you’re an anonymous troll on Twitter, you’re harassing someone, your target obviously wants to know who you are and why you’re harassing them, and they can’t, well because you’ve succeeded in masking yourself. Does this mean you have privacy? From Twitter? Most certainly not. Twitter already knows who you are and has your contact information. Twitter knows who you are but your target does not. Therefore you have privacy with respect to your target, you don’t have it with respect to Twitter; but Twitter doesn’t care. Twitter has no policy of unmasking trolls. And as long as that policy and that institutional apathy stays, you have privacy. I think we should recognize this and say apathy is what gives us privacy.”

Karthik Shankar from GroupM differed, saying that apathy is not enough to protect against attacks on privacy. “When data gets captured, you will never know when it’s influenced at what level and how it will affect you. You will never ever come to a point where you have been a victim of the privacy of the data that has already been collected. Today there are companies here who collect in the name of frequency fingerprinting — whatever you talk, whatever TV channel that you watch — so in that sense, they’ve been listening to you all the while, and with no regulatory things in place. You would never know when you’ll be a victim of what. It’s not a question of apathy. You don’t know which trap you’re falling into and when.”

*

#NAMAprivacy Bangalore:

  • Will Artificial Intelligence and Machine Learning kill privacy? [read]
  • Regulating Artificial Intelligence algorithms [read]
  • Data standards for IoT and home automation systems [read]
  • The economics and business models of IoT and other issues [read]

#NAMAprivacy Delhi:

  • Blockchains and the role of differential privacy [read]
  • Setting up purpose limitation for data collected by companies [read]
  • The role of app ecosystems and nature of permissions in data collection [read]
  • Rights-based approach vs rules-based approach to data collection [read]
  • Data colonisation and regulating cross border data flows [read]
  • Challenges with consent; the Right to Privacy judgment [read]
  • Consent and the need for a data protection regulator [read]
  • Making consent work in India [read]