On 5th October, MediaNama held a #NAMAprivacy conference in Bangalore focused on privacy in the context of Artificial Intelligence, the Internet of Things and the issue of consent, supported by Google, Amazon, Mozilla, ISOC, E2E Networks and Info Edge, with community partners HasGeek and the Takshashila Institution. Part 1 of the notes from the discussion on AI and privacy is here.

Part 2: In a matter of decades, algorithms have gone from solving simple math equations to processing immense volumes of data and producing analytics that even their creators have trouble interpreting. So what do you do when these algorithms create and process some of our most sensitive data? How do we regulate them, if we can regulate them at all?

How algorithms deepen bias

Beni Chugh, a Research Associate at the IFMR Research Foundation, explained how algorithms can amplify prejudices that people already have:

"Our data is our window to our informational privacy. Around us, we see that a lot of aggregation, deep machine learning, algorithms, AI and such are now the templates of how businesses work. So how does it impact the customer, and is it really a point of tension? The point of tension is only this: it amplifies certain biases that humans already had. Data networks and the way algorithms are designed in stages make them highly path-dependent, so some of these biases get amplified over stages and become structured biases that can lead to discrimination, which…
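
Chugh's point about path dependence can be made concrete with a toy example. The sketch below is a hypothetical illustration, not something presented at the conference: it assumes a two-stage decision pipeline in which each stage scores candidates with the same modest historical bias (0.3 standard deviations) against one group. Because the second stage only ever sees the pool the first stage produced, the relative gap between the two groups is wider after stage two than after stage one, even though the groups have identical underlying ability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical true ability; only the measured score differs.
group = rng.integers(0, 2, n)
ability = rng.normal(0.0, 1.0, n)

# Stage 1: a screening score that carries a modest historical bias
# (0.3 sd) against group 1. Top 20% of the whole pool advance.
stage1 = ability - 0.3 * group + rng.normal(0.0, 0.5, n)
shortlist = stage1 > np.quantile(stage1, 0.80)

# Stage 2: a second score with the same bias, but evaluated only on the
# already-skewed shortlist (path dependence). Top half of that pool is selected.
stage2 = ability - 0.3 * group + rng.normal(0.0, 0.5, n)
selected = shortlist & (stage2 > np.quantile(stage2[shortlist], 0.50))

for g in (0, 1):
    in_g = group == g
    print(f"group {g}: shortlisted {shortlist[in_g].mean():.3f}, "
          f"finally selected {selected[in_g].mean():.3f}")
```

Running this, group 1's shortlisting rate is already lower than group 0's after stage one, and the ratio between the groups' final selection rates is worse still: the same small bias, applied at each stage of a pipeline that feeds on its own earlier outputs, compounds into the kind of structured disadvantage Chugh describes.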
