
India’s privacy law needs to incorporate rights against the machine

By Divij Joshi, Tech policy fellow, Mozilla. 

India’s long-awaited privacy legislation – the Personal Data Protection Bill, 2019 – is currently being deliberated by a Joint Committee of the Houses of Parliament. The Committee has its work cut out for it – the PDP Bill, while progressive on many fronts, suffers from several lacunae and needs to be future-proofed. One question the Committee must consider is whether the Bill is sufficient for an era of ‘Artificial Intelligence’ and ‘Big Data’, in which personal data is used to predict and control the behavior of individuals.

Modern technologies grouped under the broad head of ‘Artificial Intelligence’ increasingly use personal data to make predictions about our behavior or personal attributes, and then apply those predictions to make decisions about our lives. The abundance of data ‘available’ on the internet, along with the development of complex modeling techniques such as deep learning, has made these systems easy to deploy pervasively.

In India, AI systems are being deployed to make decisions about individuals in fields such as healthcare, insurance and criminal justice, leaving affected individuals with no remedy when such systems make misguided or incorrect predictions. ‘Alternative’ credit scoring systems, for example, supply consumer information to banks, ranging from social media data to information about an individual’s movements. This information is algorithmically processed to classify or rank individuals into pre-determined categories of ‘creditworthiness’, which can determine access to loans. Employers in India are also using algorithmic systems to judge whether candidates are suitable for hiring, by attempting to predict their future behavior at a workplace. Machine learning systems are similarly being used to predict whether individuals or institutions are engaging in ‘fraud’ under welfare schemes, where access to welfare can be cut off based on automated processing and classification by an opaque algorithmic system.
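To make the mechanics concrete, the sketch below shows, in Python, the general shape of such a scoring pipeline: behavioral signals go in, and a pre-determined ‘creditworthiness’ band comes out. Everything here – the features, the training data, the bands and their cut-offs – is a hypothetical illustration, not a description of any actual lender’s system.

```python
# Purely illustrative: a toy 'alternative' credit scorer that ranks
# applicants into pre-determined creditworthiness bands using
# behavioural features. All features, data and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural signals per applicant:
# [social_media_activity, utility_payment_delays, phone_top_up_frequency]
X_train = np.array([
    [0.9, 0.1, 0.8],
    [0.2, 0.8, 0.1],
    [0.7, 0.3, 0.6],
    [0.1, 0.9, 0.2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = repaid a past loan, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

def creditworthiness_band(applicant):
    """Map the model's repayment probability onto fixed bands."""
    p = model.predict_proba([applicant])[0, 1]
    if p >= 0.7:
        return "prime"
    if p >= 0.4:
        return "subprime"
    return "deny"

print(creditworthiness_band([0.5, 0.5, 0.4]))
```

The point of the sketch is its structure: the applicant never sees the weights, the bands or the cut-offs. That opacity is precisely what the rest of this article is concerned with.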

While AI systems have the potential to considerably improve upon, or overcome the limitations of, some types of human decision-making, the manner in which they are currently implemented is eroding individual autonomy and leaving our society at the mercy of machines and their masters. The rampant use of personal data to make classifications and predictions has severe consequences for civil liberties, curtailing freedom of expression and movement, and individuals’ ability to have meaningful input into the decisions made about their lives. In the case of a credit score, for example, a human loan agent can be questioned and reasoned with, and an unfavourable decision may be changed; the process and rules by which an AI-generated decision is made, however, are usually opaque and offer little scope for dispute or reversal. Consider China’s ‘social credit system’, in which the automated processing of personal data classifies an individual’s social and economic reputation, which is then used to strictly control individual behavior under the threat of adverse consequences. At a societal level, this lack of transparency and accountability means that machine decisions may secretly undermine social protections or ideals, such as non-discrimination or affirmative action on the basis of gender or caste, by basing the logic of decisions on parameters which are discriminatory or biased.

Is the PDP Bill Equipped for an ‘AI’ Era?

Will the PDP Bill curtail the tyranny of the machine? The Bill does, to a large extent, limit the effects of automated decisions, particularly by allowing individuals to control their personal data and its use, and by imposing structural obligations on entities that use personal data. In particular, the Bill provides individuals with a (limited) right to access, rectify and erase personal data, which includes inferences drawn for the purpose of profiling. Profiling, in turn, is defined as “any form of processing of personal data that analyses or predicts aspects concerning the behaviour, attributes or interests of a data principal.” The Bill therefore takes express cognizance of the profiling of individuals by automated processing, and to some degree allows individuals to control such profiling. Despite this recognition, however, it provides few protections against the specific harms of automated profiling and decision-making, leaving the Data Protection Authority to specify certain ‘additional safeguards’ against profiling for only the subset of personal data deemed ‘sensitive’.

To be robust legislation for the ‘AI’ era, the Bill needs expanded protections against automated decisions. One way of extending such protection would be to draw from the legal tradition of ‘due process’, which ensures that decisions affecting individuals incorporate certain procedural guarantees essential to making them fair and non-arbitrary. These guarantees include the right to obtain a justification or explanation of a decision, the right to obtain the information which was used to make it, the right to be heard and have one’s views incorporated in the decision, and the right to contest or appeal it. In the absence of such protections, legal mechanisms should ensure that individuals can object to automated decisions and have them subjected to meaningful human oversight.
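As a purely illustrative sketch of what the ‘right to obtain a justification or explanation’ could look like in machine terms, the snippet below generates ‘reason codes’ from a simple linear scorecard – a standard technique for linear models, though not one the Bill prescribes. All feature names, weights and thresholds are assumptions.

```python
# Purely illustrative: one possible form a machine-readable
# 'justification' for an automated decision could take, using a linear
# scorecard. The weights, features and threshold are all hypothetical.
import numpy as np

FEATURES = ["repayment_history", "utility_payment_delays", "income_stability"]
WEIGHTS = np.array([2.0, -1.5, 1.0])  # hypothetical learned weights
THRESHOLD = 1.0                       # hypothetical approval cut-off

def decide_with_reasons(applicant, top_k=2):
    """Return a decision plus the features that most lowered the score."""
    contributions = WEIGHTS * np.array(applicant)
    decision = "approve" if contributions.sum() >= THRESHOLD else "deny"
    worst_first = np.argsort(contributions)[:top_k]  # most negative first
    return decision, [FEATURES[i] for i in worst_first]

# An applicant with poor repayment history and delayed utility payments
print(decide_with_reasons([0.2, 0.9, 0.5]))
```

For more complex, non-linear models, producing such justifications is considerably harder, which is part of why a legal right to explanation has teeth: it constrains the kinds of systems that can lawfully be deployed.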

However, placing the burden of contesting decisions on affected individuals will not be sufficient. To ease this burden, data protection law like the PDP Bill could incorporate structural protections to ensure that automated profiling is fair and transparent. These could include, for example, regular audits of the data and techniques used in profiling, to ensure their robustness and to safeguard against systematic discrimination. Further, the logic or rules of the automated processing of data for the purposes of profiling must be made transparent by default. Different levels of protection may be offered in different circumstances, according to the potential harm to the subject of the decision.
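As one hedged illustration of what such an audit might compute, the sketch below measures approval rates per demographic group in a decision log and flags large disparities – a demographic-parity check, which is only one of many fairness metrics a real audit would examine. The data and the tolerance are invented.

```python
# Purely illustrative: a minimal demographic-parity audit over a log of
# automated decisions. Groups, decisions and the tolerance are invented.
from collections import defaultdict

# (protected_group, decision) pairs from a hypothetical decision log
log = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in log:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
disparity = max(rates.values()) - min(rates.values())
print("approval rates:", rates, "| disparity:", round(disparity, 2))

if disparity > 0.2:  # hypothetical tolerance for regulatory review
    print("Flag: approval rates differ materially across groups")
```

Crucially, such checks only work if auditors can access the decision log in the first place, which is why the transparency-by-default requirement above matters.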

Opaque and unaccountable AI systems are antithetical to our constitutional ideals of privacy. The Supreme Court of India has noted that decisional autonomy – the freedom to make informed choices for oneself – is a core component of the fundamental right to privacy under the Constitution. However, AI systems limit our ability to make such informed decisions by classifying and typecasting us according to their own secret rules. As we hurtle headfirst into the age of ‘AI’, our legal systems must stand up to the task of protecting our privacy and decisional autonomy.


We would like to invite you to observe a round-table discussion on algorithmic accountability in India, hosted by Divij Joshi in collaboration with MediaNama, on Thursday, June 4, 2020, at 11:30 AM IST. This article offers a glimpse of the ideas we will be discussing at the round-table. You can sign up to be an observer at the discussion here. We will publish a curated reading list before the discussion.
