Algorithmic systems can be rife with inaccuracies and often render results that could lead to the exclusion of citizens from democratic processes, said Srinivas Kodali, an independent researcher, at a round-table on algorithmic accountability in India hosted by Divij Joshi, tech policy fellow at Mozilla. Several speakers outlined the many issues inherent in algorithmic systems and discussed how litigation could be brought against them. The discussion was held in partnership with MediaNama.

Following is Part II of our notes from the discussion; you can find Part I here. Quotes have been edited for clarity.

Issues with algorithmic systems

Algorithmic systems can lead to exclusion of people from functions of a democracy: Kodali recounted how around forty lakh voters in Telangana and about thirty lakh voters in Andhra Pradesh were deleted from electoral rolls after the respective state governments set out on automated electoral roll purification drives. “What Telangana in particular did was that it had the data of citizens, which it collected through a grand survey conducted after the state’s bifurcation. From there, it just used all the data to identify people based on their location. They were essentially comparing the address on the Aadhaar card and the address on the voter ID, which we know could often be different, since people have different residential and permanent addresses,” he said. Despite these issues, the state has no intention of stopping. “Telangana now has a new digital ID based on facial recognition, which it wants to use for voter de-duplication. It has already piloted this in its civic elections earlier this year, and an RTI by the Internet Freedom Foundation found that the accuracy of the system was around 60%,” he said.
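To see why this kind of cross-database matching can exclude legitimate voters, consider a minimal sketch of naive address-based verification. The field names and addresses below are hypothetical, not drawn from the actual Telangana system; the point is that a crude string comparison treats an ordinary residential-versus-permanent address mismatch as grounds for deletion.

```python
# Hypothetical sketch: naive address matching between two ID databases.
# Real systems may be more sophisticated; the failure mode is the same
# whenever the two records legitimately carry different addresses.

def normalise(address: str) -> str:
    """Crude normalisation: lowercase and collapse whitespace."""
    return " ".join(address.lower().split())

def addresses_match(aadhaar_address: str, voter_id_address: str) -> bool:
    """Treat a voter as 'verified' only if both records carry the same address."""
    return normalise(aadhaar_address) == normalise(voter_id_address)

# A migrant worker whose permanent and residential addresses differ:
aadhaar_address = "H.No 4-2-1, Karimnagar, Telangana"   # permanent address
voter_id_address = "Flat 12B, Kukatpally, Hyderabad"    # residential address

if not addresses_match(aadhaar_address, voter_id_address):
    # In a fully automated purification drive, this branch removes a real voter.
    print("Mismatch: voter flagged for deletion")
```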

“Machines have errors,” but “hardly anyone tells you what this error is,” Kodali continued. Systems like speed guns, or image-processing systems that try to estimate the speed of vehicles on the road, have penalised people even when they were driving well under the speed limit, he said, adding that people are sometimes fined because someone else was using their registration number. “But nobody in India challenges these, and there is no mechanism to challenge these,” added Kodali.
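As an illustration of how small measurement errors translate into wrongful penalties, here is a sketch of the arithmetic behind a two-point, image-based speed estimate. The distances, timings, and speed limit are made up for illustration; the takeaway is that a half-second timestamp error alone can push a compliant driver over the limit on paper.

```python
# Hypothetical illustration: speed estimated from two timestamped camera
# trigger points a known distance apart, and the effect of a timing error.

def estimated_speed_kmph(distance_m: float, delta_t_s: float) -> float:
    """Convert distance/time (m/s) to km/h."""
    return (distance_m / delta_t_s) * 3.6

distance_m = 50.0        # marked distance between the two trigger points
true_delta_t = 3.0       # actual crossing time -> 60.0 km/h
measured_delta_t = 2.5   # the same crossing with a 0.5 s timestamp error

print(estimated_speed_kmph(distance_m, true_delta_t))      # 60.0 km/h
print(estimated_speed_kmph(distance_m, measured_delta_t))  # 72.0 km/h
# Against a hypothetical 65 km/h limit, the driver is wrongly penalised.
```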

Erroneous tech in the hands of law enforcement agencies can compound problems: When such erroneous systems are placed in the hands of law enforcement agents on the ground, the problems can often be compounded, according to Gopal Sathe, editor of Gadgets360. Algorithmic systems are used as a shield for what are often very in-the-moment decisions by the people on the ground, Sathe said. “You will find officials saying, okay, I can’t do this if you don’t have an Aadhaar card, even though there is no rule saying that it’s required. Or, for example, there was a case in Bangalore in February where a person was given a challan [ticket] for jumping a red light based on an image capture of the number plate. The person demanded to see the picture, but the police didn’t show it to them because it was a part of the algorithm. This person didn’t live in that area and had never been there,” Sathe said.

Results can be subjective: Results produced by algorithmic systems can also change according to the preferences of the person annotating the data fed into the system, making the results subjective, said Tarunima Prabhakar, co-founder of Tattle Civic Technologies. “To give you an example, three of us were annotating certain content, and there was a post with a religious figure advocating that people should read certain religious texts; he had his number at the bottom of the post, urging people to call him. Two of us felt that it was a scam post and should be archived, while one person didn’t think it should have been archived,” Prabhakar said.
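A minimal sketch, with made-up labels, of how such disagreements can propagate into a system: assuming the labels are aggregated by a simple majority vote (one common approach, not necessarily Tattle’s), a single annotator’s judgement call decides the label the system ultimately learns from.

```python
# Hypothetical sketch: majority-vote aggregation of annotator labels.
from collections import Counter

def majority_label(labels: list[str]) -> str:
    """Return the most common label among the annotators."""
    return Counter(labels).most_common(1)[0][0]

# Three annotators review the same post (a religious figure urging readers
# to call a phone number): two see a scam, one does not.
labels = ["scam", "scam", "not_scam"]
print(majority_label(labels))  # "scam" -- flip one vote and the outcome flips
```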

Function creep a very real possibility with algorithmic systems: Algorithmic systems built for one purpose can potentially be used for other purposes in the future, said Kumar Sambhav Srivastava, a journalist who has reported on India’s National Social Registry, a 360-degree citizen database. “When you have this kind of technology, there is no end to what you can add to it and how you can use it for different purposes. They want to add satellites to the National Social Registry and then geo-tag every house, and they even want to amend the Aadhaar Act to allow for it, without taking citizens’ consent. Even if the government has a legal basis for deploying an algorithmic system today, tomorrow it could amend it and then use it for different purposes, such as creating the National Register of Citizens (NRC), or for delimitation exercises during elections.”

How to litigate algorithmic systems

Systems have to be proportionate: Any tech litigation will eventually have to answer the question of whether the tech system is proportionate, and whether a less restrictive alternative is available, said advocate Vrinda Bhandari. “So, to be able to make that argument, facts have to be presented before the court, where you are trying to explain the various alternatives and how the current system fails the proportionality test,” she said.

Being cognisant of differences in leeway given to governments and petitioners: While doing so, it is important to remember that there are often information asymmetries between the government and petitioners during litigation, Bhandari added. Recounting the litigation on the constitutionality of Aadhaar, Bhandari said, “the UIDAI came and explained through a presentation how the system works, and also gave a physical demonstration of how they actually do an enrolment. The petitioners, on the other hand, were not allowed to cross-examine the UIDAI.” “We have to be cognisant that sometimes there are differences in the leeway that is given to petitioners and to the state,” she added.

Affidavits should be filed by subject-matter experts: Bhandari also stressed the importance of having an expert file an affidavit. In the recent case against Aarogya Setu in the Kerala High Court, she said, professor Subhashish Banerjee of IIT Delhi explained the privacy issues with contact tracing in very simple terms. “Experts can tailor the affidavit to a particular context, which an academic article might not be able to do,” Bhandari said.

Judges have to be addressed in a language they can understand: “The fact is that you have to show, you don’t have to tell,” advocate Rahul Narayan said. “The challenges in Aadhaar began in 2012, but they only picked up traction and public support in 2015-16, when people actually saw what was going on,” he said. Judges are not tech experts, and that has to be kept in mind, he added. “You have to explain it in language which the judge can understand, which basically means we apply the facts to the principles,” he added. For judges, and for Indians as a whole, there is an inherent bias in favour of technological solutions, Narayan said. “Everybody thinks that’s the actual answer because the computer says so; that’s where it ends for 95% of people, including the government. If the machine answers no, then you are a fake.” Arguing against that mindset is a big problem, and should be considered before beginning the litigation, he added.

Framing laws, policies for algorithmic accountability

Existing legal frameworks should be adapted to develop a regime around algorithmic accountability, instead of developing new ones, said Sudhir Krishnaswamy, co-founder of the Centre for Law and Policy Research. “When we have databases, we think that we need a database protection law, and then when we have a unique ID, we think we need a unique ID law, which is somehow no different from a database protection law. Then, when we have information collection systems, we need an information data privacy law. That’s not a useful way to think about this field,” Krishnaswamy said.

NITI Aayog’s AI strategy, which proposed a concept called AI for X, can be a useful tool for framing policies around the use of algorithmic systems, said Arindrajit Basu, research manager at the Centre for Internet and Society. “Under the AI for X concept, a specific gap is to be identified by a certain process, and AI is used to fill that specific gap,” Basu explained. However, as of now, the government is using AI for the sake of it, he said. References to previously conducted studies, and to what went wrong and what didn’t in other parts of the world, can also be used to ascertain different AI use cases and establish regulatory tools accordingly, he added.