“Algorithmic systems are being applied in institutional settings, which have issues of caste, race, etc., very deeply embedded. Not just the algorithm, but the entire system needs to be held accountable,” said Urvashi Aneja, founding director of Tandem Research, at a round-table on algorithmic accountability in India hosted by Divij Joshi, tech policy fellow at Mozilla. Several speakers at the discussion called for greater accountability of algorithmic systems being deployed by the state, often in partnership with private entities, and concurred that current accountability measures are not enough. The discussion was held in partnership with MediaNama.

Following is Part I of our notes from the discussion; you can find Part II here. Quotes have been edited for clarity.

Should states deploy algorithmic systems?

State could be a laggard if it doesn’t engage in database-driven governance: There is a need for database-driven governance because, otherwise, the government might end up being a laggard to the private sector in terms of providing services to its citizens, argued Parminder Jeet Singh, executive director of the NGO IT for Change. “A lot of people say that it’s good if database-driven decision-making doesn’t take place. But that means there is no digital state, and it remains at the industrial-age level while the rest of the world moves on.” He explained: “Public education is generally considered a state responsibility, and we increasingly see huge private initiatives based on AI-driven education which are trying to change the current classroom-based education paradigm. We have seen it with Byju’s and others. What happens to public education as a government role if all education-related, database-driven decision-making takes place in the private sector? The same applies to health and agriculture, among others.”

State’s access to citizens’ non-personal data important for algorithmic decision-making: He also made a case for government access to non-personal data, even though the current provision in the draft Personal Data Protection Bill, 2019 is “badly drafted”. “Unless the government has access to a lot of such data [non-personal data], you can’t do database-driven decision-making, and right now only private platforms collect all that data. So, there will have to be some kind of non-personal data which goes under public control with constitutional guarantees,” Singh added.

Government should develop its own AI-based systems, shouldn’t depend on the private sector: Partnering with the private sector also draws a lot of criticism for the government, Singh said, suggesting that the only way it can reduce some of that criticism is by developing its own AI-based systems. “Almost always, the private sector is asked to take decisions on behalf of the government. The problem is that the private sector’s whole thinking, and everything about it, is different: it’s commercial. So, it will not do things as the state is supposed to do in our constitutional terms. The only way to avoid this, therefore, is for the government to develop its own public-sector capacity for building principles-based algorithms and principles-based decision-making systems,” Singh added.

States, too, could deploy AI systems out of business interests: However, it is not just the private sector that acts out of business interests; the state, too, could do the same, said Anupam Saraph, adjunct professor at the Symbiosis Institute of Computer Studies and Research. “If you look at the procedures that are laid out in our acts and laws, and if you look at the algorithms, there is an incredible amount of similarity. In some sense, the democratic process is accountable to those who fund the electability of those who govern, and the algorithms are accountable to the venture capitalists and business owners who fund the businesses that create them. So, in some sense, both are driven by the market and by some kind of interest in making more money, and therefore it is extremely important to recognise that whenever we apply any kind of algorithm, or any kind of means to create law and new legislation in our society, we are essentially making business decisions,” Saraph said.

What is the state’s motive? It is also important to ascertain the ethics of the people who deploy such systems, Saraph said:

“The fundamental cornerstone of making algorithms and laws accountable, is that those who make these decisions have to be equally and similarly affected by those decisions.” — Anupam Saraph

“We are completely oblivious to the dignity of the participants of the system about whom we are taking decisions. We completely forget about the justice that cannot be delivered. We simply believe that by creating a procedure we have created a means to deliver justice, but nothing could be further from the truth,” Saraph said. “We also don’t recognise that we are creating an unforgiving and irreversible system, because whatever we implement through our laws and our algorithms is usually very unforgiving, cannot be subject to appeal, cannot be subject to human compassion, and somehow we feel that we are actually making progress,” he added.

Need for transparency in algorithmic systems; issues with private sector involvement

Algorithmic systems in India are too opaque: “We need to reverse engineer complicated algorithmic systems to understand their impact when deploying them in a society that is fundamentally unequal. We can do that by auditing the system itself to try and understand the impact it will have, in terms of proportionality or demographic parity. But that is not very feasible in India for several such systems, because you cannot study a system that is not open to being shared at all,” said Vidushi Marda, digital program officer at Article 19, drawing on her experience co-authoring a paper on the Delhi Police’s predictive policing system.
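(A brief aside from us, not from the discussion: demographic parity, one of the audit measures Marda mentions, asks whether a system produces “positive” outcomes at the same rate across demographic groups. A minimal Python sketch, using entirely hypothetical data, shows what such a check involves, and why it presupposes access to the system’s outputs in the first place.)

```python
# Minimal sketch of a demographic-parity audit (hypothetical data).
# Demographic parity holds when the rate of "positive" outcomes
# (e.g. being flagged by a predictive policing system) is the same
# across demographic groups.

def positive_rate(outcomes):
    """Fraction of cases that received the flagged (1) outcome."""
    return sum(outcomes) / len(outcomes)

# Hypothetical system outputs (1 = flagged, 0 = not flagged),
# grouped by the demographic attribute being audited.
outcomes_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0, 0, 1, 0],
}

rates = {g: positive_rate(o) for g, o in outcomes_by_group.items()}
parity_gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: flagged rate = {rate:.2f}")
print(f"demographic parity gap = {parity_gap:.2f}")

# A gap near 0 suggests parity; a large gap suggests disparate impact.
# Marda's point stands: without access to the system's outputs, even
# this simple check cannot be run.
```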

Describing the lack of transparency in institutions such as the Delhi Police, she said: “We wanted to understand how the system was built, what the models looked like, how they were optimised, what they were tested on, and what their accuracy rates were, but we realised that there was a breakdown of Right to Information (RTI) and accountability mechanisms within institutions. We had to study the institution itself. So, we had to study how data was thought of within the Delhi Police to understand how data would be used in this opaque system. The idea that we can understand the accountability of systems by looking at the system is also completely broken because of all the legal processes that aren’t available to us,” Marda added.

No public discourse or information about such systems: There is often little to no public discourse around installing massive algorithmic systems, said Venkatesh Nayak, RTI activist and member of the Commonwealth Human Rights Initiative. He narrated: “My colleague and I were on a tour of some of the district police headquarters in Uttarakhand, and we came across a board put up on top of a room in one of the police control rooms which said ‘social media monitoring cell’. We asked the people there what exactly was the social media monitoring that was going on, and they said that they have software that looks at all the social media messages floating around, so that they can identify potentially harmful messages, those likely to incite the commission of crimes, and then take action on that.”

He added that there was no public discussion on what this software was all about, who supplied it, how much money was spent on it, and whether the state legislature had granted authorisation for that spending in an informed manner. Marda, meanwhile, said that there is a need for transparency before such systems are deployed. “If we only start studying systems once they are deployed, or once we read about them in the newspaper, we’re already too late,” she said.

Need transparency because today almost anyone can build AI systems: We also need greater transparency because developing algorithmic systems isn’t as difficult as it used to be, and can be done with very little knowledge of coding, said Sahil Deo, co-founder of CPC Analytics. “Coding neural networks was a pain six or seven years ago, but today, anyone with very little coding background can download a package off the internet and operationalise a neural network pretty quickly. This means that someone who might not understand the entirety of the algorithm can operationalise it pretty quickly. That again means that there is a greater need for transparency, because a tool is being used without really understanding what lies inside it, even at the developer level,” said Deo.
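To illustrate Deo’s point (this sketch is ours, not something presented at the round-table): with an off-the-shelf package such as scikit-learn, a working neural network takes only a few lines, and nothing in those lines requires the operator to understand what the network is doing internally.

```python
# Illustrative only: training a neural network with an off-the-shelf
# package (scikit-learn), without writing any of the network internals.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; a real deployment would use collected records.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One line defines the network; .fit() handles everything inside it.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
# The operator never sees the weights, the optimisation, or the failure
# modes, which is exactly why Deo argues for greater transparency.
```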

Involvement of the private sector has further eroded transparency and accountability: Apart from a general breakdown in institutional accountability, the private sector’s involvement with the state in developing algorithmic systems has further eroded transparency around such systems, several speakers said. “There is some level of private sector involvement, either to a great extent or in terms of conceptualising a project,” said Arindrajit Basu, research manager at the Centre for Internet and Society. “An example is that of the Punjab Artificial Intelligence System, which is basically a predictive policing and facial recognition system and is being developed in partnership with a Gurgaon-based firm called Staqu,” he added.

“There is the question of how the people’s data that feeds this decision-making is being gathered, and you realise that it’s not just one company spying on you. Everybody is harvesting as much as they can, and then there are brokerages selling this information separately. There is a whole economy of your information out there, and at the end of the day, the government’s systems are open to question at least at some level. But when you start to ask these same questions of the private sector, they can clam up, and they do.” — Gopal Sathe, editor, Gadgets360

Basu illustrated how the Karnataka government had partnered with Microsoft to provide farmers with alerts on their cell phones giving them information about sowing crops, based on some data aggregation. “Even though there have been public statements by both the Karnataka government and Microsoft, evaluations, audits, the empirical assessment done before the project was rolled out, and even the results are not in public circulation as much as they should be,” he added.

Lines between the state and private sector are becoming ‘blurry’: Urvashi Aneja, founding director of Tandem Research, concurred, saying that the lines between the state and the private sector are becoming blurry. “You have these local governments who in one sense think that this problem of capacity is particularly acute, and so you have a lot of private sector actors in some sense defining the problem, and then the solution being developed in terms of that definition,” she said. A lot of the time, states are compelled to turn to the private sector because they want to be seen as “AI forward”, as they are under “increasing pressure” from the central government to implement such systems, she added.