- Regulate AI applications rather than the tech: Speakers largely agreed that rather than regulating artificial intelligence in general, the use cases of the technology should be regulated.
- Data needs to be regulated too: In cases where data collected for AI applications could have problematic implications for the data subjects or their communities, such data collection needs to be governed under specific regulations.
- Self-regulation may not be enough: Self-regulation within the AI industry may not be enough, since it may not address the massive power differential between the people developing the technology and the people affected by it. However, for low-risk applications, self-regulation could be explored.
“While regulating AI technologies for protection against harms, we need a regulation that cuts through — you can sow the seeds for harm at the design stage, or at the development stage, or at the deployment stage. We don’t have to wait for the technology to become an application before we think of regulating it effectively,” said Vidushi Marda, digital programme officer, ARTICLE 19.
She was speaking at MediaNama’s discussion on the Impact of Data Policies on Artificial Intelligence, held on January 28, in the context of upcoming ecosystem changes and regulations such as the Personal Data Protection Bill and the Non-Personal Data Framework. MediaNama hosted the discussion with support from Facebook, Microsoft and Flipkart. The Centre for Internet and Society was the community partner for these sessions. All quotes have been edited for clarity and brevity.
What should AI regulation look like?
When regulating the use cases of AI, we should remember that “every level within that use case may have some implications for both user rights, and governance,” said Arindrajit Basu, research manager, Centre for Internet & Society. He therefore suggested regulating each use case at several layers, based on the kinds of implications those layers have.
Marda explained the harms that the several layers of a particular AI-based system could cause: “When we talk about regulation, often there is an assumption that regulation will come in once the system exists. Regulating use cases of AI will have to cut through its various layers, because the seeds of harm can be sown at the development stage itself. After that, there is a possibility of harms at the stage of piloting a technology, and we should not wait for that technology to become an application before we can think of regulating it effectively,” she said.
Regulate the use case: We should be looking at regulating applications, said Rahul Panicker from the Wadhwani Institute for Artificial Intelligence. “Let’s take chemistry. Do we regulate chemistry, or do we regulate pharmaceuticals, chemical weapons, and dyes that go into clothes? It is the latter, because the regulations have to be associated with risk, and risk is very much tied to applications. Institutional capacities are also tied to applications,” he said.
While explaining why it was important to regulate applications rather than AI in general, Panicker said:
“An algorithm can be an image recognition algorithm. We use a ResNet backbone, which is a deep convolutional neural net [CNN], and you use that to classify images. Now those algorithms can be used to classify medical images, or — the sort of thing apparently being done in China — for racial segregation, and those algorithms can be used to identify traffic violations through vision systems. So, a CNN that is being used for health care should have certain characteristics. Something that is used for racial classification should probably never be built. And something that is used for traffic violations probably needs much less regulation” — Rahul Panicker, chief research and innovation officer, Wadhwani AI
Sectoral regulations already exist: Lenders collect people’s information while conforming to regulations laid down by the Reserve Bank of India, said Meghna Suryakumar, founder and CEO of Crediwatch. “You don’t need a new regulation for AI, because the underlying thing is you can regulate use cases. You should not be regulating the technology. So the regulation is already there with the RBI on how banks are supposed to handle this information. This regulation predates the use of AI. We help our customers, who are banks or financial institutions, to securely collect this data within their own IT infrastructure, and also give them the right technologies to purge it. So, from that perspective, the industry, well before regulations come into place, has been following best practices equivalent to the GDPR, like collecting the data for a legitimate purpose, purging data, and not retaining data because it belongs to the borrower, even if they are not individuals,” she added.
- Counterpoint — existing laws not enough: Tarunima Prabhakar, co-founder of Tattle, however, said that current regulations might not be enough to ensure people’s privacy. “Currently, the way the RBI guidelines are, especially for NBFCs, there are no restrictions on credit scoring and doing behavioural credit risk assessment,” she said. Prabhakar also pointed out that even in the Personal Data Protection Bill, credit scoring and debt recovery are exempt from the requirement of consent for data collection.
Regulation before the use case, or after? “With AI, it’s simply too risky to come out with a regulation once a particular use case has come about, given the kinds of harms that AI could cause. These harms go beyond just privacy,” said Basu. “We need to have a law with certain principles that speaks to AI in general, before a use case comes about,” he said, adding:
“One of these principles to regulate AI can be what is known as a precautionary principle, which, in international environmental law, says that if present scientific consensus cannot accurately pinpoint the kind of risks that a specific measure can pose, then you have to be cautious and not go ahead with that measure” — Arindrajit Basu, research manager, Centre for Internet & Society
AI needs to be regulated at the data level depending on the sensitivity of datasets: “Apart from looking at the use of AI while regulating, we also need to look at what level we are regulating it,” Basu said. “In the case of predictive policing — for instance, the Lucknow Police’s facial recognition system, which they claim will send a distress signal by looking at the facial emotions of women — obviously the data is very important. If my data is being used against me and my community, then it needs to be regulated at the data level before we go to the algorithm level. The final application, in terms of how that algorithm is being used by the police, becomes relevant as well.”
Need to address the basics: “We still don’t have a data protection law in India. So, the first thing we need to do is come up with a data protection law to protect the privacy of individuals and their data. After that, you look at the use cases of AI and whether those use cases are legitimate; if they aren’t, then you would prevent that kind of data collection or analysis from happening, and you would also make those kinds of use cases illegal,” Suryakumar said.
Need for checks and balances: It is important to have checks and balances on the use of and access to AI that go beyond just technological means. Access control could be one such way, Panicker said. “For instance, there are technologies that you can’t just walk up the street and buy. That is true of defence tech, of medical tech. So many regulated domains have controls even at the level of access, and similar things can be applied to the context of AI and data — whether we want access to data, access to models, or just access to inferences,” he added.
Need for India-specific laws: “We saw how policymakers in the US questioned the Big Tech companies, and the amount of understanding they had of technologies like AI, Big Data, etc. If these kinds of people make regulations for us, we will end up with a general guiding principle. Also, if we just copy the GDPR, I think it will be devastating for India, given that our GDP per capita is extremely low compared to several European countries,” said Abhishek Agarwal, co-founder and CEO, CreditVidya.
Fast growth of tech means regulation will keep playing catch-up: Panicker also said that it is impossible to regulate AI while anticipating all of its potential adverse consequences. “If you look at the history of technology, when the industrial revolution started we did not know about the possibility of global warming. When we had leaded petrol, we did not know that it was going to cause serious health harms. We have to accept from history that we cannot possibly predict all adverse consequences of technology, and that’s because it is not just the technology that has adverse consequences, but the context in which it is applied, the people who apply it, and also the people to whom it gets applied. So in lots of regulated domains there is this notion of post-market surveillance, which is where the developer bears the responsibility for how the technology developed by them is going to be used. Developers have a burden of post-deployment monitoring, and that’s what helps them identify adverse consequences and then actually take corrective actions,” he said.
- “I don’t think that AI per se can be regulated because today it is AI, tomorrow it will be Augmented Reality or Virtual Reality, and day after tomorrow it may be something that we can’t even think of right now,” Suryakumar said.
Is self-regulation a viable alternative?
“I think there is a large space within AI applications and digital applications for self-regulation, but it has to be within defined norms, because otherwise you’re looking at the possibility of the Wild West. It means that you have to keep track of certain things, generate reports, file those reports at periodic intervals, maintain certain compliances, and be subject to certain audits. And we have to remember that self-regulation is not the same as having no regulation,” Panicker said.
Counterpoint — self-regulation doesn’t always protect those not in power: “I don’t think that self-regulation lends itself to protecting those who don’t have power. I think self-regulation, in and of itself, contemplates people in power deciding how they will act,” said Vidushi Marda, digital programme officer, ARTICLE 19.
“For years, IBM, Microsoft, and Amazon were saying we are going to self-regulate, we have these ethics codes, we will do no evil. Then there was a political upheaval, and within two weeks they all said, ‘please regulate us, this is too dangerous.’ I don’t think that we need to wait for catastrophic events to happen for us to really recognise the fact that self-regulation in and of itself will never work” — Vidushi Marda, digital programme officer, ARTICLE 19
Also in this series:
- #NAMA: Does Artificial Intelligence Threaten Privacy? Do The Government’s Data Protection Laws Have Adequate Safeguards?