
FICCI Round-table on AI and ethics: Accountability, fairness, and non-discrimination

“[The] State should not stop anyone from doing anything until and unless there is tangible harm. For instance, political parties generate most fake news in India. There’s no will to correct it. Therefore, we have the fake news problem. It [fake news] is not a technical problem,” said a participant at FICCI’s round-table discussion on “Responsible AI and Framework”. Before the discussion, Responsible AI: A Global Policy Framework, a book by the International Technology Law Association (ITechLaw), was launched in India by Kavita Bhatia of MeitY’s Digital Payment Division. The book lays out eight principles (the principles and policy framework are embedded below) for the “responsible development, deployment, and use of artificial intelligence”.

Nikhil Narendran, partner at Trilegal, and Smriti Parsheera, policy researcher at NIPFP – who lead the book’s Accountability and Fairness and Non-Discrimination chapters respectively – introduced key concepts of the book and sought feedback from the participants. The first draft is now open for public consultation, and comments are invited till September 15, 2019. Narendran explained that after this round of public consultation, the second edition will be released in mid-2020. He told MediaNama that the book has been envisioned as a “living document” that will be continually updated to meet the demands of society.

Bhatia briefed the participants about how the government was thinking about emerging technology. She focused on the government’s aim to convert India into a “knowledge-based economy”. She mentioned the four committees, comprising industry and academia members, that the ministry had formed: on data and platforms; on identifying critical sectors that need focus; on the skilling and reskilling of individuals required by an AI/automation regime; and on cybersecurity.

Bhatia also spoke about the National Programme on AI, which focuses on IPR, legal issues, platforms, and data. She said that the National Centre on AI would follow a hub-and-spoke model. 29 incubation centres have been established in educational institutions, along with 11 centres of excellence. These centres are associated with 150 start-ups, which have filed 56 IPRs, 10 of which have already been granted.

On accountability

In his presentation, Narendran gave an overview of how accountability was an integral principle of AI ethics. He differentiated AI as a function (as a service and a product, such as Siri, Alexa, Cortana), as a tool (as in credit scoring), and in government. To Bhatia’s question on whether there was any legal penalty for lack of compliance, Narendran said no. He said that the principles looked at compliance from a function-specific perspective, not a legal one. At the moment, they are not suggesting any changes to the legal system. He clarified that the principles did not look at personal data, but focussed on two kinds of data: metadata, and performance data. Also, companies judge the criticality of data themselves.


Bhatia mentioned MeitY’s Open Government Data platform, and said that MeitY is also working on an open data sharing platform to encourage interoperability. She asked the participants to consider, during the round-table discussion, how open such data could be.

On fairness and non-discrimination in AI

Parsheera gave a brief overview of her chapter on fairness and discrimination. She cited a few case studies — ProPublica’s Machine Bias, Joy Buolamwini and Timnit Gebru’s Gender Shades, and ACLU’s Jacob Snow’s study on problems with Amazon’s facial recognition software “Rekognition” — to demonstrate how biased datasets produce prejudiced outcomes. She acknowledged that all of these are American studies, but argued that there were key takeaways for India as well. She also cited Winterlight Labs’ auditory tests for neurological diseases and IBM Watson for Oncology, which exemplifies the problems of relying on US medical approaches, to show how homogenous training data leads AI to replicate real-life prejudices in technology.

Parsheera highlighted four principles that could be considered:

  1. Develop a theory of fairness and non-discrimination
  2. Interaction with the proposed data protection law
  3. Ensure “fairness by design”
  4. Create a culture of open data

After Narendran and Parsheera’s presentations, the round-table discussion began. Here’s what was said (NB: The round table discussion happened under the Chatham House Rule. Thus, we have paraphrased statements.):

  • “[Because of AI] future of work will also change.”
  • “We use loose terms [to define contours of AI] because we have loose concepts.”
  • “Ironically, China has better data protection legislation [than India] … 52 pieces of regulation.”
  • “India doesn’t look at UDHR [for formulating its regulation].”
  • “What do you mean by ‘weaponisation of AI’? Because different countries mean different things.”
  • “Credit scoring regulation in the US is not done for data protection. It is done state by state.”
  • “Law itself can be a nudge.”

Challenges for India

“There are three big challenges for India: first, the awareness dynamic, that is, understanding what’s going on. Second, agility of response from a regulatory perspective. There has never been a scenario where regulation has come ahead of the innovation curve. And third, moral relativism across countries, which is an engineering challenge. Developing an Indian voice on AI is the biggest challenge.”

Diversity in AI

  • “In any such design [of AI ethics], there is a huge diagonal that is missing, constituting Africa, China, Eastern Europe, etc.”
  • “Apart from data training, design team also needs to be diverse. For instance, in IIT Bombay, technology for visually impaired people is tested by a full-time employee who has no sight himself.”
  • “At IEEE, there are three foundation areas that are considered: universal human rights, well-being of humans, and data agency. And three aspects looked at: anthropological, political, technical.”
  • “Two European countries that can be looked at: Denmark’s People’s Data Protection Act that is similar to our [Indian] draft e-commerce policy, but the content is vastly different. The other is Sweden’s Vinnova, their innovation agency. It came up with a framework on AI and innovation.”

Which regulatory approach to AI?

  • “[We need to] anticipate certain harms. … How does one think about regulatory models?”
  • “[If we go down] the harm-based route, who is responsible for showing the harm? Is the onus on the researcher, on the government? It’s very rare to have onus on company to pre-empt the harm. [It is important to] shift burden to provider instead of the consumer.”
  • “Do we need a rights[-based] regulatory model?”
  • “Perhaps sector-specific regulations will be more useful.”
  • “Harm-based approach is very conservative, though it needs to be sectoral.”
  • “By human behind the machine, [I mean] no entity status should be given to AI.”

Role of engineers

  • “Telling engineers, ‘you’re going to jail if you do this’ is very helpful [in making them stay ethical].”
  • “We need a mix of all [internal code and government code]. The world of regulators and engineers often need binding law. Getting really experimental people [to fall in line is difficult].”
  • “It is becoming easier for engineers to cross the line. Maybe we need hard codes?”
  • “How drastically do we redraw the current legislation? Or do we go for a harm-based approach?”

Accountability and liability

  • “[Cambridge Analytica scandal] is the example of the largest weaponization [sic].”
  • “Whom do you blame in this [Cambridge Analytica] scenario? Where is the accountability?”
  • “Consider neural nets. How do we prove accountability there because they are not completely understood?”
  • “It is not just a question of liability. Accountability is more important than liability. For instance, if I am driving an autonomous Mercedes, and the car hits someone, who is accountable there? The company or the service provider is accountable. Liability is a question of what happened there at that moment.”

What affects implementation of AI ethics?

  • “On codes, [whether] done through law or [internal] codes, it is ultimately about incentivizing people. Closed-door discussions in companies, while important, are understandably not [publicly] known. We need to bring a law about transparency to know how ethics were implemented. We don’t know how internal incentives work.”
  • “Law will never be stringent or agile enough [to keep up with technology] because that’s how legislative processes work.”
  • Certain companies don’t share their facial recognition APIs because they don’t know if the technology will be misused.
  • “Reality is going to consider internal and external factors.”
  • “Japan has a binding code of ethics.”

Competition and data sharing

  • “Competition framework [important for] product development.”
  • “An AI regulation shouldn’t cause a chilling effect on the start-up ecosystem.”
  • “To encourage start-ups, there should be fair and equitable access to data.”
  • “It [equitable access to data] is very difficult because as the cliché goes, ‘data is the new oil’.” (The round table was also not immune.)
  • “Delhi land acquisition disaster. … the tech industry has overworshipped [sic] data for the last 20 years.”
  • “It is only now that we are linking competition to data access.”
  • “The UK government has said that approving DeepMind acquisition [by Google] was a mistake.” “[Along that vein,] perhaps Facebook-WhatsApp deal wouldn’t have happened either [because of data merging across platforms].”
  • “Principle 6 [Open Data and Fair Competition] is so that someone else can use the data set. AI needs data to be tuned.”
  • “Need for open data policy, that is, if you are using public resources, data produced from it must be given back [to the commons].”
  • One of the speakers also said that the government is considering using data from public processes. The Department of Biotechnology has come up with a policy where certain genetic data will be made available to researchers, with some restrictions in place to protect sensitive data.
  • “Financial regulators collect too much data.”
  • “How do we fix bias while encouraging innovation, and so that legislation isn’t always playing catch-up?”

[embeddoc url="https://www.medianama.com/wp-content/uploads/ResponsibleAI_PolicyFramework-1-1.pdf" download="all"]

[embeddoc url="https://www.medianama.com/wp-content/uploads/ResponsibleAI_Principles1.pdf" download="all"]

Written By

Send me tips at aditi@medianama.com. Email for Signal/WhatsApp.




© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
