
NITI Aayog recommendations: Make “explainable” FRT systems, establish PDP laws

The report deals with different aspects of facial recognition technology (FRT), including principles of non-discrimination, transparency, etc.

What’s new: November started with the NITI Aayog, the government think tank, releasing its “Responsible AI: #AIFORALL” report, which discussed how to inculcate AI ethics in Facial Recognition Technology (FRT) applications in India.

Looking at the Digi Yatra policy as a case study, the report also made recommendations on law and policy, and on institutional interventions, to ensure responsible and safe usage of FRT. Based on the Responsible AI principles, the report looked at i) legislation and policymaking; ii) design and development of FRT systems for the public sector; iii) procurement processes; and iv) impacted consumers.

Why it matters: As the report itself notes, “FRT has garnered domestic and international debate around its potential benefits.” In India, educational institutions, airports, and law enforcement agencies like city police seem especially enthusiastic about using FRT or FRT-enabling technology. However, there are also basic human and fundamental rights to consider, like privacy and equality, and practices of surveillance and profiling infringe on these rights. Moreover, FRT has been criticised over time for poor accuracy rates; parts of the UK even disallow its use for policing because of its potential to increase racial bias. Against this backdrop, and in the absence of a data protection law, the government think tank’s suggestions are worth a close look.


Recommendations for developers and vendors of FRT systems

Principle of transparency

Make explainable FRT systems: NITI Aayog’s report advised developers to ensure that the decision-making process of the FRT system can be accurately explained to an auditor or judge. This means that the AI system must be:

Self-explainable: It should be able to provide an explanation, evidence, or reasoning for each of its outputs, in a lucid and clear manner. This includes the disclosure of details about input factors considered in the decision-making process. For an FRT system, this means denoting the facial regions that contributed to the match and the degree of their contribution.

Meaningful: Further, the AI system must be capable of providing information that is meaningful and understandable to operators as well as recipients of outcomes. For an FRT system, this entails providing a “humanly understandable map of facial regions according to their contribution to the match.”

Be able to integrate models to explain outputs: The report suggested that vendors utilise techniques for “explainability” or interpretability of the underlying algorithmic models, like Local Interpretable Model-agnostic Explanations (LIME). Such techniques can indicate why and how certain predictions or outputs were generated by an FRT system.
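To make this concrete, below is a minimal sketch of what plugging LIME into a face matcher could look like, using the open-source lime package. The matcher (face_match_probabilities) is a made-up placeholder, not anything specified in the report:

    # Sketch of LIME-based explainability for a hypothetical face matcher.
    # `face_match_probabilities` is a placeholder for a vendor's own model.
    import numpy as np
    from lime import lime_image

    def face_match_probabilities(images):
        """Placeholder: return [no-match, match] probabilities per image."""
        scores = np.random.rand(len(images))      # stand-in for real model scores
        return np.stack([1 - scores, scores], axis=1)

    probe_image = np.random.rand(224, 224, 3)     # stand-in for a probe face image

    # LIME perturbs superpixels of the image and fits a local linear model,
    # estimating which regions pushed the output towards "match".
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        probe_image, face_match_probabilities, top_labels=1, num_samples=1000
    )

    # Recover a humanly readable mask of the regions that contributed most.
    image, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )

The resulting mask approximates, via a model-agnostic method, the kind of “map of facial regions according to their contribution to the match” the report asks for.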

Knowledge limits: The design of the AI system must include adequately stated knowledge limits, or areas the base algorithm is untested for or where the AI system may fail to act due to lack of sufficient knowledge, said the report. The AI system must only operate and provide its output under the conditions it was designed for, and only when it reaches a certain level of confidence in its output or actions. For an FRT system, if a predetermined confidence level is not reached, the software may not provide an output.
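In code, a knowledge limit can be as simple as refusing to answer when the input falls outside the system’s designed operating conditions or when confidence is too low. A minimal sketch; the thresholds are illustrative and not prescribed by the report:

    # Illustrative "knowledge limits" check: answer only within designed
    # operating conditions AND above a preset confidence bar.
    MIN_FACE_PIXELS = 80 * 80   # assumed design condition, not from the report
    MATCH_THRESHOLD = 0.90      # assumed confidence bar, not from the report

    def decide(face_pixels, match_score):
        if face_pixels < MIN_FACE_PIXELS:
            return None         # outside tested conditions: provide no output
        if match_score < MATCH_THRESHOLD:
            return None         # confidence too low: abstain
        return "match"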

Principles of inclusion and non-discrimination

Customise FRT for Indian use cases: The report asked developers to consider the realities of the Indian population while training the AI model. The system must ensure accurate and inclusive identification, for example, across genders. Meanwhile, the vendor must provide accuracy rates broken down by segments of Indian face types, genders, ages, and so on.
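A minimal sketch of the per-segment accuracy reporting this implies; the segment labels and records are invented for illustration:

    # Sketch of accuracy reporting broken down by demographic segment.
    from collections import defaultdict

    def accuracy_by_segment(results):
        """results: iterable of (segment, predicted, actual) tuples."""
        correct, total = defaultdict(int), defaultdict(int)
        for segment, predicted, actual in results:
            total[segment] += 1
            correct[segment] += int(predicted == actual)
        return {seg: correct[seg] / total[seg] for seg in total}

    # Invented example records: (segment, predicted match, actual match)
    results = [
        ("women_18_30", True, True),
        ("men_60_plus", True, False),
        ("women_18_30", False, False),
    ]
    print(accuracy_by_segment(results))   # {'women_18_30': 1.0, 'men_60_plus': 0.0}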

Human in the loop: The think tank recommended that a ‘human review’ be built into the AI system for specific cases where its utility and accuracy may appear dubious. A human reviewer should be enabled to take over such cases, preventing the AI system from making decisions where it lacks sufficient expertise in the data presented to it.
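One way a vendor might wire up such a gate, sketched with an assumed “dubious” score band and a placeholder review queue:

    # Sketch of a human-in-the-loop gate: confident results pass through,
    # borderline cases are queued for a human reviewer instead of being
    # decided by the model. The band and queue are illustrative only.
    REVIEW_BAND = (0.60, 0.90)   # assumed range where accuracy looks dubious

    human_review_queue = []

    def route(case_id, score):
        low, high = REVIEW_BAND
        if score >= high:
            return "auto-match"
        if score >= low:
            human_review_queue.append(case_id)   # hand the case to a reviewer
            return "pending-human-review"
        return "auto-no-match"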

Principle of accountability

Checks and balances for accountability: The report suggested periodic, external, technical audits covering the internal governance process, including sourcing, building, deploying, and maintaining data and AI models. Further, it recommended that the developer constitute an independent, internal ethics committee to ensure ethical design and development of FRT systems. The committee must establish internal governance processes for vendors and address lawful sourcing of data, building ethical and responsible FRT systems, incorporating privacy by design, and maintaining records and audit trails on AI models developed while designing the final FRT system.

Principle of privacy and security

Build in Privacy by Design (PbD): NITI Aayog said vendors/developers must publish a public document explaining the PbD policy and other principles they follow. This will ensure:

  • user’s consent prior to processing personal information
  • collection of the user’s explicit consent in case the data (including biometrics) is used for reasons other than those stated during collection

Further, it said that under no circumstances should consent for biometrics be inferred “from conduct of a data principal.” It mandated consent for collecting and processing facial data and any insights gleaned from it, including for transferring, licensing, or permitting external agencies to access the data, wherever the user has not consented to that purpose.

Additional value-added services: Vendors providing additional value-added services (with explicit consent) must ensure protections for facial data and other relevant subject data. For this, the report suggested setting out clear licensing requirements between the procuring agency and third-party vendors prior to sharing any sensitive personal data.

“The use of facial recognition data and other relevant subject data for providing value added services must be activated through an opt-in rather than an opt-out method of consent with an ability to revoke consent at any time,” said the report.
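A minimal sketch of what such an opt-in, revocable consent record could look like; the field names and structure are illustrative, not drawn from the report:

    # Sketch of opt-in consent with revocation for value-added services.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRecord:
        user_id: str
        purposes: set = field(default_factory=set)   # empty by default: opt-in

        def grant(self, purpose):
            self.purposes.add(purpose)       # explicit opt-in, per purpose

        def revoke(self, purpose):
            self.purposes.discard(purpose)   # revocable at any time

        def permits(self, purpose):
            return purpose in self.purposes  # never inferred from conduct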

Recommendations for governing legislation and policy

Principle of privacy and security

Enforce a data protection law: The report stressed the need for a personal data protection (PDP) law in India. It recommended a codified data protection regime to ensure propriety and legality in data processing for training and developing FRT systems. Such a regime should also adequately codify protections for the fundamental right to privacy against state agencies, including law enforcement. It called for sensitive data, including biometric data such as facial images and scans, to be protected under the new data protection law.

“Rigorous standards for data processing, storage and retention of sensitive biometric data should be adequately addressed in any proposed data protection regime, to address privacy risks associated with FRT systems,” it said.

FRT must stand the test of proportionality: To determine proportionality, the Supreme Court has stipulated four identifiers: a legitimate goal; the suitability of the proposed intervention in furthering that goal; whether it is the least restrictive but effective alternative; and whether it avoids a disproportionate impact on the right holder.

According to NITI Aayog, any ongoing or future application of FRT systems by Indian governments must comply with the three-pronged test (legality, reasonability, and proportionality) and the proportionality identifiers, to ensure constitutional validity. The RAI principles also value constitutional morality, i.e., compliance with constitutional ethos.

Principle of accountability

Regulate non-privacy risks of FRT systems: Aside from the PDP Bill, the report called for separate regulations to address specific challenges posed by FRT, like transparency, algorithmic accountability, and AI bias issues. This regulation can happen either through codes of practice, industry manuals, and self-regulation, or through more formal modes like statute and rules made thereunder.

Principle of transparency

Ensure transparency in the deployment of public FRT systems: A significant concern around FRT systems is the surreptitious nature of their deployment. Using the example of the Digi Yatra policy, the report said other ongoing and prospective FRT applications should put adequate information in the public domain. It exempted time-sensitive surveillance meant to “diffuse a law-and-order situation.”

“Transparency around the deployment of FRT systems in the public domain must be a norm followed at the central and state level. This is necessary for individuals to exercise their informational autonomy (and the right to privacy) as well as securing public trust in the development and deployment of such systems, which is intrinsic to the concept of responsible AI,” said the report.

Reinforcement of positive human values: NITI Aayog advised organisations deploying an AI system to constitute an ethics committee to assess ethical implications and oversee mitigation measures.

“For FRT systems, it is imperative that such committees are constituted and given adequate autonomy to prescribe guidelines and codes of practice to ensure compliance with RAI principles,” said the report.

According to the think tank, such committees should be responsible for:

  1. Drafting guidelines for explainable and transparent FRT within the proposed use case.
  2. Drafting standards for training database representativeness, public audits for fairness and acceptable error rates for the facial recognition system.
  3. Serving as the first layer of oversight regarding the use of FRT, to ensure compliance with the proposed SOPs.
  4. Developing a document establishing the accountability structure, including details of grievance redressal frameworks, possible remedies available, and other pertinent details.
  5. Publishing annual report(s), inter alia, setting out details around procurement processes and use of FRT in a year.
  6. Having residuary powers to prescribe standards, guidelines, or measures with the evolving use of FRT.

Recommendations for procurement

NITI Aayog suggested a transparent procurement process with periodic public disclosures of the criteria and processes followed.

Transparency in Request for Proposals (RFPs)

Issues to be clearly stated in RFPs: It said that the procuring entity must provide a clear problem statement while issuing RFPs, as opposed to seeking a specific solution. The RFP must set out the need for AI and clearly show how public benefit is better achieved through it. Further, the RFP must be informed by an initial risk and impact assessment conducted before starting the procurement process and revised at future decision points. The overall error rate of the FRT, and the rates for different demographics, must be continuously evaluated and disclosed to the public, said the report.

“The RFP must highlight susceptible risks and ethical issues in the potential operations of the AI system and seek mitigation strategies from vendors as part of the proposal. In selecting the vendor, the procuring entity must ensure that the AI system is interoperable with current and future system upgrades,” said the report.

Define access terms: The think tank said the procuring entity must define data governance and access terms for the project prior to selecting a vendor. The access control terms will determine how data will be shared, while the data governance aspect shall provide greater accountability and transparency on how the shared data is processed by the vendor.

Compliance with government rules and regulations: The procuring entity must ensure that the RFP and the AI system deployed under the project are in line with government strategy papers, such as the National Strategy for AI (2018) and the Responsible AI (2021) papers, and allow for scrutiny of the AI system during its life cycle. The performance and use of the FRT system must be regularly monitored by governmental and independent agencies against a set of defined criteria, said the report.

Recommendations for impacted consumers

Create grievance redressal frameworks: To ensure accountability in the development and deployment of an FRT system, the report called for an easy-to-use and accessible grievance redressal system. It said that ongoing and future applications of FRT systems must ensure that their deployment is accompanied by adequate grievance redressal frameworks, facilitating meaningful accountability and a system of checks and balances.

Create feedback loops: As per the report, any application of FRT systems, especially in the public sector, must be accompanied by trust-building measures. For this, it suggested feedback loops and surveys, which “feed into periodic impact evaluations of such systems.”


