
IFF’s response to Niti Aayog’s draft discussion paper on Facial Recognition Technology

IFF flagged Niti Aayog’s failure to address the potential threats posed by law enforcement agencies’ use of FRT

The Internet Freedom Foundation (IFF) has flagged concerns about Niti Aayog’s failure to address the potential threats posed by law enforcement agencies’ use of facial recognition technology (FRT), as well as implementation drawbacks and policy issues with the Digi Yatra scheme, in its response to the body’s draft discussion paper titled ‘Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology’. The deadline for the submission of comments on the paper was November 30.

Niti Aayog’s discussion paper uses the Digi Yatra programme as a case study to lay out procedures and recommendations on the use of artificial intelligence through FRT in India. Based on the ‘Responsible AI principles’, the paper examines legislation and policy-making, the design and development of FRT systems, procurement processes, and the consumers impacted, MediaNama reported.

IFF’s response broadly focuses on the policy and implementation issues that may put people’s data at risk in FRT applications.

Why does it matter?

The Indian government has been increasingly encouraging the use of facial recognition systems and artificial intelligence-based technologies for varied purposes in the country. The use of facial recognition techniques is becoming common, especially among the police in different states, raising concerns about surveillance and the data privacy of individuals. Recommendations by digital rights groups such as IFF can prove significant for deliberations on future FRT policies, and also highlight problems that might otherwise be overlooked.


What are the major unaddressed policy issues?

A. Failure to assess potential harms of FRT usage by law enforcement agencies:

  1. Surveillance: IFF’s response says that the paper fails to satisfactorily address the issue of “surveillance” at a time when there are at least 29 ongoing FRT projects for investigative purposes run by state and city police departments throughout the country. According to the response, India’s national FRT project, developed by the National Crime Records Bureau, is considered to be the world’s biggest FRT system.
  2. Government exemptions: While the paper suggests setting up a data protection regime, India’s draft Digital Personal Data Protection (DPDP) Bill, 2022 is still in the consultation process. It is important to note that Clause 18(1)(a) of the Bill grants “blanket exemptions” to select government agencies, allowing law enforcement agencies to be exempted from the Bill’s purview on grounds such as the interests of the sovereignty and integrity of India, and the security of the State. Moreover, IFF highlights that the use of FRT by the police is likely to harm the fundamental rights of citizens due to systemic flaws in the country’s policing system.
  3. Privacy risks: The Niti Aayog paper states that rigorous standards for the processing of sensitive data collected through FRT should be adequately addressed in any proposed data protection regime. IFF points out that the paper fails to clearly state what these “rigorous standards” mean, and that the DPDP Bill, 2022 does not contain any such phrase.

B. Function creep:

IFF notes that the paper mentions the phenomenon of “purpose creep”: facial image/video data collected for one purpose has historically been abused by the state for other purposes to which the data providers did not consent. However, IFF states that, “Function creep of FRT is not just limited to violations of personal privacy and the shifting use and abuse of just the datasets, there is the issue of FRT systems being used in spaces and contexts it was not meant for, a technical problem connected to the issue of brittleness but more so a social problem where the normalisation of FRT in public spaces itself becomes the problem instead of a deliberative process which takes into account why specific use cases of FRT are harmful.”

C. The Digi Yatra programme:

In the absence of a data protection regime, there is no mechanism to regulate how the scheme will collect, process and store data. Moreover, IFF cautions that the DPDP Bill may not be sufficient to “satisfactorily address the privacy concerns of the scheme”. Given that Clause 18 still grants the government exemptions for data processing on security grounds, it could well affect the Digi Yatra scheme, as the policy itself discusses “non-consensual sharing of data with security agencies and other government agencies”.

What are the implementation issues?

A. Mischaracterisation of explainable FRT:

Niti Aayog states that FRT systems should be able to provide an explanation, evidence or reasoning for their outputs that is accessible to an auditor or judge. IFF notes that “hypothetical explainable FRT systems will still be arcane to end users and policymakers”. Additionally, while it may be of use for researchers, “explainability is an extremely narrow and new area of research and practically no deployment of explainable deep learning systems exists in the real world, let alone in FRT”.

B. Unaddressed technical aspects which cause harm to users:

Stochasticity: The discussion paper does not account for the stochastic nature of machine-learning technologies. Stochasticity fundamentally means that “all machine learning systems involve a certain degree of probabilistic and statistical reasoning based on pseudo-random processes as opposed to a deterministic system. A deterministic system is one where for each set of inputs one and only one output is possible.” This means there will always be room for error, since every result is probabilistic, especially in the higher-level statistical processes that FRT systems rely on, as the sketch below illustrates.
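To make the distinction concrete, here is a minimal, hypothetical sketch in Python; none of the names, thresholds or noise levels come from the paper or from any real FRT product. A matcher compares face embeddings with cosine similarity against a fixed threshold, and a small random perturbation in the embedding step stands in for the stochastic elements of a real deep model (random initialisation, dropout, sensor noise):

    import numpy as np

    # Hypothetical sketch, not a real FRT pipeline: the noise added in
    # embed() stands in for the stochastic elements of a deep model.
    rng = np.random.default_rng()
    THRESHOLD = 0.90  # hypothetical operating point

    def embed(face: np.ndarray) -> np.ndarray:
        """Stand-in for a deep embedding network; output varies per call."""
        noisy = face + rng.normal(scale=0.3, size=face.shape)
        return noisy / np.linalg.norm(noisy)

    def match_score(face_a: np.ndarray, face_b: np.ndarray) -> float:
        """Cosine similarity between two stochastic embeddings."""
        return float(embed(face_a) @ embed(face_b))

    # Two captures of the same person: a gallery image and a noisier probe.
    gallery = rng.normal(size=128)
    probe = gallery + rng.normal(scale=0.3, size=128)

    scores = [match_score(gallery, probe) for _ in range(5)]
    print([round(s, 3) for s in scores])     # a different score every call
    print([s >= THRESHOLD for s in scores])  # near the threshold, True/False can mix

A deterministic matcher would return the same score for the same pair of images every time; a probabilistic one does not, which is why error rates, rather than certainties, are the honest way to describe FRT output.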

Brittleness: This is another source of error in the results produced by FRT systems. Also referred to as over-fitting, it occurs when machine learning systems fail to provide a reasonable output because the test data is qualitatively different from the data the system was trained on.
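A toy illustration of the same failure mode, again hypothetical and far simpler than any real FRT model: an over-parameterised polynomial fits its training range almost perfectly but produces wildly wrong outputs on inputs that are qualitatively different from the training data.

    import numpy as np

    # Toy sketch of brittleness/over-fitting (hypothetical, not an FRT model).
    rng = np.random.default_rng(0)

    # Training data: 15 noisy samples of sin(2*pi*x) on [0, 1].
    x_train = np.linspace(0.0, 1.0, 15)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.05, size=15)

    # Over-parameterised fit: a degree-9 polynomial chases the noise.
    model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

    def max_error(x: np.ndarray) -> float:
        """Worst-case gap between the model and the true function."""
        return float(np.abs(model(x) - np.sin(2 * np.pi * x)).max())

    x_seen = np.linspace(0.0, 1.0, 100)    # same range as the training data
    x_unseen = np.linspace(1.2, 1.5, 100)  # qualitatively different inputs

    print(f"error on familiar inputs:   {max_error(x_seen):.3f}")
    print(f"error on unfamiliar inputs: {max_error(x_unseen):.3f}")  # far larger

The FRT analogue is a system trained largely on one demographic or camera setup being deployed on another: the model has learned its training distribution, not the task itself.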

These technical shortcomings hamper the facial recognition process itself and thereby affect an individual’s rights in security and policing contexts.

C. No mention of risks caused by ‘Emotion Recognition’ and other physiognomic practices related to FRT:

According to IFF, one of the issues that the think-tank fails to address in its paper is the risk posed by the unchecked use of machine learning technologies in the market, computer vision in general and FRT in particular. These include emotion recognition from facial features, which IFF explains is dubious and pseudoscientific because, “human emotions do not have simple mappings to their facial expressions across individuals and especially cross culturally. Despite it being baseless and racist, technologies like emotion detection are popular because the spread of FRT makes the acquisition of large datasets of face images possible which is what emotion detection algorithms work on.”

Implementation issues with Digi Yatra

The Digi Yatra policy is aimed at enhancing the passenger experience for all air travellers. IFF raises fundamental questions over this claim, as failures of FRT systems and concerns about unverifiability are bound to impact passenger experience at airports. “This is due to the simple fact that facial recognition technology is inaccurate, especially for people of colour (which includes Indians) and women,” IFF states. Such failures can result from an inability to match pictures against the Aadhaar database in real time at the airport’s Digi Yatra kiosk, ultimately compromising users’ privacy and affecting travel time.

IFF’s recommendations on the discussion paper

  1. Impose a blanket ban on the use of FRT by law enforcement agencies, and highlight the harms of such usage in the discussion paper.
  2. Impose a blanket ban on the use of emotion recognition via FRT, and highlight its harms.
  3. Create a framework to analyse use cases of FRT in public spaces on a case-by-case basis. Issues of stochasticity, brittleness and the impact on constitutional principles must be taken into account when building a framework with filters for the usage of FRT in public spaces.
  4. The discussion paper should revise its recommendations regarding “Explainable FRT systems” and create explicit provisions for preventing function creep.
  5. Lastly, the discussion paper should “revise its recommendations with regard to the establishment of a data protection regime in light of the draft Digital Personal Data Protection Bill, 2022”.

This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.

Written By

Curious about the intersection of technology with education, caste and welfare rights. For story tips, please feel free to reach out at sarasvati@medianama.com
