The Internet Freedom Foundation (IFF) has flagged concerns about Niti Aayog’s failure to address potential threats posed by law enforcement agencies’ use of facial recognition technology (FRT), as well as implementation drawbacks and policy issues with the Digi Yatra scheme, in its response to the body’s draft discussion paper titled ‘Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology’. The deadline for the submission of comments on the paper was November 30.
Niti Aayog’s discussion paper uses the Digi Yatra programme as a case study to lay out procedures and recommendations on the use of artificial intelligence through FRT in India. Based on the ‘Responsible AI principles’, the paper examined legislation and policy-making, the design and development of FRT systems, procurement processes and the consumers impacted, MediaNama reported.
The IFF’s response broadly focuses on the policy and implementation issues that may put people’s data at risk under FRT applications.
Why does it matter?
The Indian government has been increasingly encouraging the use of facial recognition systems and artificial intelligence-based technologies for varied purposes in the country. The use of facial recognition techniques is becoming common, especially among the police in different states, raising concerns about surveillance and the data privacy of individuals. Recommendations by digital rights groups such as IFF can prove significant for deliberations on future FRT policies, and can also highlight problems that might otherwise be overlooked.
What are the major unaddressed policy issues?
A. Failure to assess potential harms of FRT usage by law enforcement agencies:
- Surveillance: IFF’s response says that the paper fails to satisfactorily address the issue of “surveillance” at a time when there are at least 29 ongoing FRT projects for investigative purposes by the state and city police departments throughout the country. According to the statement, India’s national FRT project developed by the National Crime Records Bureau is considered to be the world’s biggest FRT system.
- Government exemptions: While the paper suggests setting up a data protection regime, India’s draft Digital Personal Data Protection (DPDP) Bill, 2022 is still in the consultation process. It is important to note that the “blanket exemptions” for select government agencies under Clause 18(1)(a) exempt law enforcement agencies from the Bill’s purview on grounds such as the interests of the sovereignty and integrity of India and the security of the State. Moreover, IFF highlights that the use of FRT by the police is likely to harm the fundamental rights of citizens due to systemic flaws in the country’s policing system.
- Privacy risks: The Niti Aayog paper states that rigorous standards for the processing of sensitive data collected through FRT should be adequately addressed in any proposed data protection regime. IFF points out that the paper fails to clearly state what these “rigorous standards” mean, and that the DPDP Bill, 2022 does not contain any such phrase.
B. Function creep:
The IFF notes that the paper mentions the phenomenon of “purpose creep”: facial image and video data collected for one purpose has historically been misused by the state for other purposes to which the data providers did not consent. However, IFF states that, “Function creep of FRT is not just limited to violations of personal privacy and the shifting use and abuse of just the datasets, there is the issue of FRT systems being used in spaces and contexts it was not meant for, a technical problem connected to the issue of brittleness but more so a social problem where the normalisation of FRT in public spaces itself becomes the problem instead of a deliberative process which takes into account why specific use cases of FRT are harmful.”
C. The Digi Yatra programme
In the absence of a data protection regime, there is no mechanism to regulate how the scheme will collect, process and store data. Moreover, IFF cautions that the DPDP Bill may not “satisfactorily address the privacy concerns of the scheme”. Given that Clause 18 still grants the government exemptions for data processing on security grounds, this can very well affect the Digi Yatra scheme, since the policy itself discusses “non-consensual sharing of data with security agencies and other government agencies”.
What are the implementation issues?
A. Mischaracterisation of explainable FRT:
Niti Aayog states that FRT systems should be able to provide an explanation, evidence or reasoning for their outputs in a form accessible to an auditor or judge. IFF notes that “hypothetical explainable FRT systems will still be arcane to end users and policymakers”. Additionally, while it may be of use for researchers, “explainability is an extremely narrow and new area of research and practically no deployment of explainable deep learning systems exists in the real world, let alone in FRT”.
B. Unaddressed technical aspects which cause harm to users
Stochasticity: The discussion paper does not account for the stochastic nature of machine-learning technologies. Stochasticity fundamentally means that “all machine learning systems involve a certain degree of probabilistic and statistical reasoning based on pseudo-random processes as opposed to a deterministic system. A deterministic system is one where for each set of inputs one and only one output is possible.” This means there will always be room for error, as all results are based on probability, especially in higher-level statistical processes deployed for real-world observation, such as FRT systems.
Brittleness: This is another source of errors in the results produced by FRT systems. Also described as over-fitting, it occurs when machine-learning systems fail to provide a reasonable output because the test data is qualitatively different from the data the system was trained on.
These technical aspects further hamper the process of facial recognition, and thereby affect individuals’ rights in security and policing contexts.
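To make the two failure modes concrete, here is a minimal, illustrative Python sketch of a toy face matcher. This is not drawn from the discussion paper or IFF’s submission; the embeddings, threshold and names are all hypothetical. The thresholded similarity score shows why a match decision is probabilistic rather than deterministic, and the out-of-distribution probe shows brittleness.

```python
import math
import random

def cosine(a, b):
    # Similarity between two face embeddings: a continuous score, not a
    # yes/no answer. Real FRT systems make decisions by thresholding
    # scores like this one.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def noisy_embedding(features, noise=0.05, rng=None):
    # Stochasticity: sensor noise, lighting and non-deterministic inference
    # mean the same face rarely produces the exact same embedding twice.
    rng = rng or random.Random()
    return [f + rng.gauss(0, noise) for f in features]

def match(probe, gallery, threshold=0.95):
    # The decision is a thresholded score: there is always a non-zero
    # error rate on either side of the threshold (false match / false
    # non-match), which is what "room for error" means in practice.
    best_id, best_score = None, -1.0
    for person_id, emb in gallery.items():
        score = cosine(probe, emb)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

gallery = {"traveller_A": [1.0, 0.2, 0.1]}  # hypothetical enrolled embedding

# A slightly noisy re-capture of the same face usually still matches...
probe_same = noisy_embedding([1.0, 0.2, 0.1], rng=random.Random(0))

# ...but a qualitatively different capture (brittleness: conditions the
# system was not trained on) falls below the threshold and is rejected.
probe_ood = [0.1, 0.9, 0.8]
```

Running `match(probe_ood, gallery)` rejects the probe even though a human could still identify the person, which is the brittleness IFF describes: the failure comes from test data differing from the training distribution, not from the person being unidentifiable.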
C. No mention of risks caused by ‘Emotion Recognition’ and other physiognomic practices related to FRT
According to IFF, one of the issues the think-tank fails to address in its paper is the risk posed by the unchecked use of machine-learning technologies, computer vision in general and FRT in particular, in the market. These include emotion recognition from facial features, which IFF describes as dubious and pseudoscientific because, “human emotions do not have simple mappings to their facial expressions across individuals and especially cross culturally. Despite it being baseless and racist, technologies like emotion detection are popular because the spread of FRT makes the acquisition of large datasets of face images possible which is what emotion detection algorithms work on.”
Implementation issues with Digi Yatra
The Digi Yatra policy is aimed at enhancing the passenger experience for all air travellers. IFF raises fundamental questions over this claim, as failures of FRT systems and concerns of unverifiability are bound to affect the passenger experience at airports. “This is due to the simple fact that facial recognition technology is inaccurate, especially for people of colour (which includes Indians) and women,” IFF states. Such failures can result from an inability to match pictures against the Aadhaar database in real time at the airport Digi Yatra kiosk, ultimately compromising users’ privacy and affecting travel time.
IFF’s recommendations on the discussion paper
- Impose a blanket ban on the use of FRT by law enforcement agencies, and highlight the harms of such usage in the discussion paper.
- Impose a blanket ban on the use of emotion recognition via FRT, and highlight its harms.
- Create a framework to analyse use cases of FRT in public spaces on a case-by-case basis. Issues of stochasticity, brittleness and the impact on constitutional principles must be taken into account in building a framework with filters for the usage of FRT in public spaces.
- The discussion paper should revise its recommendations regarding “Explainable FRT systems” and must create explicit provisions for preventing function creep.
- Lastly, the discussion paper should “revise its recommendations with regard to the establishment of a data protection regime in light of the draft Digital Personal Data Protection Bill, 2022”.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Also Read:
- NITI Aayog Recommendations: Make “Explainable” FRT Systems, Establish PDP Laws
- How Can Digi Yatra’s Biometrics-Based Boarding System Comply With ‘Responsible AI Principles’?
- Tamil Nadu To Upgrade Criminal Tracking Architecture, Integrate With External Databases Like Facial Recognition System
- Bihar Looking To Deploy Facial Recognition System In Bhagalpur And Muzaffarpur, Connect It To CCTNS
- RTI: Kolkata, Delhi Police Refuse To Give Information On Facial Recognition Systems
- India’s NCRB To Test Automated Facial Recognition System On ‘Mask-Wearing’ Faces