Privacy, transparency, and non-discrimination are some of the responsible Artificial Intelligence principles that need to be embedded and enhanced in the proposed Digi Yatra scheme, says NITI Aayog. In a new discussion paper on “Responsible AI for All”, the government think tank lays out its recommendations for how the proposed biometrics-based boarding programme can better comply with responsible AI principles.
What is Digi Yatra?: The Digi Yatra policy, first released in 2018, proposes a biometrics-based boarding system for Indian airports with “minimal human involvement”. Facial Recognition Technology (FRT) will be used to authenticate a passenger’s credentials and create a digital identity for them—which will then be used to verify their identity at different airport checkpoints.
Why it matters: Governments across the country are gunning to use FRT more and more, be it for law and order purposes, to verify gamers, to mark attendance, or to “improve” flying in India. However, all these policy shifts are happening in the absence of a privacy protection regime, opening the door for citizens to be surveilled and profiled en masse. NITI Aayog grounds the enthusiasm for FRT and Digi Yatra in this context, reminding developers, consumers, and government agencies of the need to keep privacy and transparency in mind when deploying these systems. Whether these recommendations will be reflected in Digi Yatra systems once implemented, however, remains to be seen.
Why use FRT?: Currently, passenger identity verification is performed manually by airline staff and Central Industrial Security Force (CISF) personnel, which leads to human error and congestion, argues the think tank. Using FRT can create a “seamless, paperless, and contactless” boarding experience. It may also lower operational costs for airport operators, airlines, and State agencies deployed for passenger identity verification. Airports may also cater to larger passenger volumes as a result of greater efficiency and lower congestion. Such contactless systems may be useful in situations like the COVID-19 pandemic, where “health-risk free” systems are preferable, says the think tank.
Is this mandatory?: The policy is currently “purely voluntary”. Alternate boarding systems relying on physical verification by CISF personnel will continue to operate for domestic and international travel. While Digi Yatra will initially supplement physical verification, the think tank adds that it “may be upscaled to all airports, with necessary legal frameworks in place”.

How the Digi Yatra system seeks to replace physical verification at airports. | Extracted from the NITI Aayog report.
Which privacy-protecting laws and practices will Digi Yatra aim to follow?
Data privacy
- Compliance with existing laws: The legal agreement between passengers signing up for the service and Digi Yatra must comply with existing data protection regulations, like the Information Technology Act, 2000, and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules). It should also comply with the privacy-protecting principles laid down in Puttaswamy v Union of India (2017).
Aadhaar-based authentication
- Compliance with the Aadhaar Act: The Digi Yatra Foundation will obtain “Authentication User Agency” status under the Aadhaar Act, 2016. It must comply with the Act’s provisions, as well as the Aadhaar (Authentication) Regulations, 2016, on issues relating to data storage, consent, data security, and the maintenance of logs.
Information Security
- Security practices: In accordance with SPDI Rules, the Digi Yatra policy states that the service will use “end-to-end, peer-to-peer encrypted communication which complies with existing legal standards”.
- Consumer control over digital identities: “Successful passenger enrolment on the Digi Yatra app shall create a secure digital identity wallet on the smartphone of the user, using public-private key pair encryption,” says the think tank. “Measures such as the use of self-sovereign identity to provide for greater individual control over digital identities, and the use of blockchain technology to help verify the credentials provided by Indian passengers (..) seek to improve the security and reliability of the Digi Yatra process.”
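The policy does not publish the wallet’s actual data formats, but the pattern it describes, a device-held private key signing credentials that checkpoints verify against the matching public key, can be illustrated briefly. Below is a minimal sketch in Python, assuming Ed25519 keys via the cryptography package; the credential fields are invented for illustration and are not Digi Yatra’s actual schema.

```python
# Minimal sketch of the key-pair pattern described above: the private key
# stays on the passenger's device, and a verifier checks the signed
# credential against the public key. Credential fields are hypothetical.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Enrolment: the wallet generates a key pair on the device.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 2. The wallet signs the passenger's travel credential.
credential = json.dumps(
    {"name": "A. Passenger", "flight": "AI-101", "date": "2022-09-01"}
).encode()
signature = private_key.sign(credential)

# 3. Verification at a checkpoint: the public key confirms the credential
#    was signed by the wallet and has not been altered in transit.
try:
    public_key.verify(signature, credential)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```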
How does Digi Yatra match up with Responsible AI Principles?
These principles were distilled by locating “systemic considerations prevalent among AI systems across the world, and identifying principles that may be used to mitigate the identified considerations,” notes NITI Aayog.
Principle of Safety and Reliability
What’s the principle?: The reliability of an AI system’s functions must be ensured, and the system must have in-built safeguards to protect stakeholder safety.
Current risk mitigation measures: FRT readers must be compliant with ISO/IEC 19794-5:2011, an ISO standard for qualitative aspects of facial biometric data.
Recommendations to mitigate further risks: An agency responsible for publishing FRT standards on explainability, bias, and errors should be identified. High-quality, standardised, and annotated images are needed to train FRT in the Indian context and avoid false negatives resulting from mislabelled data. Customer feedback mechanisms should also be implemented.
Principle of Equality
What’s the principle?: Similar people in similar circumstances should be treated equally by the AI system.
Current risk mitigation measures: Passengers with valid travel credentials should not have their travel impacted by the introduction of Digi Yatra. Alternatives exist within the ecosystem: for example, manual checking can be relied on in cases of technical failure or non-enrolment. Digi Yatra also has “exceptional handling processes” for senior citizens and persons with disabilities.
Recommendations to mitigate further risks: First, a prospective data protection law should explicitly clarify the meaning and requirements of explicit consent. Second, obtaining the explicit consent of spouses or adult dependents should be a prerequisite before creating their passenger credentials and processing their sensitive personal data under Digi Yatra. Third, consent to create credentials should not simply be given by the “head of family” on behalf of dependent adults, as the latter are capable of asserting their “right to consent” too.
Principle of Inclusivity and Non-Discrimination
What’s the principle?: AI systems must be inclusive of any and all stakeholders. They should not discriminate against individuals on the basis of “religion, race, caste, sex, descent, place of birth or residence in matters of education, employment, access to public spaces etc”.
Current risk mitigation measures: Under Digi Yatra, the first level of authentication is performed by CISF personnel. To avoid potential biases in AI decision-making, an individual can also choose to opt out of the programme in favour of a non-biometric physical authentication process. Additionally, NITI Aayog held a challenge inviting entities to submit FRT algorithms for evaluation. To develop the algorithms, the think tank provided “training sets and validation sets from the Disguised Faces in the Wild dataset, containing 11,157 face images of 1,000 subjects with varying levels of intentional and unintentional distortions to mimic real-world scenarios and improve accuracy.”
Recommendations to mitigate further risks: A body must be identified to create and maintain standards ensuring that representative training sets are used for FRT and that systems avoid bias. To ensure meaningful consent is obtained from passengers choosing to opt into the service, clear and concise notices should be provided, in English or a regional language, on how data is collected and processed across the Digi Yatra life cycle. Finally, given India’s digital divide, and to ensure Digi Yatra does not become exclusionary, manual verification at airports should be maintained as a long-run alternative to the policy.
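To make the bias concern concrete, one standard check compares an FRT system’s false non-match rate, that is, genuine passengers wrongly rejected, across demographic groups in a test set. The sketch below shows the idea in Python; the scores, group labels, and threshold are invented for illustration and do not come from the NITI Aayog evaluation.

```python
# Hypothetical check: compare false non-match rates across groups.
# Each entry is (similarity_score, group) for a genuine pair, i.e. two
# images of the same person. All values below are made up.
from collections import defaultdict

genuine_pairs = [
    (0.91, "group_a"), (0.78, "group_a"), (0.88, "group_a"),
    (0.93, "group_b"), (0.48, "group_b"), (0.52, "group_b"),
]
THRESHOLD = 0.60  # pairs scoring below this are rejected as non-matches

rejected, total = defaultdict(int), defaultdict(int)
for score, group in genuine_pairs:
    total[group] += 1
    if score < THRESHOLD:
        rejected[group] += 1  # a genuine passenger wrongly rejected

for group in sorted(total):
    print(f"{group}: false non-match rate = {rejected[group] / total[group]:.0%}")
# A large gap between groups is the kind of bias the paper warns about.
```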
Principle of Privacy and Security
What’s the principle?: The personal data of data subjects must be safe and secure. “Only authorised persons must access personal data for specified and necessary purposes, within a framework of sufficient safeguards to ensure this process,” adds the think tank.
Current risk mitigation measures: The policy “explains its compliance” with prevailing data protection law in India. Additionally, travel credentials are stored locally on the passenger’s phone, while travel data is deleted from the airport’s database 24 hours after the passenger’s departure. Finally, passengers can opt out of using Digi Yatra.
Recommendations to mitigate further risks: Internal SOPs on how to handle personal and sensitive personal data must be clarified. These should lay down the purposes and periods of data retention within the Digi Yatra ecosystem, after which the data will be deleted. Security-based exceptions should be identified and included in the SOP by the entity’s ethics committee for the responsible deployment of AI. For example, as an exception to the 24-hour deletion window, Digi Yatra’s privacy guidelines currently indicate “an ability to change the data purge settings based on security requirements on a need basis”.
Finally, passengers should be able to opt-in to the use of their facial recognition data (and other data) to provide third-party value-added services, and be able to revoke consent at any time. “While the Digi Yatra policy provides this presently, such status should remain consistent,” notes NITI Aayog.
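The retention rule described above, purging travel data 24 hours after departure with security-based exceptions, is straightforward to express as a scheduled job. Below is a minimal sketch assuming a relational store; the table name, columns, and security_hold flag are assumptions for illustration, not Digi Yatra’s actual schema.

```python
# Hypothetical purge job: delete travel records older than the retention
# window unless a security hold extends retention. Schema is invented.
import sqlite3
from datetime import datetime, timedelta

RETENTION = timedelta(hours=24)  # default purge window per the policy

def purge_expired(conn: sqlite3.Connection, now: datetime) -> int:
    """Delete travel records past retention, skipping security holds."""
    cutoff = (now - RETENTION).isoformat()
    cur = conn.execute(
        "DELETE FROM travel_records "
        "WHERE departure_time < ? AND security_hold = 0",
        (cutoff,),
    )
    conn.commit()
    return cur.rowcount

# Usage: run periodically, e.g. from a scheduled job.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE travel_records "
    "(passenger_id TEXT, departure_time TEXT, security_hold INTEGER)"
)
conn.execute(
    "INSERT INTO travel_records VALUES ('p1', ?, 0)",
    ((datetime.now() - timedelta(hours=30)).isoformat(),),
)
print(purge_expired(conn, datetime.now()), "record(s) purged")
```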
Principle of Transparency
What’s the principle?: AI systems “must be audited and capable of external scrutiny” to ensure that they are accountable, impartial, and free from inaccuracies and biases.
Current risk mitigation measures: Independent and government audits to scrutinise a system’s security, privacy, and resilience are included in the Digi Yatra Policy.
Recommendations to mitigate further risks: The composition of the independent audit teams should be specified, as should provisions for non-government audits. The required levels of transparency must be identified. The policy should also specify SOPs covering the likelihood of errors and the steps to take when they arise.
Principle of Accountability
What’s the principle?: Accountability structures for harms or damages caused by the AI system should be published in a “publicly accessible and understandable manner”.
Current risk mitigation measures: A system for passenger complaints against airlines or online travel agencies has been laid out. This will be implemented through a complaints API under Digi Yatra’s open API ecosystem.
Recommendations to mitigate further risks: In addition to the API-based complaints system, Digi Yatra should have an “adequate grievance redressal mechanism” with a clear framework for first-instance complaints and appeals (sketched below). The policy should also specify provisions to monitor Digi Yatra’s overall performance.
Additionally, vendors providing third-party value-added services that passengers have consented to should ensure that facial data and other relevant data are protected. This can be done by setting out clear licensing and data security agreements with Digi Yatra.
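Neither the complaints API’s schema nor the redressal workflow has been published. The sketch below merely illustrates what a first-instance complaint with an appeal path, as the recommendation envisages, could look like; every field name, status, and transition here is an assumption.

```python
# Hypothetical grievance workflow: a complaint rejected at first instance
# can move to appeal. Statuses and transitions are invented.
from dataclasses import dataclass, field

VALID_TRANSITIONS = {
    "filed": {"resolved", "rejected"},
    "rejected": {"appealed"},  # a first-instance rejection can be appealed
    "appealed": {"resolved", "closed"},
}

@dataclass
class Complaint:
    passenger_id: str
    target: str  # e.g. an airline or an online travel agency
    description: str
    status: str = "filed"
    history: list = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

# Usage: a complaint rejected at first instance moves to appeal.
c = Complaint("p1", "example-airline", "boarding denied despite valid credential")
c.transition("rejected")
c.transition("appealed")
print(c.status, c.history)
```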
Principle of Protection and Reinforcement of Positive Human Values
What’s the principle?: The principle focuses on the use of collected personal data for profiling, as well as the use of AI systems “in manners contrary to fundamental rights guaranteed by the constitution of India”.
Current risk mitigation measures: None specified.
Recommendations to mitigate further risks: In line with principles of purpose limitation, passengers should be able to revoke their consent or delete their data from Digi Yatra if new data processing purposes emerge.
Additionally, Digi Yatra’s privacy guidelines allow for passenger data to be shared with the Centre and any security or government agency. Such sharing should comply with the three-pronged test laid out in Puttaswamy (2017) for determining when infringements on the right to privacy are permissible. To that end, SOPs should specify norms and protocols for passenger data sharing between agencies. These would be best drafted by the ethics committee.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Read More
- Deep Dive: Nagaland To Use Facial Recognition For Teacher’s Attendance, But What Are The Issues At Stake?
- Digi Yatra: Aadhaar-Based Paperless Boarding For Domestic Flights At Airports Soon
- The Use Of Facial Recognition Technology For Policing In Delhi: An Empirical Study Of Potential Religion-Based Discrimination
- Allahabad High Court Plea Challenges Use Of Facial Recognition In Kanpur University
- Noida Man Wrongly Identified As “Wanted Criminal” By Abu Dhabi Airport’s Facial Recognition System
I'm interested in stories that explore how countries use the law to govern technology—and what this tells us about how they perceive tech and its impacts on society. To chat, for feedback, or to leave a tip: aarathi@medianama.com
