Human Rights Watch (HRW) has submitted a report to the United Nations Special Rapporteur on Extreme Poverty and Human Rights examining how the use of artificial intelligence and other data-driven technologies in welfare programs – including Aadhaar – affects people’s human rights. Aadhaar is one of eight such programs studied. The report calls Aadhaar’s biometric identification and data collection requirements “invasive”, and says the Aadhaar database “increases the risk of unnecessary and disproportionate surveillance”.
What the report does:
- Explains how states delegate key welfare functions to automated decision-making models that use technologies associated with AI, such as data mining and machine learning. These functions include determining who is eligible and setting benefit levels.
- Assesses how automated decision-making compromises people’s rights to privacy and social security, and interferes with the obligation of states to guarantee these rights without discrimination.
HRW’s concerns with Aadhaar
After a brief précis of the status of Aadhaar, the report lists HRW’s various concerns with the scheme and how it is implemented. Here is a summary:
- HRW says that Aadhaar’s “invasive” biometric identification and data collection requirements have created the world’s largest database of biometric identity information, which “increases the risk of unnecessary and disproportionate surveillance”.
- Though the Supreme Court ruling on Aadhaar imposed several restrictions to safeguard people’s data, these restrictions did not address the scope of biometric data and personal information collected under the program.
- HRW also raised concerns about the numerous data breaches associated with Aadhaar.
- It said Aadhaar’s “interferences with privacy” disproportionately affect minorities such as transgender people, since being forced to disclose their gender identity to the government could expose them to a greater risk of discrimination and persecution.
Eligibility, authentication and infrastructure issues
- Eligible families have been denied access to subsidised food grains and other benefits because they did not have an Aadhaar number, had not linked it to their ration cards, or experienced failures in authenticating their fingerprints.
- Authentication failures disproportionately affect manual labourers, older persons and others with worn fingerprints. Local activists have linked Aadhaar-related denials of food rations to deaths from starvation.
- Poor connectivity in rural areas has also led to disruptions in food distribution schedules as Aadhaar machines require an internet connection.
‘Public-private partnerships make accountability difficult’
The report concludes with a section on the role of the private sector, which is central to developing and operating automated welfare systems. Here is what it says:
- Need for policies and processes: It is unclear whether private companies involved in welfare schemes have policies or processes that meaningfully address their human rights impacts.
- Accountability: Public-private partnerships make it difficult to hold both state and non-state actors accountable for failures. Companies should, at a minimum:
- provide accessible explanations of how AI and other data-driven technologies are integrated into welfare decision-making
- disclose and address automation errors quickly
- submit to audits of algorithms and training data by external assessors
- develop processes for identifying, correcting and mitigating discrimination in system inputs and outcomes.
- Secrecy: Risk assessment models and other automated decision-making tools are typically hidden behind broad assertions of intellectual property and trade secrets. This makes it hard for people to scrutinise them.
- No pressure from governments: There does not appear to be much pressure on companies to conduct human rights impact assessments or consult welfare recipients, since governments are not insisting on adherence to the UN Guiding Principles on Business and Human Rights.
- Due diligence: Human rights due diligence is a central component of companies’ responsibilities. This requires:
- impact assessments that address issues of privacy, discrimination and exclusion early on
- internal training, dialogue and collaboration on these issues
- regular consultations with civil society and affected rights holders
- Transparency: Companies should also establish meaningful transparency measures, such as policies to disclose the outcomes of impact assessments and the concrete steps they have taken to prevent or mitigate human rights risks.
- Remedies: Companies have a responsibility to provide access to effective remedies when they have “caused or contributed to adverse human rights impacts”.
- UN Guiding Principles: States should establish implementation of the UN Guiding Principles as a mandatory condition for the sale of identity verification, benefits assessment and fraud detection products and services to welfare agencies and other relevant authorities.