Biased training models, lack of algorithmic transparency, absence of a grievance redressal system, inadequate privacy protections, and challenges to the right to privacy as guaranteed by the Puttaswamy judgement are some of the risks and challenges posed by facial recognition technology (FRT) that the government think tank NITI Aayog identified in its latest discussion paper on Artificial Intelligence (AI) and FRT.

What is FRT: "FRT is a sophisticated data-driven aspect of artificial intelligence technology that primarily seeks to accomplish three functions—facial detection, feature extraction, and facial recognition. Its applications generally operate through the identification or verification of particular persons against a gallery of facial images, necessitating the presence and use of large facial datasets for wider use," the report explains. (A minimal code sketch of this three-step pipeline follows at the end of this section.)

Why does this matter: The use of FRT is expanding rapidly around the world. In India, various government and law enforcement agencies are deploying FRT for security purposes. "India is home to some of the most surveilled cities in the world, with the use of CCTV cameras in Delhi, Chennai, Hyderabad, Indore and Bangalore ranking among the highest across the world, and an annual growth of 20-25% in India’s surveillance units markets," the paper points out. Given this, it is important to understand the risks such systems pose "to basic human and fundamental rights like individual privacy, equality, free speech and freedom of movement, to name a few."

What are the design-based risks?

1. Inaccuracy due to technical factors: FRT systems work in a three-step process, which "involves detection of a face through image…
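
To make the three-step pipeline described above concrete, here is a minimal sketch in Python using the open-source face_recognition library. The image file names, the two-person gallery, and the 0.6 distance threshold are illustrative assumptions for this sketch, not details taken from the NITI Aayog paper.

```python
# Minimal sketch of the FRT pipeline: detection -> feature extraction -> recognition.
import face_recognition

# 1. Face detection: locate faces in the probe image (hypothetical file name).
probe_image = face_recognition.load_image_file("probe.jpg")
face_locations = face_recognition.face_locations(probe_image)

# 2. Feature extraction: encode each detected face as a 128-dimensional vector.
probe_encodings = face_recognition.face_encodings(probe_image, face_locations)

# Gallery of known persons (in a real deployment, a large facial dataset).
gallery = {
    "person_a": face_recognition.face_encodings(
        face_recognition.load_image_file("person_a.jpg"))[0],  # hypothetical file
    "person_b": face_recognition.face_encodings(
        face_recognition.load_image_file("person_b.jpg"))[0],  # hypothetical file
}

# 3. Recognition: compare each probe encoding against the gallery and report
# the closest match if it falls below a distance threshold.
THRESHOLD = 0.6  # the library's default tolerance; lower values are stricter
for encoding in probe_encodings:
    names = list(gallery.keys())
    distances = face_recognition.face_distance(
        [gallery[name] for name in names], encoding)
    best = distances.argmin()
    if distances[best] <= THRESHOLD:
        print(f"Match: {names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in gallery")
```

The threshold choice illustrates one source of the accuracy concerns the paper raises: a looser threshold increases false matches, while a stricter one increases missed matches, and both error rates can vary across demographic groups when training data is biased.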
