The New York City Police Department (NYPD) has finally introduced a policy on using facial recognition technology in its investigations, after almost a decade of use. The policy clearly states that a facial recognition match will serve only as an investigative lead and will not, on its own, be sufficient “to make an arrest, or obtain an arrest or search warrant”; corroborating information will be needed. This policy comes two weeks after BuzzFeed News reported that NYPD officers ran more than 11,000 searches on Clearview AI’s controversial facial recognition software — “the most of any entity” — and that more than 30 NYPD officers have Clearview accounts.

When can facial recognition be used? As per the policy, facial recognition can only be used for “legitimate law enforcement purposes”, limited to mitigating an imminent threat to health or public safety (such as a terrorist plot), or to identifying:

  • An individual who has committed, is committing, or is about to commit a crime
  • A missing person, a victim, or a witness
  • A deceased person
  • A person unable to identify themselves
  • An arrested person who does not have their ID, or is using a false identity

The NYPD has been using facial recognition since 2011. According to its website, in 2019 its Facial Identification Section received 9,850 requests for comparison and identified 2,510 possible matches, including in 68 murders, 66 rapes, 277 felony assaults, 386 robberies, and 525 grand larcenies. It states that it “knows of no case” in New York City where a person was falsely arrested on the basis of a facial recognition match.

This policy comes after multiple cities in the US — including San Francisco, Oakland, Cambridge, Berkeley, and Somerville — banned the use of facial recognition technologies. In February, US Senators Jeff Merkley and Cory Booker introduced a bill to temporarily restrict the use of facial recognition by all government agencies until limits are placed on the use of the technology.

NYPD’s procedure to use facial recognition technology in investigations

  1. Case investigator submits a request to use facial recognition to the Real Time Crime Center, Facial Identification Section (RTCC-FIS). The request must include an image (“probe image”) of the unidentified person obtained from witnesses, victims, or other “reliable sources”. On its website, the NYPD clarified that footage from body cameras is not used for facial recognition unless a police officer saw a crime being committed but could not apprehend the suspect.
  2. Request is approved: The RTCC-FIS supervisor will direct an RTCC-FIS investigator to review the request. This investigator will ensure that the “underlying basis for request is in compliance with authorized uses of facial recognition technology”. The only ground for rejection defined in the procedure is a probe image of “unsuitable quality”; in that case, the case investigator may submit additional images.
  3. Run image against the “photo repository”: The repository contains only arrest and parole photographs, and is stored in a “designated, and approved, law enforcement database, and access is restricted to authorized users”. Only RTCC-FIS investigators have access to the photo repository, not the rest of the NYPD, including case investigators.
  4. Send report to case investigator: The RTCC-FIS investigator will submit the possible match candidate to their supervisor, who will, if in agreement, approve the match report and send it to the case investigator. The supervisor can also direct the RTCC-FIS investigator to continue the search, or send a “No Match Report” to the case investigator.

On its website, the NYPD said that video from “city-owned and private cameras” is not analysed unless it is relevant to the crime committed, and that facial recognition is not used to monitor or identify people in crowds and at rallies. The new policy also prohibits running facial recognition against any image outside the photo repository — such as other government photo databases, like drivers’ license photos from the NYS Department of Motor Vehicles, or social media — except in specific cases approved by the Chief of Detectives or the Deputy Commissioner, Intelligence and Counterterrorism.

The procedure is not as secure as NYPD makes it out to be

  1. Too much room for abuse: Identifying people who are “about to commit a crime” or mitigating threats to health or public safety are overly broad conditions that can easily be abused, especially when you factor in the numerous racial biases that the NYPD has repeatedly been guilty of (read about it here and here).
  2. Limited effectiveness of facial recognition technology: The NYPD claims that to counter racial biases in software, “hybrid machine/human systems”, where findings of software are reviewed by human investigators, allow “erroneous software matches” to be “swiftly corrected by human observers”. However, NYPD’s racial biases and lack of diversity limit the effectiveness of human oversight. According to the latest NYPD Transparency report, 47% of its officers are white, while 29% and 15% are Hispanic and black, respectively. Only 18% of its officers identify as female.
  3. Not clear who can make a request to use facial recognition: The policy does not specify whether only NYPD personnel can make such a request, or whether any law enforcement agency in the country can. Worryingly, a request can apparently also come from an outside law enforcement agency, and the policy does not clarify who exactly is eligible to make one.
  4. Scale of photo repository not clear: It is not clear whether the repository of arrest and parole photographs is a city-wide, state-wide, or national database, or what the penalties for unauthorised access to it are.
  5. Can a facial recognition request be rejected? It is not clear. The policy also does not specify what training the RTCC-FIS investigator will have to review the legal compliance of the request itself.

Read more: Personal Data Protection Bill, 2019: Looking at use of video recordings, facial recognition software and drones by police


Indian authorities use facial recognition technology unabated, sans policy

This policy comes as Indian law enforcement agencies continue to use facial recognition technology indiscriminately at rallies, polling booths, and protests, without any law or policy governing its use. The proposed Personal Data Protection Bill, 2019, is also not clear on what controls will apply to the processing of biometric data. Moreover, its broad exemptions for law enforcement and government agencies are ripe for government overreach.

  • A few days ago, Home Minister Amit Shah said in Lok Sabha that the government was using facial recognition to identify perpetrators of riots that broke out in Delhi in February, and had identified over 1,100 people. The footage that Delhiites sent to Delhi Police was compared against voter ID data, driver’s licence, and “other government data”. Aadhaar data was not used.
  • The National Crime Records Bureau (NCRB) is in the midst of reviewing applicants for the implementation of a centralised Automated Facial Recognition System (AFRS) that will be a platform of facial images accessible to all police stations of the country.
  • In February 2020, we reported that the Vadodara City Police is planning to use Clearview AI’s controversial facial recognition software in public places to track “property offenders”. The software could also be used with CCTVs installed at “specific locations” in the city.
  • In January 2020, the Telangana State Election Commission (TSEC) had announced that a facial recognition app would be used on a pilot basis at 10 polling stations in the Kompally Municipality in the then upcoming civic elections in the state.
  • In January 2020, the Indian Railways said that it was in the process of installing Video Surveillance Systems (VSS), equipped with a facial recognition system, in 983 railway stations across the country.
  • In November 2019, the Hyderabad Police randomly collected people’s fingerprints and facial data to identify “potential” criminals using the TSCOP app, which was launched in January 2018. Syed Rafeeq, Additional DCP, South Zone, Hyderabad, had told MediaNama that the police were approaching people to verify if they were “suspects” mostly based on intuition.
  • In December 2018, Delhi Police used facial recognition technology at Prime Minister Narendra Modi’s rally to screen crowds, News18 had reported.