In potentially the first case of its kind, the High Court in Cardiff, UK, on September 4 ruled (judgement available below) that it is lawful for police to use facial recognition technology to search for people in crowds. The judges said that the use of Automatic Facial Recognition (AFR) technology by South Wales Police (SWP), which has deployed this technology on 50 occasions, was lawful and did not breach human rights or data protection laws. The bench also said that the UK’s current laws were enough to deal with any concerns about breach of privacy rights. The petitioner will appeal the decision, according to multiple reports.

What did the court say? The court ruled:

  1. SWP’s use of the technology complied with the Human Rights Act.
  2. Although the collection and processing of images potentially interfered with the rights of the people whose images were taken, SWP’s actions were legal and fulfilled the conditions set out in the Data Protection Act 2018.
  3. Unless a person matched a person on a watchlist, all data related to them was deleted immediately after processing.
  4. “[T]he current legal regime is adequate to ensure appropriate and non-arbitrary” use of AFR Locate.

How did this case come about? Activist Ed Bridges brought the case because he believed that SWP had scanned his face on two occasions. His case was backed by the human rights group Liberty, which said that the use of the technology breached his rights and data protection laws. Bridges argued:

  1. There are no proper legal safeguards governing the use of AFR technology, which compares individuals’ biometric data against a police database.
  2. Scanning people’s faces using FRT is akin to taking someone’s DNA or fingerprints without their consent.
  3. Deployment of facial recognition technology could affect the human rights of thousands of people.

What is the technology in question? This case was concerned with AFR Locate which, when deployed, takes “digital images of faces of members of the public” from live CCTV feeds and processes them in “real time to extract biometric information”. This information is then compared with the facial biometric information of people on a watchlist.
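The matching step described above — extract a biometric feature vector from each detected face and compare it against watchlist entries, discarding anything that does not match — can be sketched roughly as follows. This is purely illustrative: the cosine-similarity measure, the threshold value, and all names (`Face`, `screen_frame`) are assumptions, not details of the actual AFR Locate system.

```python
from dataclasses import dataclass

# Hypothetical similarity threshold; real systems tune this to balance
# false alerts against missed matches.
MATCH_THRESHOLD = 0.6

@dataclass
class Face:
    person_id: str
    embedding: list  # biometric feature vector extracted from an image

def similarity(a, b):
    """Cosine similarity between two embeddings (illustrative only)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def screen_frame(detected_embeddings, watchlist):
    """Compare each detected face against the watchlist.

    Mirrors the behaviour described in the judgement: embeddings that
    match no watchlist entry are discarded immediately; only matches
    are retained (and, per SWP, for no more than 24 hours).
    """
    alerts = []
    for emb in detected_embeddings:
        best = max(watchlist, key=lambda f: similarity(emb, f.embedding))
        if similarity(emb, best.embedding) >= MATCH_THRESHOLD:
            alerts.append(best.person_id)
        # non-matching embeddings simply fall out of scope here —
        # nothing about those members of the public is stored
    return alerts
```

The key design point, and the one the court leaned on, is that the no-match branch stores nothing: deletion is the default, not a cleanup step.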

What did the South Wales Police argue? They said that the use of the technology did not violate privacy or data rights because:

  1. The force did not retain biometric data of people who were not on a watchlist.
  2. Data associated with a facial recognition match is retained by the police for up to 24 hours.

Is facial recognition technology common in the UK? According to a Financial Times report, three British police forces use FRT — South Wales Police, Leicestershire Police, and London’s Metropolitan Police. London has a network of 420,000 CCTV cameras and is arguably the most monitored city in the world after Beijing.

Has the UK unconditionally accepted the implementation of facial recognition technology? There have been critics. When private developers of a 67-acre area in King’s Cross deployed FRT without the public’s knowledge, as reported by the Financial Times, there was a furore. MPs on the House of Commons Science and Technology Committee, AI researchers at the Ada Lovelace Institute, the Biometrics Commissioner, and several civil liberty organisations have called for a prohibition on the use of FRT in the UK until regulations are established.

What was the response to the judgement? 

  • Information Commissioner’s Office: The ICO, which has been critical of police and private use of facial recognition technology, said it welcomed the court’s finding that at least acknowledged that “the police use of live facial recognition (LFR) systems involves the processing of sensitive personal data of members of the public”. It is now reportedly finalising its recommendations to police forces on “how to plan, authorise, and deploy any future LFR systems”.
  • Petitioner: Liberty, which supported Bridges’s case, said that the judgement was “disappointing” and “does not reflect the very serious threat that facial recognition poses to our rights and freedoms,” the Financial Times reported. Bridges plans to appeal.

What is the status of facial recognition technology in India? There are no laws to govern the use and deployment of facial recognition technology in India. Despite that, facial recognition technology is gaining momentum across the country:

  • On September 6, 2019, Delhi airport started processing passengers’ entry at all checkpoints using facial recognition systems on a “trial basis”, under the Civil Aviation Ministry’s Digi-Yatra initiative. Bengaluru’s Kempegowda International Airport will also introduce a similar system by the end of this year under the same initiative.
  • On September 5, 2019, Gujarat Chief Minister Vijay Rupani launched a facial recognition based attendance system to keep track of 2.5 lakh government teachers in the state.
  • In July 2019, the National Crime Records Bureau (NCRB) invited bids for the implementation of a centralised Automated Facial Recognition System (AFRS): a repository of photographs made available to police stations across the country for facial recognition. The aim is to identify and track criminals, and the system will be integrated with existing AFRS systems in different states.
  • Through an RTI, MediaNama learnt that the Delhi Police owns two “facial recognition softwares [sic]” for use in Crime Branch, and has installed 6,372 CCTV cameras across the city.
  • The Delhi government has been installing 1.4 lakh CCTV cameras across the city, and an indeterminate number of CCTV cameras inside government school classrooms. Despite repeated queries and RTIs by MediaNama, the government has not clarified if it will deploy facial recognition technology through these cameras.

Read more: Caught on Camera: India is woefully unprepared for facial recognition technology