The American Civil Liberties Union on May 28 sued controversial facial recognition company Clearview AI, “to put a stop to its unlawful surreptitious capture and storage of millions of Illinoisans’ sensitive biometric identifiers”. Clearview’s face surveillance activities violate the Illinois Biometric Information Privacy Act (BIPA) and represent an unprecedented threat to people’s security and safety, ACLU said in a statement. The lawsuit, filed on behalf of organisations representing survivors of sexual assault and domestic violence, undocumented immigrants, and other vulnerable communities, seeks an order declaring that Clearview’s conduct violates Illinois’ privacy law and requiring the company to “cease its unlawful activities”.

BIPA, enacted in 2008, requires companies to obtain a person’s written permission before collecting their biometric information. Illinois residents can sue companies for up to $5,000 for every privacy violation. The lawsuit was prepared by ACLU, ACLU of Illinois, and law firm Edelson PC.

Clearview AI’s face recognition service is built on a database constructed largely by scouring the billions of images publicly available on the internet. Feed a person’s image into the service, and it pulls out all matching faces from its database. The software pulls facial data from publicly available images across the web, including from Twitter, Facebook, Google, Instagram, YouTube, news articles, and more. The result is a database of unprecedented scale (more than 3 billion images) that can readily identify a person walking down the street from just a single photo.
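To make the mechanics concrete, here is a minimal sketch of how a reverse face search of this kind generally works, written with the open-source face_recognition library. This is an illustration of the underlying technique, not Clearview’s actual code; the URLs and file paths are hypothetical, and a real system indexing billions of faceprints would use specialised nearest-neighbour search rather than a flat list.

```python
# Minimal sketch of a reverse face search, using the open-source
# face_recognition library as a stand-in for a proprietary system.
# All URLs and file paths below are hypothetical.
import face_recognition

# "Scrape" phase: compute a faceprint (a 128-dimensional embedding)
# for every face found in each collected image, and remember where
# the image came from.
database = []  # list of (source_url, embedding) pairs
for url, path in [("https://example.com/photo1.jpg", "photo1.jpg")]:
    image = face_recognition.load_image_file(path)
    for embedding in face_recognition.face_encodings(image):
        database.append((url, embedding))

# "Search" phase: embed the query face (assumes one detectable face
# in the photo) and return the source of every stored face whose
# embedding lies within a distance threshold.
query = face_recognition.load_image_file("query.jpg")
query_embedding = face_recognition.face_encodings(query)[0]

known = [emb for _, emb in database]
distances = face_recognition.face_distance(known, query_embedding)
matches = [database[i][0] for i, d in enumerate(distances) if d < 0.6]
print(matches)  # webpages where the queried face likely appears
```

The detail worth noting is that each match resolves back to a source URL, which is how a single photo can be turned into a trail of webpages about a person.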

Clearview’s business model is a ‘nightmare scenario’

Images collected without people’s consent: Clearview has violated and continues to violate BIPA “at staggering scale”, ACLU said in the lawsuit. It has captured more than three billion faceprints (unique biometric identifiers drawn from pictures), “all without the knowledge—much less the consent—of those pictured”, and in doing so has failed to take the basic steps necessary to ensure that its conduct is lawful, the lawsuit said. Clearview AI neither obtains consent from people before adding their images to its database, nor discloses to them that it would sell their images to other parties, ACLU alleged. In doing all this, Clearview has set out to do what many companies have “intentionally avoided out of ethical concerns”, the lawsuit said.

“At no point does Clearview—on its own, through its clients, customers, or through any other party—even attempt to inform individuals that their images and sensitive biometric data are being collected. It does not obtain (or even try to obtain) those individuals’ consent.” — ACLU lawsuit

Purpose of data collection and retention not disclosed: The company’s business model “appears to embody the nightmare scenario” of a “private company capturing untold quantities of biometric data for purposes of surveillance and tracking without notice to the individuals affected, much less their consent”, ACLU said. Clearview “fails to provide individuals with a written, publicly available policy identifying its retention schedule, and guidelines for permanently destroying the faceprints in its database, as required by the BIPA,” the lawsuit said.

Excluding images with geolocation is not enough: The lawsuit also alleged that the measures Clearview claims to have taken fall short. The company says it will not capture any image that contains geolocation in its metadata, but that filter misses most pictures: social media platforms typically strip location metadata on upload, and photos taken on phones with location settings disabled never contain it in the first place, ACLU said. “A significant proportion of photos of Illinois residents that appear online will not contain geolocation information and so will not be excluded from Clearview’s system,” it said. Similarly, while Clearview claims to exclude photos from its system if they were uploaded from an internet protocol address in Illinois, ACLU said that “few photos on publicly available websites include metadata showing the IP address from which they were uploaded”.
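For illustration, a geolocation filter of the kind Clearview describes would plausibly inspect a photo’s EXIF metadata, which is exactly what most social platforms strip on upload. Below is a minimal sketch with Pillow; this is our assumption about how such a check might look, not a description of Clearview’s code, and the file name is hypothetical.

```python
# Sketch of a geolocation-based exclusion check that reads EXIF
# metadata with Pillow (requires a recent Pillow version for
# Exif.get_ifd). Social platforms typically strip EXIF on upload,
# so this returns False for most web-scraped photos, Illinois or not.
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF tag pointing at the GPSInfo block

def has_geolocation(path: str) -> bool:
    """True only if the file still carries GPS data in its EXIF."""
    exif = Image.open(path).getexif()
    return bool(exif.get_ifd(GPS_IFD))

# A photo re-downloaded from a social network will usually report
# False here, so it would NOT be excluded from the database.
print(has_geolocation("downloaded_photo.jpg"))  # hypothetical file
```

This is the crux of ACLU’s objection: the exclusion keys on metadata that is rarely still present by the time a photo reaches the open web.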

Decision to not sell to non-govt entities can be reversed: While Clearview said that it would cancel the accounts of all non-government users, as well as all accounts belonging to any entity in Illinois, ACLU alleged that Clearview could unilaterally reverse this decision at any time. Nor does the move address past violations of people’s rights or end the nonconsensual capture of people’s images from the internet, it said. “Further, Clearview’s past conduct casts doubt on the reliability of the company’s claims”.

Clearview’s service can recognise faces even from ‘imperfect pictures’: Clearview’s database contains more than just people’s faceprints; it also stores links to the webpages from which Clearview obtained the photographs, ACLU said. Those webpages often contain additional information about the individual, “furthering the privacy and security harms”, it alleged. Clearview’s neural engine can recognise faces even from “imperfect images”, and the company can pair its face recognition technology with other technology, like augmented-reality glasses, to potentially identify almost anyone, the lawsuit alleged.

Size and nature of facial dataset ‘harmful’: Given the sheer size and nature of the database Clearview has amassed, the consequences of any data breach would be “harmful”, the lawsuit said. Unlike numerical identifiers (e.g., Social Security numbers), which can be replaced or reassigned, biometrics are biologically unique to each person; once exposed, an individual has no recourse to prevent falling prey to misconduct like identity theft and unauthorised tracking.

It is worth mentioning that last month, a misconfigured server at Clearview AI left its source code publicly accessible, which could potentially have been used to run its app from scratch. Before that, the company’s entire client list was stolen in a breach.

Clearview under heavy scrutiny

Clearview AI had first come under the scanner when The New York Times reported in January that the service was secretly built by collecting images from across the web. Following the revelation, Twitter sent a cease and desist letter to Clearview AI, asking it to stop scraping content from the platform and delete the photos it had stored; Google, YouTube, and Facebook followed with similar notices.

The company had consistently maintained that its service is only used by law enforcement agencies and “select security professionals”. But after its entire client list was stolen, a BuzzFeed News report found that the list included both government and private entities. The company was also found to be selling its technology to law enforcement agencies and police forces in 27 countries, including Saudi Arabia, the United Arab Emirates, and India.

US Senator Edward Markey, in March, had asked Clearview AI whether it planned to sell its facial recognition software outside of the US, and how it would guarantee that its technology is not used to abuse human rights. Selling to authoritarian regimes “raise[s] a number of concerns because you would risk enabling foreign governments to conduct mass surveillance and suppress their citizens”, Markey had written.