Facebook is reportedly testing a facial recognition system on its mobile app to verify whether users are humans or bots, researcher Jane Manchun Wong said on Twitter. She noted that the facial recognition feature “recognises my [her] face as a face” and doesn’t associate it with an identity. The interface is similar to Apple’s Face ID setup, where users rotate their head so the camera gets a more complete view of their face.

A message on the interface said that no one else will be able to see the video selfie, and that it will be deleted 30 days after Facebook confirms that a human is accessing the account in question. However, Wong asked, “if Facebook doesn’t store/remember the face 30 days after the identity verification, does that mean people can create new fake accounts and pass the video selfie test once a month?” It is also unclear why Facebook would need to store a user’s video selfie for a month after confirmation.

Wong told MediaNama that she discovered this prototype interface while “reverse engineering the [Facebook] app”. She also said she wasn’t sure whether Facebook would store this data locally on the device (as Apple does) or on a server, since she “didn’t and didn’t want to submit my [her] facial data”.

While this feature looks similar to Apple’s Face ID, it serves a different purpose: “this is designed for testing whether a user is a real human, while Face ID is used for authentication and unlocking the phone”, Wong told us. MediaNama has asked Facebook whether this feature would be used to sift out bots, as Wong has speculated, and whether it could eventually replace CAPTCHA verification.

As of now, it is unclear what Facebook will do with the facial data it collects if this feature goes public. We have reached out to Facebook for more details.

Facebook’s patchy history with facial recognition

This isn’t the first time that the company has dabbled with facial recognition technology:

  • Facebook used facial recognition software on photos for tagging suggestions, but was taken to court in 2015 under the Illinois Biometric Information Privacy Act, which requires companies to publish a written policy before collecting their users’ biometric data. Facebook lost its appeal in the case in August 2019, allowing the class action to proceed.
  • In July 2019, the US Federal Trade Commission (FTC) slapped a $5 billion penalty on Facebook for, among other things, misrepresenting users’ ability to control the use of facial recognition technology with their accounts. The FTC said that Facebook’s facial recognition setting, called “Tag Suggestions”, was turned on by default, while the updated data policy suggested that users would have to opt in to enable facial recognition for their accounts.
  • In September 2019, the company said that users would have to opt in to its ‘face recognition’ feature, which had been enabled by default to provide tag suggestions. The default ‘tag suggestions’ feature was rolled back. However, this only meant that Facebook would no longer suggest that friends tag you in photos unless you opted in; facial recognition, as a feature, continues to be present on the platform (Settings > Face Recognition > Toggle ON or OFF).

Big Tech’s continued trysts with facial recognition

  • Apple launched its facial recognition system, called Face ID, in 2017 on the iPhone X. Former US Senator Al Franken raised privacy concerns associated with capturing a sophisticated three-dimensional model of a user’s face, such as how and where such data is stored (locally or remotely), which apps it is shared with, and how Apple would deal with law enforcement requests for Face ID data.
  • Amazon is potentially creating a “database of suspicious persons” using facial recognition technology. Amazon Rekognition, the company’s face identification software, has already been licensed to several law enforcement agencies in the US.
  • Earlier this year, Google was carrying out “field research” to improve the facial recognition algorithm on its Pixel 4 device. It was later reported that the contractors conducting this research for Google were specifically targeting dark-skinned homeless people and college students in the US, highlighting the social biases that underpin the data sets for such technologies.