Boston has banned the use of facial recognition technology by the city’s government and prohibited any city official from obtaining the technology from third parties. This came after the city council, on Wednesday, unanimously voted in support of an ordinance sponsored by councilors Michelle Wu and Ricardo Arroyo. The measure will now go to the city’s mayor to be signed into law. With this, Boston joins cities such as San Francisco, Oakland, Cambridge, Berkeley, and Somerville, which have already banned government use of the technology.
The ban comes against the backdrop of the arrest of a Black man, Robert Julian-Borchak Williams, following an incorrect facial recognition match. Williams was accused of a shoplifting incident that took place in October 2018; a still from the shop’s surveillance cameras was uploaded to Michigan state’s facial recognition database. This led to Williams’ picture being included in a photo lineup that was shown to the store’s security guard, who identified Williams as the culprit. However, during the interrogation, Williams held up a photo of the shoplifter next to his own face, following which one of the detectives said that “the computer must have gotten it wrong,” Williams wrote for The Washington Post, recounting the incident. Williams’ case might well be the first in which an incorrect facial recognition match led to real-world consequences for an individual, but chances are, it won’t be the last.
“Boston should not use racially discriminatory technology that threatens the privacy and basic rights of our residents. This ordinance codifies our values that community trust is the foundation for public safety and public health,” Wu said in a statement. “While face surveillance is a danger to all people, no matter the color of their skin, the technology is a particularly serious threat to Black and brown people,” Arroyo said.
The protests in the US against racial discrimination have forced companies to take a stand against facial recognition, especially because the technology is known to be biased, particularly against people of colour and other underrepresented communities. Microsoft has said that it will not sell the technology to police in the US until there is a federal law regulating its use, while Amazon has committed to doing the same, albeit just for a year. IBM has said that it will altogether stop offering “general-purpose facial recognition and analysis software”. Amazon’s facial recognition tool Rekognition, for instance, incorrectly matched 28 members of Congress with mugshots of people who had been arrested. Research, in general, has shown that facial recognition tools are worse at detecting and identifying the faces of darker-skinned people, thereby creating ample room for discrimination and persecution.