What’s the news: Four US federal agencies issued a joint statement on April 25, 2023, raising concerns about potential bias and discrimination arising from the use of automated systems. The statement, which also warns of the potential harms of Artificial Intelligence (AI), comes months before the release of the United Nations’ toolkit for law enforcement agencies worldwide that are keen on using AI.

The Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission described automated systems as “software and algorithmic processes, including AI, that are used to automate workflows and help people.” The agencies noted that the use of automated systems has become common in daily life.

“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” the joint statement said.

Why it matters: According to the agencies, private and public entities use such systems to determine access to jobs, housing, credit opportunities, and more. In India, too, law enforcement agencies have developed a penchant for CCTVs, facial recognition, and other surveillance technologies. Meanwhile, the government has confirmed the use of facial recognition for specific e-governance purposes, such as the Meghraj Cloud and UIDAI, even though there are no laws in place to regulate such AI. The concerns flagged by the US authorities…
