In an ironic move, Google is planning to launch new AI ethics services this year, through which it will advise clients on how to spot racial bias in computer vision systems and how to develop ethical guidelines for their AI projects, Wired reported. At a later date, the company “may” offer to audit clients’ AI systems for ethical integrity and charge for dispensing ethics-related advice.

The company’s first set of services will include training courses on topics such as how to spot ethical issues in AI systems (similar to a course offered to Google employees) and how to develop and implement AI ethics guidelines, according to Tracy Frey, director of product strategy and operations for Cloud AI at Google, as quoted by Wired. Review and audit services may follow later; the company has reportedly not yet decided whether it will charge for some of these services.

Given its chequered past with the use of AI in general (see below), the company released its principles for responsible use of AI in 2018. As per Wired, adhering to these principles has also slowed down the launch of Google’s products, at times at the cost of revenue. For instance, in 2019, Google launched a facial recognition service limited to celebrities; its ethical review and design process took 18 months. During the review, the company reportedly had to fix a problem with the training data that caused lower accuracy for black male actors. As a result, by the time Google launched its service, a similar Amazon service had been open to all customers for more than two years.

It is not clear whether Google would need access to companies’ proprietary algorithms in order to advise them on ethical AI or audit their systems. We have reached out to the company for more information.

Ethical for whom? Google’s spotty history of using technology ethically

In case readers have forgotten, this is the same company whose racist search algorithms are the subject of “Algorithms of Oppression: How Search Engines Reinforce Racism” by UCLA professor Safiya Umoja Noble, which argues that the AI and algorithms behind Google’s search and other products perpetuate racism through data discrimination. This is also the company that, unable to fix an image-labelling algorithm that tagged black people as gorillas, prevented Google Photos and other Google products from ever labelling any image as a gorilla, chimpanzee or monkey, even when the pictures were of the primates themselves.

Driven by corporate interests, Google has repeatedly been drawn towards arguably unethical uses of technology and AI. Case in point: Project Maven, a Pentagon contract under which Google analysed surveillance footage from drones. Google refused to renew the contract in 2019 only after facing significant pushback from employees. Despite that, Google has continued to support start-ups that deploy AI for law enforcement purposes through its venture capital arm, Gradient Ventures.

After IBM stopped offering general facial recognition and analysis software in response to the Black Lives Matter movement earlier in June, it was quickly followed by Amazon’s moratorium on police use of its controversial facial recognition system, Rekognition, and by Microsoft’s refusal to sell such technology to police departments in the absence of a law regulating it. Google, however, was conspicuously absent, despite 1,666 employees demanding that it stop selling its technology to police departments. Many academics and journalists have raised concerns about the accuracy of facial recognition technology, especially for people of colour; the two most well-known studies are MIT’s Gender Shades project and the ACLU’s report on Amazon’s Rekognition.
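The disparity such audits measure is conceptually simple: error rates are computed separately for each demographic group and then compared. As a rough illustration only (a hypothetical Python sketch, not code from Gender Shades or the ACLU, with a made-up record format and toy data), per-group accuracy can be computed like this:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples.
    The record format is hypothetical, chosen for illustration only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Accuracy per group; a large gap between groups signals disparate performance.
    return {group: correct[group] / total[group] for group in total}

# Toy, made-up data: a system that is right 3/4 of the time for one group
# and only 1/2 of the time for another.
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "no_match"),
    ("group_b", "match", "match"), ("group_b", "no_match", "match"),
]
print(accuracy_by_group(records))  # {'group_a': 0.75, 'group_b': 0.5}
```

Gender Shades reported exactly this kind of gap for commercial gender classifiers, with the worst performance on darker-skinned women.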

Another case in point: Google was developing a China-specific search engine, codenamed Project Dragonfly, that would have complied with Chinese censorship rules. It confirmed the project’s termination only in July 2019, after being criticised by its own employees and by global rights organisations such as Amnesty International.