The Bombay High Court has asked the government for information on a report about an AI bot that was used to convert images of underage and adult women into fake nude pictures, news agency PTI reported. The court cited a Hindustan Times article that had reported on findings by the Dutch cybersecurity firm Sensity.

The report suggested that more than 104,000 women had been targeted using this AI bot as of July 2020. A poll on the geographic location of over 7,200 of the bot's users revealed that around 2% of its users were located in India and neighbouring countries. Sensity found that the bot received significant advertising via the Russian social media website VK.

While hearing petitions against the media trial in actor Sushant Singh Rajput's death, the court reportedly asked Additional Solicitor General Anil Singh to get in touch with the Ministry of Information and Broadcasting to check for any "malice" in the report. Singh reportedly told a division bench of Justices Dipankar Datta and GS Kulkarni that he had read the report, and that appropriate action would be taken under the Information Technology Act.

The report by Sensity uncovered an entire deepfake ecosystem on the messaging platform Telegram: an AI bot, thousands of users, and multiple channels. At the heart of this ecosystem is an artificial intelligence-powered bot that allows users to photo-realistically "strip naked" clothed images of women. These manipulated images can then be shared in private or public channels beyond Telegram…
