There has been a 13% increase in content removed through automated detection, Google said in its Information Technology (IT) Rules, 2021 compliance report for the month of August. The IT Rules require significant social media intermediaries (SSMIs) to publish periodic compliance reports. As in the compliance report for July, Google says the content was removed from "all its platforms" but does not specify which platforms the report covers. In August, Google removed 651,933 pieces of content through automated detection, up from 576,892 in July. Google says its automated detection technology is used to detect content such as “child sexual abuse, violent extremist content”.

For the automated detection process, Google uses:

- Location data of the sender or creator of the content
- Location of account creation
- IP address at the time of video upload
- User phone number

Content removal based on user complaints decreases marginally

According to the compliance report for August, there was a 2.22% reduction in content removed on the basis of user complaints when compared to July. The decrease can be attributed to a fewer number of complaints received from users: in August, Google removed 93,550 items of content based on 35,191 user complaints, while in July it removed 95,680 items based on 36,934 complaints. While most of the removed content concerned copyright violations (99.1%), other items were removed on the basis of court orders, graphic sexual content, and so on.

Why more actions removed versus…
