Google said that it removed over 576,892 items in India based on automated detection between July 1 and July 31. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 mandate that significant social media intermediaries (SSMIs) publish periodic compliance reports. The report says the content was removed from all of Google's platforms, but it does not specify which platforms are covered; the report carries both the Google and YouTube logos.

In this monthly compliance report, the second of its kind since the IT Rules came into force on May 25, Google said that it uses automated technology to detect harmful content such as "child sexual abuse, violent extremist content". For detection, Google uses data such as:

- Location data of the content creator or sender. Note that Google says senders or creators of content may attempt to evade detection through location-concealing mechanisms, and that reporting based on location attribution should be interpreted as a directional estimate.
- Data signals such as the location of account creation
- IP address at the time of video upload
- User phone number

95,680 items removed based on user complaints; 99.1% removed for copyright violation

In addition to the content removed based on automated detection, Google removed 95,680 'items' based on 36,934 user-made complaints. Most of the removed content concerned copyright violations (99.1%); other items were removed based on court orders (0.0%), graphic sexual content (0.0%), and so on. The majority of the complaints received, too,…
