India has earned the dubious distinction of having the most TikTok videos removed between January and June 2020: a whopping 37,682,924, nearly four times as many as the USA, the country with the second-highest count at 9,822,996 (9.82 million). Of the 37.68 million videos removed in India, only 225 videos and 8 accounts were removed or blocked at the government's request; all the others were removed by TikTok of its own volition, indicating that India led in the creation of violative content. India's ban on the app is unlikely to have had a significant impact on these numbers (more on that below).
India and the USA were followed by Pakistan (6.45 million), Brazil (5.53 million) and the United Kingdom (2.95 million) in total videos removed. TikTok removed 104,543,719 (104.54 million) videos globally in the first half of 2020, less than 1% of all videos uploaded to the platform. Of these, 96.4% were removed before a user reported them and 90.3% before they received any views.
Reasons for removal: Of the more than 104.54 million videos removed globally, 30.9% (~32.3 million) were removed for showing adult nudity and sexual activities, 22.3% (~23.3 million) for being child sexual abuse material (CSAM) and endangering the safety of children, 19.6% (~20.5 million) for showing illegal activities and the sale or promotion of regulated goods, 13.4% (~14 million) for showing suicide, self-harm and dangerous acts, and 8.7% (~9 million) for showing violent and graphic content. Other reasons for removal include harassment and bullying (2.5%), hate speech (0.8%), and dangerous individuals and organisations along with deceptive activities such as spamming, impersonation or disinformation campaigns (1.2%).
Impact of India’s ban on the app?
India's ban on the app, imposed for national security reasons, would have had minimal impact on the numbers, since the ban came into force on June 29, just a day before the first half of the year ended. Had the removal of 37.68 million videos been spread evenly across the 182 days of the first six months, roughly 207,000 more videos could have been removed on June 30 alone.
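The back-of-the-envelope estimate above can be checked with a quick calculation (2020 was a leap year, so January through June spans 182 days):

```python
# Sanity check of the per-day estimate, assuming removals were
# spread evenly over H1 2020.
removed_in_india = 37_682_924
days_in_h1_2020 = 31 + 29 + 31 + 30 + 31 + 30  # 182 days (leap year)

per_day = removed_in_india / days_in_h1_2020
print(round(per_day))  # → 207049
```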
India leads in govt, legal requests for content removal as well
TikTok’s transparency report gives country-specific data for two parameters — law enforcement requests for user information, and government requests for content restrictions.
TikTok complied with 79% of requests for user information in India: Indian law enforcement agencies (LEAs) sent a total of 1,206 requests for user information (1,187 normal requests and 19 emergency requests). These requests specified 1,851 accounts, and TikTok gave LEAs some information for 79% of them. For a normal request for user information, LEAs have to provide a valid legal document, such as a subpoena, court order, or warrant, which TikTok then reviews for legal sufficiency. For emergency requests, TikTok discloses user information without legal process when it believes there is an "imminent risk of death or serious physical injury to any person".
| Period | Number of Legal Requests | Number of Emergency Requests | Total Requests | Total Accounts Specified | Percentage where some information was produced |
|---|---|---|---|---|---|
| 2020 H1 | 1,187 | 19 | 1,206 | 1,851 | 79% |
- Between 2019 H2 and 2020 H1, legal requests from Indian LEAs grew almost four times, while the number of accounts specified grew 4.5 times. TikTok's rate of compliance fell from 90% to 79%, but was still higher than the 47% of 2019 H1.
The United States was a distant second in the number of requests for user information, with a total of 290 (222 normal and 68 emergency) requests. TikTok complied with 85% of them.
TikTok removed/blocked 225 videos at the Indian government's request: The Indian government made 55 requests to remove content, specifying 244 accounts. TikTok removed or blocked access to 8 of these accounts and removed/blocked 225 videos. Although Indian government agencies made the most requests, the Russian government's requests led to the most removals or blockings: Russia's 15 requests, specifying 259 accounts, led to the blocking/removal of 9 accounts and 296 videos.
| Period | Government Requests | Total Accounts Specified | Accounts Removed/Blocked | Videos Removed/Blocked |
|---|---|---|---|---|
| 2020 H1 | 55 | 244 | 8 | 225 |
When a government requests TikTok to remove content, TikTok reviews whether or not the content complies with its Community Guidelines, Terms of Service and applicable law. If it does not comply, the content is removed. If it does comply, TikTok may still restrict access to the content in the country where it is alleged to be unlawful, or the platform may choose to take no action at all.
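The review flow described above can be sketched as a simple decision function. The function name, inputs and outcome labels here are illustrative assumptions, not TikTok's actual system, and the real process involves human judgment rather than two booleans:

```python
# Illustrative sketch of the government-request review flow.
# Names and return values are hypothetical, not TikTok's API.
def review_government_request(violates_guidelines: bool,
                              violates_local_law: bool) -> str:
    if violates_guidelines:
        # Content breaching Community Guidelines or Terms of
        # Service is removed outright.
        return "remove"
    if violates_local_law:
        # Compliant content may still be geo-blocked in the country
        # where it is alleged to be unlawful...
        return "restrict_in_country"
    # ...or the platform may take no action at all.
    return "no_action"

print(review_government_request(False, True))  # → restrict_in_country
```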
Copyright notices: The short video platform gives only aggregate data for copyright-related takedown notices. It received 10,625 notices globally to remove content for violating various copyright laws, and removed "some" content in response to 89.6% of these requests.
‘Create an inter-platform hash database of violent content’: TikTok
TikTok's interim head, Vanessa Pappas, who stepped in after Kevin Mayer quit as CEO in August, wrote to nine unspecified social and content platforms proposing an inter-platform database of hashes of violent and graphic content, which could be used to reduce the prevalence of such content across social media platforms. A hash is a unique digital fingerprint of electronic content that can be indexed. In this case, it would mean creating a database of hashes of all identified violent content across the 10 platforms. The platforms would then keep scanning for recurrences of these hashes and automatically take the matching content down. We have reached out to TikTok to learn which nine companies the letter was sent to.
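A minimal sketch of the shared hash-database idea, with hypothetical function names: each platform fingerprints known violative files and checks new uploads against the shared set. (Production systems use perceptual hashes that survive re-encoding; plain SHA-256, used here for simplicity, only catches exact byte-for-byte copies.)

```python
import hashlib

# Shared set of fingerprints of identified violative content.
shared_hash_db: set[str] = set()

def register_violative(content: bytes) -> None:
    """Add a known-bad file's fingerprint to the shared database."""
    shared_hash_db.add(hashlib.sha256(content).hexdigest())

def should_block(upload: bytes) -> bool:
    """Return True if an upload matches a known-bad fingerprint."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_db

register_violative(b"frame-data-of-identified-violent-clip")
print(should_block(b"frame-data-of-identified-violent-clip"))  # → True
print(should_block(b"some-unrelated-upload"))                  # → False
```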
Big Tech companies have already started multiple industry-wide coalitions to fight CSAM. In June 2020, the Technology Coalition, a group that includes Amazon, Apple, Microsoft, Facebook and Google, among others, announced Project Protect.
The industry also uses hash-based technology to remove CSAM. PhotoDNA, developed by Microsoft and Dartmouth College, is used by Google, Facebook, and multiple other technology companies to scan for CSAM. The service, which is also available to child rights organisations and as a service on Microsoft Azure, creates a hash of an illegal image and compares it against hashes of other photos to find copies and take them down. WhatsApp also uses PhotoDNA to scan the profile photos of groups for CSAM and removes all accounts within an offending group.
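PhotoDNA itself is proprietary, but the principle behind such "robust" hashes can be illustrated with a toy average hash over a grayscale pixel grid: each bit records whether a pixel is brighter than the image's mean, so a lightly retouched copy lands within a small Hamming distance of the original, where an exact cryptographic hash would change completely. This is a simplified stand-in, not PhotoDNA's actual algorithm:

```python
# Toy "average hash" over a flat list of 64 grayscale pixel values
# (standing in for an 8x8 thumbnail). Not PhotoDNA; illustrative only.
def average_hash(pixels: list[int]) -> int:
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: is it brighter than the mean?
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

original = [10 * i % 256 for i in range(64)]
retouched = [min(255, p + 3) for p in original]  # slightly brightened copy

# Near-duplicates stay within a small Hamming distance of each other,
# so a threshold comparison can still flag the altered copy.
print(hamming_distance(average_hash(original), average_hash(retouched)))
```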