Popular short video app TikTok has joined the European Union’s Code of Conduct on Countering Illegal Hate Speech Online, making it the ninth platform to abide by the code, the company announced on Tuesday. “Our ultimate goal is to eliminate hate on TikTok. We recognise that this may seem an insurmountable challenge as the world is increasingly polarised, but we believe that this shouldn’t stop us from trying,” said Cormac Keenan, head of trust and safety, EMEA at TikTok.

“We have a zero-tolerance stance on organised hate groups and those associated with them, like accounts that spread or are linked to white supremacy or nationalism, male supremacy, anti-Semitism, and other hate-based ideologies,” Keenan added. TikTok, however, has had a chequered history with content moderation across the world, including suppressing pro-LGBTQ content in countries where it isn’t illegal (more on that below).

What is this code, and how does it work?

The code is not legally binding; it is a collaboration between the EU and the tech industry to counter online hate speech. It was launched four years ago with Facebook, Microsoft, Twitter, and YouTube as founding signatories; Instagram, Snapchat, Dailymotion, and French video game website jeuxvideo.com have since joined the code.

The aim of the code is to ensure that requests to remove content are dealt with quickly. When companies receive a request to remove illegal content from their platforms, they assess it against their rules and community guidelines. The companies are expected to review these requests in less than 24 hours and to remove the content if necessary. The latest compliance evaluation of the code found that about 90% of the requests made to the platforms were reviewed in less than 24 hours.

However, gaps have been found in the implementation of the code. A little over 67% of the requests sent to the platforms received feedback, but Facebook was the only platform to provide feedback to all users. In contrast, Instagram provided feedback only 61.5% of the time, and YouTube less than 10% of the time.

TikTok’s content moderation problems

While TikTok claims that it wants to eradicate hate speech from its platform, its own content moderation has been criticised for censoring content that is critical of the Chinese government, and even pro-LGBTQ content in countries where homosexuality has never been illegal.

TikTok came under fire in India after videos promoting animal cruelty went viral on the platform; it took them down only after several users complained. Similarly, a popular Indian creator’s video allegedly promoting acid attacks on women was taken down only after it drew the ire of the National Commission for Women.

TikTok also removed a video by popular creator Nazma Aapi that was critical of China, mentioning its handling of the coronavirus and the standoff along the Line of Actual Control. The platform reinstated the video only after it was criticised for removing an “anti-China” video (several lawmakers in India have questioned TikTok’s close relationship with the Chinese government). This wasn’t the first time something like this had happened, either.

Internal training documents from TikTok, revealed earlier this year, directed moderators to suppress videos from people deemed too “ugly”, “poor”, or “disabled”, and to ban content that could be seen as positive towards gay people or gay rights.

At the moment, TikTok remains banned in India after the government labelled it a national security threat in the aftermath of the India-China border skirmishes.