Social media companies like Meta and Twitter will no longer have to remove "legal but harmful" content from their platforms under the United Kingdom's proposed Online Safety Bill, Reuters reported on Monday. Under the earlier draft, the British government had said that social media companies could be fined up to 10% of their turnover or $22 million if they failed to take down harmful content falling below a criminal threshold, and that senior managers could also face criminal action for such failures.

With this change, platforms will be barred from removing or restricting users, or the content they generate, unless it violates their terms of service or British law. Only platforms that fail to uphold their own rules or to remove illegal content could face fines of up to 10% of their annual turnover.

Platforms will also have to offer adult users tools to help them avoid likely encounters with legal "controversial content", including content that glorifies eating disorders or is racist, anti-Semitic, or misogynistic. "These could include human moderation, blocking content flagged by other users or sensitivity and warning screens," the government announcement said.

The Bill, like India's own draft data protection law, has gone through much debate and discussion, and will be placed before the UK Parliament once again next month.

Why it matters: The British government's move to drop the "legal but harmful" clause comes after campaigners and lawmakers raised concerns that the provision could lead platforms to curtail free speech. Similar arguments have been raised in the case of India's newly notified amendments to the IT Rules, 2021. While…
