Imagine a biscuit brand that shipped contaminated biscuits to customers, who fell ill after eating them. Regulators have two options: they could inspect biscuits frequently and ask the brand to recall harmful packets, or they could frame laws mandating systemic safeguards in the company's distribution process. Which is more efficient? Through the IT Rules 2021, the Indian government has created a regulatory infrastructure for content takedowns. Focusing on takedowns, however, is like checking for individual bad biscuits: it is inefficient and fails to address structural flaws. The Facebook Papers leaked by Frances Haugen, which I have reported on for the past month, make it clear that Facebook's failures in content moderation are systemic, not isolated. The need of the hour is for lawmakers to understand the systems that amplify harmful content, instead of focusing on taking down individual posts.

Why regulators need to focus on harmful algorithms

The intuitive approach to harmful content: Our intuitive understanding of the 'bad content' problem on Facebook is that content reviewers are not doing a good enough job of taking such content down. A criticism often levelled against Facebook is that it doesn't have nearly enough such reviewers, or, more specifically in India, that it is often unwilling to take down content posted by influential political figures.

Why that's the wrong approach: While unbiased human oversight over content is crucial, there are other tools at Facebook's disposal for reducing the spread of hateful content. Innumerable factors go into determining what content is distributed…
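To see why the ranking system, rather than any individual post, is the right unit of regulation, consider a toy model of engagement-based ranking. This is a minimal sketch, not Facebook's actual algorithm: the signal names and weights below are assumptions, loosely inspired by reporting in the Facebook Papers that reshares and emotional reactions were at times weighted well above plain likes.

```python
from dataclasses import dataclass

# Hypothetical engagement signals for a post; these field names are
# illustrative, not Facebook's actual feature set.
@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    reshares: int
    angry_reactions: int

# Illustrative weights, invented for this sketch. The structural point
# is only that some signals count far more than a plain like.
WEIGHTS = {
    "likes": 1.0,
    "comments": 4.0,
    "reshares": 5.0,
    "angry_reactions": 5.0,
}

def engagement_score(post: Post) -> float:
    """Rank purely by engagement: the higher the score, the more
    widely the post is distributed."""
    return (
        WEIGHTS["likes"] * post.likes
        + WEIGHTS["comments"] * post.comments
        + WEIGHTS["reshares"] * post.reshares
        + WEIGHTS["angry_reactions"] * post.angry_reactions
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # A post that provokes anger and reshares outranks a calmer post
    # with many more likes -- amplification is built into the ranking
    # itself, before any reviewer ever sees the content.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    calm = Post("calm", likes=1000, comments=20, reshares=10, angry_reactions=0)
    divisive = Post("divisive", likes=100, comments=150, reshares=120, angry_reactions=200)
    for p in rank_feed([calm, divisive]):
        print(p.post_id, engagement_score(p))
```

In this toy model, the divisive post (score 2,300) outranks the calm one (score 1,130) despite receiving a tenth of the likes. That is the structural flaw a takedown regime never touches: recalling one bad biscuit does nothing to a distribution pipeline that systematically pushes the worst batches to the most customers.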
