How can social media platforms decide what is fake news and what isn’t when moderating content, especially when what’s fake today can turn out to be true tomorrow? Berges Malu, the senior director of public policy and communications at ShareChat, raised this question during a recent discussion organized by the Council for Strategic and Defence Research (CSDR) on March 27. Malu gave the example of how, during the pandemic, any questions surrounding the COVID-19 vaccines were marked as misinformation by platforms. “At the peak of the COVID wave, if you doubted the vaccine, you were cornered and you were called out. Today, there’s increasing evidence that some of these vaccines may have a problem,” he said, adding that when information is evolving over time, it becomes challenging for a platform to decide what is fake.
“How can a platform take this content down? It’s not unlawful, there’s nothing wrong with it. It’s my view that the vaccine doesn’t work. Is there a law that says I have to believe the vaccine works? No. Why should the platform take it down? Why should the platform even tag it and say, possibly misinformation? Does the platform know something I don’t? Then tell us,” Malu said.
Malu also gave the example of how, in 2020, the government denied rumors circulating on social media that the Mughal Gardens at Rashtrapati Bhavan were being renamed. However, as of 2023, the gardens have been renamed Amrit Udyan.
The moderator of the discussion, Aroon Deep, principal correspondent at The Hindu, pointed out that platforms were capable of taking down racist content abroad and asked whether they were capable of doing the same with misinformation in India. Malu responded that even when companies deploy artificial intelligence mechanisms to take down certain words, users simply switch to alternative words to make the same comments or claims. He added that if platforms put stricter content restrictions in place, the discussion would shift from the need for content moderation to over-censorship by the platforms.
Are platforms responsible for the actions people take off them?
The moderator asked whether any steps have worked at getting misinformation off platforms before it leads to violence offline, giving the example of mob lynchings in 2020 that were attributed to misinformation circulating on WhatsApp. “Why do you assume that WhatsApp led to lynchings? Just like that story that came in BuzzFeed or something. It was a false story. It’s like saying, I mean, did WhatsApp kill those people?” Malu said, commenting on the example. He added that it is not even clear whether the content that motivated those lynchings came from WhatsApp in the first place.
“The fact that media organizations themselves have fact checkers who themselves check their own news, tells you how bad the news media is. And then we blame the platform. I think it’s unfair to say WhatsApp or any other platform and say that led to an action,” he argued, adding that it is patent misinformation to claim that platforms led people to act a certain way.
Self-regulation is an ineffective solution for curbing misinformation
Malu explained the issues with a self-regulatory model, giving the example of a self-regulation document that six tech companies agreed to with the Election Commission of India. While he did not clarify which self-regulation document he was referring to, tech companies had presented the “Voluntary Code of Ethics for the General Election 2019,” which was developed by Facebook, WhatsApp, Twitter, Google, ShareChat, and TikTok. Malu argued that because the internet is made up of far more than these six players, such an agreement is easily rendered ineffective. [Note: we asked Malu for a clarification on which document he was referring to but he has declined to comment further]
“Self-regulation is only [tech companies] saying, You [the government] don’t do it. We will try to do, as little as required, to keep you happy. That is not going to work. Why don’t you just come out with a regulation and say, Hey, listen, any company running ads during the election is required to do A, B, C, D. Why is it when the six companies who stupidly signed up for that have to do it?” he questioned, adding that self-regulation would never work in the interest of what the regulation is meant to protect, which in this case is keeping tabs on political advertising during the election period.
Also read:
- Media Industry Stalwarts Discuss How Misinformation Can Affect Elections At SFLC.In
- Google Partners With The Election Commission To Limit AI Chatbots Use To Tackle Misinformation In 2024 General Elections
- ECI Asks Social Media Firms To Follow Voluntary Code Of Ethics Ahead Of State Polls: Report
Note: The story was updated on March 28, 2024, at 3:00 PM to correct a designation and to add a clarification about the self-regulation agreement Malu referred to.