Where do responsibility and accountability for content lie, and who is held accountable, and how? The Internet is both a communication and a publishing medium, and regulating content creators is tricky without curbing free speech. How do you treat people who share fake news differently from those who create it?
Should there be regulation? How do you ensure that safe harbour, which provides immunity to intermediaries and platforms and has enabled the growth of the Internet, is retained? During the #NAMApolicy discussion on Fake News, Rumours and Online content regulation, we looked into both technological and legal means of addressing Fake News.
We shouldn’t be looking at Fake News as a monolith. Nasr Ul Hadi, ICFJ Knight Fellow, suggested that we deal with different types of fake news differently:
– User generated misinformation “which is rumours, or citizen journalism that has gotten it wrong”…”Which part of the stakeholders can potentially sort this? One is platform, by scoring or giving a verified kind of a tick.”
– Publisher endorsed misinformation: “when a publisher is getting information wrong. Did it start at wires, did it get picked up without corroboration? Or getting it from a source”…”What you also need to look at is that newsrooms don’t have the bandwidth to corroborate everything that is flowing through their pipeline. You also need better processes and frameworks.”
– Organised misinformation generation: “for which you need legal implications and government originating regulation to address organised efforts.”
“You won’t get a pan-industry solution,” he continued, “and the moment you break it down into modules, you might have a different solution for a different module. It’s a universe of misinformation.” When it comes to addressing organised Fake News, there is, of course, the question of parody, and as Chirag Patnaik from Outlook Magazine pointed out, someone will just say that they’re writing fiction. “Even if you add a disclaimer, in the first share the disclaimer will go away.”
There are no easy answers here.
1. Legal solutions: to regulate or not to regulate
– Regulatory responses won’t be easy and we have to be careful: Chaitanya Ramachandran, Legal Counsel for Twitter India, referencing the TRAI paper on OTT regulation, which considered licensing or pre-screening apps, said that “with thousands and tens of thousands of apps published, if it makes it to law, who will do the pre-screening of tens of thousands of apps every day? There was a disconnect between the idea of regulation and the on-ground idea of how regulation happens.” Apar Gupta, lawyer, pointed towards the UN rapporteur, who says that we need to moderate our responses towards Fake News, and look at longer-term prescriptions for any legal response which may occur. For example, something like criminalising Fake News should not be done.
Saikat Datta, former National Security Editor of the Hindustan Times, suggested that we could tweak the Representation of the People Act to “undermine any political party or leader from trying to generate fake news. In many ways what political leaders are doing by making these fake statements etc actually goes against the spirit of the Representation of the People act.”
– Regulation is being done, and it’s making Facebook act: Arvind Jha, CEO of Pariksha Labs and an Aam Aadmi Party member pointed towards regulation from Germany, where following concern from the authorities, “Facebook is rolling out a tool allowing people to report the information, and they will have an independent fact-checking body.” Snehashish Ghosh from Facebook clarified that “There is no law yet in Germany on Fake News, they’re considering something but there is no law yet.”
2. Media led responses & counter speech
Arvind Jha pointed out how reporting from the media had forced Facebook to admit that its trending topics were manually curated. Thus, the media needs to play its part:
– Need more media organisations: HR Venkatesh, founder of NetaData, suggested that we need decentralisation of news organisations: “We need a thousand more Newslaundrys in India.” Saikat Datta, however, said that it won’t really help, because for each fact-checking media organisation, you’ll have a hundred creating fake news. “Decentralisation will not work,” he said.
– Self regulation and a crediting body? Rishi Majumder, Associate Partner at Oiji said that self regulation for the media will be tricky because “it takes a long time to achieve consensus. What can be looked at is an accredited body that various people are a part of, which looks at a rating which can monitor, and put out information and data on the rate at which fake news is being churned out.” However, that doesn’t quite address the issue of the scale at which Fake Information is being generated.
– Can we make fact-checking cool? Abhinandan Sekhri, Founder of Newslaundry, pointed towards how one might address a political leader lying: “For the leader, there is a website in Argentina which does a real-time fact-check, which has led to leaders not talking bullshit. It’s a crowdsourced fact-check, and it’s a good filter, with a 1-2 minute lag (between what the leader says and what the facts are).”
Kim Arora of the Times of India suggested that we need to make counter-narratives and fact-checking cool: “If you have something that feeds into counterspeech by being click-baity, like ’10 things you didn’t know about patrioticindian_69: lives in New Jersey, not in Chhindwara’, you make counter speech cool, cool enough for it to get enough clicks that it becomes a business.”
But is it even possible? Nasr ul Hadi, ICFJ Knight Fellow, pointed towards the math: “For a major Hindi Newspaper on any one single day, there are about 5000 stories in the pipeline, at various stages of evolution, being reported, being changed, and may not be published. Of those 5000, only around 500 make it online because the online team has limited bandwidth: the web team is smaller, and they don’t have the technology for it to automatically end up online. Of those 500 stories, only around 50 end up going through the social platform, because the social team bandwidth or the distribution bandwidth of the organisation is much more limited. This is the bandwidth of a truth telling organisation. Compare that with the bandwidth of a falsity spreading organisation: the technologies they’re able to use, the number of people in their digital call centre, how many people, what kind of technology, and they’re spreading stuff which speaks much more directly to human emotion. Publishers, no matter how sophisticated, will not be able to counter directly the snowball that is rolling towards them, which is why there will not be a single solution, there will be a portfolio of solutions speaking to multiple sources of misinformation.”
Puneet Gupt, COO of Times of India (Digital), pointed out that at Navbharat Times, “we already have a section called Social Media ke jhoot (roughly, ‘false information on Social Media’). And we’re not the only ones doing this: TV publishers are also doing this. We just have to make all of us come together.”
– Let competing forces cancel each other out? Saikat Datta wondered about who would fund strong counter-narratives, and suggested that it might help for the opposing parties to counter each other: “(for example) If the BJP is creating fake news, it is up to the Congress party to create a counter narrative to correct each piece of fake news. How does the market work? When competing forces start cancelling each other out. If you start looking at it as not just political parties but also news channels etc, let the market start developing the counter narrative.”
3. Platform regulation?
“Any conduct in a civil society is regulated. If it was not regulated then we would not be a society,” Rajesh Lalwani, founder of Scenario Consulting said. “To expect that we will self-regulate will not happen,” he said, adding that “The only regulation that can and should happen is at a platform level. They have the means, the wherewithal and the responsibility. To expect that the media owners are going to do it is not going to happen. To expect the readers are not going to participate, is not going to happen. We don’t want the government to regulate. The answer is at the platform.”
My own view is that while the Internet has facilitated the decentralisation of content creation, over the past few years we have seen a centralisation of content consumption. Since platforms are where these issues are emerging, given what they promote to users, perhaps we need to see if they can play a role.
Prashant Singh, co-founder of Signals, disagreed, saying “that’s going down a dangerous road. If platforms start to measure what is right and wrong, it will be driven by popular opinion”…”Where will the platform get its moral compass? By public opinion, which has been historically wrong in many instances?” Platforms taking on a quasi-regulatory approach is dangerous, since a nation, at least on paper, is still accountable to citizens. Who are platforms accountable to?
That is, in any case, what is happening within the filter bubbles of Facebook. Apar Gupta also voiced caution: “legal responses in terms of going towards a content regulation framework in terms of censorship, incentivising platforms also to censor through automated measures which are rolled out on their platforms, technical measures which are over-broad, which result in censoring legitimate sources of content, will be problematic.”
But should we regulate groups? An attendee pointed out that in Kashmir two years ago, “the state government said that all news WhatsApp groups should be registered. They didn’t define ‘news’, and a district magistrate enforced it, saying we want all WhatsApp news groups to be registered.”
HR Venkatesh suggested that Facebook hire editors: “When it comes to fake news on social media, we need journalists and editors in Facebook. Facebook needs to appoint editors so they can regulate fake news, which they are doing in the US if I’m not mistaken.” Snehashish Ghosh clarified that Facebook has not hired any editors in India.
Adnan Hasnain Alam from Netsil suggested that platforms provide users with a down-vote for reporting spam, so that it doesn’t spread. Reddit, for instance, has a downvote option, and beyond a certain threshold, information is not directly visible to users. Puneet Gupt, COO of Times of India (Digital), said that platforms, consumers and publishers will have to work together. “We can get into what those signals to algorithms should be, short term moving averages, long term moving averages, AI and all of that, but the real work that publishers have to do is to start reporting fake news. We can start doing it once we know that it has spread. Platforms will have to play a role in reducing the spread, and the publishers have to play a role in debunking.”
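The threshold mechanic described above can be sketched in a few lines. This is a minimal illustration, assuming a simple net-score cutoff; the field names, the threshold value, and the sample posts are invented for the example, not any platform’s actual logic.

```python
# Illustrative sketch: hide a post from default feeds once its net score
# (upvotes minus downvotes) drops below a threshold. All values here are
# assumptions for the example, not any real platform's implementation.
from dataclasses import dataclass

HIDE_THRESHOLD = -5  # assumed cutoff; real platforms tune this per context


@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes


def visible_posts(posts: list[Post]) -> list[Post]:
    """Keep only posts whose net score is at or above the threshold."""
    return [p for p in posts if p.score >= HIDE_THRESHOLD]


posts = [
    Post("Verified report", upvotes=12, downvotes=2),   # score 10, shown
    Post("Suspected hoax", upvotes=1, downvotes=9),      # score -8, hidden
]
print([p.title for p in visible_posts(posts)])  # ['Verified report']
```

Real systems weight votes by voter reputation and recency rather than using a raw count, but the basic gating idea is the same.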
4. Consumer led responses
“The real time-bomb of fake news,” Puneet Gupt warned, “is that the next 200 million people who come online will have fewer resources to figure out what is fact and what is fiction.”
“The ultimate responsibility needs to lie with the consumer,” Meghnad Sahasrabhojanee, from the Office of Tathagata Satpathy, MP, said. “There needs to be an effort to develop some sort of critical thinking for the consumer because right now, whatever you get on whatsapp and other media platforms, very few people question whether there is a legitimate source behind the information. So no matter how many laws you make, the implementation will always be a problem.”
One solution, Meghnad suggested, is to take a Wikipedia type of approach: “you trust it because a lot of people have questioned it, and there have been sources provided.” HR Venkatesh suggested that the Media needs to do media literacy programs, and people need to question those spreading fake information. “If India is a land of a million mutinies, we need a million revolts against Fake News as well.”
Apurva Viswanath from Mint suggested that if we make data available, there will be people who might check it, and thus not spread misinformation further.
“Do you think Name and Shame would help?” Rahul Narayan, lawyer, asked. Frankly, while it might work in smaller WhatsApp groups, some people might actually thrive on it. The challenge is that WhatsApp has end-to-end encryption, and it’s impossible to deal with this at scale. The other issue, highlighted by Cyril Sam of Scroll, is that users who are banned from one platform will just switch to another: 4chan gave people a time-out of 24-48 hours, but they just switched to other platforms. And what stops users from maintaining multiple profiles on a platform?
5. Tech led responses
It was suggested by Javed Anwer, Technology Editor (India Today Online), that platforms need to open up their algorithms in some way, so that they can be checked by external organisations. That way, there can be a check on platforms over-regulating. I doubt that any organisation will open up its algorithm, but this point did come up a couple of times.
Arvind Jha felt that we will eventually find a solution, and it might be a technology solution. “If you remember the first wave of viruses, everybody got affected. Nobody knew what a virus was. Today, the instances of people getting affected by viruses are much lower, because antidotes have come and we’re much more aware. Right now we’re in the first wave of this virus – the fake news virus – so obviously we are on the losing side, but it will not stay that way. A friend of mine runs this handle called SMhoaxslayer. I’m an entrepreneur, so let’s profit from it, and build an automated mechanism. He does some very smart reverse searches, image searches, to figure out whether the image has been used before. Right now the biggest tool the opposition (creators of Fake News) has is Photoshop. But it is a very, very complicated problem to figure out, even from a reverse search perspective. The technology to get an automated API to say whether this news is fake is not there right now, but it will grow up.”
Miten Sampat, VP Corporate Development at Times Internet, pointed out that the nutrition industry was regulated to address issues of misinformation: companies need to disclose nutrition facts on packages. To address issues with behavioral advertising online, the IAB suggests a logo “at the bottom of every ad, which a user can use to check who’s tracking them. Based on this, they can use adblock and other tools, and not engage with that site. There has to be an industry-led, product-led solution that informs the user.”
Snehashish Ghosh from Facebook had this to say: “In terms of what we are doing, we understand this is an issue and a concern. We have rolled out certain products in the US and in Germany, and we’re looking at what we can do in other places and other countries, but as of now we understand that these are concerns and need to be solved. In terms of what we’ve done in the US, I think one main thing we don’t want to be is arbiters of truth at any level, and that is clear to us. Right now, the way we deal with spam is the same way we’re trying to deal with hoaxes: giving the users a clear reporting mechanism for clear hoaxes. In terms of news items, we have fact checkers. When a story gets flagged it is sent to fact checkers, who go by Poynter’s principles, and then we mark a story as disputed or undisputed. The third thing is that we are taking out the specific domains, which are news websites specifically created to manufacture fake news, the way we deal with spam, to ensure that spoofing websites are not promoted on the platform.”