On 21st February 2018, Medianama held a discussion in Delhi on Fake News and Online Content Regulation, with support from Facebook. The following are notes from the discussion on ‘What platforms can do to address Fake News’. The discussion covered possible regulations that could be undertaken to control fake news, and features that platform holders could implement to aid in the process.
Discussions on eliminating fake news have become entangled with the preexisting debate on online content regulation. Amba Kak of the Mozilla Foundation said, “For better or for worse, it seems that the question of what do we do with fake news and the separate question of how do we curb the power of platforms are often conflated. The reason why policy people like me find this confusing is that the latter question, which is about what we do with platforms in general, intersects with many issues, not just fake news. It intersects with privacy, competition, defamation, copyright law, and hate speech, among others. Of course, it makes sense that the conversation is around platforms, as they are the primary sites where fake news is amplified and propagated. In the recent past, there have been some evidence-based claims that both the technology and the design of these platforms are actually contributing to the spread of fake news.”
So what are the policy proposals that people suggest when it comes to regulating fake news on platforms? Kak said, “One bucket of this is what theory people call ‘reflexive regulation’: tweaking the design or architecture to fix the problem in an indirect way. It could involve having an accreditation system, or looking at the role of bots and seeing if they can be limited or labelled. Some of the work that I’ve done looks at how we can give users more control and visibility over how they are profiled and how it affects what they see.”
But maybe that’s not enough: there’s an argument in favour of heavy-handed regulation designed to get fake news off platforms altogether. Kak adds, “This brings us to the second bucket of regulation, which says that platforms need to purge this content and they shouldn’t host this content.”
The question then arises: who is responsible for regulating such content? “Users don’t really want platforms to have the power of arbitrating content, because if platforms make a wrong decision on this we can’t hold them accountable the same way we do public institutions. If carrying fake news is made an offence and platforms are made responsible for removing this content, we have to understand the kind of discretion we are putting into their hands,” Kak added.
Nikhil Pahwa from Medianama echoed a similar opinion on the issue: “For me, it’s problematic to think that either an algorithm or Facebook is going to decide whether something is true or not. Do we really want to delegate the regulation of free speech to private parties? A government which censors something is still more accountable to me as a citizen than a private party which is absolutely incapable of dealing with complaints because of the scale that it has.”
When should a platform be considered liable for the content consumed on it, and when should the user take responsibility? Adnan Alam said, “Unless the platform allows a user to choose their editor or their algorithm, it should not be absolved of liability for the content being served on it. The user should be allowed to make the choice of what content surfaces for them. This way Twitter, for example, may put me in a bubble, but I should be able to choose content from outside of my points of view.”
But what should platforms do?
The floor opened to suggestions from the participants on what specific steps they would like platforms to take when it comes to handling fake news.
Cede more control to users
1. “What I want platforms to do is give me my data back, so I can decide how my content is regulated or served,” Adnan Alam said. “A platform is supposed to be neutral and is supposed to host a number of applications and a number of algorithms. Why are you calling yourself a platform if you are going to be a monoculture or mono-algorithm?”
2. Mahesh Uppal from Com First (India) expanded on the same argument, saying, “The focus should be on making the right thing easy to do. Essentially, if I want to hear news from OpIndia or Russia Today or wherever, I should have the option to choose all of them, but at the same time, the option that is chosen for me by default should be the sensible one. We have to acknowledge that top-down regulation may not work, but there is a huge space for creative design. The Economist argued that while there is no case for content regulation, there is a case for the right defaults to be present. This way we are erring on the side of caution without undermining any of our rights.”
Highlighting counters to fake news
3. One approach by platforms could be the prioritisation of good engagement, especially engagement that calls out a piece of fake news. Journalist Akash Banerjee said, “When a piece of fake news or statement is put on Twitter, some random comments follow. The whole point about the best comment floating up doesn’t happen; the guy who did the first comment stays the first comment. The bubbling up of the right comment needs to happen.”
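The “bubbling up of the right comment” could, in principle, be a simple change to reply ordering: rank by an engagement-quality score instead of recency. The sketch below is purely illustrative — the scoring weights and the `is_fact_check` flag are assumptions, not any platform’s actual ranking system.

```python
# Illustrative sketch: surface corrective replies above chronological ones.
# The 'is_fact_check' flag and the +50 bonus are hypothetical assumptions.

def rank_replies(replies):
    """replies: list of dicts with 'text', 'likes', 'is_fact_check'."""
    def score(reply):
        # Reward replies that call out fake news, then break ties by likes.
        bonus = 50 if reply["is_fact_check"] else 0
        return reply["likes"] + bonus
    return sorted(replies, key=score, reverse=True)

replies = [
    {"text": "first!", "likes": 2, "is_fact_check": False},
    {"text": "This photo is from 2013, see the original.", "likes": 30,
     "is_fact_check": True},
    {"text": "so true", "likes": 5, "is_fact_check": False},
]
print(rank_replies(replies)[0]["text"])
# "This photo is from 2013, see the original."
```

The design choice here is that a corrective reply outranks even a better-liked casual one, which is exactly the inversion of the first-comment-stays-first behaviour Banerjee describes.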
Tell the user why they see what they see
4. The content served on a user’s news feed is dictated by data collected from them. The Wire’s Karnika Kohli said, “If you see an ad on your Facebook timeline, you can click the drop-down button on top to see why you were served that particular ad. I have made it a habit to check that out. I think that option should be expanded to all content. That will make me feel much better about the content I’m seeing.”
User managed filters
5. Meghnad Sahasrabhojanee, who hosts the ‘Consti-tution’ web series, said, “I feel that more effective filters would be nice. Twitter has started filtering out very abusive comments or people who you can’t identify. You can choose to exclude them from the conversation. They can speak into the void, but you should never have to hear them if you don’t want to do so. So more platforms should offer filters that the users can operate themselves.”
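A user-operated filter of the kind described above amounts to letting each user toggle their own exclusion rules, rather than the platform applying one rule for everyone. This is a minimal sketch under assumed field names (`verified`, `flagged_abusive`); real platforms would have far richer signals.

```python
# Illustrative sketch: the user, not the platform, chooses which replies
# to mute. Field names and defaults are hypothetical assumptions.

def apply_user_filters(items, hide_unverified=False, hide_flagged=True):
    """Return only the items that pass this user's chosen filters."""
    kept = []
    for item in items:
        if hide_unverified and not item["verified"]:
            continue  # user chose to mute unidentifiable accounts
        if hide_flagged and item["flagged_abusive"]:
            continue  # user chose to mute abuse-flagged accounts
        kept.append(item)
    return kept

replies = [
    {"user": "anon123", "verified": False, "flagged_abusive": True},
    {"user": "reporter", "verified": True, "flagged_abusive": False},
]
print(len(apply_user_filters(replies)))  # 1
```

The filtered-out accounts still exist and can post — they “speak into the void” — but this user never sees them, which matches the opt-in muting Sahasrabhojanee describes.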
Accreditation system and labels
6. While challenging to implement, an accreditation system that rates sources of content based on past reliability could allow users to make an informed choice about what they read. Shruti Rao of APCO Worldwide said, “Maybe the platforms could collaborate with fact checkers to examine content. Even if it is just someone’s opinion, as long as it is based on facts it could be graded on a scale. For example, completely factually correct content may be green, followed by yellow for something in the grey area, and something that is completely false could be red.”
7. Even after something has been deemed fake, it still tends to circulate on social media, because platforms fail to label content as a hoax or fake news. AltNews’ Pratik Sinha said, “Once a video, news item or image has been deemed fake by third-party fact checkers, a platform needs to have a way to communicate that to the users before they see that content.”
8. ICFJ’s Nasr-Ul Hadi expressed a similar point with a key tweak: “This should be similar to how platforms handle graphic images. A certain number of users may report an image as being graphic, and I get a label saying that it has been reported as such, asking me if I still want to see it. Labelling of content shouldn’t be as simple as whether it has been fact-checked or not. Rather, once it has been reported for issues by a certain number of users, I should get a notice about that before I see it. It’s not a perfect process, but it gives me some data.”
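Suggestions 6–8 can be combined into one labelling rule: a traffic-light grade from fact checkers (Rao), a pre-view warning once content is deemed fake (Sinha), and a report-count trigger that works even before fact checkers weigh in (Hadi). The sketch below assumes a hypothetical platform that stores a fact-checker verdict and a user-report count per post; the function name, score ranges, and threshold are illustrative only.

```python
# Illustrative sketch combining traffic-light grading with a
# report-count warning. All thresholds are hypothetical assumptions.

def content_label(fact_check_score, report_count, report_threshold=50):
    """Return (label, warn_before_view) for a post.

    fact_check_score: None if unchecked, else 0.0 (false) to 1.0 (accurate).
    report_count: number of users who flagged the post as misleading.
    """
    if fact_check_score is not None:
        if fact_check_score >= 0.8:
            label = "green"    # broadly factual
        elif fact_check_score >= 0.4:
            label = "yellow"   # grey area / contested
        else:
            label = "red"      # deemed false by fact checkers
    else:
        label = "unchecked"

    # Heavy user reporting alone can trigger a pre-view notice,
    # even before any fact-checker verdict exists.
    warn_before_view = label == "red" or report_count >= report_threshold
    return label, warn_before_view

print(content_label(0.9, 3))     # ('green', False)
print(content_label(0.1, 0))     # ('red', True)
print(content_label(None, 120))  # ('unchecked', True)
```

Note the last case: unchecked but heavily reported content still gets a warning, which is Hadi’s “not a perfect process, but it gives me some data” point.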
Working with public authorities
9. Important public authorities like governments or police forces maintain a presence on social media; platforms should be able to leverage this to tackle misinformation. Manish from the Centre for Policy Research said, “Platforms should facilitate counter-speech against misinformation through law enforcement channels. Facebook should be able to tell the reader that this has been reported by this public authority, and allow him or her to check that out.”
Real names and scoring user clout
10. Amlan Mohanty from PLR Chambers said, “This is a bit contentious, but I think a real name policy is necessary. As much as we want to make the platforms liable, there is some onus on us as users as well. And maybe some kind of clout score, where if you constantly post fake stories your score gets affected negatively.”
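The clout-score idea could work as a running per-user score that drops whenever a user’s post is deemed fake and recovers only slowly otherwise. This is a toy sketch: the class name, starting score, and penalty weights are all assumptions for illustration, not any platform’s actual system.

```python
# Toy sketch of a per-user "clout score". All weights and bounds are
# hypothetical assumptions made for illustration.

class CloutScore:
    def __init__(self, start=100.0, floor=0.0, cap=100.0):
        self.score = start
        self.floor, self.cap = floor, cap

    def record_post(self, deemed_fake):
        # Penalise sharing fake stories much more than accurate posting
        # is rewarded, so repeat offenders sink quickly.
        delta = -10.0 if deemed_fake else +1.0
        self.score = min(self.cap, max(self.floor, self.score + delta))
        return self.score

user = CloutScore()
for _ in range(3):
    user.record_post(deemed_fake=True)
print(user.score)  # 70.0
```

The asymmetric weights encode the incentive Mohanty describes: constantly posting fake stories affects the score “negatively”, and a long record of accurate posting is needed to climb back.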
Debubbling and access to counterpoints
11. One of the key factors that drive fake news is the partisan nature of the content users consume. Users may be more inclined to fall for false stories that conform to their beliefs and get shared within their social media echo chambers. Venu Arora of Ideosync Media Combine suggested, “What we need is proactive debubbling. Look at the principle of diversity and proactively debubble us as users.”
12. HR Venkatesh from ICFJ concurred, “What would help is a ‘change my view’ button on every post, something that offers an alternate point of view to what I’m seeing at that moment. I could be incentivised to click on that through smart design.”