“In discussions on misinformation, we generally focus on what the government and platforms should do, when in fact it is a people problem as much as it is a tech problem. People generally don’t have the agency to act on some of these problems. So how you empower everyday people to actually respond to misinformation is the bigger question,” a speaker said during MediaNama’s workshop on identifying challenges to encryption in India. Traceability emerged as perhaps the biggest challenge to encryption, but several speakers questioned its effectiveness in curbing misinformation.

This workshop was held with support from the Internet Society (Asia Pacific Office) and under the Chatham House Rule; the quotes have thus not been attributed.

Challenges to encryption

Traceability of originator: Traceability emerged as the single biggest challenge to encryption, as it has been a constant demand from the government and law enforcement agencies to address not just misinformation, but also issues of “national security” and child sexual abuse material, several speakers concurred. “The government’s notion of traceability seems nebulous: it sees some motivated actors who are hell-bent on spreading misinformation, and wants to trace them,” a speaker noted.

The draft intermediary liability rules have fairly stringent traceability provisions which could potentially lead to the breaking of encryption, a speaker said.

  • What the draft intermediary liability rules say: The Intermediary Guidelines Rules govern platforms and their behaviour under the IT Act, 2000. The amendments proposed by the IT Ministry seek to place greater responsibility on platforms by inserting legal requirements of traceability, proactive monitoring, registration, and so on. Among other requirements, platforms would have to introduce traceability to find where a piece of information originated, and provide information or assistance to government agencies within 72 hours, including in matters of security or cybersecurity, and for investigative purposes. [Rule 3(5)]

Timely law enforcement access to information: Several law enforcement agencies have complained that end-to-end encryption prevents timely access to information, a speaker said. “Their core complaint is that it [end-to-end encryption] prevents authorities from gaining timely access to the plain text of the data for which they may have obtained legal authorisation,” they added.

Lack of understanding of encryption within the government: After misinformation resulted in lynchings in Maharashtra, the government reached out to WhatsApp with a demand for traceability, a speaker said, adding that the government, by its own admission, did not know how to solve the problem and wanted WhatsApp’s help. “I don’t think their main intention is to even break into encryption. Even the Rajya Sabha committee report says that we need to break end-to-end encryption in cases of CSAM, which to me suggests that they don’t want to break encryption in general, but that they don’t understand it in the first place,” they said.

  • “The technical suggestion put forth by Professor Kamakoti during the WhatsApp traceability case is actually based on an encryption and decryption model that was invented in 1976. It’s not something that looks into the Signal protocol, which is currently used by encrypted platforms like WhatsApp,” another speaker said.
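
To make the contrast concrete: the 1976 model the speaker appears to reference is Diffie-Hellman-style key agreement, in which two parties derive a single long-lived shared secret, whereas the Signal protocol “ratchets” to a fresh key for every message, so analyses built on the older model do not carry over. Below is a minimal, purely illustrative Python sketch of the older model, with toy parameters:

```python
# Toy Diffie-Hellman exchange (the 1976-era key-agreement model).
# Illustrative parameters only; real deployments use 2048-bit+ groups
# or elliptic curves.
import secrets

P = 2**127 - 1   # a Mersenne prime, toy-sized for this sketch
G = 3

a = secrets.randbelow(P - 2) + 1   # Alice's private key
b = secrets.randbelow(P - 2) + 1   # Bob's private key
A = pow(G, a, P)                   # Alice's public value
B = pow(G, b, P)                   # Bob's public value

# Both sides derive the same long-lived shared secret; every message is
# protected under keys descending from it. Signal's double ratchet derives
# a new key per message, which is what a 1976-style analysis misses.
assert pow(B, a, P) == pow(A, b, P)
```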

The government might not always be a good-faith actor: “Why should we trust the government to be a good-faith actor? Why should we assume that the government is a benevolent actor?” a speaker asked. “The debate on whether encryption or traceability is desirable or necessary should also factor in whether we can trust the government with that kind of power. For instance, the reports about how the government was spying on several activists using the NSO Group’s software are a case of the government not being a good-faith actor,” they added. “It is also important for us to think about whether we want to give the government a monopoly over curbing misinformation. In fact, several PIB [Press Information Bureau] fact checks have turned out to be fake news,” another speaker added.

Is traceability a legitimate solution to curbing misinformation?

“Traceability ultimately is a means to an end, so I’m not quite sure whether the goal of enforcing traceability when it comes to misinformation is so much to find the person as for it to act as a deterrent to misinformation,” a speaker said. However, “once you start asking for attribution, what you’re saying is: I don’t want to prosecute anyone who’s forwarding misinformation, I just want to find the originator of the content,” another speaker noted. Another speaker explained that there are three parts to misinformation: creation, production, and distribution. “At best, with traceability, you’re going to possibly reduce some of the production or the distribution,” they noted.

“By specifically framing the traceability problem as tracing a message back to its originator, you’re actually inserting a perverse incentive into the misinformation cycle,” another speaker said. This way, they said, the government is actually sending a signal that it will never prosecute anyone for forwarding hate speech or misinformation, but will only take action against whoever originates it. “This also goes against all the advice that academics and journalists have tried to advance as a solution for misinformation, which is that all of us be critical of what we read and what the government tells us,” they added.

Traceability also doesn’t address misinformation that originated on one platform and later propagated to others, two speakers said. “How do you really establish that a person is the originator of a message on that platform? How do you really establish that it was that person who originated the message to begin with?” they asked.

The speakers also discussed the viability of the following approaches to tackling misinformation:

Regulate social media consultants of political parties: “There are always certain wilful platform abusers, such as troll farms or social media consultants who run services for different political parties, or affiliate marketers. To date, we have not seen any concrete action on the part of the government to regulate the activities of these digital marketing firms or consultants. Can we not explore having them undergo a formal auditing process where they are subjected to public scrutiny of how they run their businesses?” a speaker asked, adding that when WhatsApp banned automated messages, it found that a lot of such messages were coming from agencies related to political parties.

How effective are fact checks accompanying potential misinformation? “After a certain piece of misinformation has been spread, the window for changing someone’s mind is very small. So what Facebook is doing, for instance, by putting fact checks next to potential misinformation has turned out to be ineffective. From a behavioural or psychology perspective, just fact-checking all of these things post facto might not work,” a speaker said.

Continuous product enhancements using AI/ML for behavioural detection: “You have to make continued product enhancements, and you need to use Artificial Intelligence and Machine Learning tools to detect behaviour which results in the propagation of misinformation,” a speaker suggested. Aside from that, there is a need for a larger industry-wide initiative to curb misinformation, which is going to require societal mobilisation and sensitisation, they added.
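
As a rough illustration of what behavioural (rather than content-based) detection can mean, here is a minimal sketch that flags accounts whose forwarding volume is wildly out of line with their peers. The data, threshold, and median heuristic are all hypothetical; production systems combine many such signals in trained ML models:

```python
# Minimal behavioural-signal sketch: flag senders whose daily forward count
# is far above the population median. Purely illustrative numbers.
from statistics import median

forwards_per_day = {"user_a": 4, "user_b": 6, "user_c": 5, "bulk_sender": 480}

med = median(forwards_per_day.values())
flagged = [user for user, n in forwards_per_day.items() if n > 10 * med]

print(flagged)  # ['bulk_sender'] -- a candidate for review, not an automatic ban
```

Note that such a check inspects metadata (message counts), not message contents, which is why it remains compatible with end-to-end encryption.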

Offering authentic information from trusted sources: “One of the things that we saw in the recent COVID-19 outbreak was an absolute spurt in misinformation. The pandemic has also now been tagged as an ‘infodemic’. So WhatsApp created specialised APIs that allowed organisations to build a menu-driven chatbot system. It offered this to legitimate authorities, including the World Health Organisation, which is now offering users accurate information,” a speaker said. This WhatsApp API was also used by several state governments and the central government in India to offer information about the pandemic.
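
The core of such a bot is simple: it maps a small set of menu choices to vetted answers and falls back to re-showing the menu. The sketch below is hypothetical and is not the actual WhatsApp Business API (which delivers messages over webhooks); it only shows the menu-driven pattern the speaker describes:

```python
# Hypothetical menu-driven information bot: fixed menu, vetted answers,
# unknown input falls back to the menu. Not the real WhatsApp Business API.
MENU = (
    "Reply with a number:\n"
    "1. Latest case numbers\n"
    "2. Symptoms\n"
    "3. Protective measures"
)

ANSWERS = {
    "1": "Official case numbers are published daily by the health ministry.",
    "2": "Common symptoms include fever, dry cough, and fatigue.",
    "3": "Wash hands frequently, wear a mask, and maintain physical distance.",
}

def handle_message(text: str) -> str:
    """Return the vetted answer for a menu choice, else re-show the menu."""
    return ANSWERS.get(text.strip(), MENU)

print(handle_message("hello"))  # unknown input -> menu
print(handle_message("2"))      # menu choice -> vetted answer
```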

Limiting forwarded messages: “A large chunk of misinformation currently present on encrypted messaging platforms comes from highly forwarded messages,” a speaker said. While admitting that limiting the number of forwards on messaging platforms is not a foolproof solution, they said that the limit WhatsApp placed on highly forwarded messages resulted in a drop of about 70% in such messages.
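
WhatsApp’s published approach restricts messages that have already travelled through many forwarding hops to being sent on to one chat at a time. Here is a minimal, hypothetical sketch of that client-side rule; the data structure and thresholds are illustrative, not WhatsApp’s code:

```python
# Client-side forward limit: once a message has passed through several
# forwarding hops, it may only be sent on to one chat at a time.
from dataclasses import dataclass

HIGHLY_FORWARDED_HOPS = 5  # hops after which a message counts as "highly forwarded"
LIMITED_FANOUT = 1         # chats such a message may be forwarded to at once

@dataclass
class Message:
    text: str
    forward_hops: int = 0  # how many forwarding hops this copy has seen

def forward(msg: Message, chat_ids: list[str]) -> Message:
    """Forward a message, enforcing the reduced fan-out for viral content."""
    if msg.forward_hops >= HIGHLY_FORWARDED_HOPS and len(chat_ids) > LIMITED_FANOUT:
        raise ValueError("Highly forwarded message: send to one chat at a time.")
    # ... deliver msg.text to each chat in chat_ids ...
    return Message(msg.text, msg.forward_hops + 1)
```

Because the hop count travels with the message as metadata, the rule can be enforced without the platform reading message contents.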

Client-side scanning could be used, though it has limitations: “We’ve seen these proposals for either local or remote client-side scanning,” a speaker said. Client-side scanning broadly refers to systems that scan message contents — text, images, videos, files — for matches against a database of objectionable content before the message is sent to the intended recipient. “However, I’m not sure about its feasibility for tackling misinformation, since the amount of content being generated in this case is far more than child sexual abuse material. There are, of course, implications for privacy, because you are doing some sort of scanning of the communication once the message is decrypted, or while it’s at rest. Or if you have to target people in some way, you’re going to try to target by location, which means there are further privacy implications,” they added.
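
The matching step the speaker describes typically works by hashing content on the device and checking it against a blocklist before encryption ever happens. The sketch below is deliberately simplified: real proposals use perceptual hashes so near-duplicates still match, whereas the exact SHA-256 stand-in here only catches byte-identical copies:

```python
# Simplified client-side scanning check: hash outgoing content and compare
# against an on-device blocklist before the message is encrypted and sent.
# SHA-256 is a stand-in; real systems use perceptual hashing for media.
import hashlib

BLOCKLIST = {
    # hypothetical digests of known objectionable content
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def allowed_to_send(content: bytes) -> bool:
    """Return False if the outgoing content matches the blocklist."""
    return hashlib.sha256(content).hexdigest() not in BLOCKLIST

print(allowed_to_send(b"harmless message"))       # True  -- no match
print(allowed_to_send(b"known-bad-image-bytes"))  # False -- blocked pre-encryption
```

The privacy tension the speaker raises is visible here: the check necessarily runs on plaintext, before end-to-end encryption applies.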

Areas where more information is required

  • How do you establish that a person originated a message to start with, given that it might have originated on another platform?
  • By the time you trace the person who originated a piece of misinformation, it would have already caused its intended damage. How do you solve that?
  • Traceability, attribution vs faith in government: How can we be sure about the government’s motive?
