by Samir Saran and Bedavyasa Mohanty, ORF
As India heads into a month-and-a-half-long divisive and polarised general election, has the country woken up to the threat that fake news, misinformation and influence operations, conducted from within and outside, pose to this most sacred of democratic processes? Recent steps taken by the Election Commission, including the convening of top social media companies, are welcome but insufficient. While a voluntary code of ethics amongst these companies is important, the dimensions of the challenge cannot be adequately addressed merely by taking down inappropriate political advertisements and establishing more efficient lines of communication with the Election Commission. It is unlikely that these eleventh-hour measures will address the multiple threats of election interference that have either been commissioned months in advance or are a product of the increasingly perverse election ecosystem of India.
Datasets, Fake News, Lynchmobs and Politics
In 2018, over 30 individuals were lynched by mobs in India over suspicions that they were child-lifters. The trigger for these acts of violence was a campaign of doctored videos and images warning people of child-lifters and organ-harvesters prowling in their neighbourhood. These, like most other instances of mob violence in India, have often targeted nomadic tribes or religious and cultural groups, exploiting old cleavages and uncovering new fissures. A second iteration of the campaign now appears to be underway, and much like the first, it could take on communal undertones with direct implications for the upcoming general election.
What distinguishes these campaigns from the rumour-mongering pervasive over the internet is their coordinated nature and the tailoring of the messages (by context and geography) to create paranoia among certain groups of people. Other, more election-specific content that is currently being generated also seems to follow the same pattern – driving a deeper wedge into pre-existing social divides. This has manifested in the form of fake news ranging from Priyanka Gandhi wearing a cross around her neck while campaigning, to false images of the Pakistani flag being waved at Rahul Gandhi’s election rally. Similarly, WhatsApp messages spreading fake information painting Indian Prime Minister Narendra Modi in a poor and sinister light are also circulating in the social media space. Ironically, as both Congress and BJP increasingly depend on targeted messaging through WhatsApp groups and text messaging, malevolent actors have found these same mediums useful for their own purposes.
The digital medium is critical for this form of messaging. Hyper-targeted campaigns such as these can hardly be conducted over traditional media like television and radio. Social media allows political parties to build a multiplicity of identities depending on the recipient of their messages: a message of growth for the urban educated, a party of benevolence for the rural poor, or a defender of identity for those with an elevated nationalist fervour. Once these data sets and dissemination pathways are created, it is only a matter of time before other actors are able to leverage them for subversion. Russia, for instance, famously used Facebook’s hyper-targeted ads during the US presidential election in 2016 to spread inflammatory messages spun around race and immigration to further divide an already polarised voter base.
The speed at which social media campaigns spread and the distances they cover over the internet have made them a ‘must have’ in the toolkits of both state and non-state actors. Notoriously, influence operations in the run-up to the German election were able to orchestrate political rallies that were coordinated remotely over social media. The effect of influence operations in cyberspace, therefore, is no longer confined to the virtual; rather, they have very real and tangible consequences in the physical world.
The Indian, European and American experience is part of a common and continuing saga that implicates three actors. First, the model of capitalism that seeks to create value from identity and personal information will also offer abundant opportunity for the same information sets and personal data to be used against individuals, communities and countries.
Second, the ease (and low cost) of building and accessing these databases is the same for startups, corporations and countries wanting to leverage them. Data administration (or the lack of it) is the single biggest national security threat in a hyper-connected and hyper-volatile world.
Third, political parties in all these geographies are implicated in helping create a personal information base, which they have found vital for electioneering. From the legendary ‘Obama Campaign’, based on sophisticated use of social media and citizen engagement (and information), to the highly sophisticated ‘Modi Campaign’, which bypassed old communication channels to create a personal line to the voter, these and others have built huge databases of citizens and their preferences. Who regulates these databases? How safe are they? And is it time for a regulator to intervene to ensure they are not used against the state by external actors or by the proverbial ‘inside man’?
Facebook, Twitter, Algorithms and God: Who is in charge?
Today, the targets of influence operations are democratic structures – seeding doubts over the credibility of institutions such as the mainstream media and regulatory agencies. Far too often, signs of such external interference elicit a knee-jerk reaction among states. For example, in response to the cases of lynching caused by rumours that were spread over WhatsApp, the Indian government in late 2018 proposed amendments to intermediary liability laws. Among other things, the amendments impose an obligation on intermediaries (or communication service providers) to introduce traceability to their systems – the ability to identify the original sender of a message. For platforms that are designed with complete end-to-end encryption, this is a near technical impossibility. Compliance with the law, therefore, would require companies to roll back encryption over their services, fundamentally compromising the integrity of the platforms that users rely on.
The self-regulatory code that the technology companies have now adopted gives them a wide latitude in determining what qualifies as objectionable political advertising. This is especially concerning when social media platforms like Twitter and Facebook are already under review by regulators for allegedly harbouring a liberal bias and unduly stifling conservative voices. More subjective power to the platforms, therefore, assigns an adjudicatory role to an organisation that is not democratically created and has limited accountability to citizens and policymakers in countries such as India. While most would argue against ‘hate speech’ in all its manifestations, letting Facebook and Twitter play the role of the censor should worry us all.
Institutionalising these reactionary responses to the dangers of influence operations threatens to whittle away at the core of the freedoms guaranteed by the Constitution of India. Instead, efforts should be focused on resilient, long-term solutions, such as counter-narrative mechanisms that can dispel disinformation. Close coordination between fact-checkers, official channels and the mainstream media can render many sources of disinformation unviable. Companies are already exploring ways in which the identification and flagging of coordinated fake news campaigns can be done by artificial intelligence. While this is unlikely to be a silver bullet, automating the process can significantly arrest the spread of malicious content. Regulatory attention should be paid to determining the rules that should govern these algorithms to ensure fairness, accountability and transparency in their operation. In the coming days, algorithmic accountability will be the single most important debate across the digital sector.
iCovet your Democracy
India and its democracy are an outlier in the region they reside in. Our two special neighbours to the west and the north would like nothing more than to see the demise of this Indian experience and institution. Democratic India is their enduring failure as states, peoples and communities. Economic travails, perverse politics and terrorism infused across the borders have been unable to deter Indians from the path of plural and democratic politics. Openness and pluralism, however, are not things that we should take for granted, as recent developments have proven.
Two trends/realities should be evaluated with utmost seriousness. First, this technological age allows interference with an unprecedented velocity and reach that the old institutions protecting elections and the state were not designed to counter. The instruments (technologies and corporates) that will be active are no longer regulated or sanctioned by the state. The low cost and relative ease make this option attractive to smaller and weaker states. And the data sets used to win voter favour can be deployed to attack democracy itself. The most dangerous feature of digital operations is not actual interference with or shaping of outcomes; it is the mere creation of a perception that the outcome was perverted by interventions – a tactic used by the former Soviet Union during the Cold War to demoralise the psyche of the target nation. The response to this will need to be both real and social, actual and perceived.
Second and related to the above is the growing debate on ‘surveillance capitalism’ and its impact on countries and peoples. Even as this perverse ‘data mercantilism’ evokes a variety of responses, in countries such as India we are witnessing a ceding of space to corporations and media platforms to arbitrate the contours of public engagement. The algorithms designed to amplify user engagement and ad revenues are now deployed to restrict political speech that these companies find objectionable – creating a ‘surveillance democracy’ in the process.
The Indian state, its political parties, its corporates and those operating in its territories, and most of all its people, need to wake up, work together and respond to this huge challenge, a reality that is now playing out on a daily basis. Each of these actors has contributed, knowingly and unwittingly, to the perverse political economy of elections, and unless each resets their engagement with this most important asset in India’s treasury, the death and diminishing of democracy will be an ‘Inside Job’ even as the hand, handler and beneficiary may well be just another IP address.
Crossposted with permission from ORF.