More than creating real-looking content, deep fakes will make it easier for political parties to flood a social media user’s timeline and change their perception of reality, campaign consultant Shivam Shankar Singh said during MediaNama’s virtual discussion on Deep Fakes and Democracy. When it comes to spreading disinformation, political parties want to create a general sentiment: “Say, religious polarization, [or] it might be a sentiment around someone [being] corrupt or not corrupt. It’s a narrative that you’re [the political party] trying to build. When you try to do that, the quality itself of the content doesn’t really matter that much. What matters a lot more is the quantity. You want to flood a person’s feed with content that pushes the person in a certain belief system in a certain direction,” Singh explained.
There are two sides to any kind of political content: the content itself and the distribution channel used to spread it, Singh explained. “So, as of right now, the distribution channel exists, and political parties have the capacity to push any content that they want to millions and millions of voters across the country,” Singh said. These channels could include the party’s own Facebook pages, Twitter handles, WhatsApp groups, and even influencers. “So, there is capacity for it to wreak havoc essentially as soon as the content comes out, because distribution is really important,” he mentioned. Given that parties already have distribution channels in place, it would not be very challenging for them to create a flood of deep fakes.
Even before deep fakes, parties were able to spread misinformation by taking pictures from older events or from different countries to form their desired narratives. “With generative AI, you could get thousands and thousands of images of something happening in real-time. So, if there’s a riot taking place in say some remote part of the country, you could get images of that created and supply it in the hundreds and thousands,” Singh explained.
How the flood of misinformation impacts the Election Commission:
In flooding people with deep fake-generated misinformation, whether sophisticated-looking or not, parties would end up flooding the Election Commission of India as well. Singh said that in 2019, the Election Commission created a mechanism to get misinformation taken down from social media platforms. “But when a political party really wanted to spread a certain piece, they actually flooded the election commission stream itself. So, the election commission was so inundated with complaints and reports and this and that, that they themselves were able to forward very few of them to the platforms,” he explained, adding that the bottleneck for misinformation complaints lies not between the commission and the platforms but at the Election Commission itself.
“How many people can the election commission really deploy for something like this? It could be thousands and thousands of pieces of content generated every day with something like deep fake technology itself. Generative AI, the entire point is that the cost of generation is like pretty negligible. In such a circumstance, the election commission just stands no chance,” he said.
The scale of deep fake proliferation in the recent past:
“I just took out some data based on what are the kind of stories we did over the last year in 2023. And out of about 1,190 fact checks that we wrote roughly here and there, I think somewhere about 30 stories were related to AI-generated or deepfake videos. And even within those 30 stories, what we found is photos comprised about 17, videos comprised about seven, and audio, which according to me is the biggest headache that we are all going to face, comprised about six stories,” Jency Jacob, managing editor of the fact-checking outlet Boom Live, said during the discussion.
Jacob mentioned that the audio deep fakes were the most complex fact checks that he and his team worked on. “Some of the stories that we looked at during the recent state elections in Madhya Pradesh, Chhattisgarh, and Rajasthan, were the clips of Kaun Banega Crorepati, where Amitabh Bachchan is asking some questions and some answers are being given, which is targeting one of the political leaders. In fact, we couldn’t even come to a proper conclusion there whether they were just dubbed audio or whether it was AI-generated cloned audio,” he explained.
Jacob said that India is yet to see a major proliferation of deep fakes with only 12 out of the 30 deep fake stories that Boom Live fact-checked being related to the country. “We haven’t seen a lot of it during the recent elections where we can say it is influencing elections in any form,” he mentioned, adding that while the problem isn’t all that big yet, there have been glimpses of how big it could potentially become. Jacob explained that the news outlet did a story on public figure deep fakes being used to promote scams, get-rich-quick schemes, and diabetes drugs, which highlighted the challenges that will emerge from deep fakes in the future.
How deep fakes could be used in the upcoming elections:
Singh explained that political parties are currently experimenting with deep fakes on two fronts: legitimate uses (like translating a politician’s speech) and nefarious purposes. “Say, if you talk about something like promises during an election campaign, if they want to put the name of a particular political constituency or a particular region or a particular state and dub it into that language, that capability, a lot of companies are pitching right now, and the political parties are experimenting with it themselves,” he explained. Singh mentioned that this exact technology could even be used to create deep fakes of opposition leaders or to falsely make a celebrity endorse a certain candidate.
In the recent state elections, parties used deep fakes for positive purposes, Saikat Datta, the founder of Deep Strat, mentioned. “During the Telangana elections, we saw some use of deep fakes, both from the BRS [Bharat Rashtra Samithi] and from the Indian National Congress, but those were mostly trying to create positive images of the leaders who are participating in the elections to try and reinforce them as popular leaders or sensitive leaders, so on and so forth,” Datta said. What Datta was more concerned about was foreign powers trying to influence the result of an election. “The deepfakes here could be used by an inimical power to therefore influence to a certain extent the outcome of elections through certain kind of messaging,” he said.
Another concern Datta pointed out was that deep fakes could be printed out and pasted all over a constituency. “When you get a picture or a pamphlet with a certain kind of image, say it’s a seditious image of a certain leader, absolutely fake, but created using deepfake and then printed out, the quality of the print, the quality of the paper, et cetera, and how it is distributed physically could make a huge difference,” Datta said.
Lessons from the use of deep fakes in Bangladesh elections:
Jacob mentioned that while India has not seen deep fakes have a significant impact on elections, the same isn’t true for other parts of the world. “In just a neighboring country, Bangladesh, where we run a newsroom, we saw that just a couple of days before the voters went out to vote, in the case of two independent MPs, there were deep fake videos that went out where they said that they have withdrawn from the polls. And then they had to hurriedly come out and clarify that it’s not their video and they are participating,” he said.
Bangladesh also saw the circulation of a deep fake video in which an election candidate was supposedly telling the people of Bangladesh why they should not go against Israel and why they should not be supporting the people in Gaza. Jacob said that while Bangladesh’s election had a foregone conclusion, with one party expected to win, “when it’s a close election, where no one really knows till the day of the counting who’s going to win, these kinds of videos can definitely have an impact.”
What sort of technology is being used in deep fake generation?
So far, parties have largely relied on artificial intelligence models created by big tech companies because it is cheaper to use pre-trained models. “I think OpenAI just mentioned day before yesterday that they wouldn’t allow, say, GPTs to be used on creating deep fakes for election purposes specifically. So, things like that are happening now. So, you’ll see political parties spend more resources on that side,” Singh said.
Detection of these deep fakes, however, is not going to be a problem, according to Singh. “Any content that is going viral anywhere, someone’s going to tweet about it. Someone’s going to post it. And either it’s having no impact, or it’ll surface to the top and you will know about it. There’s no way for it to have tremendous impact and no one to know about it,” he said.
The struggle to hold political parties accountable for deep fakes:
Singh said that attributing a deep fake to a political party will pose a challenge. In past elections, fake news came from circuits that were not directly affiliated with a political party. “These might be different Facebook pages that a party supports somehow, but you could never link them directly to a political party. The funding, the people, all of them are essentially de-linked. They might be paid through some business houses, they might be paid in cash,” he explained.
Singh said that in the context of elections, the scale of the consequences for spreading deep fake-generated fake news is very low. “You think of something like generating deep fakes, whatever consequence you might have for it at the scale of a political party, it’s going to be a very small consequence because political parties have captured booths in the past. They have not hesitated from using physical violence,” he said, adding that, as such, parties would be willing to engage in spreading deep fakes as well.
When asked whether the election commission has ever filed an FIR (first information report) against a party for spreading misinformation, Singh said that state governments have previously done so at the commission’s behest. “But what usually happens is that the state government drops the FIR right after the election’s done based on who’s won the state. So, that’s how it’s usually played out,” he explained, adding that the FIRs usually don’t go anywhere because no one cares about pursuing them.
Solutions to tackle deep fakes in the upcoming elections:
“It would be great if EC [the Election Commission] had the capacity to track what is already in its code of conduct, right? And I think that code of conduct is sufficient to capture a lot of violations around misinformation as well,” Tarunima Prabhakar, Co-Founder of Tattle Civic Technologies and a speaker at the event, said.
To this, Jacob added that the Election Commission needs to start the conversation on how to deal with deep fakes. “There is no reason for us to give them an easy pass on this. They need to have this conversation with political parties and at least start a conversation. Whether it will mean anything or not is something that we will only know [later],” Jacob said.
He mentioned that, to begin with, the Election Commission should say that political parties cannot use deep fakes. “Now, I know that there will be a lot of pushback to this, but I don’t know. I can’t think of any other way that the election commission can seem to be a serious player in this,” he mentioned. Jacob suggested starting with a complete ban on deep fake usage and later setting out certain conditions under which deep fakes could be used, such as when the deep fake in question depicts a leader of the party itself and is clearly labeled as AI-generated content. Fellow speaker Rakesh Maheshwari, former Senior Director and Group Coordinator at the Ministry of Electronics and Information Technology, added that the Election Commission should work in parallel with social media platforms to figure out how they can collaborate on deep fakes.
MediaNama hosted this discussion with support from Google and Meta.
- 11 Talking Points From MediaNama’s ‘Deep Fakes And Democracy’ Discussion #NAMA
- Video: Deep Fakes And Democracy
- Report: IT Ministry May Bring Amendments To IT Rules, 2021, To Regulate AI, Deep Fakes